A Python library for metaheuristic optimization and collaborative search, bringing together 307 optimization algorithms across swarm, evolutionary, trajectory, physics-inspired, nature-inspired, human-inspired, and mathematical families.
This README targets pymetaheuristic-v5+. It can be installed with:
pip install pymetaheuristic

For legacy use, the old library can still be installed with:

pip install pymetaheuristic==1.9.5

New to Python or prefer a graphical interface? The pymetaheuristic Lab provides a convenient Web App for running optimizations without writing extensive code.
import pymetaheuristic
# Start the web service using:
pymetaheuristic.web_app()
# Terminate the web service using:
pymetaheuristic.web.web_stop()

This Google Colab Demo is intended for quick demos only. For the best experience, run the Web UI locally or open it directly in a full browser.
- Introduction
- Installation and Package Overview
- 2.1 Installation
- 2.2 Package Overview
- 2.3 Optimization, Telemetry, Export, and Plotting Example --- [Colab Demo] ---
- 2.4 Termination Criteria --- [Colab Demo] ---
- 2.5 Constraint Handling Example --- [Colab Demo] ---
- 2.6 Cooperative Multi-island Example --- [Colab Demo] ---
- 2.7 Orchestrated Cooperation Example --- [Colab Demo] ---
- 2.8 Chaotic Maps and Transfer Functions --- [Colab Demo] ---
- 2.9 Hyperparameter Tuner --- [Colab Demo] ---
- 2.10 Save, Load, and Checkpoint --- [Colab Demo] ---
- 2.11 Benchmark Runner --- [Colab Demo] ---
- Algorithm Details
- Test Functions --- [Colab Demo] ---
- Other Libraries
pymetaheuristic is a Python optimization library built around metaheuristics, benchmark functions, stepwise execution, telemetry, cooperation, rule-based orchestration, constraint-aware evaluation, composable termination criteria, typed variable spaces, chaotic initialization, transfer functions, hyperparameter tuning, and benchmark sweeps. The package provides:
- a broad collection of metaheuristic algorithms
- benchmark functions for testing and visualization
- a stepwise engine API for controlled execution
- telemetry, export helpers, evaluation-indexed convergence data, and save/load for experiments
- cooperative multi-island optimization
- rule-based orchestration for collaborative optimization
- built-in constrained optimization support plus named repair strategies (clip, wang, reflect, rand, limit_inverse)
- composable Termination object with four independent stopping conditions
- automatic per-step diversity and exploration/exploitation tracking in history
- matplotlib-based diversity, convergence, runtime, and explore/exploit charts, including evaluation-indexed convergence plots
- typed variable space (FloatVar, IntegerVar, CategoricalVar, PermutationVar, BinaryVar)
- ten chaotic maps plus lhs, obl, and sobol population initialization presets
- eight transfer functions and BinaryAdapter for binary/discrete optimization
- HyperparameterTuner for grid/random hyperparameter search
- BenchmarkRunner for multi-algorithm × multi-problem sweeps
- save_result, load_result, save_checkpoint, load_checkpoint for persistence
- callback system with lifecycle hooks and callback-driven early stopping
- object-based Problem API with parametrized bounds, latex_code(), and curated test-problem wrappers
- reusable levy_flight() utility and human-readable algorithm.info() metadata
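Among the features above, the per-step diversity tracked in history is commonly defined as the mean Euclidean distance of population members to their centroid. The library's exact formula may differ; a plain-NumPy sketch of that standard definition:

```python
import numpy as np

def population_diversity(population):
    # Mean Euclidean distance of each individual to the population centroid.
    pop = np.asarray(population, dtype=float)
    centroid = pop.mean(axis=0)
    return float(np.mean(np.linalg.norm(pop - centroid, axis=1)))

# A tightly clustered population has low diversity, a spread-out one high diversity.
tight  = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
spread = [[-5.0, -5.0], [5.0, 5.0], [-5.0, 5.0]]
print(population_diversity(tight) < population_diversity(spread))  # True
```

A falling value of this metric over the run signals a transition from exploration to exploitation, which is what the explore/exploit charts visualize.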
Standard installation:
pip install pymetaheuristic

| Area | Main objects / functions | What it covers |
|---|---|---|
| Core Optimization | optimize, list_algorithms, get_algorithm_info, create_optimizer | Single-algorithm optimization, algorithm discovery, and inspection of default parameters |
| Termination | Termination, EarlyStopping, callbacks | Composable stopping criteria: max_steps, max_evaluations, max_time, max_early_stop, target_fitness, and callback-driven stops |
| Constraints and Feasibility | optimize(..., constraints=..., constraint_handler=...) | Constrained optimization with inequality/equality constraints and feasibility-aware evaluation |
| Benchmarks and Plots (Plotly) | FUNCTIONS, get_test_function, plot_function, plot_convergence, compare_convergence, plot_benchmark_summary, plot_island_dynamics, plot_collaboration_network, plot_population_snapshot | Built-in benchmark functions and Plotly-based landscape, convergence, and cooperation visualizations |
| History Charts (Matplotlib) | plot_global_best_chart, plot_diversity_chart, plot_explore_exploit_chart, plot_runtime_chart, plot_run_dashboard, plot_diversity_comparison | Per-step diversity, exploration/exploitation, runtime, and convergence charts using matplotlib |
| Telemetry and Export | summarize_result, export_history_csv, export_population_snapshots_json, convergence_data | Experiment summarization, evaluation-indexed convergence extraction, and export of history and snapshots |
| IO (Persistence) | save_result, load_result, save_checkpoint, load_checkpoint, result_to_json, result_from_json | Save and restore results; checkpoint-and-resume for long runs |
| Typed Variable Space | FloatVar, IntegerVar, BinaryVar, CategoricalVar, PermutationVar, build_problem_spec, decode_position, encode_position | Define mixed-type search spaces; automatic encode/decode to/from a continuous representation |
| Problem Objects | Problem, FunctionalProblem, SphereProblem, RastriginProblem, AckleyProblem, RosenbrockProblem, ZakharovProblem, get_test_problem | N-dimensional object-based problem definitions with parametrized bounds and latex_code() |
| Chaotic Maps | ChaoticMap, chaotic_sequence, chaotic_population, AVAILABLE_CHAOTIC_MAPS | Ten chaotic maps for diversity-preserving population initialization and perturbation |
| Initialization Presets | uniform_population, lhs_population, obl_population, sobol_population, get_init_function, AVAILABLE_INIT_STRATEGIES | Composable initialization strategies for any algorithm through init_function= or init_name= |
| Transfer Functions | apply_transfer, binarize, BinaryAdapter, vstf_01–vstf_04, sstf_01–sstf_04, AVAILABLE_TRANSFER_FUNCTIONS | Eight transfer functions mapping continuous positions to binary probabilities for binary optimization |
| Repair and Random Utilities | limit, limit_inverse, wang, rand, reflect, get_repair_function, levy_flight | Named bound-repair policies and a reusable Lévy-flight sampler |
| Hyperparameter Tuner | HyperparameterTuner | Grid or random search over algorithm hyperparameters across multiple trials |
| Benchmark Runner | BenchmarkRunner | Multi-algorithm × multi-problem sweeps with statistical aggregation |
| Cooperation | cooperative_optimize, replay_cooperative_result | Multi-island cooperative optimization |
| Orchestration | orchestrated_optimize, OrchestrationSpec, CollaborativeConfig, RulesConfig | Checkpoint-driven cooperation with fixed or rule-based orchestration |
| Reference | print_root_exports, print_reference, search_reference | Programmatic argument reference for all callables |
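The Lévy-flight sampler listed above typically follows Mantegna's algorithm for drawing heavy-tailed step lengths. The library's actual signature may differ; a plain-NumPy sketch under that assumption:

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight_steps(n_steps, beta=1.5, rng=None):
    # Mantegna's algorithm: draw u ~ N(0, sigma_u^2) and v ~ N(0, 1),
    # then return steps u / |v|^(1/beta), heavy-tailed for 1 < beta < 2.
    rng = np.random.default_rng(rng)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=n_steps)
    v = rng.normal(0.0, 1.0, size=n_steps)
    return u / np.abs(v) ** (1 / beta)

# Mostly small steps with occasional large jumps, which is what makes
# Levy flights useful for escaping local optima.
steps = levy_flight_steps(1000, beta=1.5, rng=42)
```

Algorithms such as Cuckoo Search and the Flower Pollination Algorithm rely on exactly this kind of step distribution.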
To quickly inspect parameters:
import pymetaheuristic
# List
pymetaheuristic.print_root_exports()
# Detail
pymetaheuristic.print_reference("optimize")

optimize is the main high-level entry point for running a single metaheuristic on a user-defined objective function. The user specifies the algorithm, search bounds, and computational budget, while optional keyword arguments configure the selected optimizer and control diagnostics, such as history storage and population snapshots. The function returns a structured result object containing the best solution found, its objective value, and optional run traces that can later be summarized, exported, or plotted. In the example below, optimize applies Particle Swarm Optimization (PSO) to the Easom function over a bounded two-dimensional domain, stores the optimization trajectory, and then summarizes the run with summarize_result.
When store_history and store_population_snapshots are enabled, the returned result object contains enough information to support post-run analysis, reproducibility, and visualization. The history can be exported as a tabular CSV file, population states can be saved as JSON snapshots for later inspection, and convergence can be visualized directly with the built-in plotting utilities. In the example below, PSO is applied to the Sphere function, the optimization trace is exported to disk, and the convergence behavior is plotted for immediate inspection.
import numpy as np
import pymetaheuristic
# To use a built-in test function instead, uncomment the next line:
# easom = pymetaheuristic.get_test_function("easom")
# Or define your own objective function.
# The input must be a list (or array-like) of variable values,
# and its length corresponds to the problem dimension.
def easom(x = [0, 0]):
x1, x2 = x
return -np.cos(x1) * np.cos(x2) * np.exp(-(x1 - np.pi) ** 2 - (x2 - np.pi) ** 2)
result = pymetaheuristic.optimize(
algorithm = "pso",
target_function = easom,
min_values = (-5, -5),
max_values = ( 5, 5),
max_steps = 30,
seed = 42,
store_history = True,
store_population_snapshots = True,
)
print(result.best_fitness)
print(len(result.history))
print(pymetaheuristic.summarize_result(result))
pymetaheuristic.export_history_csv(result, "population_history.csv")
pymetaheuristic.export_population_snapshots_json(result, "population_snapshots.json")
fig = pymetaheuristic.plot_convergence(result)
fig.show()

Termination is a composable stopping-criteria object that replaces (or extends) the individual max_steps, max_evaluations, target_fitness, and timeout_seconds keyword arguments. The first condition that triggers ends the run.
Four independent condition types are supported:
- MG (max_steps): maximum number of macro-steps / iterations.
- FE (max_evaluations): maximum number of objective-function evaluations.
- TB (max_time): wall-clock time bound in seconds.
- ES (max_early_stop): early stopping; halt if the global best has not improved by more than epsilon for this many consecutive steps.
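Conceptually, such a composable criterion just checks each configured bound on every step and reports the first one that trips. A minimal standalone sketch, not the library's actual implementation:

```python
import time

class SimpleTermination:
    # Minimal sketch: stop as soon as ANY configured bound is exceeded.
    def __init__(self, max_steps=None, max_evaluations=None,
                 max_time=None, max_early_stop=None, epsilon=1e-8):
        self.max_steps = max_steps
        self.max_evaluations = max_evaluations
        self.max_time = max_time
        self.max_early_stop = max_early_stop
        self.epsilon = epsilon
        self.start = time.monotonic()
        self.best = float("inf")
        self.stall = 0

    def should_stop(self, step, evaluations, best_fitness):
        # Early-stop bookkeeping: count steps without meaningful improvement.
        if best_fitness < self.best - self.epsilon:
            self.best, self.stall = best_fitness, 0
        else:
            self.stall += 1
        if self.max_steps is not None and step >= self.max_steps:
            return "max_steps"
        if self.max_evaluations is not None and evaluations >= self.max_evaluations:
            return "max_evaluations"
        if self.max_time is not None and time.monotonic() - self.start >= self.max_time:
            return "max_time"
        if self.max_early_stop is not None and self.stall >= self.max_early_stop:
            return "max_early_stop"
        return None

term = SimpleTermination(max_steps=100, max_early_stop=5)
print(term.should_stop(step=100, evaluations=3000, best_fitness=0.5))  # max_steps
```

The returned reason string plays the same role as result.termination_reason in the library example below.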
import numpy as np
import pymetaheuristic
def easom(x = [0, 0]):
x1, x2 = x
return -np.cos(x1) * np.cos(x2) * np.exp(-(x1 - np.pi) ** 2 - (x2 - np.pi) ** 2)
# Build a composable termination with multiple conditions
# The run stops as soon as ANY condition is triggered.
term = pymetaheuristic.Termination(
max_steps = 1000,
max_evaluations = 50000,
max_time = 30.0, # 30-second wall-clock limit
max_early_stop = 25, # stop if no improvement for 25 steps
epsilon = 1e-8,
)
result = pymetaheuristic.optimize(
algorithm = "pso",
target_function = easom,
min_values = (-5, -5),
max_values = ( 5, 5),
termination = term,
seed = 42,
)
print(f"Best fitness: {result.best_fitness:.6f}")
print(f"Steps run: {result.steps}")
print(f"Evaluations: {result.evaluations}")
print(f"Termination reason: {result.termination_reason}")

This example illustrates how optimize can be applied to constrained optimization problems. The user provides one or more constraint functions alongside the objective, and the solver evaluates candidate solutions by combining objective quality with constraint satisfaction according to the selected handling strategy. Here, the "deb" constraint handler applies feasibility-based comparison rules: feasible solutions are preferred over infeasible ones, and among infeasible candidates, those with smaller violations are favored. The returned result therefore includes not only the best position and penalized search outcome but also metadata describing the raw objective value, the magnitude of constraint violation, and whether the final solution is feasible.
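Deb's feasibility rules can be sketched in a few lines: a feasible solution always beats an infeasible one, two feasible solutions compare by objective value, and two infeasible solutions compare by total constraint violation. A minimal standalone sketch, not the library's internal code:

```python
def total_violation(x, constraints):
    # Sum of the positive parts of the g_i(x) <= 0 inequality constraints.
    return sum(max(0.0, g(x)) for g in constraints)

def deb_better(x_a, f_a, x_b, f_b, constraints):
    # Return True if solution A is preferred over B under Deb's rules.
    v_a = total_violation(x_a, constraints)
    v_b = total_violation(x_b, constraints)
    if v_a == 0.0 and v_b == 0.0:
        return f_a < f_b      # both feasible: compare objectives
    if v_a == 0.0 or v_b == 0.0:
        return v_a == 0.0     # feasible beats infeasible
    return v_a < v_b          # both infeasible: smaller violation wins

g = [lambda x: x[0] + x[1] - 1.0]  # x0 + x1 <= 1
print(deb_better([0.2, 0.3], 5.0, [0.9, 0.9], 1.0, g))  # True: feasible beats infeasible
```

Note that under these rules a feasible solution wins even when its raw objective is worse, which is why the result metadata separates best_raw_fitness from the penalized fitness.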
import pymetaheuristic
# ─────────────────────────────────────────────────────────────────────────────
# Variables: 1) wire diameter d, 2) mean coil diameter D, 3) number of coils N
# Solution: f* ≈ 0.012665
# ─────────────────────────────────────────────────────────────────────────────
def tension_spring(x = [0, 0, 0]):
d, D, N = x[0], x[1], x[2]
return (N + 2) * D * d**2
constraints = [
lambda x: 1 - (x[1]**3 * x[2]) / (71785 * x[0]**4),
lambda x: (4*x[1]**2 - x[0]*x[1]) / (12566*(x[1]*x[0]**3 - x[0]**4)) + 1/(5108*x[0]**2) - 1,
lambda x: 1 - 140.45*x[0] / (x[1]**2 * x[2]),
lambda x: (x[0] + x[1]) / 1.5 - 1,
]
result = pymetaheuristic.optimize(
algorithm = "pso",
target_function = tension_spring,
min_values = (0.05, 0.25, 2.0),
max_values = (2.00, 1.30, 15.0),
constraints = constraints,
constraint_handler = "deb",
max_steps = 2500,
seed = 42,
)
print(result.best_position)
print(result.best_fitness)
print(result.metadata["best_raw_fitness"])
print(result.metadata["best_violation"])
print(result.metadata["best_is_feasible"])

Other constraint examples:
constraint = [lambda x: x[0] + x[1] - 1.0] # x0 + x1 <= 1
import numpy as np

constraints = [
lambda x: x[0]**2 + x[1]**2 - 4.0, # x0^2 + x1^2 <= 4
lambda x: -x[0], # x0 >= 0
lambda x: -x[1], # x1 >= 0
lambda x: x[2] - 5.0, # x2 <= 5
lambda x: 2.0 - x[2], # x2 >= 2
lambda x: abs(x[0] - x[1]) - 0.5, # |x0 - x1| <= 0.5
lambda x: max(x[0], x[1]) - 3.0, # max(x0, x1) <= 3
lambda x: x[0]*x[1] - 2.0, # x0*x1 <= 2
lambda x: np.sin(x[0]) + x[1] - 1.5, # sin(x0) + x1 <= 1.5
lambda x: {"type": "eq", "value": x[0] - x[1]} # x0 = x1
]
def c1(x):
return x[0] + x[1] - 1.0 # x0 + x1 <= 1
def c2(x):
return -x[0] # x0 >= 0
def c3(x):
return {"type": "eq", "value": x[0] - x[1]} # x0 = x1
constraints = [c1, c2, c3]

cooperative_optimize extends the framework from single-optimizer execution to a collaborative multi-island setting, where several heterogeneous metaheuristics explore the same search space in parallel and periodically exchange information. This interface is useful when the user wants to combine complementary search behaviors (for example, swarm-based, evolutionary, and trajectory-based methods) within a single optimization run. The migration mechanism controls when candidate solutions are shared, how many are transferred, and how communication is structured through a topology such as a ring. In the example below, PSO, GA, SA, and ABCO are executed as cooperating islands on the Easom function, with periodic migration events that allow promising solutions discovered by one method to influence the others.
import numpy as np
import pymetaheuristic
def easom(x = [0, 0]):
x1, x2 = x
return -np.cos(x1) * np.cos(x2) * np.exp(-(x1 - np.pi) ** 2 - (x2 - np.pi) ** 2)
result = pymetaheuristic.cooperative_optimize(
islands = [
{"algorithm": "pso", "config": {"swarm_size": 25}},
{"algorithm": "ga", "config": {}},
{"algorithm": "sa", "config": {"temperature_iterations": 20}},
{"algorithm": "abco", "config": {}},
],
target_function = easom,
min_values = (-5, -5),
max_values = ( 5, 5),
max_steps = 20,
migration_interval = 5,
migration_size = 2,
topology = "ring",
seed = 42,
)
print(result.best_fitness)
print(len(result.events))

orchestrated_optimize adds an adaptive decision layer atop cooperative multi-island optimization. Instead of relying only on fixed migration schedules, the run is periodically inspected at predefined checkpoints, and an orchestration policy decides whether corrective actions such as rebalancing, perturbation, restarting, or waiting should be applied. This interface is useful when the user wants cooperation to become state-aware and responsive to signals such as stagnation, loss of diversity, or uneven progress across islands. In the example below, PSO, GA, and SA cooperate on the Easom function under a rule-based orchestration policy, and the resulting object records not only the best solution found but also the sequence of checkpoints and the decisions taken during the run.
import numpy as np
import pymetaheuristic
def easom(x = [0, 0]):
x1, x2 = x
return -np.cos(x1) * np.cos(x2) * np.exp(-(x1 - np.pi) ** 2 - (x2 - np.pi) ** 2)
config = pymetaheuristic.CollaborativeConfig(
orchestration = pymetaheuristic.OrchestrationSpec(
mode = "rules",
checkpoint_interval = 5,
max_actions_per_checkpoint = 2,
warmup_checkpoints = 1,
),
rules = pymetaheuristic.RulesConfig(
stagnation_threshold = 4,
low_diversity_threshold = 0.05,
high_diversity_threshold = 0.25,
perturbation_sigma = 0.05,
),
)
result = pymetaheuristic.orchestrated_optimize(
islands = [
{"label": "pso", "algorithm": "pso", "config": {"swarm_size": 20}},
{"label": "ga", "algorithm": "ga", "config": {"population_size": 20}},
{"label": "sa", "algorithm": "sa", "config": {"temperature": 10.0}},
],
target_function = easom,
min_values = (-5, -5),
max_values = ( 5, 5),
max_steps = 20,
seed = 42,
config = config,
)
print(result.best_fitness)
print(len(result.checkpoints))
print(len(result.decisions))

Chaotic maps provide deterministic, chaos-based population initialization that improves early diversity and helps avoid premature convergence. Ten maps are available: logistic, tent, bernoulli, chebyshev, circle, cubic, icmic, piecewise, sine, and gauss; the default initialization remains random.

Transfer functions map continuous positions to bit-flip probabilities, enabling any continuous metaheuristic to solve binary or Boolean problems. Four V-shaped (v1–v4) and four S-shaped (s1–s4) functions are available, and BinaryAdapter wraps any algorithm and applies the transfer function automatically.
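Both building blocks can be sketched in plain NumPy: a chaotic (logistic) sequence for initialization, and a V-shaped transfer function converting a continuous position into bit-flip probabilities. These are standard textbook forms and may differ in detail from the library's implementations:

```python
import numpy as np

def logistic_sequence(n, x0=0.7, r=4.0):
    # Logistic map x_{k+1} = r * x_k * (1 - x_k); chaotic for r = 4.
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def vstf_transfer(x):
    # V-shaped transfer: |tanh(x)| maps a continuous value to [0, 1).
    return np.abs(np.tanh(x))

def binarize(position, rng=None):
    # Set each bit to 1 with the probability given by the transfer function.
    rng = np.random.default_rng(rng)
    p = vstf_transfer(np.asarray(position, dtype=float))
    return (rng.random(p.shape) < p).astype(int)

init = logistic_sequence(10)             # chaotic values in (0, 1)
bits = binarize([-3.0, 0.0, 3.0], rng=0)  # large |x| -> likely 1, x = 0 -> always 0
```

The knapsack example below uses the library's own versions of these pieces, through init_name = "chaotic:tent" and transfer_fn = "v2".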
import itertools
import numpy as np
import pymetaheuristic
# Knapsack Instance
weights = np.array([23, 31, 29, 44, 53, 38, 63, 85, 89, 82], dtype = int)
values = np.array([92, 57, 49, 68, 60, 43, 67, 84, 87, 72], dtype = int)
capacity = 165
n_items = len(weights)
# Known Optimum
# x = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
# profit = 309
# weight = 165
# Target Function:
def knapsack(bits):
bits = np.asarray(bits, dtype = int)
total_w = np.sum(weights * bits)
total_v = np.sum(values * bits)
if total_w > capacity:
return 1000.0 + (total_w - capacity)
return -float(total_v)
# Optimize
engine = pymetaheuristic.create_optimizer(
algorithm = "ga",
target_function = knapsack,
min_values = [0.0] * n_items,
max_values = [1.0] * n_items,
population_size = 15,
max_steps = 300,
seed = 42,
init_name = "chaotic:tent",
)
# Wrap the Engine with a Binary Adapter and Run
adapter = pymetaheuristic.BinaryAdapter(engine, transfer_fn = "v2")
result = adapter.run()
found_profit = -result.best_fitness
print("\nMetaheuristic result")
print("Best profit:", found_profit)
print("Binary solution reported:", result.metadata.get("binary_best_position"))

HyperparameterTuner performs grid or random search over an algorithm's hyperparameters. It runs each configuration for n_trials independent trials, aggregates the results, and returns a DataFrame (if pandas is available) or a list of dicts. The best_params and best_fitness attributes summarize the best configuration found.
import numpy as np
import pymetaheuristic
def easom(x = [0, 0]):
x1, x2 = x
return -np.cos(x1) * np.cos(x2) * np.exp(-(x1 - np.pi) ** 2 - (x2 - np.pi) ** 2)
tuner = pymetaheuristic.HyperparameterTuner(
algorithm = "pso",
param_grid = {
"swarm_size": [20, 50, 100],
"w": [0.4, 0.7, 0.9],
"c1": [1.5, 2.0],
"c2": [1.5, 2.0],
"init_name": ["uniform", "chaotic:tent"],
},
target_function = easom,
min_values = [-5, -5],
max_values = [ 5, 5],
termination = pymetaheuristic.Termination(max_steps = 200),
n_trials = 5,
objective = "min",
seed = 42,
search = "grid",
)
df = tuner.run()
summary = tuner.summary()
print(f"Best params: {tuner.best_params}")
print(f"Best fitness: {tuner.best_fitness:.6f}")
print(summary.head())

The IO module provides a set of functions for persisting results and resuming interrupted runs.
- save_result / load_result: pickle a completed OptimizationResult to disk.
- result_to_json / result_from_json: export a human-readable JSON summary.
- save_checkpoint / load_checkpoint: pickle a running (engine, state) pair; resume by calling engine.step(state) in a loop.
import numpy as np
import pymetaheuristic
# Easom:
def easom(x = [0, 0]):
x1, x2 = x
return -np.cos(x1) * np.cos(x2) * np.exp(-(x1 - np.pi)**2 - (x2 - np.pi)**2)
# Optimize - Run
result = pymetaheuristic.optimize(
algorithm = "ga",
target_function = easom,
min_values = (-5, -5),
max_values = ( 5, 5),
max_steps = 25, # iterations
seed = 42,
store_history = True,
store_population_snapshots = True,
)
# Save & Load a Completed Result
pymetaheuristic.save_result(result, "easom_ga.pkl")
r2 = pymetaheuristic.load_result("easom_ga.pkl")
print(f"Reloaded best fitness: {r2.best_fitness:.6f}")
print(f"Reloaded best position: {r2.best_position}")
# Export and Read a JSON Summary
pymetaheuristic.result_to_json(result, "easom_ga.json")
summary = pymetaheuristic.result_from_json("easom_ga.json")
print(f"JSON best_fitness: {summary['best_fitness']}")
print(f"JSON best_position: {summary['best_position']}")
# Checkpoint and Resume
engine = pymetaheuristic.create_optimizer(
algorithm = "ga",
target_function = easom,
min_values = (-5, -5),
max_values = ( 5, 5),
max_steps = 25, # iterations
seed = 42,
store_history = True,
store_population_snapshots = True,
)
state = engine.initialize()
# Run
for _ in range(0, 10):  # run only part of the budget before checkpointing
state = engine.step(state)
pymetaheuristic.save_checkpoint(engine, state, "easom_checkpoint.pkl")
print(f"Checkpoint saved at step {state.step}, best = {state.best_fitness:.6f}")
# Resume from Checkpoint
engine2, state2 = pymetaheuristic.load_checkpoint("easom_checkpoint.pkl")
while not engine2.should_stop(state2):
state2 = engine2.step(state2)
result_resumed = engine2.finalize(state2)
print(f"Resumed best fitness: {result_resumed.best_fitness:.6f}")
print(f"Resumed best position: {result_resumed.best_position}")

BenchmarkRunner performs multi-algorithm × multi-problem comparative sweeps. It executes every algorithm on every problem for a configurable number of independent trials, records the best fitness and wall-clock time for each run, and captures failed trials without interrupting the sweep. The raw results are returned as a tidy DataFrame that can be aggregated into summary statistics, rank tables, and publication-quality compact tables. Parallel execution across trials is available through the n_jobs argument. After calling .run(), five dedicated Plotly-based visualization functions (plot_benchmark_barplots, plot_benchmark_boxplots, plot_benchmark_rank_heatmap, plot_benchmark_runtime, and plot_benchmark_convergence) produce interactive charts. All five return go.Figure objects that can be further customized, displayed inline in Jupyter or Colab, or saved to HTML, PNG, SVG, or PDF.
import pandas as pd
import pymetaheuristic
# Algorithms
algorithms = ["acgwo", "gwo", "i_gwo", "fox", "tlbo"]
# Problems
rastrigin = pymetaheuristic.get_test_function("rastrigin")
rosenbrock = pymetaheuristic.get_test_function("rosenbrocks_valley")
problems = [
{
"name": "Rastrigin-5D",
"target_function": rastrigin,
"min_values": [-5.12] * 5,
"max_values": [ 5.12] * 5,
"objective": "min",
},
{
"name": "Rosenbrock-5D",
"target_function": rosenbrock,
"min_values": [-30.0] * 5,
"max_values": [ 30.0] * 5,
"objective": "min",
},
]
# Runner
termination = pymetaheuristic.Termination(max_steps = 250)
runner = pymetaheuristic.BenchmarkRunner(
algorithms = algorithms,
problems = problems,
termination = termination,
n_trials = 5,
seed = 42,
n_jobs = 1,
)
raw_df = runner.run(show_progress = True)
# Raw Results
failed_df = raw_df[raw_df["error"].notna()].copy()
valid_df = raw_df[raw_df["error"].isna()].copy()
summary_df = runner.summary().copy()
# Rank Table
rank_table = summary_df.pivot(index = "algorithm", columns = "problem", values = "rank")
rank_table["average_rank"] = rank_table.mean(axis = 1)
rank_table = rank_table.sort_values("average_rank")

You can inspect the default parameters of any metaheuristic in the library using get_algorithm_info().
import pymetaheuristic
from pprint import pprint
# Get Info
algorithm_id = "pso" # change this to any ID from the table, e.g. "de", "ga", "gwo", "woa"
algo_info = pymetaheuristic.get_algorithm_info(algorithm_id)
# Results
print("Algorithm ID:", algo_info["algorithm_id"])
print("Algorithm Name:", algo_info["algorithm_name"])
print("")
print("Default Parameters:")
pprint(algo_info["defaults"])
Algorithm ID: pso
Algorithm Name: Particle Swarm Optimization
Default Parameters:
{'c1': 2.0, 'c2': 2.0, 'decay': 0, 'swarm_size': 30, 'w': 0.9}

The table below summarizes the optimization engines currently available in the library. The Algorithm column reports the conventional algorithm name, ID gives the identifier used in the codebase, Family provides a coarse methodological grouping, Population indicates whether the algorithm maintains an explicit candidate population, Candidate Injection indicates whether the algorithm is currently marked as able to absorb external candidates during cooperative or orchestrated workflows, Restart shows whether native restart support is declared, and Snapshot Fit provides a practical recommendation for using store_population_snapshots in the current implementation. Click the algorithm name to open its primary reference or original source. All algorithms support checkpointing through the library framework, and all constraint handling is available through the framework-level constraint machinery.
| Algorithm | ID | Family | Population | Candidate Injection | Restart | Snapshot Fit |
|---|---|---|---|---|---|---|
| Adam (Adaptive Moment Estimation) | adam | math | No | No | No | No |
| Adaptive Chaotic Grey Wolf Optimizer | acgwo | swarm | Yes | Yes | No | Yes |
🔍 View complete Metaheuristic reference table
| Algorithm | ID | Family | Population | Candidate Injection | Restart | Snapshot Fit |
|---|---|---|---|---|---|---|
| Adam (Adaptive Moment Estimation) | adam | math | No | No | No | No |
| Adaptive Chaotic Grey Wolf Optimizer | acgwo | swarm | Yes | Yes | No | Yes |
| Adaptive Exploration State-Space Particle Swarm Optimization | aesspso | swarm | Yes | Yes | No | Yes |
| Adaptive Random Search | ars | trajectory | Yes | Yes | No | Yes |
| African Vultures Optimization Algorithm | avoa | swarm | Yes | Yes | No | Yes |
| Ali Baba and the Forty Thieves | aft | human | Yes | Yes | No | Yes |
| Anarchic Society Optimization | aso | swarm | Yes | Yes | No | Yes |
| Ant Colony Optimization | aco | swarm | Yes | No | No | Yes |
| Ant Colony Optimization (Continuous) | acor | swarm | Yes | Yes | No | Yes |
| Ant Lion Optimizer | alo | swarm | Yes | Yes | No | Yes |
| Aquila Optimizer | ao | swarm | Yes | Yes | No | Yes |
| Archerfish Hunting Optimizer | aho | swarm | Yes | Yes | No | Yes |
| Archimedes Optimization Algorithm | arch_oa | physics | Yes | Yes | No | Yes |
| Arithmetic Optimization Algorithm | aoa | swarm | Yes | Yes | No | Yes |
| Artemisinin Optimization | artemisinin_o | nature | Yes | Yes | No | Yes |
| Artificial Algae Algorithm | aaa | swarm | Yes | No | Yes | Yes |
| Artificial Bee Colony Optimization | abco | swarm | Yes | Yes | No | Yes |
| Artificial Ecosystem Optimization | aeo | human | Yes | Yes | No | Yes |
| Artificial Electric Field Algorithm | aefa | physics | Yes | Yes | No | Yes |
| Artificial Fish Swarm Algorithm | afsa | swarm | Yes | Yes | No | Yes |
| Artificial Gorilla Troops Optimizer | agto | swarm | Yes | Yes | No | Yes |
| Artificial Hummingbird Algorithm | aha | swarm | Yes | Yes | No | Yes |
| Artificial Lemming Algorithm | ala | swarm | Yes | Yes | No | Yes |
| Artificial Protozoa Optimizer | apo | swarm | Yes | Yes | No | Yes |
| Artificial Rabbits Optimization | aro | swarm | Yes | Yes | No | Yes |
| Atom Search Optimization | aso_atom | physics | Yes | Yes | No | Yes |
| Automated Design of Variation Operators | autov | evolutionary | Yes | Yes | No | Yes |
| Bacterial Chemotaxis Optimizer | bco | nature | Yes | Yes | No | Yes |
| Bacterial Foraging Optimization | bfo | swarm | Yes | Yes | No | Yes |
| Bald Eagle Search | bes | swarm | Yes | Yes | No | Yes |
| Barnacles Mating Optimizer | bmo | swarm | Yes | Yes | No | Yes |
| Bat Algorithm | bat_a | swarm | Yes | Yes | No | Yes |
| Battle Royale Optimization | bro | human | Yes | Yes | No | Yes |
| Bees Algorithm | bea | swarm | Yes | Yes | No | Yes |
| BFGS Quasi-Newton Method | bfgs | math | No | No | No | No |
| Binary Space Partition Tree Genetic Algorithm | bspga | evolutionary | Yes | Yes | No | Yes |
| Biogeography-Based Optimization | bbo | evolutionary | Yes | Yes | No | Yes |
| Bird Swarm Algorithm | bsa | swarm | Yes | Yes | No | Yes |
| Black Widow Optimization | bwo | evolutionary | Yes | Yes | No | Yes |
| Black-winged Kite Algorithm | bka | swarm | Yes | Yes | No | Yes |
| Bonobo Optimizer | bono | swarm | Yes | Yes | No | Yes |
| Brain Storm Optimization | bso | human | Yes | Yes | No | Yes |
| Brown-Bear Optimization Algorithm | bboa | swarm | Yes | Yes | No | Yes |
| Butterfly Optimization Algorithm | boa | swarm | Yes | Yes | No | Yes |
| Camel Algorithm | camel | swarm | Yes | Yes | No | Yes |
| Capuchin Search Algorithm | capsa | swarm | Yes | Yes | No | Yes |
| Cat Swarm Optimization | cat_so | swarm | Yes | Yes | No | Yes |
| Chameleon Swarm Algorithm | chameleon_sa | swarm | Yes | Yes | No | Yes |
| Chaos Game Optimization | cgo | math | Yes | Yes | No | Yes |
| Cheetah Based Optimization | cddo | swarm | Yes | Yes | No | Yes |
| Cheetah Optimizer | cdo | swarm | Yes | Yes | No | Yes |
| Chicken Swarm Optimization | chicken_so | swarm | Yes | No | No | Yes |
| Child Drawing Development Optimization Algorithm | cddo_child | human | Yes | Yes | No | Yes |
| Chimp Optimization Algorithm | choa | swarm | Yes | Yes | No | Yes |
| Chernobyl Disaster Optimizer | cdo_chernobyl | physics | Yes | Yes | No | Yes |
| Circle-Based Search Algorithm | circle_sa | math | Yes | Yes | No | Yes |
| Circulatory System Based Optimization | csbo | swarm | Yes | Yes | No | Yes |
| Clonal Selection Algorithm | clonalg | evolutionary | Yes | Yes | No | Yes |
| Coati Optimization Algorithm | coati_oa | swarm | Yes | Yes | No | Yes |
| Cockroach Swarm Optimization | cockroach_so | swarm | Yes | Yes | No | Yes |
| Competitive Swarm Optimizer | cso | swarm | Yes | Yes | No | Yes |
| COOT Bird Optimization | coot | swarm | Yes | Yes | No | Yes |
| Coral Reefs Optimization | cro | evolutionary | Yes | Yes | No | Yes |
| Coronavirus Herd Immunity Optimization | chio | human | Yes | Yes | No | Yes |
| Cosmic Evolution Optimization | ceo_cosmic | physics | Yes | Yes | No | Yes |
| Covariance Matrix Adaptation Evolution Strategy | cmaes | evolutionary | Yes | Yes | No | Yes |
| Coyote Optimization Algorithm | coa | swarm | Yes | Yes | No | Yes |
| Crayfish Optimization Algorithm | crayfish_oa | swarm | Yes | Yes | No | Yes |
| Cross Entropy Method | cem | distribution | Yes | Yes | No | Yes |
| Crow Search Algorithm | csa | swarm | Yes | Yes | No | Yes |
| Cuckoo Search | cuckoo_s | swarm | Yes | Yes | No | Yes |
| Cultural Algorithm | ca | evolutionary | Yes | Yes | No | Yes |
| Dandelion Optimizer | do_dandelion | physics | Yes | Yes | No | Yes |
| Deep Sleep Optimiser | dso | human | Yes | Yes | No | Yes |
| Deer Hunting Optimization Algorithm | doa | human | Yes | Yes | No | Yes |
| Differential Evolution | de | evolutionary | Yes | Yes | No | Yes |
| Differential Evolution MTS | hde | evolutionary | Yes | Yes | No | Yes |
| Dispersive Fly Optimization | dfo | swarm | Yes | Yes | No | Yes |
| Dolphin Echolocation Optimization | deo_dolphin | swarm | Yes | Yes | No | Yes |
| Dragonfly Algorithm | da | swarm | Yes | Yes | No | Yes |
| Dung Beetle Optimizer | dbo | swarm | Yes | Yes | No | Yes |
| Dwarf Mongoose Optimization Algorithm | dmoa | swarm | Yes | Yes | No | Yes |
| Dynamic Differential Annealed Optimization | ddao | physics | Yes | Yes | No | Yes |
| Dynamic Virtual Bats Algorithm | dvba | swarm | Yes | Yes | No | Yes |
| Earthworm Optimization Algorithm | eoa | swarm | Yes | Yes | No | Yes |
| Ecological Cycle Optimizer | ecological_cycle_o | swarm | Yes | Yes | No | Yes |
| Educational Competition Optimizer | eco | human | Yes | Yes | No | Yes |
| Efficient Global Optimization | ego | distribution | Yes | Yes | No | Yes |
| Egret Swarm Optimization Algorithm | esoa | swarm | Yes | Yes | No | Yes |
| Electric Charged Particles Optimization | ecpo | physics | Yes | Yes | No | Yes |
| Electrical Storm Optimization | eso | physics | Yes | Yes | No | Yes |
| Electromagnetic Field Optimization | efo | physics | Yes | Yes | No | Yes |
| Elephant Herding Optimization | eho | swarm | Yes | Yes | No | Yes |
| Elk Herd Optimizer | elk_ho | swarm | Yes | Yes | No | Yes |
| Emperor Penguin Colony | epc | swarm | Yes | Yes | No | Yes |
| Energy Valley Optimizer | evo | physics | Yes | Yes | No | Yes |
| Enzyme Activity Optimizer | eao | nature | Yes | Yes | No | Yes |
| Equilibrium Optimizer | eo | physics | Yes | Yes | No | Yes |
| Escape Algorithm | esc | human | Yes | Yes | No | Yes |
| Evolution Strategy (mu + lambda) | es | evolutionary | Yes | Yes | No | Yes |
| Evolutionary Programming | ep | evolutionary | Yes | Yes | No | Yes |
| Exponential Distribution Optimizer | edo | math | Yes | Yes | No | Yes |
| Exponential-Trigonometric Optimization | eto | math | Yes | Yes | No | Yes |
| Fast Evolutionary Programming | fep | evolutionary | Yes | Yes | No | Yes |
| FATA Geophysics Optimizer | fata | physics | Yes | Yes | No | Yes |
| Feasibility Rule with Objective Function Information | frofi | evolutionary | Yes | Yes | No | Yes |
| Fennec Fox Optimizer | ffo | swarm | Yes | Yes | No | Yes |
| Fick's Law Algorithm | fla | physics | Yes | Yes | No | Yes |
| Firefly Algorithm | firefly_a | swarm | Yes | Yes | No | Yes |
| Fireworks Algorithm | fwa | swarm | Yes | Yes | No | Yes |
| Fish School Search | fss | swarm | Yes | Yes | No | Yes |
| Fitness Dependent Optimizer | fdo | swarm | Yes | Yes | No | Yes |
| Fletcher-Reeves Conjugate Gradient | frcg | math | No | No | No | No |
| Flood Algorithm | flood_a | physics | Yes | Yes | No | Yes |
| Flow Direction Algorithm | fda | swarm | Yes | Yes | No | Yes |
| Flower Pollination Algorithm | fpa | swarm | Yes | Yes | No | Yes |
| Forensic-Based Investigation Optimization | fbio | human | Yes | Yes | No | Yes |
| Forest Optimization Algorithm | foa | swarm | Yes | Yes | No | Yes |
| Fossa Optimization Algorithm | foa_fossa | swarm | Yes | Yes | No | Yes |
| Fox Optimizer | fox | swarm | Yes | Yes | No | Yes |
| Fruit-Fly Algorithm | ffa |
swarm | Yes | Yes | No | Yes |
| Gaining-Sharing Knowledge Algorithm | gska |
human | Yes | Yes | No | Yes |
| Gazelle Optimization Algorithm | gazelle_oa |
swarm | Yes | Yes | No | Yes |
| Gekko Japonicus Algorithm | gja |
swarm | Yes | Yes | No | Yes |
| Generalized Normal Distribution Optimizer | gndo |
math | Yes | Yes | No | Yes |
| Genetic Algorithm | ga |
evolutionary | Yes | Yes | No | Yes |
| Genghis Khan Shark Optimizer | gkso |
swarm | Yes | Yes | No | Yes |
| Geometric Mean Optimizer | gmo |
swarm | Yes | Yes | No | Yes |
| Germinal Center Optimization | gco |
human | Yes | Yes | No | Yes |
| Geyser Inspired Algorithm | gea |
physics | Yes | Yes | No | Yes |
| Giant Pacific Octopus Optimizer | gpoo |
swarm | Yes | No | No | Yes |
| Giant Trevally Optimizer | gto |
swarm | Yes | Yes | No | Yes |
| Glider Snake Optimization | gso_glider_snake |
swarm | Yes | No | No | No |
| Glowworm Swarm Optimization | gso |
swarm | Yes | Yes | No | Yes |
| Golden Jackal Optimizer | gjo |
swarm | Yes | Yes | No | Yes |
| Gradient-Based Optimizer | gbo |
math | Yes | Yes | No | Yes |
| Gradient-Based Particle Swarm Optimization | gpso |
swarm | Yes | Yes | No | Yes |
| Grasshopper Optimization Algorithm | goa |
swarm | Yes | Yes | No | Yes |
| Gravitational Search Algorithm | gsa |
physics | Yes | Yes | No | Yes |
| Grey Wolf Optimizer | gwo |
swarm | Yes | Yes | No | Yes |
| Greylag Goose Optimization | ggo |
swarm | Yes | Yes | No | Yes |
| Growth Optimizer | go_growth |
swarm | Yes | Yes | No | Yes |
| Harmony Search Algorithm | hsa |
trajectory | Yes | No | No | Yes |
| Harris Hawks Optimization | hho |
swarm | Yes | Yes | No | Yes |
| Heap-Based Optimizer | hbo |
human | Yes | Yes | No | Yes |
| Henry Gas Solubility Optimization | hgso |
physics | Yes | Yes | No | Yes |
| Hiking Optimization Algorithm | hiking_oa |
human | Yes | Yes | No | Yes |
| Hill Climb Algorithm | hc |
trajectory | No | No | No | No |
| Hippopotamus Optimization Algorithm | ho_hippo |
swarm | Yes | Yes | No | Yes |
| Honey Badger Algorithm | hba_honey |
swarm | Yes | Yes | No | Yes |
| Horse Herd Optimization Algorithm | horse_oa |
swarm | Yes | Yes | No | Yes |
| Human Conception Optimizer | hco |
human | Yes | Yes | No | Yes |
| Human Evolutionary Optimization Algorithm | heoa |
human | Yes | Yes | No | Yes |
| Hunger Games Search | hgs |
swarm | Yes | Yes | No | Yes |
| Hunting Search Algorithm | hus |
swarm | Yes | Yes | No | Yes |
| Hybrid Bat Algorithm | hba |
swarm | Yes | Yes | No | Yes |
| Hybrid Self-Adaptive Bat Algorithm | hsaba |
swarm | Yes | Yes | No | Yes |
| Imperialist Competitive Algorithm | ica |
human | Yes | Yes | No | Yes |
| Improved Adaptive Grey Wolf Optimization | iagwo |
swarm | Yes | No | No | No |
| Improved Grey Wolf Optimizer | i_gwo |
swarm | Yes | Yes | No | Yes |
| Improved Kepler Optimization Algorithm | ikoa |
physics | Yes | Yes | No | Yes |
| Improved L-SHADE | ilshade |
evolutionary | Yes | Yes | No | Yes |
| Improved Multi-Operator Differential Evolution | imode |
evolutionary | Yes | Yes | No | Yes |
| Improved Whale Optimization Algorithm | i_woa |
swarm | Yes | Yes | No | Yes |
| Invasive Weed Optimization | iwo |
nature | Yes | Yes | No | Yes |
| Ivy Algorithm | ivya |
nature | Yes | Yes | No | Yes |
| Jaya Algorithm | jy |
swarm | Yes | Yes | No | Yes |
| Jellyfish Search Optimizer | jso |
swarm | Yes | Yes | No | Yes |
| Komodo Mlipir Algorithm | kma |
swarm | Yes | Yes | No | Yes |
| Krill Herd Algorithm | kha |
swarm | Yes | No | No | Yes |
| Leaf in Wind Optimization | liwo |
physics | Yes | Yes | No | Yes |
| Lévy Flight Distribution | lfd |
swarm | Yes | Yes | No | Yes |
| LSHADE-cnEpSin | lshade_cnepsin |
evolutionary | Yes | Yes | No | Yes |
| Life Choice-Based Optimizer | lco |
human | Yes | Yes | No | Yes |
| Light Spectrum Optimizer | lso_spectrum |
physics | Yes | Yes | No | Yes |
| Linear Subspace Surrogate Modeling Evolutionary Algorithm | l2smea |
evolutionary | Yes | Yes | No | Yes |
| Lion Optimization Algorithm | loa |
swarm | Yes | Yes | No | Yes |
| Liver Cancer Algorithm | lca |
nature | Yes | Yes | No | Yes |
| Lungs Performance-Based Optimization | lpo |
nature | Yes | Yes | No | Yes |
| Lyrebird Optimization Algorithm | loa_lyrebird |
swarm | Yes | Yes | No | Yes |
| Manta Ray Foraging Optimization | mrfo |
swarm | Yes | Yes | No | Yes |
| Mantis Shrimp Optimization Algorithm | mshoa |
swarm | Yes | Yes | No | Yes |
| Marine Predators Algorithm | mpa |
swarm | Yes | Yes | No | Yes |
| Market Game Optimization Algorithm | mgoa_market |
human | Yes | Yes | No | Yes |
| Memetic Algorithm | memetic_a |
evolutionary | Yes | Yes | No | Yes |
| Mirage-Search Optimizer | mso |
physics | Yes | Yes | No | Yes |
| Monarch Butterfly Optimization | mbo |
swarm | Yes | Yes | No | Yes |
| Monkey King Evolution V1 | mke |
evolutionary | Yes | Yes | No | Yes |
| Moss Growth Optimization | moss_go |
nature | Yes | Yes | No | Yes |
| Most Valuable Player Algorithm | mvpa |
human | Yes | Yes | No | Yes |
| Moth Flame Algorithm | mfa |
swarm | Yes | Yes | No | Yes |
| Moth Search Algorithm | msa_e |
swarm | Yes | Yes | No | Yes |
| Mountain Gazelle Optimizer | mgo |
swarm | Yes | Yes | No | Yes |
| Multi-Surrogate-Assisted Ant Colony Optimization | misaco |
swarm | Yes | Yes | No | Yes |
| Multi-Verse Optimizer | mvo |
swarm | Yes | Yes | No | Yes |
| Multifactorial Evolutionary Algorithm | mfea |
evolutionary | Yes | Yes | No | Yes |
| Multifactorial Evolutionary Algorithm II | mfea2 |
evolutionary | Yes | Yes | No | Yes |
| Multiple Trajectory Search | mts |
trajectory | Yes | Yes | No | Yes |
| Multiswarm-Assisted Expensive Optimization | samso |
swarm | Yes | Yes | No | Yes |
| Naked Mole-Rat Algorithm | nmra |
swarm | Yes | Yes | No | Yes |
| Narwhal Optimizer | nwoa |
swarm | Yes | Yes | No | Yes |
| Nelder-Mead Method | nmm |
trajectory | Yes | Yes | No | Yes |
| Neural Network-Based Dimensionality Reduction Evolutionary Algorithm (SO) | nndrea_so |
evolutionary | Yes | Yes | No | Yes |
| Nizar Optimization Algorithm | noa |
math | Yes | Yes | No | Yes |
| Northern Goshawk Optimization | ngo |
swarm | Yes | Yes | No | Yes |
| Nuclear Reaction Optimization | nro |
physics | Yes | Yes | No | Yes |
| Numeric Crunch Algorithm | nca |
math | Yes | Yes | No | Yes |
| Optimal Foraging Algorithm | ofa |
swarm | Yes | Yes | No | Yes |
| Osprey Optimization Algorithm | ooa |
swarm | Yes | Yes | No | Yes |
| Parameter-Free Bat Algorithm | plba |
swarm | Yes | Yes | No | Yes |
| Parent-Centric Crossover (G3-PCX style) | pcx |
evolutionary | Yes | Yes | No | Yes |
| Pareto Sequential Sampling | pss |
math | Yes | Yes | No | Yes |
| Parrot Optimizer | parrot_o |
swarm | Yes | Yes | No | Yes |
| Particle Swarm Optimization | pso |
swarm | Yes | Yes | No | Yes |
| Pathfinder Algorithm | pfa |
swarm | Yes | Yes | No | Yes |
| Pelican Optimization Algorithm | poa |
swarm | Yes | Yes | No | Yes |
| Physical Education Teacher Inspired Optimization | petio |
human | Yes | No | No | Yes |
| Pied Kingfisher Optimizer | pko |
swarm | Yes | Yes | No | Yes |
| Polar Fox Optimization | pfa_polar_fox |
swarm | Yes | No | No | No |
| Polar Lights Optimizer | plo |
physics | Yes | Yes | No | Yes |
| Political Optimizer | political_o |
human | Yes | Yes | No | Yes |
| Poor and Rich Optimization Algorithm | pro |
human | Yes | Yes | No | Yes |
| Population-Based Incremental Learning | pbil |
distribution | No | No | No | No |
| Prairie Dog Optimization Algorithm | pdo |
swarm | Yes | Yes | No | Yes |
| Puma Optimizer | puma_o |
swarm | Yes | Yes | No | Yes |
| Quadratic Interpolation Optimization | qio |
math | Yes | Yes | No | Yes |
| Queuing Search Algorithm | qsa |
human | Yes | Yes | No | Yes |
| Random Search | random_s |
trajectory | Yes | Yes | No | Yes |
| Rat Swarm Optimizer | rso |
swarm | Yes | Yes | No | Yes |
| Red-billed Blue Magpie Optimizer | rbmo |
swarm | Yes | Yes | No | Yes |
| Remora Optimization Algorithm | roa |
swarm | Yes | Yes | No | Yes |
| Reptile Search Algorithm | rsa |
swarm | Yes | Yes | No | Yes |
| RIME-ice Algorithm | rime |
physics | Yes | Yes | No | Yes |
| RMSProp | rmsprop |
math | No | No | No | No |
| Rock Hyraxes Swarm Optimization | rhso |
swarm | Yes | No | No | Yes |
| RUNge Kutta Optimizer | run |
math | Yes | Yes | No | Yes |
| Rüppell's Fox Optimizer | rfo |
swarm | Yes | Yes | No | Yes |
| Sailfish Optimizer | sfo |
swarm | Yes | Yes | No | Yes |
| Salp Swarm Algorithm | ssa |
swarm | Yes | Yes | No | Yes |
| Sammon Mapping Assisted Differential Evolution | sade_sammon |
evolutionary | Yes | Yes | No | Yes |
| Sand Cat Swarm Optimization | scso |
swarm | Yes | Yes | No | Yes |
| Satin Bowerbird Optimizer | sbo |
swarm | Yes | Yes | No | Yes |
| Sea Lion Optimization | slo |
swarm | Yes | Yes | No | Yes |
| Seagull Optimization Algorithm | soa |
swarm | Yes | Yes | No | Yes |
| Seahorse Optimizer | seaho |
swarm | Yes | Yes | No | Yes |
| Search And Rescue Optimization | saro |
human | Yes | Yes | No | Yes |
| Search Space Independent Operator Based Deep Reinforcement Learning | ssio_rl |
evolutionary | Yes | Yes | No | Yes |
| Secretary Bird Optimization Algorithm | sboa |
swarm | Yes | Yes | No | Yes |
| Self-Adaptive Bat Algorithm | saba |
swarm | Yes | Yes | No | Yes |
| Self-Adaptive Differential Evolution | jde |
evolutionary | Yes | Yes | No | Yes |
| Sequential Quadratic Programming | sqp |
math | No | No | No | No |
| Serval Optimization Algorithm | serval_oa |
swarm | Yes | Yes | No | Yes |
| Shuffle-based Runner-Root Algorithm | srsr |
swarm | Yes | Yes | No | Yes |
| Siberian Tiger Optimization | sto |
swarm | Yes | Yes | No | Yes |
| Simulated Annealing | sa |
trajectory | No | Yes | Yes | No |
| Sine Cosine Algorithm | sine_cosine_a |
swarm | Yes | Yes | No | Yes |
| Sinh Cosh Optimizer | scho |
math | Yes | Yes | No | Yes |
| Slime Mould Algorithm | sma |
nature | Yes | Yes | No | Yes |
| Snake Optimizer | so_snake |
swarm | Yes | Yes | No | Yes |
| Snow Ablation Optimizer | snow_oa |
physics | Yes | Yes | No | Yes |
| Social Ski-Driver Optimization | ssdo |
human | Yes | Yes | No | Yes |
| Social Spider Algorithm | sspider_a |
swarm | Yes | Yes | No | Yes |
| Social Spider Swarm Optimizer | sso |
swarm | Yes | Yes | No | Yes |
| Sparrow Search Algorithm | sparrow_sa |
swarm | Yes | Yes | No | Yes |
| Spider Monkey Optimization | smo |
swarm | Yes | Yes | No | Yes |
| Spotted Hyena Inspired Optimizer | shio |
swarm | Yes | Yes | No | Yes |
| Spotted Hyena Optimizer | sho |
swarm | Yes | Yes | No | Yes |
| Squirrel Search Algorithm | squirrel_sa |
swarm | Yes | Yes | No | Yes |
| Starfish Optimization Algorithm | sfoa |
swarm | Yes | Yes | No | Yes |
| Steepest Descent | sd |
math | No | No | No | No |
| Stellar Oscillator Optimization | soo |
physics | Yes | Yes | No | Yes |
| Student Psychology Based Optimization | spbo |
swarm | Yes | Yes | No | Yes |
| Success-History Adaptive Differential Evolution | shade |
evolutionary | Yes | Yes | No | Yes |
| Success-History Intelligent Optimizer | shio_success |
swarm | Yes | Yes | No | Yes |
| Superb Fairy-wren Optimization Algorithm | superb_foa |
swarm | Yes | Yes | No | Yes |
| Supply-Demand-Based Optimization | supply_do |
human | Yes | Yes | No | Yes |
| Surrogate-Assisted Cooperative Co-Evolutionary Algorithm of Minamo II | sacc_eam2 |
evolutionary | Yes | Yes | No | Yes |
| Surrogate-Assisted Cooperative Swarm Optimization | sacoso |
swarm | Yes | Yes | No | Yes |
| Surrogate-Assisted DE with Adaptive Multi-Subspace Search | sade_amss |
evolutionary | Yes | Yes | No | Yes |
| Surrogate-Assisted DE with Adaptive Training Data Selection Criterion | sade_atdsc |
evolutionary | Yes | Yes | No | Yes |
| Surrogate-Assisted Partial Optimization | sapo |
evolutionary | Yes | Yes | No | Yes |
| Symbiotic Organisms Search | sos |
swarm | Yes | Yes | No | Yes |
| Tabu Search | ts |
trajectory | No | No | No | No |
| Tasmanian Devil Optimization | tdo |
swarm | Yes | Yes | No | Yes |
| Teaching Learning Based Optimization | tlbo |
swarm | Yes | Yes | No | Yes |
| Teamwork Optimization Algorithm | toa |
human | Yes | Yes | No | Yes |
| Termite Life Cycle Optimizer | tlco |
swarm | Yes | Yes | No | Yes |
| Tianji Horse Racing Optimizer | thro |
human | Yes | Yes | No | Yes |
| Tornado Optimizer with Coriolis Force | toc |
physics | Yes | Yes | No | Yes |
| Tree Physiology Optimization | tpo |
nature | Yes | Yes | No | Yes |
| Triangulation Topology Aggregation Optimizer | ttao |
math | Yes | Yes | No | Yes |
| Tug of War Optimization | two |
physics | Yes | Yes | No | Yes |
| Tuna Swarm Optimization | tso |
swarm | Yes | Yes | No | Yes |
| Tunicate Swarm Algorithm | tsa |
swarm | Yes | Yes | No | Yes |
| Virus Colony Search | vcs |
swarm | Yes | Yes | No | Yes |
| Walrus Optimization Algorithm | waoa |
swarm | Yes | Yes | No | Yes |
| War Strategy Optimization | warso |
human | Yes | Yes | No | Yes |
| Water Cycle Algorithm | wca |
nature | Yes | Yes | No | Yes |
| Water Uptake and Transport in Plants | wutp |
nature | Yes | Yes | No | Yes |
| Wave Optimization Algorithm | wo_wave |
physics | Yes | Yes | No | Yes |
| Weighting and Inertia Random Walk Optimizer | info |
math | Yes | Yes | No | Yes |
| Whale Optimization Algorithm | woa |
swarm | Yes | Yes | No | Yes |
| White Shark Optimizer | wso |
swarm | Yes | Yes | No | Yes |
| Wildebeest Herd Optimization | who |
swarm | Yes | Yes | No | Yes |
| Wind Driven Optimization | wdo |
physics | Yes | Yes | No | Yes |
| Young's Double-Slit Experiment Optimizer | ydse |
physics | Yes | Yes | No | Yes |
| Zebra Optimization Algorithm | zoa |
swarm | Yes | Yes | No | Yes |
The graph module can be used with the built-in benchmark functions or with any user-defined scalar objective function that follows the same interface `f(x) -> float`. The unified plotting function automatically adapts the visualization to the number of variables:
- **1D**: Line Plot (1 variable, `plot_function_1d`)
- **2D**: Contour Map and Heatmap (2 variables, `plot_function_2d`)
- **3D**: Interactive Surface Plot (2 variables, `plot_function_3d`)
- **ND**: Parallel-Coordinates Plot & PCA Projection (3+ variables, `plot_function_nd`)
import pymetaheuristic
rastrigin = pymetaheuristic.get_test_function("rastrigin")
# Plot
pymetaheuristic.plot_function_3d(
rastrigin,
min_values = (-5.12, -5.12),
max_values = ( 5.12, 5.12),
solutions = ([ 0, 0]),
title = "Rastrigin",
filepath = "out.html", # also supports .png / .svg / .pdf
)

The table below summarizes the benchmark functions currently available in the library. The **Function** column reports the conventional function name, **ID** gives the callable identifier used in the codebase (when importing from `pymetaheuristic.src.test_functions`), and **Domain** and **Global Minimum** describe the search bounds and, when applicable, the optimal decision vector together with the known global optimum in terms of objective value.
All functions below use the minimization convention. The following notation is used throughout:
| Symbol | Meaning |
|---|---|
| D | Number of decision variables. |
| x* | Global minimizer. |
| f* | Global minimum value. |
| 0D | D-dimensional vector of zeros. |
| 1D | D-dimensional vector of ones. |
| Function | ID | Domain | Global Minimum |
|---|---|---|---|
| Ackley | ackley | [-32.768, 32.768]^2 | f(x1, x2) = 0; (x1, x2) = (0, 0) |
| Beale | beale | [-4.5, 4.5]^2 | f(x1, x2) = 0; (x1, x2) = (3, 0.5) |
| Bohachevsky F1 | bohachevsky_1 | [-100, 100]^2 | f(x1, x2) = 0; (x1, x2) = (0, 0) |
| Bohachevsky F2 | bohachevsky_2 | [-100, 100]^2 | f(x1, x2) = 0; (x1, x2) = (0, 0) |
| Bohachevsky F3 | bohachevsky_3 | [-100, 100]^2 | f(x1, x2) = 0; (x1, x2) = (0, 0) |
| Booth | booth | [-10, 10]^2 | f(x1, x2) = 0; (x1, x2) = (1, 3) |
| Branin RCOS | branin_rcos | x1 ∈ [-5, 10], x2 ∈ [0, 15] | f* ≈ 0.3978873577 at (-π, 12.275), (π, 2.275), (3π, 2.475) |
| Bukin F6 | bukin_6 | x1 ∈ [-15, -5], x2 ∈ [-3, 3] | f(x1, x2) = 0; (x1, x2) = (-10, 1) |
| Cross-in-Tray | cross_in_tray | [-10, 10]^2 | f* ≈ -2.0626118708 at (x1, x2) = (±1.349406609, ±1.349406609) |
| Drop-Wave | drop_wave | [-5.12, 5.12]^2 | f(x1, x2) = -1; (x1, x2) = (0, 0) |
| Easom | easom | [-100, 100]^2 | f(x1, x2) = -1; (x1, x2) = (π, π) |
| Eggholder | eggholder | [-512, 512]^2 | f* ≈ -959.6407; (x1, x2) ≈ (512, 404.2319) |
| Goldstein-Price | goldstein_price | [-2, 2]^2 | f(x1, x2) = 3; (x1, x2) = (0, -1) |
| Himmelblau | himmelblau | [-5, 5]^2 | f* = 0 at (3, 2), (-2.805118, 3.131312), (-3.779310, -3.283186), (3.584428, -1.848126) |
| Hölder Table | holder_table | [-10, 10]^2 | f* ≈ -19.208502568 at (x1, x2) = (±8.055023472, ±9.664590029) |
| Levi F13 | levi_13 | [-10, 10]^2 | f(x1, x2) = 0; (x1, x2) = (1, 1) |
| Matyas | matyas | [-10, 10]^2 | f(x1, x2) = 0; (x1, x2) = (0, 0) |
| McCormick | mccormick | x1 ∈ [-1.5, 4], x2 ∈ [-3, 4] | f* ≈ -1.913222955; (x1, x2) ≈ (-0.54719756, -1.54719756) |
| Schaffer F2 | schaffer_2 | [-100, 100]^2 | f(x1, x2) = 0; (x1, x2) = (0, 0) |
| Schaffer F4 | schaffer_4 | [-100, 100]^2 | f* ≈ 0.292578632 at (0, ±1.25313), (±1.25313, 0) |
| Schaffer F6 | schaffer_6 | [-100, 100]^2 | f(x1, x2) = 0; (x1, x2) = (0, 0) |
| Six-Hump Camel Back | six_hump_camel_back | x1 ∈ [-3, 3], x2 ∈ [-2, 2] | f* ≈ -1.031628453 at (0.089842, -0.712656), (-0.089842, 0.712656) |
| Three-Hump Camel Back | three_hump_camel_back | [-5, 5]^2 | f(x1, x2) = 0; (x1, x2) = (0, 0) |
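As a quick sanity check of the table entries, a 2D benchmark such as Himmelblau can be written in a few lines from its textbook definition. This is a standalone sketch for illustration only, not the package's `himmelblau` callable (whose exact signature may differ):

```python
# Standalone Himmelblau function:
# f(x1, x2) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2
def himmelblau(x):
    x1, x2 = x
    return (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2

# The four global minimizers listed in the table all evaluate to (approximately) 0.
minimizers = [(3, 2), (-2.805118, 3.131312),
              (-3.779310, -3.283186), (3.584428, -1.848126)]
print([round(himmelblau(m), 6) for m in minimizers])
```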
| Function | ID | Domain | Global Minimum |
|---|---|---|---|
| Alpine 1 | alpine_1 | [-10, 10]^D | f(x) = 0; xi = 0, i = 1, ..., D |
| Alpine 2 | alpine_2 | [0, 10]^D | f* ≈ -(2.808131180)^D; xi ≈ 7.917052698 [N1] |
| Axis Parallel Hyper-Ellipsoid | axis_parallel_hyper_ellipsoid | [-5.12, 5.12]^D | f(x) = 0; xi = 0, i = 1, ..., D |
| Bent Cigar | bent_cigar | [-100, 100]^D | f(x) = 0; x = 0D |
| Chung-Reynolds | chung_reynolds | [-100, 100]^D | f(x) = 0; x = 0D |
| Cosine Mixture | cosine_mixture | [-1, 1]^D | f(x) = -0.1·D; x = 0D [N1] |
| Csendes | csendes | [-1, 1]^D | f(x) = 0; x = 0D |
| De Jong F1 / Sphere | de_jong_1 | [-5.12, 5.12]^D | f(x) = 0; x = 0D |
| Discus | discus | [-100, 100]^D | f(x) = 0; x = 0D |
| Dixon-Price | dixon_price | [-10, 10]^D | f(x) = 0; xi = 2^(-(2^i - 2) / 2^i), i = 1, ..., D |
| Elliptic | elliptic | [-100, 100]^D | f(x) = 0; x = 0D |
| Expanded Griewank plus Rosenbrock | expanded_griewank_plus_rosenbrock | [-5, 5]^D | f(x) = 0; x = 1D |
| Griewank | griewangk_8 | [-600, 600]^D | f(x) = 0; x = 0D |
| Happy Cat | happy_cat | [-100, 100]^D | f(x) = 0; x = -1D |
| HGBat | hgbat | [-100, 100]^D | f(x) = 0; x = -1D |
| Katsuura | katsuura | [-100, 100]^D | f(x) = 0; x = 0D [N2] |
| Levy | levy | [-10, 10]^D | f(x) = 0; x = 1D |
| Michalewicz | michalewicz | [0, π]^D | Dimension- and m-dependent [N3] |
| Modified Schwefel | modified_schwefel | [-100, 100]^D | f(x) = 0; x = 0D [N4] |
| Perm 0,d,beta | perm | [-D, D]^D | f(x) = 0; xi = 1 / i, i = 1, ..., D |
| Pinter | pinter | [-10, 10]^D | f(x) = 0; x = 0D |
| Powell | powell | [-4, 5]^D | f(x) = 0; x = 0D [N5] |
| Qing | qing | [-500, 500]^D | f(x) = 0; xi = ±√i, i = 1, ..., D |
| Quintic | quintic | [-10, 10]^D | f(x) = 0; each xi ∈ {-1, 2} |
| Rastrigin | rastrigin | [-5.12, 5.12]^D | f(x) = 0; x = 0D |
| Ridge | ridge | [-100, 100]^D | f(x) = 0; x = 0D [N6] |
| Rosenbrock Valley | rosenbrocks_valley | [-5, 10]^D | f(x) = 0; x = 1D |
| Salomon | salomon | [-100, 100]^D | f(x) = 0; x = 0D |
| Schumer-Steiglitz | schumer_steiglitz | [-100, 100]^D | f(x) = 0; x = 0D |
| Schwefel | schwefel | [-500, 500]^D | f(x) = 0; xi ≈ 420.968746228, i = 1, ..., D |
| Schwefel 2.21 | schwefel_221 | [-100, 100]^D | f(x) = 0; x = 0D |
| Schwefel 2.22 | schwefel_222 | [-100, 100]^D | f(x) = 0; x = 0D |
| Sphere 2 / Sum of Different Powers | sphere_2 | [-1, 1]^D | f(x) = 0; x = 0D |
| Sphere 3 / Rotated Hyper-Ellipsoid | sphere_3 | [-65.536, 65.536]^D | f(x) = 0; x = 0D |
| Step | step | [-100, 100]^D | f(x) = 0; abs(xi) < 1 [N7] |
| Step 2 | step_2 | [-100, 100]^D | f(x) = 0; -0.5 ≤ xi < 0.5 [N7] |
| Step 3 | step_3 | [-100, 100]^D | f(x) = 0; abs(xi) < 1 [N7] |
| Stepint | stepint | [-5.12, 5.12]^D | f* = 25 - 6D; xi ∈ [-5.12, -5) [N8] |
| Styblinski-Tang | styblinski_tang | [-5, 5]^D | f* ≈ -39.166165704·D; xi ≈ -2.903534028 |
| Trid | trid | [-D^2, D^2]^D | f* = -D(D + 4)(D - 1) / 6; xi = i(D + 1 - i) |
| Weierstrass | weierstrass | [-0.5, 0.5]^D | f(x) = 0; x = 0D |
| Whitley | whitley | [-10.24, 10.24]^D | f(x) = 0; x = 1D |
| Zakharov | zakharov | [-5, 10]^D | f(x) = 0; x = 0D |
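The D-dimensional entries follow the same minimization convention. For instance Rastrigin, used in the plotting example above, can be sketched directly from its textbook definition (again a standalone illustration, not the package's own implementation):

```python
import math

# Rastrigin: f(x) = 10*D + sum(xi^2 - 10*cos(2*pi*xi)), with global minimum
# f(x) = 0 attained at the zero vector 0D, for any dimension D.
def rastrigin(x):
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

print(rastrigin([0.0] * 5))  # the zero vector attains the global minimum: 0.0
```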
| Function | ID | Dimensions | Global Minimum |
|---|---|---|---|
| CEC 2022 F1 | cec_2022_f01 | 2, 10, 20 | f* = 300 |
| CEC 2022 F2 | cec_2022_f02 | 2, 10, 20 | f* = 400 |
| CEC 2022 F3 | cec_2022_f03 | 2, 10, 20 | f* = 600 |
| CEC 2022 F4 | cec_2022_f04 | 2, 10, 20 | f* = 800 |
| CEC 2022 F5 | cec_2022_f05 | 2, 10, 20 | f* = 900 |
| CEC 2022 F6 | cec_2022_f06 | 10, 20 | f* = 1800 |
| CEC 2022 F7 | cec_2022_f07 | 10, 20 | f* = 2000 |
| CEC 2022 F8 | cec_2022_f08 | 10, 20 | f* = 2200 |
| CEC 2022 F9 | cec_2022_f09 | 2, 10, 20 | f* = 2300 |
| CEC 2022 F10 | cec_2022_f10 | 2, 10, 20 | f* = 2400 |
| CEC 2022 F11 | cec_2022_f11 | 2, 10, 20 | f* = 2600 |
| CEC 2022 F12 | cec_2022_f12 | 2, 10, 20 | f* = 2700 |
Engineering benchmarks expose an objective function along with bounds and constraints. Use `get_engineering_benchmark("<id>")` to retrieve `objective`, `constraints`, `min_values`, `max_values`, and best-known metadata. Constraint functions follow the package convention g(x) ≤ 0.
| Function | ID | Domain | Global Minimum | Constraints |
|---|---|---|---|---|
| Tension/compression spring design | tension_spring | d ∈ [0.05, 2], D ∈ [0.25, 1.30], N ∈ [2, 15] | f* ≈ 0.012665; (d, D, N) ≈ (0.05169, 0.35675, 11.2871) [N9] | 4 inequalities |
| Welded beam design | welded_beam | h ∈ [0.1, 2], l ∈ [0.1, 10], t ∈ [0.1, 10], b ∈ [0.1, 2] | f* ≈ 1.724852; (h, l, t, b) ≈ (0.20573, 3.47049, 9.03662, 0.20573) | 7 inequalities |
| Pressure vessel design, continuous relaxation | pressure_vessel | Ts, Th ∈ [0, 99], R ∈ [10, 200], L ∈ [10, 240] | f* ≈ 5804.376217; (Ts, Th, R, L) ≈ (0.727591, 0.359649, 37.699012, 240) [N10] | 4 inequalities |
| Pressure vessel design, discrete thickness | pressure_vessel_discrete | Ts, Th rounded upward to multiples of 1/16; R ∈ [10, 200], L ∈ [10, 240] | f* ≈ 6059.714335; (Ts, Th, R, L) ≈ (0.8125, 0.4375, 42.098446, 176.636596) [N10] | 4 inequalities |
| Speed reducer design | speed_reducer | 7 bounded design variables | f* ≈ 2994.471066; x ≈ (3.5, 0.7, 17, 7.3, 7.71532, 3.35021, 5.28665) | 11 inequalities |
| Three-bar truss design | three_bar_truss | A1, A2 ∈ [0, 1] | f* ≈ 263.895843; (A1, A2) ≈ (0.788675, 0.408248) | 3 inequalities |
| Cantilever beam design | cantilever_beam | xi ∈ [0.01, 100], i = 1, ..., 5 | f* ≈ 1.339956; x ≈ (6.016016, 5.309173, 4.494330, 3.501475, 2.152665) | 1 inequality |
| Gear train design | gear_train | integer xi ∈ [12, 60], i = 1, ..., 4 | f* ≈ 2.700857 × 10^-12; x = (16, 19, 43, 49) [N11] | box + integrality |
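To illustrate the g(x) ≤ 0 convention, the three-bar truss benchmark can be written out from its standard formulation. This is a standalone sketch assuming the common parameter values (bar length l = 100, load P = 2, allowable stress σ = 2); the packaged version should be retrieved with `get_engineering_benchmark("three_bar_truss")` rather than re-implemented:

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0  # bar length, applied load, allowable stress

def objective(x):
    a1, a2 = x
    return (2 * math.sqrt(2) * a1 + a2) * L  # structure volume (weight proxy)

def constraints(x):
    """Three stress constraints; the design is feasible iff every g(x) <= 0."""
    a1, a2 = x
    denom = math.sqrt(2) * a1**2 + 2 * a1 * a2
    g1 = (math.sqrt(2) * a1 + a2) / denom * P - SIGMA
    g2 = a2 / denom * P - SIGMA
    g3 = 1 / (a1 + math.sqrt(2) * a2) * P - SIGMA
    return [g1, g2, g3]

x_best = (0.788675, 0.408248)           # best-known design from the table
print(round(objective(x_best), 2))       # ~263.90, matching f* in the table
print(all(g <= 1e-4 for g in constraints(x_best)))
```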
| Note | Meaning |
|---|---|
| N1 | Alpine 2 and Cosine Mixture have sign-convention traps in the literature. This package uses minimization-compatible signs. |
| N2 | Katsuura is implemented as the product expression minus 1, so the exposed minimum is 0 at the origin. |
| N3 | Michalewicz has no single dimension-free closed-form optimum. For m = 10, common reference values are approximately: D = 2, f* ≈ -1.8013; D = 5, f* ≈ -4.6877; D = 10, f* ≈ -9.6602. |
| N4 | Modified Schwefel is exposed in shifted CEC-style coordinates, so the visible optimizer is 0D. |
| N5 | Powell requires D to be a multiple of 4. |
| N6 | This is the cumulative ridge implementation, not the BBOB sharp-ridge function. |
| N7 | Step functions have optimizer intervals, not isolated optimizer points. |
| N8 | Stepint is bound-dependent. With bounds [-5.12, 5.12]^D, f* = 25 - 6D; without bounds, it is unbounded below. |
| N9 | Engineering-design rows are constrained benchmarks. The Python module exposes get_engineering_benchmark(id) so users can pass the returned objective, bounds, and constraints directly to pymetaheuristic.optimize. |
| N10 | Pressure vessel has two common variants. pressure_vessel is the continuous relaxation; pressure_vessel_discrete rounds shell/head thickness upward to multiples of 1/16 before objective and constraint evaluation. |
| N11 | Gear train is a discrete integer benchmark. The implementation rounds variables to the nearest integer tooth counts by default. |
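For the discrete pressure vessel variant described in N10, rounding a thickness upward to the next multiple of 1/16 is a one-liner. This is a sketch of the rounding rule only, not the package's internal code:

```python
import math

def round_up_sixteenth(x):
    """Round a thickness up to the next multiple of 1/16 (the rule from note N10)."""
    return math.ceil(x * 16) / 16

print(round_up_sixteenth(0.8))   # 0.8125 = 13/16
print(round_up_sixteenth(0.75))  # 0.75 is already an exact multiple of 1/16
```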
- For Multiobjective Optimization or Many Objectives Optimization, try pyMultiobjective
- For Traveling Salesman Problems (TSP), try pyCombinatorial
This section is dedicated to everyone who helped improve or correct the code. Thank you very much!
- Raiser (01.MARCH.2022) - https://github.com/mpraiser - University of Chinese Academy of Sciences (China)

