Optimizers

class ionworkspipeline.optimizers.Optimizer(**kwargs: Any)

Base class for all optimizers.

Optimizers seek a single optimal point in parameter space that minimizes the objective function.

Parameters

**kwargs

Arguments passed to the underlying optimizer algorithm.

Extends: ionworkspipeline.data_fits.parameter_estimators.parameter_estimator.ParameterEstimator

run(x0: ndarray) → OptimizerResult

Optimize the objective function.

Parameters

x0 : array_like

Initial guess for the independent variables.

Returns

res : ionworkspipeline.OptimizerResult

The result of the optimization.
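The set_objective / set_bounds / run contract that all optimizers share can be illustrated with a toy stand-in. The sketch below is hypothetical — a random-search class written for this page, not part of ionworkspipeline — and only mirrors the calling surface shown in the examples for the concrete optimizers:

```python
import numpy as np

class RandomSearchOptimizer:
    """Hypothetical sketch of the optimizer contract: set an objective,
    set bounds, then run from an initial guess."""

    def set_objective(self, objective):
        self._objective = objective

    def set_bounds(self, bounds):
        # bounds is a (lower, upper) pair of array_likes
        self._lower, self._upper = (np.asarray(b, dtype=float) for b in bounds)

    def run(self, x0, n_samples=2000, seed=0):
        rng = np.random.default_rng(seed)
        best_x = np.asarray(x0, dtype=float)
        best_f = self._objective(best_x)
        for _ in range(n_samples):
            x = rng.uniform(self._lower, self._upper)
            f = self._objective(x)
            if f < best_f:
                best_x, best_f = x, f
        return best_x, best_f

opt = RandomSearchOptimizer()
opt.set_objective(lambda x: float(np.sum(np.asarray(x) ** 2)))
opt.set_bounds(([-1.0, -1.0], [1.0, 1.0]))
x_best, f_best = opt.run([0.9, 0.9])
```

A real subclass would return an OptimizerResult rather than a bare tuple; the point here is only the order of the calls.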

class ionworkspipeline.optimizers.ScipyDifferentialEvolution(**kwargs: Any)

Global stochastic optimizer using differential evolution with parallel evaluation.

Differential evolution is a robust global optimization algorithm that evolves a population of candidate solutions across generations. It excels at handling multi-modal, non-convex objective landscapes and requires no gradient information.

Notes

  • Does not support custom equality or inequality constraints

  • Parallel workers significantly speed up optimization (use DataFit’s num_workers argument)

  • Initial guess x0 is ignored; initial population is generated from bounds

  • Polish option is disabled by default, as it typically degrades performance significantly

  • Callback logs only best solution per generation (not individual evaluations)

Parameters

workers : int, default=1

Number of parallel workers for function evaluations. Use -1 for all CPU cores.

maxiter : int, default=1000

Maximum number of generations.

popsize : int, default=15

Population size multiplier (total population = popsize * dimensionality).

strategy : str, default=’best1bin’

Differential evolution strategy. Options include ‘best1bin’, ‘rand1bin’, ‘best2bin’, ‘rand2bin’, ‘currenttobest1bin’.

mutation : float or tuple, default=(0.5, 1)

Mutation constant. Can be float or (min, max) tuple for adaptive mutation.

recombination : float, default=0.7

Crossover probability for parameter mixing.

seed : int, optional

Random seed for reproducible results.

atol, tol : float, optional

Absolute and relative tolerance for convergence.

**kwargs

Additional arguments passed to scipy.optimize.differential_evolution. See scipy documentation for complete options.

Examples

Basic usage (single worker for doctests):

>>> optimizer = ScipyDifferentialEvolution(maxiter=50, seed=42)
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)
>>> result.fun < 1e-3
True

Integration with DataFit:

>>> optimizer = ScipyDifferentialEvolution(maxiter=500)
>>> isinstance(optimizer, ScipyDifferentialEvolution)
True
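The doctest names sphere, lower, upper, and x0 are not defined on this page. Since **kwargs are forwarded to scipy.optimize.differential_evolution, the wrapped call can be sketched directly against scipy with plausible stand-ins (the objective and bounds below are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):
    """Convex test objective with a global minimum of 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

bounds = [(-5.0, 5.0)] * 3  # one (lower, upper) pair per dimension

result = differential_evolution(
    sphere,
    bounds,
    strategy="best1bin",
    maxiter=100,
    popsize=15,
    seed=42,
    polish=False,  # mirrors the wrapper's default of disabling the polish step
)
```

The initial population is drawn from the bounds, which is why the wrapper ignores x0.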

Extends: ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer

run(x0: ndarray) → OptimizeResult

Minimize objective using differential evolution.

Parameters

x0

Ignored. Initial population is randomly generated within bounds.

Returns

OptimizeResult

Optimization result with x (best solution), fun (best cost), success (convergence flag), and generation statistics.

set_evaluation_callback(callback: Callable[[list[ndarray], list[float]], None] | None = None) → None

Configure callback for logging best solution after each generation.

Unlike other optimizers, this only logs the best solution per generation, not individual evaluations (which occur in parallel worker processes).

Parameters

callback

Function receiving lists of parameter vectors and costs. Called with single-element lists containing the generation’s best solution. Set to None to disable callbacks.
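A callback with the expected signature can be as simple as appending to a history list. The sketch below simulates two per-generation invocations by hand (the optimizer itself would make these calls):

```python
import numpy as np

history = []

def log_best(xs, fs):
    """Matches Callable[[list[ndarray], list[float]], None]: called once per
    generation with single-element lists holding the best solution and cost."""
    history.append((xs[0].copy(), fs[0]))

# Simulated invocations, as the optimizer would perform once per generation:
log_best([np.array([0.5, -0.2])], [0.29])
log_best([np.array([0.1, 0.05])], [0.0125])
```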

class ionworkspipeline.optimizers.ScipyLeastSquares(**kwargs: Any)

Nonlinear least squares optimizer using scipy’s Trust Region Reflective algorithm.

This optimizer is designed for problems where the objective returns a residual vector rather than a scalar cost. It minimizes the sum of squares of the residuals. Best suited for well-behaved, smooth problems with a clear residual structure.

Notes

  • Requires objective functions that return an array (residual vector)

  • Automatically handles linear algebra errors by returning NaN values

  • More efficient than general minimization for least-squares structure

  • Supports bound constraints but not general equality/inequality constraints

Parameters

method : str, optional

Algorithm to use. Options: ‘trf’ (default), ‘dogbox’, ‘lm’.

ftol, xtol, gtol : float, optional

Tolerance parameters for convergence criteria.

max_nfev : int, optional

Maximum number of function evaluations.

**kwargs

Additional arguments passed to scipy.optimize.least_squares. See scipy documentation for complete options.

Examples

>>> optimizer = ScipyLeastSquares(method='trf', max_nfev=100)
>>> optimizer.set_objective(sphere_residuals)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)
>>> np.allclose(result.x, [0, 0, 0], atol=1e-3)
True
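The doctest names sphere_residuals, lower, upper, and x0 are not defined on this page. Because **kwargs go to scipy.optimize.least_squares, the residual-vector pattern can be sketched against scipy directly (the stand-in names below are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def sphere_residuals(x):
    """Residual vector r(x) = x, so the sum of squares is the sphere function."""
    return np.asarray(x, dtype=float)

lower, upper = np.full(3, -5.0), np.full(3, 5.0)
x0 = np.array([3.0, -2.0, 1.5])

result = least_squares(
    sphere_residuals,
    x0,
    bounds=(lower, upper),  # 'trf' supports bound constraints
    method="trf",
    max_nfev=100,
)
```

Note that the objective returns an array, not a scalar — this is what distinguishes this optimizer from the scalar minimizers on this page.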

Extends: ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer

run(x0: ndarray) → OptimizeResult

Minimize the sum of squares of the objective residuals.

Parameters

x0

Initial parameter values.

Returns

OptimizeResult

Optimization result with x (solution), cost (final residual norm), success (convergence flag), and other attributes.

class ionworkspipeline.optimizers.ScipyMinimize(**kwargs: Any)

General-purpose scalar minimization with support for constraints.

Wraps scipy’s minimize function, providing access to multiple local optimization algorithms (e.g., L-BFGS-B, SLSQP, trust-constr, COBYQA). Suitable for smooth, scalar-valued objectives with optional equality and inequality constraints.

Notes

  • Requires objective functions that return a scalar value

  • Supports bound constraints and custom equality/inequality constraints

  • Choice of method depends on problem structure and constraint types

  • Some methods (e.g., ‘L-BFGS-B’) support bounds only, not general constraints

Parameters

method : str, optional

Optimization algorithm. Common choices:

  • ‘L-BFGS-B’: Bound-constrained, gradient-based (default for bounded problems)

  • ‘SLSQP’: Sequential Least Squares, supports all constraint types

  • ‘trust-constr’: Modern trust-region method, supports all constraints

  • ‘COBYQA’: Derivative-free, supports nonlinear constraints

maxiter : int, optional

Maximum number of iterations.

tol : float, optional

Tolerance for termination.

**kwargs

Additional arguments passed to scipy.optimize.minimize. See scipy documentation for complete options.

Examples

>>> optimizer = ScipyMinimize(method='L-BFGS-B', options={'maxiter': 100})
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)
>>> np.allclose(result.x, [0, 0, 0], atol=1e-3)
True
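Since **kwargs are forwarded to scipy.optimize.minimize, the constraint support called out in the notes can be sketched against scipy directly. The objective and constraint below are illustrative stand-ins:

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    """Scalar objective: sum of squares."""
    return float(np.sum(x ** 2))

# Inequality constraint x[0] + x[1] >= 1; SLSQP interprets "ineq" as fun(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

result = minimize(
    sphere,
    x0=np.array([2.0, 2.0]),
    method="SLSQP",  # supports both bounds and general constraints
    constraints=constraints,
)
```

The unconstrained minimum at the origin is infeasible here, so the solver lands on the constraint boundary at (0.5, 0.5). With ‘L-BFGS-B’ the constraints argument would be rejected — only bounds are supported, as noted above.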

Extends: ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer

run(x0: ndarray) → OptimizeResult

Minimize a scalar objective function.

Parameters

x0

Initial parameter values.

Returns

OptimizeResult

Optimization result with x (solution), fun (final cost), success (convergence flag), and other method-specific attributes.

class ionworkspipeline.optimizers.ScipyShgo(**kwargs: Any)

Global optimizer using simplicial homology techniques.

SHGO (Simplicial Homology Global Optimization) uses topological techniques to identify and sample from all local minima basins. It’s particularly effective for problems with many local minima and supports general nonlinear constraints.

Notes

  • Deterministic algorithm (reproducible results without random seed)

  • Efficiently handles problems with many local optima

  • Supports bound, equality, and inequality constraints

  • May be slower than stochastic methods for high-dimensional problems

  • Initial guess x0 is ignored; sampling points determined by algorithm

Parameters

n : int, default=100

Number of sampling points used in the algorithm.

iters : int, default=1

Number of iterations for algorithm convergence.

sampling_method : str, default=’simplicial’

Sampling strategy: ‘simplicial’ (default) or ‘sobol’.

minimizer_kwargs : dict, optional

Additional arguments passed to the local minimizer.

**kwargs

Additional arguments passed to scipy.optimize.shgo. See scipy documentation for complete options.

Examples

>>> optimizer = ScipyShgo(n=100, iters=1)
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)  # x0 is ignored
>>> np.allclose(result.x, [0, 0, 0], atol=1e-3)
True
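With **kwargs forwarded to scipy.optimize.shgo, the deterministic call can be sketched against scipy with stand-in names (sphere and the bounds below are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import shgo

def sphere(x):
    """Convex test objective with a single minimum at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

bounds = [(-5.0, 5.0)] * 3

# No seed needed: SHGO's sampling is deterministic for a given configuration.
result = shgo(sphere, bounds, n=100, iters=1, sampling_method="simplicial")
```

Running this twice yields identical results, which is the reproducibility property the notes refer to.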

Extends: ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer

run(x0: ndarray) → OptimizeResult

Minimize objective using SHGO algorithm.

Parameters

x0

Ignored. Sampling points are determined by the algorithm.

Returns

OptimizeResult

Optimization result with x (global minimum), fun (minimum cost), success (convergence flag), and information about local minima found.

class ionworkspipeline.optimizers.ScipyDualAnnealing(**kwargs: Any)

Global stochastic optimizer using dual annealing.

Dual annealing combines generalized simulated annealing with fast local search. It’s designed for global optimization with a good balance between exploration and exploitation, particularly effective for rugged objective landscapes.

Notes

  • Does not support custom equality or inequality constraints

  • Accepts optional initial guess x0 to seed the search

  • Stochastic algorithm (use seed parameter for reproducibility)

  • Generally faster convergence than pure simulated annealing

  • Good choice when gradient information is unavailable

Parameters

maxiter : int, default=1000

Maximum number of global search iterations.

initial_temp : float, default=5230

Initial temperature for the annealing schedule.

restart_temp_ratio : float, default=2e-5

Temperature ratio for restart condition during local search.

visit : float, default=2.62

Parameter for the visiting distribution (higher = more exploration).

accept : float, default=-5.0

Parameter for the acceptance distribution (lower = more exploitation).

seed : int, optional

Random seed for reproducible results.

no_local_search : bool, default=False

If True, skip local minimization (pure generalized simulated annealing).

**kwargs

Additional arguments passed to scipy.optimize.dual_annealing. See scipy documentation for complete options.

Examples

Basic usage with initial guess:

>>> optimizer = ScipyDualAnnealing(maxiter=100, seed=42)
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)
>>> result.fun < 1e-3
True
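As with the other wrappers, **kwargs reach scipy.optimize.dual_annealing, so the seeded-search pattern can be sketched against scipy with stand-in names (sphere, the bounds, and x0 below are assumptions):

```python
import numpy as np
from scipy.optimize import dual_annealing

def sphere(x):
    """Convex test objective with a global minimum of 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

bounds = [(-5.0, 5.0)] * 3
x0 = np.array([4.0, -3.0, 2.0])  # seeds the search; exploration may leave it

result = dual_annealing(sphere, bounds, maxiter=100, seed=42, x0=x0)
```

Unlike differential evolution above, x0 is actually used here as the starting point of the annealing run.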

Extends: ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer

run(x0: ndarray) → OptimizeResult

Minimize objective using dual annealing.

Parameters

x0

Initial guess to seed the search. The algorithm may explore beyond this point during the global search phase.

Returns

OptimizeResult

Optimization result with x (best solution), fun (best cost), success (convergence flag), and annealing statistics.

class ionworkspipeline.optimizers.AskTellOptimizer(method: str = 'CMAES', log_to_screen: bool = False, sigma0: float | ndarray | None = None, max_iterations: int | None = None, max_unchanged_iterations: int | None = None, max_unchanged_iterations_threshold: float | None = None, min_iterations: int = 1, max_evaluations: int = 1000000, population_size: int | None = None, threshold: float | None = None, absolute_tolerance: float = 1e-05, relative_tolerance: float = 0.01, xtol: float | None = 1e-06, population_convergence_tol: float | None = 0.005, flat_fitness_tol: float | None = None, convergence_patience: int = 3, algorithm_options: dict[str, Any] | None = None, async_mode: bool = False, **kwargs: Any)

Optimizer using ask/tell algorithms for population-based and simplex optimization.

Supports CMAES, PSO, DifferentialEvolution, XNES, and Nelder-Mead.
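The ask/tell protocol these algorithms share can be illustrated with a toy stand-in: ask() proposes a population of candidates, the caller evaluates them (possibly in parallel), and tell() reports the costs back. The class below is hypothetical — a random-search placeholder written for this page, not the actual CMAES/PSO/DE/XNES/Nelder-Mead implementations:

```python
import numpy as np

class ToyAskTell:
    """Toy ask/tell algorithm (random search) illustrating the protocol."""

    def __init__(self, lower, upper, population_size=8, seed=0):
        self._rng = np.random.default_rng(seed)
        self._lower = np.asarray(lower, dtype=float)
        self._upper = np.asarray(upper, dtype=float)
        self._pop_size = population_size
        self.best_x, self.best_f = None, np.inf

    def ask(self):
        # Propose a population of candidate points within the bounds.
        return [self._rng.uniform(self._lower, self._upper)
                for _ in range(self._pop_size)]

    def tell(self, xs, fs):
        # Absorb the evaluated costs; a real algorithm updates its state here.
        for x, f in zip(xs, fs):
            if f < self.best_f:
                self.best_x, self.best_f = x, f

def sphere(x):
    return float(np.sum(x ** 2))

algo = ToyAskTell(lower=[-1.0, -1.0], upper=[1.0, 1.0], seed=42)
for _ in range(50):  # one generation per loop iteration
    xs = algo.ask()
    fs = [sphere(x) for x in xs]  # evaluation step; parallelizable
    algo.tell(xs, fs)
```

The async_mode parameter below swaps this batched loop for a buffered ask_one/tell_one variant, but the division of labor — the algorithm proposes, the caller evaluates — is the same.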

Parameters

method : str, optional

Optimization method. Default is “CMAES”. Must be one of: “CMAES”, “Nelder-Mead”, “PSO”, “DifferentialEvolution”, or “XNES”.

log_to_screen : bool, optional

Whether to print optimization progress. Default is False.

sigma0 : float | np.ndarray, optional

Initial step size for population-based methods. Default is None.

max_iterations : int, optional

Maximum number of iterations. Default is None (auto-computed).

max_unchanged_iterations : int, optional

Stop after this many iterations without improvement. Default is None.

max_unchanged_iterations_threshold : float, optional

Threshold for determining improvement. Default is None.

min_iterations : int, optional

Minimum iterations before checking stopping criteria. Default is 1.

max_evaluations : int, optional

Maximum number of function evaluations. Default is 1e6.

population_size : int, optional

Population size for population-based methods. Default is method-specific.

threshold : float, optional

Target objective value to stop optimization. Default is None.

absolute_tolerance : float, optional

Absolute tolerance for unchanged iterations. Default is 1e-5.

relative_tolerance : float, optional

Relative tolerance for unchanged iterations. Default is 1e-2.

xtol : float, optional

Parameter change tolerance. Stops when the L-infinity norm of the change in x_guessed between generations drops below this value. Default is 1e-6.

population_convergence_tol : float, optional

Population convergence tolerance. The exact convergence semantics are algorithm-dependent (e.g. DE also checks position-space diversity). Default is 5e-3.

flat_fitness_tol : float, optional

Flat fitness tolerance. Stops when >= 50% of population members have fitness within this tolerance of the median. Default is None (disabled).

convergence_patience : int, optional

Number of consecutive generations the population convergence check must pass before the optimizer actually stops. Higher values guard against transient convergence signals. Default is 3.

algorithm_options : dict, optional

Algorithm-specific configuration options (for PSO, DE, and CMA-ES). CMA-ES accepts any key from cma.CMAOptions().

async_mode : bool, optional

Use a buffered ask_one/tell_one loop. Default is False.

**kwargs

Additional arguments.

Extends: ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer

property async_mode: bool

Whether true async (submit/wait_next) evaluation is enabled.

property num_workers: int

Number of parallel workers.

property optimizer: Algorithm | None

Access the underlying algorithm instance.

property parallel: bool

Whether parallel evaluation is enabled.

property population_size: int | None

The size of the population.

Returns the live algorithm’s population size when available, falls back to the user-specified value, and finally computes the method-specific default from bounds if both are None. This allows callers (e.g. the backend distributed-evaluator setup) to obtain a concrete value after set_bounds() but before run().

run(x0: list[float]) → OptimizerResult

Run the optimization using the configured algorithm.

Parameters

x0 : list[float]

Initial parameter values.

Returns

iwp.OptimizerResult

Optimization result with optimized parameters and metadata.

set_evaluation_callback(callback: Callable[[list, list], None]) → None

Set a callback function to be called after each batch evaluation.

The callback receives two arguments: the list of positions (xs) and the list of function values (fs) from the batch evaluation.

set_external_evaluator(evaluator: PopulationEvaluator) → None

Set an external evaluator for population-based optimization.

When set, the optimizer will use this evaluator instead of creating its own internal Evaluator.

set_objective(objective: Callable[[list[float]], float]) → None

Set the objective function to be minimized.

class ionworkspipeline.optimizers.PointEstimate

Point estimate optimizer - returns the initial guess without optimization.

Useful for evaluating a single parameter set or initializing pipelines.

Extends: ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer

classmethod from_schema(schema: Any) → PointEstimate

Construct from a validated ionworks_schema schema instance.

Accepts both ionworks_schema.parameter_estimators.PointEstimateOptimizer and DummyOptimizer (which is an alias for PointEstimateOptimizer). Both schemas have no fields, so the schema instance is consumed only for type symmetry with the rest of the optimizer surface.

run(x0: ndarray) → OptimizerResult

Optimize the objective function.

Parameters

x0 : array_like

Initial guess for the independent variables.

Returns

res : ionworkspipeline.OptimizerResult

The result of the optimization.

class ionworkspipeline.optimizers.Dummy(*args, **kwargs)

Alias for PointEstimate optimizer.

Extends: ionworkspipeline.data_fits.parameter_estimators.optimizers.point_estimate_optimizer.PointEstimate