Optimizers¶
- class ionworkspipeline.optimizers.Optimizer(**kwargs: Any)¶
Base class for all optimizers.
Optimizers seek a single optimal point in parameter space that minimizes the objective function.
Parameters¶
- **kwargs
Arguments passed to the underlying optimizer algorithm.
Extends:
ionworkspipeline.data_fits.parameter_estimators.parameter_estimator.ParameterEstimator
- run(x0: ndarray) → OptimizerResult¶
Optimize the objective function.
Parameters¶
- x0 : array_like
Initial guess for the independent variables.
Returns¶
- res : ionworkspipeline.OptimizerResult
The result of the optimization.
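The doctest examples throughout this page reference a few shared fixtures (sphere, lower, upper, x0) that are not defined in the rendered docs. A plausible minimal setup, assuming a three-dimensional sphere objective (these exact values are illustrative, not the library's test fixtures):

```python
import numpy as np

def sphere(x):
    # Convex test objective with its global minimum of 0 at the origin
    return float(np.sum(np.asarray(x) ** 2))

lower = np.array([-5.0, -5.0, -5.0])
upper = np.array([5.0, 5.0, 5.0])
x0 = np.array([1.0, 2.0, -1.5])
```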
- class ionworkspipeline.optimizers.ScipyDifferentialEvolution(**kwargs: Any)¶
Global stochastic optimizer using differential evolution with parallel evaluation.
Differential evolution is a robust global optimization algorithm that evolves a population of candidate solutions across generations. It excels at handling multi-modal, non-convex objective landscapes and requires no gradient information.
Notes¶
Does not support custom equality or inequality constraints
Parallel workers significantly speed up optimization (use DataFit’s num_workers arg)
Initial guess x0 is ignored; initial population is generated from bounds
Polish option is disabled by default, as it typically degrades performance significantly
Callback logs only best solution per generation (not individual evaluations)
Parameters¶
- workers : int, default=1
Number of parallel workers for function evaluations. Use -1 for all CPU cores.
- maxiter : int, default=1000
Maximum number of generations.
- popsize : int, default=15
Population size multiplier (total population = popsize * dimensionality).
- strategy : str, default=’best1bin’
Differential evolution strategy. Options include ‘best1bin’, ‘rand1bin’, ‘best2bin’, ‘rand2bin’, ‘currenttobest1bin’.
- mutation : float or tuple, default=(0.5, 1)
Mutation constant. Can be a float or a (min, max) tuple for adaptive mutation.
- recombination : float, default=0.7
Crossover probability for parameter mixing.
- seed : int, optional
Random seed for reproducible results.
- atol, tol : float, optional
Absolute and relative tolerance for convergence.
- **kwargs
Additional arguments passed to scipy.optimize.differential_evolution. See scipy documentation for complete options.
Examples¶
Basic usage (single worker for doctests):
>>> optimizer = ScipyDifferentialEvolution(maxiter=50, seed=42)
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)
>>> result.fun < 1e-3
True
Integration with DataFit:
>>> optimizer = ScipyDifferentialEvolution(maxiter=500)
>>> isinstance(optimizer, ScipyDifferentialEvolution)
True
Extends:
ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer
- run(x0: ndarray) → OptimizeResult¶
Minimize objective using differential evolution.
Parameters¶
- x0
Ignored. Initial population is randomly generated within bounds.
Returns¶
- OptimizeResult
Optimization result with x (best solution), fun (best cost), success (convergence flag), and generation statistics.
- set_evaluation_callback(callback: Callable[[list[ndarray], list[float]], None] | None = None) → None¶
Configure callback for logging best solution after each generation.
Unlike other optimizers, this only logs the best solution per generation, not individual evaluations (which occur in parallel worker processes).
Parameters¶
- callback
Function receiving lists of parameter vectors and costs. Called with single-element lists containing the generation’s best solution. Set to None to disable callbacks.
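A sketch of a compatible callback that records each generation's best solution. The simulated call at the end mimics what the optimizer would do after a generation; it is not the library invoking it:

```python
history = []

def log_best(xs, fs):
    # xs: list of parameter vectors, fs: list of costs. For this optimizer
    # both are single-element lists holding the generation's best solution.
    history.append((list(xs[0]), float(fs[0])))

# Simulated call with one generation's best, as the optimizer would make it
log_best([[0.1, -0.2]], [0.05])
```

Pass log_best to set_evaluation_callback before calling run; pass None to disable logging again.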
- class ionworkspipeline.optimizers.ScipyLeastSquares(**kwargs: Any)¶
Nonlinear least squares optimizer using scipy’s Trust Region Reflective algorithm.
This optimizer is designed for problems where the objective returns a residual vector rather than a scalar cost. It minimizes the sum of squares of the residuals. Best suited for well-behaved, smooth problems with a clear residual structure.
Notes¶
Requires objective functions that return an array (residual vector)
Automatically handles linear algebra errors by returning NaN values
More efficient than general minimization for least-squares structure
Supports bound constraints but not general equality/inequality constraints
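For illustration, here is a residual-style objective of the kind this optimizer requires, run through scipy.optimize.least_squares directly. The name sphere_residuals matches the fixture assumed in the example below, but this sketch is not the library's own code:

```python
import numpy as np
from scipy.optimize import least_squares

def sphere_residuals(x):
    # Residual vector: the cost minimized is 0.5 * sum(residuals**2),
    # so for the sphere problem the minimum is at the origin
    return np.asarray(x)

result = least_squares(sphere_residuals, x0=[1.0, -1.0, 0.5],
                       method="trf", bounds=(-2.0, 2.0))
```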
Parameters¶
- method : str, optional
Algorithm to use. Options: ‘trf’ (default), ‘dogbox’, ‘lm’.
- ftol, xtol, gtol : float, optional
Tolerance parameters for convergence criteria.
- max_nfev : int, optional
Maximum number of function evaluations.
- **kwargs
Additional arguments passed to scipy.optimize.least_squares. See scipy documentation for complete options.
Examples¶
>>> optimizer = ScipyLeastSquares(method='trf', max_nfev=100)
>>> optimizer.set_objective(sphere_residuals)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)
>>> np.allclose(result.x, [0, 0, 0], atol=1e-3)
True
Extends:
ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer
- class ionworkspipeline.optimizers.ScipyMinimize(**kwargs: Any)¶
General-purpose scalar minimization with support for constraints.
Wraps scipy’s minimize function, providing access to multiple local optimization algorithms (e.g., L-BFGS-B, SLSQP, trust-constr, COBYQA). Suitable for smooth, scalar-valued objectives with optional equality and inequality constraints.
Notes¶
Requires objective functions that return a scalar value
Supports bound constraints and custom equality/inequality constraints
Choice of method depends on problem structure and constraint types
Some methods (e.g., ‘L-BFGS-B’) support bounds only, not general constraints
Parameters¶
- method : str, optional
Optimization algorithm. Common choices:
  - ‘L-BFGS-B’: Bound-constrained, gradient-based (default for bounded problems)
  - ‘SLSQP’: Sequential Least Squares, supports all constraint types
  - ‘trust-constr’: Modern trust-region method, supports all constraints
  - ‘COBYQA’: Derivative-free, supports nonlinear constraints
- maxiter : int, optional
Maximum number of iterations.
- tol : float, optional
Tolerance for termination.
- **kwargs
Additional arguments passed to scipy.optimize.minimize. See scipy documentation for complete options.
Examples¶
>>> optimizer = ScipyMinimize(method='L-BFGS-B', options={'maxiter': 100})
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)
>>> np.allclose(result.x, [0, 0, 0], atol=1e-3)
True
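Because constraint support is the main reason to pick methods like SLSQP, here is how the underlying scipy.optimize.minimize consumes constraint definitions. Assuming ScipyMinimize forwards a constraints keyword through **kwargs (an assumption, not confirmed by this page), the same dict form should apply:

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    return float(np.sum(np.asarray(x) ** 2))

# Equality constraint x[0] + x[1] = 1 in scipy's dict form; SLSQP and
# trust-constr accept these, while L-BFGS-B supports bounds only
constraints = [{"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}]

result = minimize(sphere, x0=np.array([0.0, 0.0]), method="SLSQP",
                  constraints=constraints)
# Constrained minimizer is x = [0.5, 0.5]
```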
Extends:
ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer
- class ionworkspipeline.optimizers.ScipyShgo(**kwargs: Any)¶
Global optimizer using simplicial homology techniques.
SHGO (Simplicial Homology Global Optimization) uses topological techniques to identify and sample from all local minima basins. It’s particularly effective for problems with many local minima and supports general nonlinear constraints.
Notes¶
Deterministic algorithm (reproducible results without random seed)
Efficiently handles problems with many local optima
Supports bound, equality, and inequality constraints
May be slower than stochastic methods for high-dimensional problems
Initial guess x0 is ignored; sampling points determined by algorithm
Parameters¶
- n : int, default=100
Number of sampling points used in the algorithm.
- iters : int, default=1
Number of iterations for algorithm convergence.
- sampling_method : str, default=’simplicial’
Sampling strategy: ‘simplicial’ (default) or ‘sobol’.
- minimizer_kwargs : dict, optional
Additional arguments passed to the local minimizer.
- **kwargs
Additional arguments passed to scipy.optimize.shgo. See scipy documentation for complete options.
Examples¶
>>> optimizer = ScipyShgo(n=100, iters=1)
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)  # x0 is ignored
>>> np.allclose(result.x, [0, 0, 0], atol=1e-3)
True
Extends:
ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer
- run(x0: ndarray) → OptimizeResult¶
Minimize objective using SHGO algorithm.
Parameters¶
- x0
Ignored. Sampling points are determined by the algorithm.
Returns¶
- OptimizeResult
Optimization result with x (global minimum), fun (minimum cost), success (convergence flag), and information about local minima found.
- class ionworkspipeline.optimizers.ScipyDualAnnealing(**kwargs: Any)¶
Global stochastic optimizer using dual annealing.
Dual annealing combines generalized simulated annealing with fast local search. It’s designed for global optimization with a good balance between exploration and exploitation, particularly effective for rugged objective landscapes.
Notes¶
Does not support custom equality or inequality constraints
Accepts optional initial guess x0 to seed the search
Stochastic algorithm (use seed parameter for reproducibility)
Generally faster convergence than pure simulated annealing
Good choice when gradient information is unavailable
Parameters¶
- maxiter : int, default=1000
Maximum number of global search iterations.
- initial_temp : float, default=5230
Initial temperature for the annealing schedule.
- restart_temp_ratio : float, default=2e-5
Temperature ratio for the restart condition during local search.
- visit : float, default=2.62
Parameter for the visiting distribution (higher = more exploration).
- accept : float, default=-5.0
Parameter for the acceptance distribution (lower = more exploitation).
- seed : int, optional
Random seed for reproducible results.
- no_local_search : bool, default=False
If True, skip local minimization (pure generalized simulated annealing).
- **kwargs
Additional arguments passed to scipy.optimize.dual_annealing. See scipy documentation for complete options.
Examples¶
Basic usage with initial guess:
>>> optimizer = ScipyDualAnnealing(maxiter=100, seed=42)
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0)
>>> result.fun < 1e-3
True
Extends:
ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer
- run(x0: ndarray) → OptimizeResult¶
Minimize objective using dual annealing.
Parameters¶
- x0
Initial guess to seed the search. The algorithm may explore beyond this point during the global search phase.
Returns¶
- OptimizeResult
Optimization result with x (best solution), fun (best cost), success (convergence flag), and annealing statistics.
- class ionworkspipeline.optimizers.Pints(method: str = 'CMAES', log_to_screen: bool = False, sigma0: float | ndarray | None = None, max_iterations: int | None = None, max_unchanged_iterations: int | None = None, max_unchanged_iterations_threshold: float | None = None, min_iterations: int = 1, max_evaluations: int = 1000000, population_size: int | None = None, threshold: float | None = None, absolute_tolerance: float = 1e-05, relative_tolerance: float = 0.01, use_f_guessed: bool = False, algorithm_options: dict[str, Any] | None = None, **kwargs: Any)¶
Optimizer using the Pints library: Probabilistic Inference on Noisy Time-Series.
Wraps Pints optimizers. Supports CMAES, PSO, DifferentialEvolution, XNES, SNES, and Nelder-Mead.
Parameters¶
- method : str, optional
Optimization method. Default is “CMAES”. Must be one of: “CMAES”, “Nelder-Mead”, “PSO”, “DifferentialEvolution”, “XNES”, or “SNES”.
- log_to_screen : bool, optional
Whether to print optimization progress. Default is False.
- sigma0 : float | np.ndarray, optional
Initial step size for population-based methods. Default is None.
- max_iterations : int, optional
Maximum number of iterations. Default is None (auto-computed based on problem dimension). Population-based methods use Hansen’s pycma formula; non-population methods use SciPy’s linear scaling.
- max_unchanged_iterations : int, optional
Stop after this many iterations without improvement. Default is None (auto-computed based on problem dimension). Population-based methods use Hansen’s pycma formula; non-population methods use a power-law scaling with the log of max_iterations.
- max_unchanged_iterations_threshold : float, optional
Threshold for determining improvement. Default is 1e-5.
- min_iterations : int, optional
Minimum iterations before checking stopping criteria. Default is 1.
- max_evaluations : int, optional
Maximum number of function evaluations. Default is 1e6.
- population_size : int, optional
Population size for population-based methods. Default is method-specific (CMAES uses 4 + floor(3 * ln(n_params))). Larger populations create more parallel work per iteration, improving cluster utilization when using distributed computing. Consider setting this to match or exceed your cluster’s CPU count for optimal parallelism.
- threshold : float, optional
Target objective value to stop optimization. Default is None.
- absolute_tolerance : float, optional
Absolute tolerance for unchanged iterations. Default is 1e-5.
- relative_tolerance : float, optional
Relative tolerance for unchanged iterations. Default is 1e-2.
- use_f_guessed : bool, optional
Track f_guessed instead of f_best. Default is False.
- algorithm_options : dict, optional
Algorithm-specific configuration options.
For PSO, supported keys are:
- inertia_weight : tuple[float, float] - (w_start, w_end) for linear decay. Enables inertia mode and disables constriction.
- constriction : bool - Use constriction coefficient (default: True)
- c1 : float - Cognitive coefficient (default: 2.05)
- c2 : float - Social coefficient (default: 2.05)
- boundary_handling : str | BoundaryHandling - “absorb”, “reflect”, “random”, “ignore”
- velocity_clamping : str | VelocityClamping - “none”, “fraction”, “adaptive”
- v_max_fraction : float - Fraction of search space for max velocity (default: 0.2)
- max_iterations : int - For adaptive parameter scheduling (default: 1000)
For DifferentialEvolution, supported keys are:
- mutation_strategy : str | MutationStrategy - “rand_1”, “best_1”, “current_to_best_1”, “rand_2”, “best_2”, “current_to_pbest_1” (default: “current_to_pbest_1”)
- crossover_method : str | CrossoverMethod - “binomial”, “exponential” (default: “binomial”)
- F : float - Mutation scale factor (default: 0.5)
- CR : float - Crossover probability (default: 0.5)
- dither : bool | tuple[float, float] - Randomise F per generation for classic strategies. True uses [0.5, 1.0]; pass (low, high) for a custom range (default: True)
- p_best_rate : float - Fraction of top individuals for pbest (default: 0.1)
- archive_size_ratio : float - Archive size as a multiple of the population (default: 1.0)
- memory_size : int - SHADE history entries (default: population_size)
- boundary_handling : str | BoundaryHandling - “absorb”, “reflect”, “random”, “ignore” (default: “reflect”)
- **kwargs
Additional arguments for the Pints optimizer constructor.
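The CMAES default noted under population_size can be computed directly. A small sketch of the 4 + floor(3 * ln(n_params)) formula (the function name here is illustrative, not part of the library):

```python
import math

def cmaes_default_population_size(n_params):
    # Hansen's pycma default: 4 + floor(3 * ln(n_params))
    return 4 + math.floor(3 * math.log(n_params))

# e.g. a 10-parameter fit gets a default population of 10; for a 32-core
# cluster you might override population_size=32 to keep every worker busy
```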
Notes¶
Parallelism is controlled via DataFit’s parallel and num_workers parameters, not through the optimizer directly. DataFit will automatically configure the optimizer’s parallelism settings based on its own configuration.
Uses standard Python multiprocessing with persistent workers for efficient parallel evaluation. For distributed evaluation across multiple nodes, see the backend’s distributed_evaluation module.
Examples¶
>>> optimizer = iwp.optimizers.Pints(method="CMAES", max_iterations=50)
>>> optimizer.set_objective(sphere)
>>> optimizer.set_bounds((lower, upper))
>>> result = optimizer.run(x0=x0)
>>> result.fun < 1e-3
True
Configuring PSO with inertia weight (exploration vs exploitation):
>>> optimizer = iwp.optimizers.Pints(
...     method="PSO",
...     algorithm_options={
...         "inertia_weight": (0.9, 0.4),  # Linear decay from 0.9 to 0.4
...         "boundary_handling": "reflect",
...     }
... )
Using PSO with constriction coefficient (alternative to inertia):
>>> optimizer = iwp.optimizers.Pints(
...     method="PSO",
...     algorithm_options={
...         "constriction": True,
...         "c1": 2.05,
...         "c2": 2.05,
...     }
... )
Parallel evaluation is configured via DataFit, not the optimizer directly:
>>> data_fit = iwp.DataFit(
...     sample_objective,
...     parameters=sample_params_to_fit,
...     parallel=True,
...     num_workers=2,
... )
Extends:
ionworkspipeline.data_fits.parameter_estimators.optimizers.optimizer.Optimizer
- property num_workers: int¶
Number of parallel workers.
- property optimizer: Optimiser | None¶
Access the underlying Pints optimizer instance.
- property parallel: bool¶
Whether parallel evaluation is enabled.
- property population_size: int¶
The size of the population.
- run(x0: list[float]) → OptimizerResult¶
Run the optimization using the configured Pints optimizer.
Parameters¶
- x0 : list[float]
Initial parameter values.
Returns¶
- iwp.OptimizerResult
Optimization result with optimized parameters and metadata.
- set_evaluation_callback(callback: Callable[[list, list], None]) → None¶
Set a callback function to be called after each batch evaluation.
The callback receives two arguments: the list of positions (xs) and the list of function values (fs) from the batch evaluation.
Parameters¶
- callback : Callable[[list, list], None]
Function that takes (xs, fs) and performs logging or other operations.
- set_external_evaluator(evaluator: PopulationEvaluator) → None¶
Set an external evaluator for population-based optimization.
When set, the optimizer will use this evaluator instead of creating its own internal Evaluator. This enables distributed evaluation strategies like SPREAD scheduling across cluster nodes.
Parameters¶
- evaluator : PopulationEvaluator
An object with an evaluate(positions: list[list[float]]) -> list[float] method that evaluates a population of parameter vectors.
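Any object exposing the evaluate contract described above can serve. A minimal duck-typed sketch (SerialEvaluator is a hypothetical name, not a library class):

```python
class SerialEvaluator:
    """Minimal population evaluator: evaluate(positions) -> costs."""

    def __init__(self, objective):
        self.objective = objective

    def evaluate(self, positions):
        # Evaluate each candidate in sequence; a distributed implementation
        # would scatter these evaluations across cluster nodes instead
        return [self.objective(p) for p in positions]
```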
- set_objective(objective: Callable[[list[float]], float]) → None¶
Set the objective function to be minimized.
- class ionworkspipeline.optimizers.PointEstimate¶
Point estimate optimizer - returns the initial guess without optimization.
Useful for evaluating a single parameter set or initializing pipelines.
Extends:
ionworkspipeline.data_fits.parameter_estimators.optimizers.point_estimate_optimizer... wait
Optimize the objective function.
Parameters¶
- x0 : array_like
Initial guess for the independent variables.
Returns¶
- res : ionworkspipeline.OptimizerResult
The result of the optimization.
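Conceptually, PointEstimate's run is a no-op optimization. A paraphrase in plain Python (a sketch of the behavior, not the library's implementation):

```python
def point_estimate_run(objective, x0):
    # "Optimize" by evaluating the objective once at the initial guess
    # and returning that guess unchanged
    return {"x": list(x0), "fun": objective(x0)}
```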
- class ionworkspipeline.optimizers.Dummy(*args, **kwargs)¶
Alias for PointEstimate optimizer.
Extends:
ionworkspipeline.data_fits.parameter_estimators.optimizers.point_estimate_optimizer.PointEstimate