Ionworks Schema API¶
ionworks_schema provides Pydantic schemas for declaratively building
configurations for the ionworkspipeline battery
parameterization and simulation stack. Every runtime class in the pipeline
has a matching schema here, so you can assemble a pipeline configuration from
typed objects, serialize it to a config dict, and submit it to the Ionworks
API without installing the pipeline runtime itself.
For conceptual / mathematical background, see the Technical Guide. For runtime implementations of the same classes, see the ionworkspipeline API.
Core classes¶
The top-level entry points mirror ionworkspipeline’s top-level API.
- class ionworks_schema.BaseSchema¶
Bases: BaseModel

Shared parent of every schema class in this package.

You won’t usually construct BaseSchema directly; you’ll construct one of its concrete subclasses (iws.Pipeline, iws.DataFit, iws.Parameter, …). It provides the .to_config() method every schema uses to produce the dict you submit through ionworks-api, and it rejects unknown fields so typos are caught at construction time instead of being silently lost on the way to the server.

Extends: pydantic.main.BaseModel

- model_config = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'populate_by_name': True, 'validate_assignment': True, 'validate_by_alias': True, 'validate_by_name': True}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- to_config() dict¶
Build the dict you submit through ionworks-api.

This is the bridge between the friendly schema objects you construct in Python (iws.DataFit(...), iws.Normal(...), …) and the JSON-shaped payload the Ionworks API expects when you submit a job. Build your schema, call .to_config(), and pass the result to the API client.

Nested schemas are converted recursively, None fields are dropped so the payload stays minimal, and a "type" key identifying the concrete class is appended for every component that needs to be re-identified at the server.
- run(*args, **kwargs)¶
- class ionworks_schema.Pipeline(elements: dict | None = None, output_file: str | None = None, name: str | None = None, description: str | None = None)
Bases: BaseSchema

A sequence of steps that together produce a parameterised cell model.

Each step (an “element”) does one of: pull parameters from a source (DirectEntry), fit parameters to measured data (DataFit), compute derived quantities (Calculation), or check the fitted parameters against held-out data (Validation). The parameters produced by one element are passed on to the next, so the order matters.

Once you’ve added every element you need, call .to_config() on the pipeline and submit the result through ionworks-api to run the whole job server-side.

Parameters¶
- elements : dict, optional
Mapping of element name to pipeline element (DataFit, DirectEntry, Validation, Calculation, …). The name is used to identify the element in the pipeline report. None is treated as an empty pipeline by to_config.
- output_file : str, optional
Optional file path for persisting the final parameter values produced by the pipeline. If None, parameters are not written to disk.
- name : str, optional
Human-readable name for the pipeline, used in reports and logs.
- description : str, optional
Free-text description of what the pipeline does.
Examples¶
>>> # known parameters (e.g. ambient temperature)
>>> known = iws.direct_entries.DirectEntry(
...     parameters={"Ambient temperature [K]": 298.15},
... )
>>> # fit one parameter against an OCP measurement
>>> obj = iws.objectives.OCPHalfCell(
...     electrode="positive",
...     data_input="path/to/ocp.csv",
... )
>>> fit = iws.DataFit(
...     objectives={"ocp": obj},
...     parameters={"Positive electrode capacity [A.h]": iws.Parameter(
...         "Positive electrode capacity [A.h]", initial_value=3.0, bounds=(2.0, 4.0),
...     )},
... )
>>> # validate the fit against held-out cycling data
>>> val_obj = iws.objectives.CurrentDriven(data_input="path/to/cycle.csv")
>>> val = iws.Validation(objectives={"cycle": val_obj})
>>> pipeline = iws.Pipeline({"known": known, "ocp fit": fit, "validate": val})
>>> config = pipeline.to_config()
>>> # then submit `config` via ionworks-api
Extends: ionworks_schema.base.BaseSchema

See also: ionworkspipeline.Pipeline (runtime implementation).

- to_config() dict
Build the pipeline payload you submit through ionworks-api.

Walks every element you added to elements, serialises it (each element’s own to_config), and tags it with its kind ("entry", "data_fit", "validation", …) so the server knows how to dispatch it. Pass the returned dict to the Ionworks API to run the full pipeline.
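Illustratively, the tagged result might look like the following. Only the per-element kind labels ("entry", "data_fit", "validation") come from the description above; the surrounding layout of the dict is an assumption for illustration, not the actual payload schema.

```python
# Hypothetical sketch of a Pipeline.to_config() payload. Only the "kind"
# labels are taken from the docs; the surrounding layout is assumed.
config_sketch = {
    "elements": {
        "known": {"kind": "entry", "type": "DirectEntry"},
        "ocp fit": {"kind": "data_fit", "type": "DataFit"},
        "validate": {"kind": "validation", "type": "Validation"},
    },
}

# The server dispatches each element on its kind tag, in insertion order.
kinds = [element["kind"] for element in config_sketch["elements"].values()]
```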
- model_config = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'populate_by_name': True, 'validate_assignment': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance.
context: The context.
- class ionworks_schema.DataFit(objectives, source='', parameters=None, cost=None, initial_guesses=None, optimizer=None, cost_logger=None, multistarts=None, num_workers=None, parallel=None, max_batch_size=None, initial_guess_sampler=None, priors=None, options=None)
Bases: BaseSchema

Fit a model’s parameters to measured experimental data.

A DataFit step says: “run these experiments through the model, compare the result to the measurements I supply, and adjust these parameters until the agreement is as good as possible”. One or more objectives describe what experiments to compare against and which measured curves to match. The parameters dict lists which parameters are free to move during the fit, and the optional priors express what you already believe about their plausible values.

The remaining fields (cost, optimizer, initial_guesses, multistarts, …) tune how the fit runs. The defaults are sensible; you only need to set them if you want finer control over the optimisation algorithm, parallelism, or runtime budget.

Parameters¶
- objectives : FittingObjective or DesignObjective or dict[str, FittingObjective | DesignObjective | dict]
What to fit against. Either a single objective (a CurrentDriven, MSMRHalfCell, … from iws.objectives) or a dict of named objectives if the fit spans multiple experiments.
- source : str, optional
Free-text label for the data source (paper, dataset name, instrument). Shown in reports and provenance records.
- parameters : dict[str, Parameter | pybamm.Symbol | callable] | None, optional
Which parameters are being fitted, and (optionally) how they relate to each other through pybamm expressions. At least one of parameters or priors must be set. Each value can be:

- an iwp.Parameter object, e.g. iwp.Parameter("x")
- a pybamm expression, in which case other referenced parameters must also be supplied as iwp.Parameter objects via pybamm.Parameter wrapping. For example:

    {
        "param": 2 * pybamm.Parameter("half-param"),
        "half-param": iwp.Parameter("half-param"),
    }

  works, but {"param": 2 * iwp.Parameter("half-param")} does not.
- a function that constructs a pybamm expression referencing other parameters, which must again be explicitly supplied as iwp.Parameter objects:

    {
        "main parameter": lambda x: (
            pybamm.Parameter("other parameter") * x**2
        ),
        "other parameter": iwp.Parameter("other parameter"),
    }

The dict key does not need to match the underlying pybamm parameter name; DataFit figures out which variable to fit from the iwp.Parameter reference.
- cost : ObjectiveFunction or str or dict or None, optional
How disagreement between model and data is summed up into a single number (e.g. sum-of-squares, log-likelihood). Leave unset for a sensible default.
- initial_guesses : dict[str, float] or list[dict[str, float]] or None, optional
Starting point(s) for the optimiser. One dict applies to every restart; a list of dicts provides one starting point per restart.
- optimizer : ParameterEstimator or dict or None, optional
Which optimisation algorithm to use (e.g. CMAES, PSO, ScipyMinimize). Leave unset for the default.
- cost_logger : BaseSchema or dict or None, optional
Optional logger that records the cost trajectory and parameter values across the fit, for later inspection.
- multistarts : int | None, optional
Number of independent restarts from different initial guesses. More restarts are more robust but take longer.
- num_workers : int | None, optional
Worker processes for running restarts in parallel. None uses all CPU cores; 1 disables parallelism. Not supported on Windows.
- parallel : bool | None, optional
Whether to also parallelise within a single restart (for population-based optimisers). Auto-detected when None.
- max_batch_size : int | None, optional
Cap on how many restarts run together in one batch. Leave unset for an auto-chosen value.
- initial_guess_sampler : DistributionSampler or dict or None, optional
How to spread the multistart guesses across the parameter space (LatinHypercube by default).
- priors : Prior or list[Prior] or dict or None, optional
What you already believe about the parameter values. Acts as a regulariser on the fit. May be supplied alone (the prior names become the fit parameters) or alongside parameters (priors regularise the listed fit parameters).
- options : dict[str, Any] | None, optional
Advanced dict of runtime options: seed for reproducibility, maxiters / maxtime for budgets, and low_memory to trim the log. Defaults are:

    options = {
        # Random seed for reproducibility. Defaults to a seed
        # generated from the current time.
        "seed": iwutil.random.generate_seed(),
        # Reduce log size: only append entries if the cost
        # improves the best-so-far by at least 0.1%. Defaults
        # to True for deterministic optimizers.
        "low_memory": True,
        # Maximum iterations per optimization job.
        "maxiters": None,
        # Maximum wall time (seconds) per job. With multistarts
        # the total may exceed this since many jobs run.
        "maxtime": None,
    }

Note: maxiters and maxtime only take effect when model.convert_to_format == 'casadi'.
Examples¶
>>> # build the schema with the fields you care about
>>> obj = iws.objectives.OCPHalfCell(
...     electrode="positive",
...     data_input="path/to/ocp.csv",
... )
>>> fit = iws.DataFit(
...     objectives={"ocp": obj},
...     parameters={"Q_pe": iws.Parameter(
...         "Positive electrode capacity [A.h]", initial_value=3.0, bounds=(2.0, 4.0),
...     )},
...     priors={"Q_pe": iws.priors.Prior("Q_pe", iws.stats.Normal(3.0, 0.2))},
... )
>>> config = iws.Pipeline({"ocp fit": fit}).to_config()
>>> # then submit `config` via ionworks-api
Extends: ionworks_schema.base.BaseSchema

See also: ionworkspipeline.DataFit (runtime implementation).

- objectives: Annotated[dict[str, Annotated[dict[str, Any] | BaseObjective | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]] | Annotated[dict[str, Any] | BaseObjective | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]
- source: str
- cost: Annotated[dict[str, Any] | BaseSchema | str | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])] | None
- optimizer: Annotated[dict[str, Any] | ParameterEstimator | BaseSchema | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])] | None
- cost_logger: Annotated[dict[str, Any] | BaseSchema | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])] | None
- initial_guess_sampler: Annotated[dict[str, Any] | DistributionSampler | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])] | None
- priors: Annotated[dict[str, Any] | Prior | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])] | list[Annotated[dict[str, Any] | Prior | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]] | None
- wrap_bare_objective()
Wrap a bare objective in a dict, matching ionworkspipeline behavior.
Only applies to DataFit, not ArrayDataFit (which requires a dict keyed by independent variable values).
- validate_parameters_or_priors()
At least one of parameters or priors must be supplied.

The runtime accepts both together (priors then act as regularizers on the listed fit parameters), so we mirror the runtime here rather than enforce a stricter mutual exclusion at the schema boundary.
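The rule this validator enforces can be sketched in isolation. This is illustrative only, not the schema's actual validator; the function name is hypothetical:

```python
def check_parameters_or_priors(parameters, priors):
    """At least one of `parameters` or `priors` must be supplied;
    both together is fine (priors then act as regularisers)."""
    if not parameters and not priors:
        raise ValueError("Supply at least one of `parameters` or `priors`.")

check_parameters_or_priors({"Q": object()}, None)        # ok: parameters only
check_parameters_or_priors(None, [object()])             # ok: priors only
check_parameters_or_priors({"Q": object()}, [object()])  # ok: both together
```

Supplying neither raises a ValueError.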
- model_config = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'populate_by_name': True, 'validate_assignment': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance.
context: The context.
- class ionworks_schema.ArrayDataFit(objectives, source='', parameters=None, cost=None, initial_guesses=None, optimizer=None, cost_logger=None, multistarts=None, num_workers=None, parallel=None, max_batch_size=None, initial_guess_sampler=None, priors=None, options=None)
Bases: DataFit

Fit the same model separately at each value of an independent variable.

Use this when you have one experiment repeated at different conditions (typically temperatures, C-rates, or pulse indices) and you want one fitted parameter set per condition rather than one global fit.

objectives is keyed by the independent variable value ({298.15: ..., 313.15: ...}); each entry is fitted independently and the results can be post-processed to extract how parameters depend on the variable.

All other fields behave the same as DataFit.

Extends: ionworks_schema.data_fit.data_fit.DataFit

See also: ionworkspipeline.ArrayDataFit (runtime implementation).

- objectives: Annotated[dict[Any, Annotated[dict[str, Any] | BaseObjective | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]] | Annotated[dict[str, Any] | BaseObjective | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]
- model_config = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'populate_by_name': True, 'validate_assignment': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance.
context: The context.
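The per-condition structure ArrayDataFit describes, one independent fit per key, can be illustrated with plain dict machinery. This is a conceptual sketch: `fit_one` is a hypothetical stand-in for a full DataFit run at one condition.

```python
def array_fit_sketch(objectives: dict, fit_one) -> dict:
    """One independent fit per independent-variable value: each entry in
    `objectives` (keyed by e.g. temperature) is fitted on its own, giving
    one fitted parameter set per condition rather than one global fit."""
    return {condition: fit_one(data) for condition, data in objectives.items()}

# `fit_one` is a stand-in: here it just averages the "measurements".
results = array_fit_sketch(
    {298.15: [1.0, 2.0, 3.0], 313.15: [2.0, 4.0, 6.0]},
    fit_one=lambda data: sum(data) / len(data),
)
# results maps each condition to its own fitted value: {298.15: 2.0, 313.15: 4.0}
```

The resulting per-condition values can then be post-processed, e.g. to fit an Arrhenius law across temperatures.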
- class ionworks_schema.Parameter(name, initial_value=None, bounds=None, prior=None, normalize=None, check_bounds=None, check_initial_value=None, initial_guess_distribution=None)
Bases: BaseSchema

A model parameter that is free to move during a data fit.

Wrap any quantity you want the fit to adjust in a Parameter so the pipeline knows it’s a free variable: its starting value, its plausible range, and (optionally) what you already believe about its likely value. Anything not wrapped in a Parameter is treated as a fixed input.

Parameters¶
- name : str
The name of the parameter. Should match the name used inside the model (e.g. "Particle diffusion time [s]").
- initial_value : float | None, optional
The value the optimiser starts from. If you leave it unset, the midpoint of finite bounds is used (or 1 when there are no bounds).
- bounds : tuple[float, float] | list[float] | None, optional
(lower, upper) tuple bracketing where you believe the true value lies. Defaults to no bounds. The upper bound must be strictly greater than the lower bound.
- prior : Distribution or Prior or None, optional
A probability distribution describing what you already believe about the parameter. Used by Bayesian and regularised fits.
- normalize : bool | None, optional
Rescale by the initial value before optimisation so the optimiser sees comparable magnitudes. Defaults to True.
- check_bounds : bool | None, optional
Validate the bounds at construction time. Defaults to True.
- check_initial_value : bool | None, optional
Validate that the initial value falls inside the bounds at construction time. Defaults to True.
- initial_guess_distribution : Distribution or None, optional
When running multistart fits, this is the distribution the starting points are drawn from. Defaults to a uniform distribution over the bounds.
Examples¶
>>> # build the parameter with bounds, a prior, and a log transform
>>> raw = iws.Parameter(
...     "Negative particle diffusivity [m2.s-1]",
...     initial_value=1e-14,
...     bounds=(1e-16, 1e-12),
...     prior=iws.stats.LogNormal(mean=-32.2, std=2.0),
... )
>>> param = iws.transforms.Log10(parameter=raw)
>>> # slot it into a DataFit
>>> obj = iws.objectives.Pulse(data_input="path/to/gitt.csv")
>>> fit = iws.DataFit(objectives={"gitt": obj}, parameters={raw.name: param})
Extends: ionworks_schema.base.BaseSchema

See also: ionworkspipeline.Parameter (runtime implementation).

- name: str
- upper_bound_greater_than_lower()
Validate that upper bound is greater than lower bound.
- model_config = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'populate_by_name': True, 'validate_assignment': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance.
context: The context.
- class ionworks_schema.Library
Bases: BaseModel

Material library access.

Exposes the bundled set of reference materials (e.g. graphite, NMC, LFP) via static lookups by name.
Extends: pydantic.main.BaseModel

See also: ionworkspipeline.Library (runtime implementation).

- static get_material(name: str) Material
Return the Material registered under name.

Raises¶
- KeyError
If name is not registered in the library.
- model_config = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class ionworks_schema.Material(name: str, description: str | None = None, parameter_values: dict[str, Any] | None = None, **kwargs)
Bases: BaseModel

Material configuration.

Holds a named parameter set: the three thermodynamic host-site fields for MSMR-style materials, or any other pybamm-compatible parameter values for the material.
Parameters¶
- name : str
Human-readable material identifier (e.g. "NMC - Verbrugge 2017"). Used as the lookup key in the library.
- description : str, optional
Free-text description of the material; typically includes the citation or source the parameter values were taken from.
- parameter_values : dict, optional
Mapping of pybamm parameter names to their values for this material. For MSMR materials typically contains the host-site standard potentials, occupancy fractions, and ideality factors.
Extends: pydantic.main.BaseModel

See also: ionworkspipeline.Material (runtime implementation).

- model_config = {'extra': 'allow'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- name: str
- class ionworks_schema.Validation(objectives: dict[str, Annotated[dict[str, Any] | FittingObjective | DesignObjective | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]], summary_stats: list[Annotated[dict[str, Any] | ObjectiveFunction | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]] | None = None)
Bases: BaseSchema

Check a fitted model against held-out experimental data.

A Validation step takes the parameters produced earlier in the pipeline, simulates the experiments listed in objectives, and compares those simulations to the measured data. The result tells you how well the model generalises beyond the data you fit on.

Each objective describes one comparison (e.g. “current vs. time for this discharge”). The summary_stats list controls which scalar error metrics (RMSE, MAE, max error, …) get reported alongside the full time-series comparison.

Parameters¶
- objectives : dict[str, FittingObjective | DesignObjective | dict]
One entry per experiment you want to compare against. The key is a human-readable label (used in the report); the value is the objective describing what to simulate and what to compare.
- summary_stats : list[ObjectiveFunction | dict], optional
Which scalar error metrics to report (e.g. RMSE(), MAE(), Max()). If you leave this unset, sensible defaults are filled in for fitting-style objectives so the report carries the same physical units as the measurements.
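For reference, the first two metrics named above compute the following. This is a plain-Python sketch of the standard definitions, not the iws.costs implementations:

```python
def rmse(measured, simulated):
    """Root-mean-square error between two equal-length series."""
    n = len(measured)
    return (sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n) ** 0.5

def mae(measured, simulated):
    """Mean absolute error: average magnitude of the residuals."""
    return sum(abs(m - s) for m, s in zip(measured, simulated)) / len(measured)

measured = [4.0, 3.9, 3.8]
simulated = [4.1, 3.9, 3.7]
rmse(measured, simulated)  # both metrics keep the units of the measurements
mae(measured, simulated)   # here, volts
```

Because both keep the measurement's physical units, a validation report in volts stays directly interpretable.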
Examples¶
>>> obj1 = iws.objectives.CurrentDriven(data_input="path/to/cycle_1C.csv")
>>> obj2 = iws.objectives.CurrentDriven(data_input="path/to/cycle_C2.csv")
>>> val = iws.Validation(
...     objectives={"1C": obj1, "C/2": obj2},
...     summary_stats=[iws.costs.RMSE(), iws.costs.MAE()],
... )
>>> config = iws.Pipeline({"validate": val}).to_config()
>>> # then submit `config` via ionworks-api
Extends: ionworks_schema.base.BaseSchema

See also: ionworkspipeline.Validation (runtime implementation).

- objectives: dict[str, Annotated[dict[str, Any] | FittingObjective | DesignObjective | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]]
- summary_stats: list[Annotated[dict[str, Any] | ObjectiveFunction | Any, FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]] | None
- model_config = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'populate_by_name': True, 'validate_assignment': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Args:
self: The BaseModel instance.
context: The context.
Submodule reference¶
- Objectives
- Calculations
AreaToSquareWidthHeight, ArrheniusDiffusivityFromMSMRData, ArrheniusDiffusivityFromMSMRFunction, ArrheniusLogLinear, AverageMSMRParameters, Calculation, CellMass, CyclableLithium, DensityFromVolumeAndMass, DiameterToSquareWidthHeight, DiffusivityDataInterpolant, DiffusivityFromMSMRData, DiffusivityFromMSMRFunction, DiffusivityFromPulse, ElectrodeCapacity, ElectrodeSOH, ElectrodeSOHHalfCell, ElectrodeVolumeFractionFromLoading, ElectrodeVolumeFractionFromPorosity, EntropicChangeDataInterpolant, EntropicChangeFromMSMRFunction, HalfCellNominalCapacity, InitialConcentrationFromInitialStoichiometryHalfCell, InitialSOC, InitialSOCHalfCell, InitialSOCfromMaximumStoichiometry, InitialStoichiometryFromVoltageHalfCell, InitialStoichiometryFromVoltageMSMRHalfCell, InitialVoltageFromConcentration, JellyRollThermalDimensions, LumpedHeatCapacityAndDensity, MSMRElectrodeSOHHalfCell, MSMRFullCellCapacities, MSMRFunction, OCPDataInterpolant, OCPDataInterpolantMSMRExtrapolation, OCPMSMRInterpolant, OpenCircuitLimits, PorosityFromElectrodeVolumeFraction, PouchCellThermalDimensions, SlopesToKnots, SlopesToKnots2D, SpecificHeatCapacity, StoichiometryAtMinimumSOC, StoichiometryLimitsFromCapacity, SurfaceAreaToVolumeRatio
- Data fit
- Direct entries
DirectEntry, DirectEntryFunctionSchema, InitialStateOfCharge, InitialTemperature, InitialVoltage, PiecewiseInterpolation1D, PiecewiseInterpolation2D, arrhenius_butler_volmer_exchange_current_density, arrhenius_electrolyte_conductivity, arrhenius_electrolyte_diffusivity, arrhenius_particle_diffusivity, average_ocp, bruggeman, constant_electrolyte, from_json, landesfeind_electrolyte, li_plating_defaults, lithium_metal_anode, mechanical_degradation_defaults, nyman_electrolyte, one_cm2_cell, sei_defaults, spm_defaults, standard_defaults, temperatures, zero_activation_energy, zero_entropic_change
- Costs
- Priors
- Stats
- Transforms
- Parameter
- Parameter estimators
AskTellOptimizer, CMAES, CMAESOptions, DEOptions, DifferentialEvolution, DummyOptimizer, DummySampler, GridSearchOptimizer, PSO, PSOOptions, ParameterEstimator, PintsOptimizer, PintsSampler, PointEstimateOptimizer, PointEstimateSampler, Sampler, ScipyBasinhopping, ScipyDifferentialEvolution, ScipyDualAnnealing, ScipyLeastSquares, ScipyMinimize, ScipyShgo, XNES
- Distribution samplers
- Library
- Models
- Objective functions
- Validation
- Core