Validation

A class for validating model predictions against experimental data.

class ionworkspipeline.Validation(objectives: dict[str, FittingObjective | DesignObjective], summary_stats: list[ObjectiveFunction] | None = None)

Validate model performance against experimental data.

Evaluates model accuracy by comparing predictions to data using specified objectives and computing summary statistics. Results include time-series comparisons and scalar metrics (RMSE, MAE, etc.).

Parameters

objectives : dict[str, FittingObjective | DesignObjective]

Objectives defining what to validate. Keys are objective names, values are objective instances with data and model configuration.

summary_stats : list[ObjectiveFunction] | None, default=None

Error metrics to compute. For FittingObjective, defaults to [RMSE, MAE, Max]. For DesignObjective, uses metrics defined by the objective.

Examples

Validate voltage predictions:

>>> result = sample_validation.run(sample_parameter_values)
>>> "discharge" in sample_validation.summary_stats
True
>>> sample_validation.summary_stats["discharge"][0]["name"]
'RMSE [mV] (Voltage [V])'

Extends: ionworkspipeline.pipeline.PipelineElement

export_json(filename='validation_results.json', include_plots: bool = True, plots: list[str] | None = None, plot_kwargs: dict | None = None, backend: str = 'bokeh')

Export validation results to JSON format.

Parameters

filename : str, default='validation_results.json'

The name of the JSON file to save.

include_plots : bool, default=True

Whether to include plot data in JSON format. If True, plots are stored as JSON objects that can be recreated.

plots : list[str] | None, default=None

The plots to include in the export. If None, model data and error plots will be included. Valid plots are the same as in export_report():

  • "model data"

  • "error"

  • "log error"

  • "cumulative rmse"

  • "error moving average"

plot_kwargs : dict | None, default=None

Additional keyword arguments for plot creation.

backend : str, default='bokeh'

Plotting backend to use: "bokeh" (default) or "plotly". For bokeh, figures can be embedded with Bokeh.embed.embed_item(). For plotly, figures can be loaded with plotly.io.from_json().

Returns

None

The JSON file is saved to disk at the specified filename.

Notes

The JSON structure is:

{
    "validation_results": {
        "objective_name": {
            "data": {...},  # validation result data
            "plots": [...]  # list of figure JSON
        }
    },
    "summary_stats": {
        "objective_name": [
            {"name": "...", "metric": ...}
        ]
    }
}
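As a sketch of how an exported file can be consumed, the snippet below round-trips a dict following the documented structure through JSON and collects the scalar metrics per objective. The objective name "discharge" and the metric values are illustrative only, not output from the library.

```python
import json

# Hypothetical exported content following the documented JSON structure;
# the objective name and values below are made up for illustration.
exported = {
    "validation_results": {
        "discharge": {
            "data": {"Time [s]": [0.0, 1.0], "Error [mV]": [0.5, -0.3]},
            "plots": [],
        }
    },
    "summary_stats": {
        "discharge": [
            {"name": "RMSE [mV] (Voltage [V])", "metric": 4.2},
        ]
    },
}

# Round-trip through JSON, as reading an export_json() file would
text = json.dumps(exported)
results = json.loads(text)

# Flatten the summary stats into {objective: {metric name: value}}
metrics = {
    objective: {entry["name"]: entry["metric"] for entry in stats}
    for objective, stats in results["summary_stats"].items()
}
print(metrics["discharge"])
```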
export_report(filename='validation_report', include_bokehjs: str | bool = 'cdn', include_plotlyjs: str | bool = 'cdn', backend: str = 'bokeh', plots: list[str] | None = None, plot_kwargs: dict | None = None)

Export a report of the validation results.

Parameters

filename : str, default='validation_report'

The name of the folder where the report will be saved. The main report will be named index.html inside this folder.

include_bokehjs : str | bool, default='cdn'

Whether to include the bokeh.js library in the report (for the bokeh backend). Default is 'cdn', which loads the library from the CDN. This reduces file size but requires an internet connection.

include_plotlyjs : str | bool, default='cdn'

Whether to include the plotly.js library in the report (for the plotly backend). Default is 'cdn', which loads the library from the CDN. This reduces file size but requires an internet connection.

backend : str, default='bokeh'

Plotting backend to use: "bokeh" (default) or "plotly".

plots : list[str] | None, default=None

The plots to include in the report. If None, model data and error plots will be included. Valid plots are:

  • "model data"

    Plots model and data voltage vs time.

  • "error"

    Plots the error (model voltage minus data voltage, in mV) vs time.

  • "log error"

    Plots the log error, log10(abs(model voltage - data voltage)), vs time.

  • "cumulative rmse"

    Plots the cumulative RMSE vs time.

  • "error moving average"

    Plots the moving average of the error vs time, with a default window of 5 points.

plot_kwargs : dict | None, default=None

Additional keyword arguments for plot creation:

  • window_length : int, default=5

    The window length for the moving average in the "error moving average" plot.
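For reference, a minimal sketch of the quantities behind the "cumulative rmse" and "error moving average" plots. This assumes a trailing (not centered) moving average; the exact windowing used by the library is not specified here, so treat the implementation as illustrative.

```python
import math

def moving_average(values, window_length=5):
    # Trailing moving average: each point averages the current value and
    # up to window_length - 1 preceding values (shorter at the start).
    out = []
    for i in range(len(values)):
        window = values[max(0, i - window_length + 1) : i + 1]
        out.append(sum(window) / len(window))
    return out

def cumulative_rmse(errors):
    # RMSE over all points up to and including each time step.
    out = []
    total_sq = 0.0
    for i, e in enumerate(errors, start=1):
        total_sq += e * e
        out.append(math.sqrt(total_sq / i))
    return out

errors_mv = [1.0, -2.0, 2.0, -1.0, 0.0, 3.0]  # illustrative error series in mV
print(moving_average(errors_mv))
print(cumulative_rmse(errors_mv))
```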

run(parameter_values: ParameterValues) -> ParameterValues

Execute validation and compute error metrics.

Parameters

parameter_values : iwp.ParameterValues

Model parameters to use for validation simulations.

Returns

iwp.ParameterValues

Empty ParameterValues (Validation doesn't produce new parameters).

Examples

>>> sample_validation.run(sample_parameter_values)
{}
>>> "discharge" in sample_validation.summary_stats
True

Notes

Results are stored in instance attributes:

validation_results : dict[str, dict[str, float]]

Time-series comparisons by objective name.

For FittingObjective:
  • Contains model vs. data comparisons, with keys like "Model voltage [V]", "Processed data voltage [V]", "Error [mV]", "Time [s]", etc.

For DesignObjective:
  • Contains model time-series outputs like "Time [s]", "Current [A]", and simulated variables.

summary_stats : dict[str, list[dict[str, float]]]

Scalar error metrics by objective name.

For FittingObjective:
  • List of dicts with 'name' (e.g., "RMSE [mV]") and 'metric' (value).

For DesignObjective:
  • Single-item list with dict mapping metric names to values.
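As an illustration of the default FittingObjective summary statistics, the sketch below computes RMSE, MAE, and Max on a voltage error series in mV and returns them in the list-of-dicts shape described above. The data and the exact metric-name strings are assumptions for illustration, not the library's verbatim output.

```python
import math

def summary_metrics(model_mv, data_mv):
    # Sketch of the default FittingObjective metrics (RMSE, MAE, Max),
    # computed on the pointwise error in mV. Names are illustrative.
    errors = [m - d for m, d in zip(model_mv, data_mv)]
    n = len(errors)
    return [
        {"name": "RMSE [mV]", "metric": math.sqrt(sum(e * e for e in errors) / n)},
        {"name": "MAE [mV]", "metric": sum(abs(e) for e in errors) / n},
        {"name": "Max [mV]", "metric": max(abs(e) for e in errors)},
    ]

# Illustrative model/data voltages in mV
stats = summary_metrics([3700.0, 3650.0, 3600.0], [3702.0, 3649.0, 3596.0])
print([s["name"] for s in stats])
```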

to_config() -> dict

Convert the Validation back to parser configuration format.

Returns

dict

Configuration dictionary that can be passed to parse_validation.