Using callbacks in objectives

This notebook explains how to use a callback in an objective function. For details on the Callback class, see the API reference. Potential use cases include:

  • Plotting some outputs at each iteration of the optimization

  • Saving internal variables to plot once the optimization is complete

Some objectives have “internal callbacks” which are not intended to be user facing. These are standard callbacks that can be used to plot the results of an optimization by using DataFit.plot_fit_results(). For user-facing callbacks, users should create their own callback objects and call them directly for plotting, as demonstrated in this notebook.

Creating a custom callback

To implement a custom callback, create a class that inherits from iwp.callbacks.Callback and overrides the relevant hook methods. See the documentation for iwp.callbacks.Callback for more information on the available methods and their expected inputs.

import ionworkspipeline as iwp
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pybamm

class MyCallback(iwp.callbacks.Callback):
    def __init__(self):
        super().__init__()
        # Implement our own iteration counter
        self.iter = 0

    def on_objective_build(self, logs):
        self.data_ = logs["data"]

    def on_run_iteration(self, logs):
        # Print some information at each iteration
        inputs = logs["inputs"]
        V_model = logs["outputs"]["Voltage [V]"]
        V_data = self.data_["Voltage [V]"]

        # Calculate the RMSE; note this is not necessarily the cost function
        # used in the optimization
        rmse = np.sqrt(np.nanmean((V_model - V_data) ** 2))

        print(f"Iteration: {self.iter}, Inputs: {inputs}, RMSE: {rmse}")
        self.iter += 1

    def on_datafit_finish(self, logs):
        self.fit_results_ = logs

    def plot_fit_results(self):
        """
        Plot the fit results.
        """
        data = self.data_
        fit = self.fit_results_["outputs"]

        fit_results = {
            "data": (data["Time [s]"], data["Voltage [V]"]),
            "fit": (fit["Time [s]"], fit["Voltage [V]"]),
        }

        markers = {"data": "o", "fit": "--"}
        colors = {"data": "k", "fit": "tab:red"}
        fig, ax = plt.subplots()
        for name, (t, V) in fit_results.items():
            ax.plot(
                t,
                V,
                markers[name],
                label=name,
                color=colors[name],
                mfc="none",
                linewidth=2,
            )
        ax.grid(alpha=0.5)
        ax.set_xlabel("Time [s]")
        ax.set_ylabel("Voltage [V]")
        ax.legend()

        return fig, ax
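The RMSE computed in on_run_iteration above is a plain NumPy calculation, so it can be sanity-checked in isolation. The voltage arrays below are made-up placeholders, not real simulation outputs; the point is that np.nanmean skips time points where either trace is NaN (e.g. where the model solution terminated early):

```python
import numpy as np

# Hypothetical model and data voltage traces (placeholders for illustration)
V_model = np.array([3.7, 3.6, 3.5, np.nan])
V_data = np.array([3.68, 3.61, 3.52, 3.50])

# Same formula as in MyCallback.on_run_iteration: nanmean ignores the
# final point, where the model trace is NaN
rmse = np.sqrt(np.nanmean((V_model - V_data) ** 2))
print(f"RMSE: {rmse:.5f}")
```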

To use this callback, we generate synthetic data for a current-driven experiment and fit a SPM using the CurrentDriven objective.

model = pybamm.lithium_ion.SPM()
parameter_values = iwp.ParameterValues("Chen2020")
t = np.linspace(0, 3600, 1000)
sim = iwp.Simulation(model, parameter_values=parameter_values, t_eval=t, t_interp=t)
sim.solve()
data = pd.DataFrame(
    {x: sim.solution[x].entries for x in ["Time [s]", "Current [A]", "Voltage [V]"]}
)

# In this example we just fit the diffusivity in the positive electrode
parameters = {
    "Positive particle diffusivity [m2.s-1]": iwp.Parameter("D_s", initial_value=1e-15),
}

# Create the callback
callback = MyCallback()
objective = iwp.objectives.CurrentDriven(
    data, options={"model": model}, callbacks=callback
)
current_driven = iwp.DataFit(objective, parameters=parameters)

# make sure we're not accidentally initializing with the correct values by passing
# them in
params_for_pipeline = {k: v for k, v in parameter_values.items() if k not in parameters}

results = current_driven.run(params_for_pipeline)
Iteration: 0, Inputs: {'D_s': 1e-15}, RMSE: 4388.160815555677
Iteration: 1, Inputs: {'D_s': 1e-15}, RMSE: 4388.160815555677
Iteration: 2, Inputs: {'D_s': 2e-15}, RMSE: 0.06456578933255702
Iteration: 3, Inputs: {'D_s': 0.0}, RMSE: 9999999996.444777
Iteration: 4, Inputs: {'D_s': 1.5000000000001942e-15}, RMSE: 2222.6053772486343
Iteration: 5, Inputs: {'D_s': 2.2500000000000003e-15}, RMSE: 0.051217795421480056
Iteration: 6, Inputs: {'D_s': 2.1844141420086674e-15}, RMSE: 0.054456791196258035
Iteration: 7, Inputs: {'D_s': 2.3500000000000003e-15}, RMSE: 0.046579082886386326
Iteration: 8, Inputs: {'D_s': 2.4500000000000004e-15}, RMSE: 0.0422659538017378
Iteration: 9, Inputs: {'D_s': 2.5500000000000004e-15}, RMSE: 0.038241745217616745
Iteration: 10, Inputs: {'D_s': 2.6500000000000005e-15}, RMSE: 0.034475173077481276
Iteration: 11, Inputs: {'D_s': 2.7500000000000005e-15}, RMSE: 0.030947125624697685
Iteration: 12, Inputs: {'D_s': 2.850000000000001e-15}, RMSE: 0.02761916435096253
Iteration: 13, Inputs: {'D_s': 2.9914213562373103e-15}, RMSE: 0.02324630457710458
Iteration: 14, Inputs: {'D_s': 3.1914213562373104e-15}, RMSE: 0.017631974876678818
Iteration: 15, Inputs: {'D_s': 3.4742640687119296e-15}, RMSE: 0.010649464593192916
Iteration: 16, Inputs: {'D_s': 3.707960674244981e-15}, RMSE: 0.0055876056668946435
Iteration: 17, Inputs: {'D_s': 3.853263628836897e-15}, RMSE: 0.002714271514081071
Iteration: 18, Inputs: {'D_s': 3.9465074658600745e-15}, RMSE: 0.0009693999845395703
Iteration: 19, Inputs: {'D_s': 4.0465074658600746e-15}, RMSE: 0.0008198769405055739
Iteration: 20, Inputs: {'D_s': 4.0004089388974675e-15}, RMSE: 9.824342867984993e-06
Iteration: 21, Inputs: {'D_s': 3.990408938897468e-15}, RMSE: 0.00017313285282699755
Iteration: 22, Inputs: {'D_s': 4.010408938897467e-15}, RMSE: 0.00018484976847656318
Iteration: 23, Inputs: {'D_s': 3.999408938897468e-15}, RMSE: 1.3872395715665963e-05
Iteration: 24, Inputs: {'D_s': 4.001408938897468e-15}, RMSE: 2.530013056959819e-05
Iteration: 25, Inputs: {'D_s': 4.0000589388481905e-15}, RMSE: 7.569959663418887e-06
Iteration: 26, Inputs: {'D_s': 3.99995893884819e-15}, RMSE: 7.777816980568485e-06
Iteration: 27, Inputs: {'D_s': 4.0001589388481906e-15}, RMSE: 7.7789574214264e-06
Iteration: 28, Inputs: {'D_s': 4.0000489388481915e-15}, RMSE: 7.572012830192315e-06
Iteration: 29, Inputs: {'D_s': 4.00006893884819e-15}, RMSE: 7.572131845062754e-06
Iteration: 30, Inputs: {'D_s': 4.000057938848191e-15}, RMSE: 7.569974840192859e-06
Iteration: 31, Inputs: {'D_s': 4.000059938848191e-15}, RMSE: 7.569986746596976e-06
Iteration: 32, Inputs: {'D_s': 4.0000589388481905e-15}, RMSE: 7.569959663418887e-06

Now we use the results to plot the fit at the end of the optimization.

_ = results.plot_fit_results()

Cost logger

The DataFit class has an internal cost logger that records the cost function value at each iteration, which is useful for monitoring the progress of the optimization. It can be accessed via the cost_logger attribute of the DataFit object.
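As a minimal illustration of the idea (a plain-Python sketch, not the actual iwp.CostLogger implementation), a cost logger can be as simple as a mapping from iteration number to cost value:

```python
# Illustrative stand-in for a cost logger: a dict of iteration -> cost.
# This is a sketch of the concept, not the real iwp.CostLogger API.
cost_log = {}

def log_cost(iteration, cost):
    cost_log[iteration] = cost

# Simulated optimizer iterations with a (mostly) decreasing cost
for i, cost in enumerate([4388.2, 0.065, 0.051, 0.0097]):
    log_cost(i, cost)

# With the full history logged, we can inspect the trace afterwards,
# e.g. find the best iteration so far
best_iter = min(cost_log, key=cost_log.get)
print(f"Best cost {cost_log[best_iter]:.4g} at iteration {best_iter}")
```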

By default, the cost logger tracks the cost function value. DataFit.plot_trace can be used to plot the progress once the optimization is complete.

objective = iwp.objectives.CurrentDriven(data, options={"model": model})
current_driven = iwp.DataFit(objective, parameters=parameters)
_ = current_driven.run(params_for_pipeline)
_ = current_driven.plot_trace()

The cost logger can be customized by passing the cost_logger argument to the DataFit object. For example, the following shows how to pass a cost logger that plots the cost function and parameter values every 10 seconds.

current_driven = iwp.DataFit(
    objective,
    parameters=parameters,
    cost_logger=iwp.CostLogger(plot_every=10),
)