Pipelines#

Pipelines let you chain together data fitting, calculations, and validation steps into a single server-side workflow. A pipeline is defined as an ordered dictionary of named elements, each with an element_type and type-specific configuration.

Element types#

| Type | Purpose |
|------|---------|
| entry | Provide initial parameter values to downstream elements |
| data_fit | Fit model parameters to experimental data |
| calculation | Run a calculation (e.g. OCP fitting) |
| validation | Validate a model against experimental data |
Elements run in the order they appear. Later elements can reference results from earlier ones through the existing_parameters field.
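As a sketch of how a later element might consume earlier results, consider a validation element that reuses the parameters fitted by a preceding data_fit element. The list-of-element-names shape of existing_parameters shown here is an assumption, and the element name is hypothetical:

```python
# Hypothetical validation element that pulls fitted parameters from an
# earlier element named "fit data". The exact shape of the
# "existing_parameters" field is an assumption, not a documented contract.
validation_config = {
    "element_type": "validation",
    "existing_parameters": ["fit data"],  # reference results of the data_fit element
    "model": {"type": "SPMe"},
}
```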

Submitting a pipeline#

Entry element#

An entry element seeds the pipeline with known parameter values:

entry_config = {
    "element_type": "entry",
    "values": {
        "Negative particle diffusivity [m2.s-1]": 3.3e-14,
        "Positive particle diffusivity [m2.s-1]": 4e-15,
    },
}

Data-fit element#

A data_fit element optimizes model parameters against uploaded measurement data; reference a measurement stored in Ionworks with the db:<id> prefix:

measurement_id = "..."  # from client.cell_measurement.create()

datafit_config = {
    "element_type": "data_fit",
    "objectives": {
        "test_1C": {
            "objective": "CurrentDriven",
            "model": {"type": "SPMe"},
            "data": f"db:{measurement_id}",
            "parameters": {
                "Ambient temperature [K]": 298.15,
            },
        },
    },
    "parameters": {
        "Negative particle diffusivity [m2.s-1]": {
            "bounds": [1e-14, 1e-13],
            "initial_value": 2e-14,
        },
        "Positive particle diffusivity [m2.s-1]": {
            "bounds": [1e-15, 1e-14],
            "initial_value": 2e-15,
        },
    },
    "cost": {"type": "RMSE"},
    "optimizer": {"type": "ScipyDifferentialEvolution"},
}

Combining elements#

Pass all elements as a dictionary to client.pipeline.create():

pipeline = client.pipeline.create({
    "elements": {
        "entry": entry_config,
        "fit data": datafit_config,
    },
})
print(f"Pipeline ID: {pipeline.id}")

Polling for results#

Pipelines run asynchronously. Poll client.pipeline.get() until the status is completed or failed:

import time

while True:
    pipeline = client.pipeline.get(pipeline.id)
    print(f"Status: {pipeline.status}")

    if pipeline.status == "completed":
        result = client.pipeline.result(pipeline.id)
        print("Fitted parameters:", result.element_results["fit data"])
        break
    elif pipeline.status == "failed":
        print("Pipeline failed:", pipeline.error)
        break

    time.sleep(2)
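The loop above polls indefinitely. For unattended jobs you may prefer to bound the wait; one minimal sketch, where the wait_for_pipeline helper and its timeout and interval parameters are our own additions rather than part of the client API:

```python
import time

def wait_for_pipeline(client, pipeline_id, timeout=600, interval=2):
    """Poll client.pipeline.get() until the run reaches a terminal status.

    Returns the final pipeline object, or raises TimeoutError if neither
    "completed" nor "failed" is seen within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pipeline = client.pipeline.get(pipeline_id)
        if pipeline.status in ("completed", "failed"):
            return pipeline
        time.sleep(interval)
    raise TimeoutError(f"Pipeline {pipeline_id} did not finish within {timeout}s")
```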

Listing pipelines#

# All pipelines
pipelines = client.pipeline.list()

# Filter by project
pipelines = client.pipeline.list(project_id="...", limit=10)

Data sources#

Pipeline elements can load data from several sources:

| Prefix | Description |
|--------|-------------|
| db:<measurement_id> | Measurement stored in Ionworks |
| file:<path> | Local CSV file |
| folder:<path> | Folder of CSV files |
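For example, an objective could load data from a local CSV file instead of a stored measurement by swapping the prefix; the file name below is a placeholder:

```python
# Same objective shape as the data_fit example, but reading from a local
# CSV via the file: prefix. "discharge_1C.csv" is a hypothetical file name.
objective = {
    "objective": "CurrentDriven",
    "model": {"type": "SPMe"},
    "data": "file:discharge_1C.csv",
}
```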

Next steps#

To run forward simulations instead of fitting to data, see Simulations.