Passing Custom Data Types through Steps (Materializers)
How to use materializers to pass custom data types through steps
A ZenML pipeline is built in a data-centric way. The outputs and inputs of steps define how steps are connected and the order in which they are executed. Each step should be considered its own process that reads and writes its inputs and outputs from and to the Artifact Store. This is where Materializers come into play.
A materializer dictates how a given artifact can be written to and retrieved from the artifact store and also contains all serialization and deserialization logic.
Whenever you pass artifacts as outputs from one pipeline step to other steps as inputs, the corresponding materializer for the respective data type defines how this artifact is first serialized and written to the artifact store, and then deserialized and read in the next step.
For most data types, ZenML already includes built-in materializers that automatically handle artifacts of those data types. For instance, all of the examples from the Steps and Pipelines section were using built-in materializers under the hood to store and load artifacts correctly.
However, if you want to pass custom objects between pipeline steps, such as a PyTorch model that does not inherit from torch.nn.Module, then you need to define a custom Materializer to tell ZenML how to handle this specific data type.
Building a Custom Materializer
Base Implementation
Before we dive into how custom materializers can be built, let us briefly discuss how materializers in general are implemented. In the following, you can see the implementation of the abstract base class BaseMaterializer, which defines the interface of all materializers:
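The following is a simplified sketch of that interface; the exact attributes and method signatures may differ slightly between ZenML versions:

```python
from typing import Any, Type

from zenml.enums import ArtifactType


class BaseMaterializer:
    """Defines how an artifact is stored in and loaded from the artifact store."""

    # Data types this materializer can handle, e.g. (pd.DataFrame,).
    ASSOCIATED_TYPES = ()
    # The zenml.enums.ArtifactType assigned to the stored data, set by each subclass.
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def __init__(self, uri: str):
        # Path inside the artifact store where this artifact is persisted.
        self.uri = uri

    def load(self, data_type: Type[Any]) -> Any:
        """Read the artifact from `self.uri` and deserialize it."""
        raise NotImplementedError

    def save(self, data: Any) -> None:
        """Serialize the artifact and write it to `self.uri`."""
        raise NotImplementedError
```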
Which Data Type to Handle?
Each materializer has an ASSOCIATED_TYPES attribute that contains a list of data types that this materializer can handle. ZenML uses this information to call the right materializer at the right time. For example, if a ZenML step returns a pd.DataFrame, ZenML will try to find a materializer that has pd.DataFrame in its ASSOCIATED_TYPES. List the data type of your custom object here to link the materializer to that data type.
What Type of Artifact to Generate
Each materializer also has an ASSOCIATED_ARTIFACT_TYPE attribute, which defines which zenml.enums.ArtifactType is assigned to this data.
In most cases, you should choose either ArtifactType.DATA or ArtifactType.MODEL here. If you are unsure, just use ArtifactType.DATA. The exact choice is not too important, as the artifact type is only used as a tag in some of ZenML’s visualizations.
Where to Store the Artifact
Each materializer has a uri attribute, which is automatically created by ZenML whenever you run a pipeline and points to the directory of a file system where the respective artifact is stored (some location in the artifact store).
How to Store and Retrieve the Artifact
The load() and save() methods define the serialization and deserialization of artifacts: load() defines how data is read from the artifact store and deserialized, while save() defines how data is serialized and written to the artifact store.
You will need to overwrite these methods according to how you plan to serialize your objects. E.g., if you have custom PyTorch classes as ASSOCIATED_TYPES, then you might want to use torch.save() and torch.load() here.
Using a Custom Materializer
ZenML automatically scans your source code for definitions of materializers and registers them for the corresponding data type, so just having a custom materializer definition in your code is enough to enable the respective data type to be used in your pipelines.
Alternatively, you can also explicitly define which materializer to use for a specific step:
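For instance, a materializer can be attached directly where the step is defined. The sketch below uses a hypothetical MyObj class and MyMaterializer (both are introduced in the basic example further down) and assumes the output_materializers argument of the @step decorator; the exact import path and argument name may differ between ZenML versions:

```python
from zenml import step


@step(output_materializers=MyMaterializer)
def my_first_step() -> MyObj:
    # The custom materializer will be used to store this step's output.
    return MyObj("my_object")
```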
Or you can use the configure() method of the step. E.g.:
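A sketch of the same configuration using configure(), again with MyObj and MyMaterializer as placeholders:

```python
from zenml import step


@step
def my_first_step() -> MyObj:
    return MyObj("my_object")


# Attach the materializer to this step's output after the step is defined.
my_first_step.configure(output_materializers=MyMaterializer)
```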
When there are multiple outputs, a dictionary of type {<OUTPUT_NAME>: <MATERIALIZER_CLASS>} can be supplied to .configure(output_materializers=...).
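For example (the step and materializer names here are purely illustrative):

```python
# Hypothetical step with two named outputs, "model" and "metrics".
my_training_step.configure(
    output_materializers={
        "model": MyModelMaterializer,
        "metrics": MyMetricsMaterializer,
    }
)
```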
Note that .configure(output_materializers=...) only needs to be called for the output of the first step that produces an artifact of a given data type; all downstream steps will use the same materializer by default.
Configuring Materializers at Runtime
As briefly outlined in the Runtime Configuration section, which materializer to use for the output of which step can also be configured within YAML config files.
For each output of your steps, you can define custom materializers to handle the loading and saving. You can configure them like this in the config:
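The exact schema depends on your ZenML version, so treat the following only as an illustrative sketch; the key point is that each output of a step gets an entry naming the materializer class and the file it is defined in (all values below are placeholders):

```yaml
steps:
  my_step:
    outputs:
      a:
        materializer:
          name: MyMaterializer
          file: materializers/my_materializer.py
      b:
        materializer:
          name: MyMaterializer
          file: materializers/my_materializer.py
```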
The name of the output can be found in the function declaration, e.g. my_step() -> Output(a=int, b=float) has a and b as available output names.
Similar to other configuration entries, the materializer name
refers to the class name of your materializer, and the file
should contain a path to the module where the materializer is defined.
Basic Example
Let’s see how materialization works with a basic example. Let’s say you have a custom class called MyObj that flows between two steps in a pipeline:
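A minimal sketch of such a pipeline, assuming the decorator-based @step / @pipeline API (the exact import path may differ between ZenML versions):

```python
from zenml import pipeline, step


class MyObj:
    def __init__(self, name: str):
        self.name = name


@step
def step1() -> MyObj:
    """Produce an instance of the custom class."""
    return MyObj("my_object")


@step
def step2(my_obj: MyObj) -> None:
    """Consume the custom object produced by step1."""
    print(f"The following object was passed to this step: `{my_obj.name}`")


@pipeline
def my_pipeline():
    step2(step1())


if __name__ == "__main__":
    my_pipeline()
```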
Running the above without a custom materializer will result in the following error:
zenml.exceptions.StepInterfaceError: Unable to find materializer for output 'output' of type <class '__main__.MyObj'> in step 'step1'. Please make sure to either explicitly set a materializer for step outputs using step.with_return_materializers(...) or registering a default materializer for specific types by subclassing BaseMaterializer and setting its ASSOCIATED_TYPES class variable. For more information, visit https://docs.zenml.io/advanced-guide/pipelines/materializers
The error message basically says that ZenML does not know how to persist the object of type MyObj (how could it? We just created this!). Therefore, we have to create our own materializer. To do this, you can extend the BaseMaterializer by sub-classing it, listing MyObj in ASSOCIATED_TYPES, and overwriting load() and save():
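A sketch of what such a materializer could look like, assuming we simply persist the object’s name attribute as a text file inside the artifact’s directory:

```python
import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class MyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        """Read the stored name from the artifact store and rebuild MyObj."""
        with fileio.open(os.path.join(self.uri, "data.txt"), "r") as f:
            name = f.read()
        return MyObj(name=name)

    def save(self, my_obj: MyObj) -> None:
        """Write the object's name to a text file in the artifact store."""
        with fileio.open(os.path.join(self.uri, "data.txt"), "w") as f:
            f.write(my_obj.name)
```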
Pro-tip: Use the ZenML fileio module to ensure your materialization logic works across artifact stores (local and remote, like S3 buckets).
Now ZenML can use this materializer to handle the outputs and inputs of your custom object. Edit the pipeline as follows to see this in action:
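For example, reusing the names from the sketches above, the custom materializer can be attached explicitly to the producing step before running the pipeline again:

```python
# Explicitly attach the custom materializer to the step that produces MyObj.
step1.configure(output_materializers=MyMaterializer)

if __name__ == "__main__":
    my_pipeline()
```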
Due to the typing of the inputs and outputs and the ASSOCIATED_TYPES attribute of the materializer, you won’t necessarily have to add .configure(output_materializers=MyMaterializer) to the step; it should automatically be detected. It doesn’t hurt to be explicit though.
With the materializer in place, the pipeline will now run as expected.
Skipping Materialization
Skipping materialization might have unintended consequences for downstream tasks that rely on materialized artifacts. Only skip materialization if there is no other way to do what you want to do.
While materializers should in most cases be used to control how artifacts are returned and consumed from pipeline steps, you might sometimes need to have a completely unmaterialized artifact in a step, e.g., if you need to know the exact path to where your artifact is stored.
An unmaterialized artifact is a zenml.materializers.UnmaterializedArtifact. Among other things, it has a uri property that points to the unique path in the artifact store where the artifact is persisted. One can use an unmaterialized artifact by specifying UnmaterializedArtifact as the type in the step:
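A minimal sketch of such a step; the import path follows the zenml.materializers reference above but may differ between ZenML versions:

```python
from zenml import step
from zenml.materializers import UnmaterializedArtifact


@step
def print_artifact_path(my_artifact: UnmaterializedArtifact) -> None:
    """Receive the artifact without loading it and print where it is stored."""
    print(my_artifact.uri)
```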
Example
The following shows an example of how unmaterialized artifacts can be used in the steps of a pipeline. The pipeline we define will consist of four steps:
s1 and s2 produce identical artifacts; however, s3 consumes materialized artifacts while s4 consumes unmaterialized artifacts. s4 can then use the dict_.uri and list_.uri paths directly rather than their materialized counterparts.
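A sketch of what this pipeline could look like, using placeholder data (a dict and a list) and the decorator-based API:

```python
from typing import Dict, List, Tuple

from zenml import pipeline, step
from zenml.materializers import UnmaterializedArtifact


@step
def s1() -> Tuple[Dict[str, str], List[str]]:
    """Produce a dict and a list, materialized as usual."""
    return {"some": "data"}, []


@step
def s2() -> Tuple[Dict[str, str], List[str]]:
    """Produce identical artifacts to s1."""
    return {"some": "data"}, []


@step
def s3(dict_: Dict, list_: List) -> None:
    """Consume materialized artifacts as regular Python objects."""
    assert isinstance(dict_, dict)
    assert isinstance(list_, list)


@step
def s4(dict_: UnmaterializedArtifact, list_: UnmaterializedArtifact) -> None:
    """Consume unmaterialized artifacts and use their storage paths directly."""
    print(dict_.uri)
    print(list_.uri)


@pipeline
def example_pipeline():
    s3(*s1())
    s4(*s2())


if __name__ == "__main__":
    example_pipeline()
```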
Note on using Materializers for Custom Artifact Stores
When creating a custom Artifact Store, you may encounter a situation where the default materializers do not function properly. Specifically, the fileio.open method used in these materializers may not be compatible with your custom store if it is not implemented properly for that store.
In this case, you can create a modified version of the failing materializer that first copies the artifact to a local path and then opens it from there. For example, consider the following implementation of a custom PandasMaterializer that works with a custom artifact store. In this implementation, we copy the artifact to a local path because we want to use the pandas.read_csv method to read it. If we were to use the fileio.open method instead, we would not need to make this copy.
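A simplified sketch of such a materializer, assuming the dataframe is stored as a single data.csv file inside the artifact’s directory and that fileio.copy is used to move files between the artifact store and the local filesystem:

```python
import os
import tempfile
from typing import Type

import pandas as pd

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class CustomPandasMaterializer(BaseMaterializer):
    """Reads and writes pandas dataframes via a local copy of the stored CSV."""

    ASSOCIATED_TYPES = (pd.DataFrame,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[pd.DataFrame]) -> pd.DataFrame:
        """Copy the stored CSV to a local temporary file, then read it with pandas."""
        remote_path = os.path.join(self.uri, "data.csv")
        with tempfile.TemporaryDirectory() as temp_dir:
            local_path = os.path.join(temp_dir, "data.csv")
            fileio.copy(remote_path, local_path)
            return pd.read_csv(local_path)

    def save(self, df: pd.DataFrame) -> None:
        """Write the dataframe to a local CSV and copy it into the artifact store."""
        remote_path = os.path.join(self.uri, "data.csv")
        with tempfile.TemporaryDirectory() as temp_dir:
            local_path = os.path.join(temp_dir, "data.csv")
            df.to_csv(local_path, index=False)
            fileio.copy(local_path, remote_path)
```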
It is worth noting that copying the artifact to a local path may not always be necessary and can potentially be a performance bottleneck.