When would you want to use it?
MLflow Tracking is a popular tool that you would normally use in the iterative ML experimentation phase to track and visualize experiment results. That doesn't mean it cannot be repurposed to track and visualize the results produced by your automated pipeline runs as you transition towards a more production-oriented workflow.

You should use the MLflow Experiment Tracker:

- if you have already been using MLflow to track experiment results for your project and would like to continue doing so as you incorporate MLOps workflows and best practices in your project through ZenML.
- if you are looking for a more visually interactive way of navigating the results produced by your ZenML pipeline runs (e.g. models, metrics, datasets).
- if you or your team already have a shared MLflow Tracking service deployed somewhere on-premise or in the cloud, and you would like to connect ZenML to it to share the artifacts and metrics logged by your pipelines.
How do you deploy it?
The MLflow Experiment Tracker flavor is provided by the MLflow ZenML integration; you need to install it on your local machine to be able to register an MLflow Experiment Tracker and add it to your stack. The MLflow Experiment Tracker can be configured to accommodate the following deployment scenarios:

- Scenario 1: This scenario requires that you use a local Artifact Store alongside the MLflow Experiment Tracker in your ZenML stack. The local Artifact Store comes with limitations regarding what other types of components you can use in the same stack. This scenario should only be used to run ZenML locally and is not suitable for collaborative or production settings. No parameters need to be supplied when configuring the MLflow Experiment Tracker; see the first sketch after this list.
- Scenario 5: This scenario assumes that you have already deployed an MLflow Tracking Server enabled with proxied artifact storage access. There is no restriction regarding what other types of components it can be combined with. This option requires authentication-related parameters to be configured for the MLflow Experiment Tracker, as described under Authentication Methods below.
- Databricks scenario: This scenario assumes that you have a Databricks workspace and want to use the managed MLflow Tracking server it provides. This option also requires authentication-related parameters to be configured for the MLflow Experiment Tracker; a registration sketch follows this list.
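For Scenario 1, here is a minimal sketch of installing the integration and registering a local MLflow Experiment Tracker; the component and stack names (`mlflow_experiment_tracker`, `custom_stack`) are placeholders:

```shell
# Install the MLflow integration so the "mlflow" flavor becomes available
zenml integration install mlflow -y

# Register the experiment tracker; no parameters are needed for the local scenario
zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow

# Add it to a stack that uses the default local orchestrator and artifact store
zenml stack register custom_stack \
    -o default -a default \
    -e mlflow_experiment_tracker \
    --set
```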
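For the Databricks scenario, a sketch of registering the tracker against the workspace-managed server; the host URL and credential values are placeholders, and in practice the credentials are better kept in a ZenML secret (see Authentication Methods below):

```shell
# "databricks" is a special tracking URI value that selects the
# Databricks-managed MLflow Tracking server
zenml experiment-tracker register databricks_mlflow --flavor=mlflow \
    --tracking_uri=databricks \
    --databricks_host=https://<YOUR_WORKSPACE>.cloud.databricks.com \
    --tracking_username=<USERNAME> \
    --tracking_password=<PASSWORD>
```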
Authentication Methods
You need to configure the following credentials for authentication to a remote MLflow tracking server:

- `tracking_uri`: The URL pointing to the MLflow tracking server. If using an MLflow Tracking Server managed by Databricks, then the value of this attribute should be `"databricks"`.
- `tracking_username`: Username for authenticating with the MLflow tracking server.
- `tracking_password`: Password for authenticating with the MLflow tracking server.
- `tracking_token` (in place of `tracking_username` and `tracking_password`): Token for authenticating with the MLflow tracking server.
- `tracking_insecure_tls` (optional): Set to skip verifying the MLflow tracking server SSL certificate.
- `databricks_host`: The host of the Databricks workspace with the MLflow-managed server to connect to. This is only required if the `tracking_uri` value is set to `"databricks"`. More information: Access the MLflow tracking server from outside Databricks.

Either `tracking_token` or `tracking_username` and `tracking_password` must be specified.
Basic Authentication

This option configures the credentials for the MLflow tracking service directly as stack component attributes.

This is not recommended for production settings as the credentials won't be stored securely and will be clearly visible in the stack configuration.
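A sketch of registering the tracker with inline credentials; all values are placeholders:

```shell
# Credentials are stored as plain stack component attributes, visible
# to anyone who can inspect the stack configuration
zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
    --tracking_uri=<TRACKING_URI> \
    --tracking_username=<USERNAME> \
    --tracking_password=<PASSWORD>
```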
Secrets Manager (Recommended)

This method stores the MLflow tracking service credentials securely in a ZenML secret, which you can register with the `zenml secret register` command:
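A sketch, assuming a secret named `mlflow_secret` with hypothetical key names, referenced from the stack component via ZenML's `{{secret.key}}` syntax; the exact secret CLI flags may differ between ZenML versions:

```shell
# Store the credentials in a ZenML secret (name and keys are placeholders)
zenml secret register mlflow_secret \
    --tracking_username=<USERNAME> \
    --tracking_password=<PASSWORD>

# Reference the secret keys instead of passing plain values
zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
    --tracking_uri=<TRACKING_URI> \
    --tracking_username={{mlflow_secret.tracking_username}} \
    --tracking_password={{mlflow_secret.tracking_password}}
```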
How do you use it?
To be able to log information from a ZenML pipeline step using the MLflow Experiment Tracker component in the active stack, you need to enable the experiment tracker using the `@step` decorator. Then use MLflow's logging or auto-logging capabilities as you would normally do, e.g.:
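A minimal sketch of a step that logs to MLflow, assuming a recent ZenML version, scikit-learn installed, and the tracker registered above as `mlflow_experiment_tracker`; the model and metric are illustrative:

```python
import mlflow
import numpy as np
from sklearn.base import ClassifierMixin
from sklearn.linear_model import LogisticRegression
from zenml import step


@step(experiment_tracker="mlflow_experiment_tracker")
def train_model(x_train: np.ndarray, y_train: np.ndarray) -> ClassifierMixin:
    # Enable MLflow auto-logging for scikit-learn: parameters, metrics,
    # and the fitted model are captured automatically
    mlflow.sklearn.autolog()

    model = LogisticRegression().fit(x_train, y_train)

    # Anything logged manually lands in the same MLflow run
    mlflow.log_metric("train_accuracy", model.score(x_train, y_train))
    return model
```

The name passed to `experiment_tracker` must match the name of the MLflow Experiment Tracker component in your active stack.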
Additional configuration
For additional configuration of the MLflow experiment tracker, you can pass `MLFlowExperimentTrackerSettings` to create nested runs or add additional tags to your MLflow runs:
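A sketch of passing these settings to a step, assuming the tracker registered above and the `experiment_tracker.mlflow` settings key (the key format can vary across ZenML versions); the tag values are illustrative:

```python
from zenml import step
from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import (
    MLFlowExperimentTrackerSettings,
)

# Nest each step's MLflow run under the pipeline run and attach extra tags
mlflow_settings = MLFlowExperimentTrackerSettings(
    nested=True,
    tags={"team": "ml-platform"},
)


@step(
    experiment_tracker="mlflow_experiment_tracker",
    settings={"experiment_tracker.mlflow": mlflow_settings},
)
def train_model() -> None:
    ...
```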