The Kubernetes orchestrator is an orchestrator flavor provided by the ZenML `kubernetes` integration that runs your pipelines on a Kubernetes cluster.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the Kubernetes orchestrator if:
- you’re looking for a lightweight way of running your pipelines on Kubernetes.
- you don’t need a UI to list all your pipeline runs.
- you’re not willing to maintain Kubeflow Pipelines on your Kubernetes cluster.
- you’re not interested in paying for managed solutions like Vertex.
How to deploy it
The Kubernetes orchestrator requires a Kubernetes cluster in order to run. There are many ways to deploy a Kubernetes cluster using different cloud providers or on your custom infrastructure, and we can’t possibly cover all of them, but you can check out our cloud guide.
If the above Kubernetes cluster is deployed remotely on the cloud, then another prerequisite for using this orchestrator is to deploy and connect to a remote ZenML server.
How to use it
To use the Kubernetes orchestrator, we need:
- The ZenML `kubernetes` integration installed. If you haven’t done so, run `zenml integration install kubernetes`.
- Docker installed and running.
- kubectl installed.
- A remote artifact store as part of your stack.
- A remote container registry as part of your stack.
- A Kubernetes cluster deployed and the name of your Kubernetes context which points to this cluster. Run `kubectl config get-contexts` to see a list of available contexts.
Once the orchestrator is registered (for example with `zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubernetes`) and added to your active stack along with the remote artifact store and container registry, ZenML will build a Docker image called `<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>` which includes your code and use it to run your pipeline steps in Kubernetes. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
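One common way to customize the image is to pass Docker settings to your pipeline. The snippet below is a minimal sketch, assuming a recent ZenML release where `DockerSettings` is importable from `zenml.config`; the extra requirement shown is purely an illustrative placeholder:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Extra pip requirements to bake into the image that the Kubernetes
# orchestrator uses for the pipeline steps (illustrative example).
docker_settings = DockerSettings(requirements=["scikit-learn"])


@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...
```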
You can now run any ZenML pipeline using the Kubernetes orchestrator:
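As a minimal sketch (assuming a recent ZenML release where `pipeline` and `step` are importable from the top-level `zenml` package; the step and pipeline names are illustrative):

```python
from zenml import pipeline, step


@step
def say_hello() -> str:
    """A trivial step that just returns a greeting."""
    return "Hello from Kubernetes!"


@pipeline
def hello_pipeline() -> None:
    """A minimal pipeline to demonstrate running on the Kubernetes orchestrator."""
    say_hello()


if __name__ == "__main__":
    # With the Kubernetes orchestrator in your active stack, this call builds
    # the Docker image and runs each step as a pod on the configured cluster.
    hello_pipeline()
```

Running the file that contains this code (e.g. `python run.py`, where the filename is up to you) submits the pipeline to your Kubernetes cluster.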
Additional configuration
For additional configuration of the Kubernetes orchestrator, you can pass `KubernetesOrchestratorSettings`, which allows you to configure (among others) the following attributes:
- `pod_settings`: Node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be either specified using the Kubernetes model objects or as dictionaries.
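For example, the following sketch shows how such settings might be passed to a pipeline. It assumes that `KubernetesOrchestratorSettings` can be imported from `zenml.integrations.kubernetes.flavors` and that the settings key for this flavor is "orchestrator.kubernetes"; the node selector and toleration values are purely illustrative:

```python
from kubernetes.client.models import V1Toleration

from zenml import pipeline
from zenml.integrations.kubernetes.flavors import KubernetesOrchestratorSettings

# Pod-level scheduling constraints: node selectors and tolerations can be
# given either as plain dictionaries or as Kubernetes model objects
# (V1Toleration below). All values here are illustrative placeholders.
kubernetes_settings = KubernetesOrchestratorSettings(
    pod_settings={
        "node_selectors": {"node.kubernetes.io/instance-type": "m5.xlarge"},
        "tolerations": [
            V1Toleration(
                key="zenml",
                operator="Equal",
                value="true",
                effect="NoSchedule",
            )
        ],
    }
)


@pipeline(settings={"orchestrator.kubernetes": kubernetes_settings})
def my_kubernetes_pipeline():
    ...
```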