The Tekton orchestrator is an orchestrator flavor provided by the ZenML tekton integration that uses Tekton Pipelines to run your pipelines.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the Tekton orchestrator if:
- you’re looking for a proven production-grade orchestrator.
- you’re looking for a UI in which you can track your pipeline runs.
- you’re already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.
- you’re willing to deploy and maintain Tekton Pipelines on your cluster.
How to deploy it
You’ll first need to set up a Kubernetes cluster and deploy Tekton Pipelines. Depending on your cloud provider (AWS, GCP, or Azure), the steps are as follows:

AWS

- A remote ZenML server. See the deployment guide for more information.
- Have an existing AWS EKS cluster set up.
- Make sure you have the AWS CLI set up.
- Download and install kubectl and configure it to talk to your EKS cluster using the following command:
- Install Tekton Pipelines onto your cluster.
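The kubectl configuration and Tekton install steps above typically look like this (a sketch; <REGION> and <CLUSTER_NAME> are placeholders you must replace, and you may want to pin a specific Tekton release instead of latest):

```shell
# Point kubectl at your EKS cluster (placeholders are assumptions)
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>

# Install Tekton Pipelines onto the cluster (release URL per the Tekton docs)
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
```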
GCP

- A remote ZenML server. See the deployment guide for more information.
- Have an existing GCP GKE cluster set up.
- Make sure you have the Google Cloud CLI set up first.
- Download and install kubectl and configure it to talk to your GKE cluster using the following command:
- Install Tekton Pipelines onto your cluster.
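For GKE, the equivalent commands typically look like this (a sketch; <CLUSTER_NAME>, <ZONE>, and <PROJECT_ID> are placeholders you must replace):

```shell
# Point kubectl at your GKE cluster (placeholders are assumptions)
gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE> --project <PROJECT_ID>

# Install Tekton Pipelines onto the cluster (release URL per the Tekton docs)
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
```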
Azure

- A remote ZenML server. See the deployment guide for more information.
- Have an existing AKS cluster set up.
- Make sure you have the az CLI set up first.
- Download and install kubectl and configure it to talk to your AKS cluster using the following command:
- Install Tekton Pipelines onto your cluster.
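For AKS, the equivalent commands typically look like this (a sketch; <RESOURCE_GROUP> and <CLUSTER_NAME> are placeholders you must replace):

```shell
# Point kubectl at your AKS cluster (placeholders are assumptions)
az aks get-credentials --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME>

# Install Tekton Pipelines onto the cluster (release URL per the Tekton docs)
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
```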
If one or more of the Tekton deployments are not in the Running state, try increasing the number of nodes in your cluster.
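You can check the status of the Tekton deployments with, for example:

```shell
# List the Tekton Pipelines pods; all of them should eventually be Running
kubectl get pods -n tekton-pipelines
```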
ZenML has only been tested with Tekton Pipelines >=0.38.3 and may not work with previous versions.
How to use it
To use the Tekton orchestrator, we need:
- The ZenML tekton integration installed. If you haven’t done so, run zenml integration install tekton.
- Docker installed and running.
- kubectl installed.
- Tekton pipelines deployed on a remote cluster. See the deployment section for more information.
- The name of your Kubernetes context which points to your remote cluster. Run kubectl config get-contexts to see a list of available contexts.
- A remote artifact store as part of your stack.
- A remote container registry as part of your stack.
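With the requirements in place, registering the orchestrator and adding it to a stack typically looks like this (a sketch; the component names are arbitrary, and <KUBERNETES_CONTEXT>, <REMOTE_ARTIFACT_STORE>, and <REMOTE_CONTAINER_REGISTRY> are placeholders you must replace):

```shell
# Register the Tekton orchestrator, pointing it at your remote cluster
zenml orchestrator register tekton_orchestrator \
    --flavor=tekton \
    --kubernetes_context=<KUBERNETES_CONTEXT>

# Register and activate a stack with the orchestrator plus remote
# artifact store and container registry components
zenml stack register tekton_stack \
    -o tekton_orchestrator \
    -a <REMOTE_ARTIFACT_STORE> \
    -c <REMOTE_CONTAINER_REGISTRY> \
    --set
```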
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Tekton. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
Once the orchestrator is part of the active stack, we need to run zenml stack up before running any pipelines. This command forwards a port so that you can view the Tekton UI in your browser.
You can now run any ZenML pipeline using the Tekton orchestrator:
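Assuming a file run.py that defines and calls a ZenML pipeline (the filename is hypothetical), this looks like:

```shell
# Forward the port for the Tekton UI
zenml stack up

# Run the pipeline on the active Tekton stack
python run.py
```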
Additional configuration
For additional configuration of the Tekton orchestrator, you can pass TektonOrchestratorSettings which allows you to configure (among others) the following attributes:
pod_settings
: Node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be either specified using the Kubernetes model objects or as dictionaries.
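As a sketch, pod_settings could be passed like this (the node selector key/value, toleration, and pipeline name are illustrative assumptions):

```python
from kubernetes.client.models import V1Toleration

from zenml import pipeline
from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import (
    TektonOrchestratorSettings,
)

tekton_settings = TektonOrchestratorSettings(
    pod_settings={
        # Plain dictionaries work here as well as Kubernetes model objects
        "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"},
        "tolerations": [
            V1Toleration(
                key="zenml",
                operator="Equal",
                value="true",
                effect="NoSchedule",
            )
        ],
    }
)

# Apply the settings to every step of the pipeline
@pipeline(settings={"orchestrator.tekton": tekton_settings})
def my_pipeline():
    ...
```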