Kaniko Image Builder
How to build container images with Kaniko
The Kaniko image builder is an image builder flavor provided with the ZenML kaniko
integration that uses Kaniko to build container images.
When to use it
You should use the Kaniko image builder if:
- you're unable to install or use Docker on your client machine.
- you're familiar with/already using Kubernetes.
How to deploy it
In order to use the Kaniko image builder, you need a deployed Kubernetes cluster.
How to use it
To use the Kaniko image builder, we need:
- The ZenML kaniko integration installed. If you haven't done so, run:
- kubectl installed.
- A remote container registry as part of your stack.
- By default, the Kaniko image builder transfers the build context using the Kubernetes API. If you instead want to transfer the build context by storing it in the artifact store, you need to register it with the store_context_in_artifact_store attribute set to True. In this case, you also need a remote artifact store as part of your stack.
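The integration install mentioned in the first requirement is typically a one-line ZenML CLI call; a minimal sketch, assuming the standard zenml CLI is installed:

```shell
# Install the ZenML Kaniko integration and its dependencies
zenml integration install kaniko
```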
We can then register the image builder and use it in our active stack:
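A registration sketch using the usual ZenML CLI conventions (the component name and Kubernetes context below are placeholders, and the exact flags should be verified against the ZenML CLI reference):

```shell
# Register the image builder; the name and context are placeholders
zenml image-builder register kaniko_image_builder \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT>

# Add it to your active stack
zenml stack update -i kaniko_image_builder
```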
For more information and a full list of configurable attributes of the Kaniko image builder, check out the API Docs.
Authentication for the container registry and artifact store
The Kaniko image builder will create a Kubernetes pod that runs the build. This build pod needs to be able to pull from/push to certain container registries, and depending on the stack component configuration it also needs to be able to read from the artifact store:
- The pod needs to be authenticated to push to the container registry in your active stack.
- In case the parent image you use in your DockerSettings is stored in a private registry, the pod needs to be authenticated to pull from this registry.
- If you configured your image builder to store the build context in the artifact store, the pod needs to be authenticated to read files from the artifact store storage.
ZenML is not yet able to handle setting all of the credentials for the various combinations of container registries and artifact stores on the Kaniko build pod, which is why you're required to set this up yourself for now. The following section outlines how to handle it in the most straightforward (and probably also most common) scenario: when the Kubernetes cluster you're using for the Kaniko build is hosted on the same cloud provider as your container registry (and potentially the artifact store). For all other cases, check out the official Kaniko repository for more information.
The required setup differs depending on your cloud provider; the AWS, GCP, and Azure cases are covered below.
AWS
- Add permissions to push to ECR by attaching the EC2InstanceProfileForImageBuilderECRContainerBuilds policy to your EKS node IAM role.
- Configure the image builder to set some required environment variables on the Kaniko build pod:
Check out the Kaniko docs for more information.
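A sketch of that configuration; the specific environment variables are an assumption based on the Kaniko-on-EKS setup and should be checked against the Kaniko docs, and the component name and context are placeholders:

```shell
# Set environment variables on the build pod so the AWS SDK inside it
# picks up the EKS node role instead of querying the instance metadata
zenml image-builder register kaniko_image_builder \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
```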
GCP
- Enable workload identity for your cluster.
- Follow the steps described here to create a Google service account, a Kubernetes service account, and an IAM policy binding between them.
- Grant the Google service account permissions to push to your GCR registry and read from your GCP bucket.
- Configure the image builder to run in the correct namespace and use the correct service account:
Check out the Kaniko docs for more information.
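A sketch of that registration, assuming the kaniko flavor exposes kubernetes_namespace and service_account_name attributes (verify the attribute names against the API docs); all values are placeholders:

```shell
# Run the build pod in the namespace and under the Kubernetes service
# account that the workload identity binding was created for
zenml image-builder register kaniko_image_builder \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --kubernetes_namespace=<KUBERNETES_NAMESPACE> \
    --service_account_name=<KUBERNETES_SERVICE_ACCOUNT_NAME>
```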
Azure
- Create a Kubernetes configmap for a Docker config that uses the Azure credentials helper:
- Follow these steps to configure your cluster to use a managed identity.
- Configure the image builder to mount the configmap in the Kaniko build pod:
Check out the Kaniko docs for more information.
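The two configuration steps above can be sketched as follows; the registry hostname is a placeholder, and the volume_mounts/volumes attribute names are assumptions to confirm against the kaniko flavor's API docs:

```shell
# Create a configmap holding a Docker config that delegates credential
# lookup for your ACR registry to the acr-env credentials helper
kubectl create configmap docker-config \
    --from-literal='config.json={ "credHelpers": { "<REGISTRY_NAME>.azurecr.io": "acr-env" } }'

# Mount the configmap into the Kaniko build pod at /kaniko/.docker/
zenml image-builder register kaniko_image_builder \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
    --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
```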
Passing additional parameters to the Kaniko build
If you want to pass additional flags to the Kaniko build, pass them as a JSON string when registering your image builder in the stack:
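A sketch of passing extra executor flags, assuming the attribute is named executor_args (verify against the API docs); the flag shown is just an example Kaniko executor option:

```shell
# Forward additional flags to the Kaniko executor as a JSON list of strings
zenml image-builder register kaniko_image_builder \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --executor_args='["--label", "key=value"]'
```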