Seldon
How to deploy models to Kubernetes with Seldon Core
The Seldon Core Model Deployer is one of the available flavors of the Model Deployer stack component. Provided by the Seldon Core integration, it can be used to deploy and manage models on an inference server running on top of a Kubernetes cluster.
When to use it?
Seldon Core is a production-grade open-source model serving platform. It packs a wide range of features built around deploying models to REST/gRPC microservices, including monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more.
Seldon Core also comes equipped with a set of built-in model server implementations designed to work with standard formats for packaging ML models that greatly simplify the process of serving models for real-time inference.
You should use the Seldon Core Model Deployer:
- If you are looking to deploy your model on a more advanced infrastructure like Kubernetes.
- If you want to handle the lifecycle of the deployed model with no downtime, including updating the runtime graph, scaling, monitoring, and security.
- If you are looking for more advanced API endpoints to interact with the deployed model, including REST and gRPC endpoints.
- If you want more advanced deployment strategies like A/B testing, canary deployments, and more.
- If you need a more complex deployment process that can be customized with an advanced inference graph including custom TRANSFORMER and ROUTER components.
If you are looking for an easier way to deploy your models locally, you can use the MLflow Model Deployer flavor.
How to deploy it?
ZenML provides a Seldon Core flavor built on top of the Seldon Core integration to allow you to deploy and use your models in a production-grade environment. In order to use the integration, you need to install it on your local machine to be able to register a Seldon Core Model Deployer with ZenML and add it to your stack:
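For reference, this is typically done through the ZenML CLI; a minimal sketch of the installation command is shown below.

```shell
# Install the Seldon Core integration and its client requirements
zenml integration install seldon -y
```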
To deploy and make use of the Seldon Core integration we need to have the following prerequisites:
- access to a Kubernetes cluster. The example accepts a `--kubernetes-context` command line argument. This Kubernetes context needs to point to the Kubernetes cluster where Seldon Core model servers will be deployed. If the context is not explicitly supplied to the example, it defaults to using the locally active context.
- Seldon Core needs to be preinstalled and running in the target Kubernetes cluster. Check out the official Seldon Core installation instructions.
- models deployed with Seldon Core need to be stored in some form of persistent shared storage that is accessible from the Kubernetes cluster where Seldon Core is installed (e.g. AWS S3, GCS, Azure Blob Storage, etc.). You can use one of the supported remote storage flavors to store your models as part of your stack.
Since the Seldon Model Deployer is interacting with the Seldon Core model server deployed on a Kubernetes cluster, you need to provide a set of configuration parameters. These parameters are:
- `kubernetes_context`: the Kubernetes context to use to contact the remote Seldon Core installation. If not specified, the current configuration is used.
- `kubernetes_namespace`: the Kubernetes namespace where the Seldon Core deployment servers are provisioned and managed by ZenML. If not specified, the namespace set in the current configuration is used.
- `base_url`: the base URL of the Kubernetes ingress used to expose the Seldon Core deployment servers.
- `secret`: the name of a ZenML secret containing the credentials used by Seldon Core storage initializers to authenticate to the Artifact Store.
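As an illustration, a registration command that wires these parameters together might look like the sketch below; the component name and all values are placeholders to be replaced with your own Kubernetes context, namespace, ingress URL and secret name.

```shell
# Sketch: register a Seldon Core model deployer (all values are placeholders)
zenml model-deployer register seldon_deployer --flavor=seldon \
    --kubernetes_context=<KUBE_CONTEXT> \
    --kubernetes_namespace=<KUBE_NAMESPACE> \
    --base_url=http://<INGRESS_HOST> \
    --secret=<SECRET_NAME>
```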
Configuring Seldon Core in a Kubernetes cluster can be a complex and error-prone process, so we have provided a set of Terraform-based recipes to quickly provision popular combinations of MLOps tools. More information about these recipes can be found in the Open Source MLOps Stack Recipes.
Managing Seldon Core Credentials
The Seldon Core model servers need to access the Artifact Store in the ZenML stack to retrieve the model artifacts. This usually involves passing credentials to the Seldon Core model servers so that they can authenticate with the Artifact Store. In ZenML, this is done by creating a ZenML secret with the proper credentials and configuring the Seldon Core Model Deployer stack component to use it, by passing the `--secret` argument to the CLI command used to register the model deployer. We've already done the latter; now all that is left to do is to configure the `s3-store` ZenML secret specified before as a Seldon Core Model Deployer configuration attribute with the credentials needed by Seldon Core to access the Artifact Store.
The Seldon Core integration provides built-in secret schemas that can be used to configure credentials for the three main types of Artifact Stores supported by ZenML: S3, GCS and Azure. You can use `seldon_s3` for AWS S3, `seldon_gs` for GCS, and `seldon_az` for Azure. To read more about secrets, secret schemas and how they are used in ZenML, please refer to the Secrets Manager documentation.
The following is an example of registering an S3 secret with the Seldon Core model deployer:
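The exact CLI syntax and secret keys depend on the ZenML version and on the `seldon_s3` secret schema, so the sketch below uses illustrative key names; check the schema before relying on them.

```shell
# Sketch only: register an S3 secret named s3-store using the seldon_s3 schema.
# The key names below are illustrative -- the authoritative list of keys is
# defined by the seldon_s3 secret schema in your ZenML version.
zenml secrets-manager secret register -s seldon_s3 s3-store \
    --aws_access_key_id=<YOUR_AWS_ACCESS_KEY_ID> \
    --aws_secret_access_key=<YOUR_AWS_SECRET_ACCESS_KEY>
```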
How do you use it?
We can register the model deployer and use it in our active stack:
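Assuming the model deployer was registered as in the earlier sketch under the name `seldon_deployer`, a stack that uses it can be registered and activated as follows; the other component names are placeholders for components you have already registered.

```shell
# Sketch: register a stack containing the Seldon Core model deployer and set it active
zenml stack register seldon_stack \
    -a <REMOTE_ARTIFACT_STORE> \
    -o <ORCHESTRATOR> \
    -d seldon_deployer \
    --set
```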
The following code snippet shows how to use the Seldon Core Model Deployer to deploy a model inside a ZenML pipeline step:
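The original snippet is not reproduced here; the sketch below shows one way such a step can look, using the built-in `seldon_model_deployer_step` and `SeldonDeploymentConfig` from the integration. The toy training step, model name and resource values are illustrative, and the exact step signature may differ between ZenML versions, so consult the API docs for the version you are using.

```python
from sklearn.linear_model import LogisticRegression
from zenml import pipeline, step
from zenml.integrations.seldon.services import SeldonDeploymentConfig
from zenml.integrations.seldon.steps import seldon_model_deployer_step


@step
def train_model() -> LogisticRegression:
    """Toy training step, included only to keep the sketch self-contained."""
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)
    return LogisticRegression(max_iter=200).fit(X, y)


@pipeline
def seldon_deployment_pipeline():
    model = train_model()

    # Deploy the trained model with Seldon Core (sketch; the exact step
    # signature may differ between ZenML versions -- see the API docs).
    seldon_model_deployer_step(
        model=model,
        service_config=SeldonDeploymentConfig(
            model_name="my-model",            # name in Seldon Core and in ZenML
            replicas=1,                       # number of model server replicas
            implementation="SKLEARN_SERVER",  # built-in Seldon inference server type
            resources={
                "requests": {"cpu": "100m", "memory": "100Mi"},
                "limits": {"cpu": "200m", "memory": "250Mi"},
            },
        ),
    )
```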
Within the `SeldonDeploymentConfig` you can configure:
- `model_name`: the name of the model in the Seldon Core cluster and in ZenML.
- `replicas`: the number of replicas with which to deploy the model.
- `implementation`: the type of Seldon inference server to use for the model. The implementation type can be one of the following: `TENSORFLOW_SERVER`, `SKLEARN_SERVER`, `XGBOOST_SERVER`, `custom`.
- `resources`: the resources to be allocated to the model. This can be configured by passing a dictionary with the `requests` and `limits` keys, whose values are in turn dictionaries with `cpu` and `memory` keys. The values for `cpu` and `memory` are strings specifying the amount of CPU and memory to be allocated to the model.
A concrete example of using the Seldon Core Model Deployer can be found here.
For more information and a full list of configurable attributes of the Seldon Core Model Deployer, check out the API Docs.
Custom Model Deployment
When you have a custom use case where the Seldon Core pre-packaged inference servers cannot cover your needs, you can leverage the language wrappers to containerise your machine learning model(s) and logic. With ZenML's Seldon Core integration, you can create your own custom model deployment code by creating a custom predict function that will be passed to a custom deployment step responsible for preparing a Docker image for the model server.
This `custom_predict` function should take the model and the input data as arguments and return the output data. ZenML will take care of loading the model into memory, starting the `seldon-core-microservice` that will be responsible for serving the model, and running the predict function.
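A minimal sketch of such a function is shown below; the signature and the `Array_Like` alias are illustrative, so compare them with the full code example linked below before using them.

```python
from typing import Any, Dict, List, Union

# Illustrative type alias for the request/response payloads handled by the server.
Array_Like = Union[List[Any], Dict[str, Any]]


def custom_predict(model: Any, request: Array_Like) -> Array_Like:
    """Sketch of a custom predict function.

    `model` is the model object that ZenML has already loaded into memory and
    `request` is the input payload received by the model server.
    """
    # Any custom pre-processing of the request would go here. The sketch
    # assumes the request is a list of feature rows the model can consume.
    inputs = request

    # Run the actual prediction with the loaded model.
    predictions = model.predict(inputs)

    # Any custom post-processing would go here before returning the output.
    return predictions.tolist()
```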
The path to this custom predict function can then be passed to the custom deployment parameters.
The full code example can be found here.
Advanced Custom Code Deployment with Seldon Core Integration
Before creating your custom model class, you should take a look at the custom Python model section of the Seldon Core documentation.
The built-in Seldon Core custom deployment step is a good starting point for deploying your custom models. However, if you want to deploy more than the trained model, you can create your own Custom Class and a custom step to achieve this.
Example of the custom class.
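The example itself is not reproduced here; the sketch below follows the interface that the Seldon Core Python wrapper (the `seldon-core-microservice`) expects from a custom model class. The class name, loading logic and model path are illustrative placeholders.

```python
from typing import Any, List, Optional

import numpy as np


class CustomModelDeployment:
    """Sketch of a custom model class served by seldon-core-microservice.

    The Seldon Core Python wrapper only requires a `predict` method and will
    call `load` (if present) before serving requests; everything else here is
    an illustrative placeholder.
    """

    def __init__(self) -> None:
        self.model: Any = None
        self.ready = False

    def load(self) -> None:
        # Placeholder: load your trained model from the shared persistent
        # storage (e.g. with joblib, pickle, or a framework-specific loader).
        import joblib

        self.model = joblib.load("/mnt/models/model.joblib")  # placeholder path
        self.ready = True

    def predict(
        self,
        X: np.ndarray,
        features_names: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> np.ndarray:
        """Run inference on the incoming request payload."""
        if not self.ready:
            self.load()
        return self.model.predict(X)
```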
The built-in Seldon Core custom deployment step responsible for packaging, preparing and deploying to Seldon Core can be found here.