An MLOps recipe on how to deploy a model for inference into Seldon K8S cluster

May 05, 2020


Here is a quick example from our engineering team on how to deploy a model into Seldon, one of the better tools for serving inference on a Kubernetes (K8S) cluster. Best of all, Seldon is open source, which means it is free to use. The Neu.ro team can help you set up and maintain your K8S cluster for Seldon.
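For context on what "deploying into Seldon" means in practice: on the Kubernetes side it comes down to applying a `SeldonDeployment` custom resource that points at your model's container image. A minimal sketch is below; the resource name, container name, and image path are placeholders, not values from this example.

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: mnist-model          # placeholder deployment name
spec:
  predictors:
  - name: default
    replicas: 1
    graph:
      name: classifier       # must match the container name below
      type: MODEL
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: registry.example.com/mnist-classifier:0.1  # placeholder image
```

Applying this with `kubectl apply -f` asks Seldon Core to create the service and expose the model's HTTP endpoint.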

The example walks through deploying a vanilla MNIST model and covers the following steps:

  1. Prepare and train a basic model on Neu.ro;
  2. Wrap the model into an inference HTTP server;
  3. Test inference on Neu.ro;
  4. Launch production inference on existing Seldon Core.
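Step 2 above, wrapping the model into an inference HTTP server, is where Seldon's Python wrapper convention comes in: you provide a class with a `predict` method, and `seldon-core-microservice <ClassName>` serves it over HTTP. A minimal sketch follows; the class name and the commented-out model path are assumptions for illustration, and the stand-in inference logic replaces the real trained model so the snippet stays self-contained.

```python
import numpy as np


class MnistModel:
    """Minimal Seldon Core Python wrapper (a sketch, not the exact
    class from this example). Seldon's Python server discovers the
    `predict` method and exposes it over HTTP."""

    def __init__(self):
        # In a real deployment you would load the weights trained in
        # step 1 here, e.g. (hypothetical path):
        # self.model = tf.keras.models.load_model("mnist.h5")
        self.n_classes = 10

    def predict(self, X, feature_names=None):
        # X arrives as a numpy array, one flattened 28x28 image per row.
        # Stand-in inference: uniform class probabilities, so the example
        # runs without the trained model; a real wrapper would call
        # self.model here instead.
        X = np.asarray(X)
        batch = X.shape[0] if X.ndim > 1 else 1
        return np.full((batch, self.n_classes), 1.0 / self.n_classes)
```

Once packaged into a container, the server is started with `seldon-core-microservice MnistModel`, and the same class can be smoke-tested locally before launching production inference in step 4.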
