An MLOps recipe: deploying a model for inference to a Seldon Kubernetes cluster
May 05, 2020
Here is a quick example from our engineering team on how to deploy a model into Seldon, one of the better tools for serving inference workloads in a Kubernetes (K8s) cluster. Best of all, Seldon Core is open source, which means it is free to use. The Neu.ro team can help you set up and maintain your K8s cluster for Seldon.
The example deploys a vanilla MNIST model and walks through the following steps:
- Prepare and train a basic model on Neu.ro;
- Wrap the model into an inference HTTP server;
- Test inference on Neu.ro;
- Launch production inference on an existing Seldon Core installation.
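The wrapping step above is the one Seldon-specific piece of work: Seldon Core's Python server expects a model class that exposes a `predict` method, which it calls for each inference request. Below is a minimal sketch of such a wrapper; the class name, the weights-loading path, and the stand-in uniform-probability output are illustrative, not taken from the example project.

```python
import numpy as np


class MnistClassifier:
    """Minimal Seldon-style model wrapper (illustrative sketch).

    Seldon Core's Python microservice instantiates this class once
    and invokes predict() for every incoming inference request.
    """

    def __init__(self):
        # In a real deployment, load the trained weights here, e.g.
        # self.model = load_model("mnist.h5")  # hypothetical path
        self.n_classes = 10

    def predict(self, X, features_names=None):
        # X arrives as an array of flattened 28x28 images.
        # A trained model would return its softmax output; here we
        # return uniform class probabilities as a placeholder.
        X = np.asarray(X)
        batch = X.shape[0] if X.ndim > 1 else 1
        return np.full((batch, self.n_classes), 1.0 / self.n_classes)
```

A wrapper like this is typically served with the `seldon-core-microservice` CLI (e.g. `seldon-core-microservice MnistClassifier`), which exposes the `predict` method over HTTP so it can be tested locally before being deployed to the cluster.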