Seldon Core Platform
A Seldon Core Platform is an open-source ML model deployment system for serving machine learning models on a Kubernetes cluster.
- Context:
- ...
- Example(s):
- Seldon, v2.2.0 [1] (~2023-02-06).
- Seldon, v1.4.0 [2] (~2020-10-26).
- Seldon, v1.1.0 [3] (~2019-12-18).
- …
- Counter-Example(s):
- See: Kubeflow, ML Model Deployment System.
References
2022
- chat
- Seldon is an open-source platform that helps deploy and manage machine learning models. It provides a scalable and secure infrastructure for deploying models and allows data scientists and engineers to focus on model development and experimentation without worrying about the underlying infrastructure.
Seldon can serve as the deployment component in an ML development and deployment environment. Seldon can help by providing:
- Model serving: Seldon provides a scalable and efficient way to serve machine learning models in production. It supports models built with frameworks such as TensorFlow, PyTorch, and scikit-learn, and runs on Kubernetes alongside existing deployment tooling (see the serving sketch after this list).
- A/B testing and canary releases: Seldon allows for A/B testing and canary releases to test new versions of models in production. It provides a way to compare the performance of different models and make data-driven decisions on which model to use.
- Explaining and interpreting models: Seldon provides a way to explain and interpret machine learning models. It supports multiple explainability techniques such as SHAP, LIME, and TreeExplainer, and provides visualizations to help understand how the models make predictions.
- Metrics and monitoring: Seldon provides metrics and monitoring capabilities to track the performance of deployed models in production. It can integrate with popular monitoring tools such as Prometheus and Grafana.
- Overall, Seldon can help streamline the deployment and management of machine learning models, allowing data scientists and engineers to focus on model development and experimentation while ensuring a scalable and secure infrastructure for model deployment.
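The "Model serving" bullet above can be made concrete with Seldon Core's Python language wrapper, where a model is exposed as a class with a predict method, containerised, and then referenced from a SeldonDeployment resource. The sketch below is a minimal illustration under stated assumptions: a scikit-learn model serialized to model.joblib (a hypothetical file name) and the seldon-core Python package available in the serving image.

```python
# MyModel.py -- minimal Seldon Core Python language-wrapper sketch.
# Assumes a scikit-learn model serialized to "model.joblib" (hypothetical name)
# and the seldon-core package installed in the serving image.
import joblib


class MyModel:
    def __init__(self):
        # Load the trained model once when the serving container starts.
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # Seldon calls predict() with an array of feature rows and returns
        # the model's output for the REST/gRPC response.
        return self.model.predict(X)
```

Once the class is containerised (for example with s2i or a plain Dockerfile), it can be referenced from a SeldonDeployment, which provides the serving endpoints, A/B routing, and monitoring hooks described above.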
2019
- https://docs.seldon.io/projects/seldon-core/en/latest/
- QUOTE: ... Seldon Core is an open source platform for deploying machine learning models on a Kubernetes cluster.
- Deploy machine learning models in the cloud or on-premise.
- Get metrics and ensure proper governance and compliance for your running machine learning models.
- Create powerful inference graphs made up of multiple components.
- Provide a consistent serving layer for models built using heterogeneous ML toolkits.
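As a hedged illustration of the "inference graphs" point in the quote above: with the Python wrapper, non-model graph nodes such as transformers are also plain classes, while the graph topology (e.g. a transformer feeding a model) is declared in the SeldonDeployment resource rather than in code. The component below is a minimal sketch of a TRANSFORMER-style node; the scaling constants are hypothetical stand-ins for values fitted offline.

```python
# FeatureScaler.py -- sketch of a TRANSFORMER node for a Seldon inference graph.
# The per-feature mean/scale values are hypothetical; in practice they would be
# fitted during training and loaded in __init__.
import numpy as np


class FeatureScaler:
    def __init__(self):
        self.mean = np.array([0.0, 0.0, 0.0])
        self.scale = np.array([1.0, 1.0, 1.0])

    def transform_input(self, X, feature_names=None):
        # Seldon calls transform_input() on TRANSFORMER nodes before the request
        # reaches the downstream MODEL node in the graph.
        return (np.asarray(X) - self.mean) / self.scale
```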
2020
- https://docs.seldon.io/projects/seldon-core/en/latest/workflow/github-readme.html
- QUOTE: ... With over 2M installs, Seldon Core is used across organisations to manage large scale deployment of machine learning models, and key benefits include:
- Easy way to containerise ML models using our language wrappers or pre-packaged inference servers.
- Out of the box endpoints which can be tested through Swagger UI, Seldon Python Client or Curl / GRPCurl
- Cloud agnostic and tested on AWS EKS, Azure AKS, Google GKE, Alicloud, Digital Ocean and Openshift.
- Powerful and rich inference graphs made out of predictors, transformers, routers, combiners, and more.
- A standardised serving layer across models from heterogeneous toolkits and languages.
- Advanced and customisable metrics with integration to Prometheus and Grafana.
- Full auditability through model input-output request logging integration with Elasticsearch.
- Microservice tracing through integration to Jaeger for insights on latency across microservice hops.
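The "out of the box endpoints" bullet above can be exercised through Swagger UI, curl/grpcurl, or the Seldon Python client. The snippet below is a minimal sketch using seldon_core.seldon_client.SeldonClient, assuming an existing SeldonDeployment named my-model in the seldon namespace reachable through an Istio ingress at localhost:8003 (all placeholder values).

```python
# Query a deployed SeldonDeployment's REST endpoint with the Seldon Python client.
# "my-model", "seldon", and "localhost:8003" are placeholders for an existing
# deployment, its namespace, and the ingress gateway address.
import numpy as np
from seldon_core.seldon_client import SeldonClient

sc = SeldonClient(
    deployment_name="my-model",       # name of the SeldonDeployment
    namespace="seldon",               # namespace it was created in
    gateway="istio",                  # route through the Istio ingress gateway
    gateway_endpoint="localhost:8003",
    transport="rest",
)

# Send one row of three features and print whether the call succeeded
# along with the prediction returned by the inference graph.
response = sc.predict(data=np.array([[0.1, 0.2, 0.3]]))
print(response.success, response.response)
```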