An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models

Overview

Seldon Core: Blazing Fast, Industry-Ready ML

An open source platform to deploy your machine learning models on Kubernetes at massive scale.

Seldon Core converts your ML models (TensorFlow, PyTorch, H2O, etc.) or language wrappers (Python, Java, etc.) into production REST/gRPC microservices.

Seldon handles scaling to thousands of production machine learning models and provides advanced machine learning capabilities out of the box including Advanced Metrics, Request Logging, Explainers, Outlier Detectors, A/B Tests, Canaries and more.

High Level Features

With over 2M installs, Seldon Core is used across organisations to manage large-scale deployments of machine learning models.

Getting Started

Deploying your models with Seldon Core is simplified through our pre-packaged inference servers and language wrappers. Below you can see how to deploy our "hello world" Iris example. You can find more details on these workflows in our Documentation Quickstart.

Install Seldon Core

Quick install using Helm 3 (you can also use Kustomize):

kubectl create namespace seldon-system

helm install seldon-core seldon-core-operator \
    --repo https://storage.googleapis.com/seldon-charts \
    --set usageMetrics.enabled=true \
    --namespace seldon-system \
    --set istio.enabled=true
    # You can set ambassador instead with --set ambassador.enabled=true

Deploy your model using pre-packaged model servers

We provide optimized model servers for some of the most popular Deep Learning and Machine Learning frameworks that allow you to deploy your trained model binaries/weights without having to containerize or modify them.

You only have to upload your model binaries into your preferred object store; in this case we have a trained scikit-learn iris model in a Google Cloud Storage bucket:

gs://seldon-models/v1.12.0-dev/sklearn/iris/model.joblib

Create a namespace to run your model in:

kubectl create namespace seldon

We can then deploy this model with Seldon Core to our Kubernetes cluster using the pre-packaged model server for scikit-learn (SKLEARN_SERVER) by running the kubectl apply command below:

$ kubectl apply -f - << END
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: seldon
spec:
  name: iris
  predictors:
  - graph:
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/v1.12.0-dev/sklearn/iris
      name: classifier
    name: default
    replicas: 1
END

Send API requests to your deployed model

Every model deployed exposes a standardised user interface for sending requests, based on our OpenAPI schema.

This can be accessed through the endpoint http://<ingress>/seldon/<namespace>/<model-name>/api/v1.0/doc/ which will allow you to send requests directly through your browser.

Alternatively, you can send requests programmatically using our Seldon Python Client or a CLI tool such as curl:

$ curl -X POST http://<ingress>/seldon/seldon/iris-model/api/v1.0/predictions \
    -H 'Content-Type: application/json' \
    -d '{ "data": { "ndarray": [[1,2,3,4]] } }'

{
   "meta" : {},
   "data" : {
      "names" : [
         "t:0",
         "t:1",
         "t:2"
      ],
      "ndarray" : [
         [
            0.000698519453116284,
            0.00366803903943576,
            0.995633441507448
         ]
      ]
   }
}
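The probabilities in the response map to the three iris classes. A minimal sketch of parsing the prediction programmatically, using only the standard library and the response body shown above:

```python
import json

# Prediction response body as returned above
response_body = """
{
   "meta": {},
   "data": {
      "names": ["t:0", "t:1", "t:2"],
      "ndarray": [[0.000698519453116284, 0.00366803903943576, 0.995633441507448]]
   }
}
"""

response = json.loads(response_body)
probabilities = response["data"]["ndarray"][0]

# Pick the index of the most probable class
predicted_index = max(range(len(probabilities)), key=probabilities.__getitem__)
print(predicted_index)  # 2
```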

Deploy your custom model using language wrappers

For more custom deep learning and machine learning use-cases which have custom dependencies (such as 3rd party libraries, operating system binaries or even external systems), we can use any of the Seldon Core language wrappers.

You only have to write a class wrapper that exposes the logic of your model; for example in Python we can create a file Model.py:

import pickle

class Model:
    def __init__(self):
        # Load the trained model once at startup
        with open("model.pickle", "rb") as f:
            self._model = pickle.load(f)

    def predict(self, X, features_names=None):
        # Delegate to the underlying scikit-learn model
        return self._model.predict(X)
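The wrapper contract can be smoke-tested locally before containerising. A minimal sketch, assuming a stub in place of the pickled scikit-learn model (StubModel and the optional constructor argument are illustrative, not part of Seldon's API):

```python
class StubModel:
    """Stand-in for the unpickled scikit-learn model."""
    def predict(self, X):
        # Pretend every input row belongs to class 2
        return [2 for _ in X]


class Model:
    def __init__(self, model=None):
        # In production this would load the real binary, e.g.
        # pickle.load(open("model.pickle", "rb"))
        self._model = model if model is not None else StubModel()

    def predict(self, X, features_names=None):
        # Seldon's Python wrapper invokes predict() on each request
        return self._model.predict(X)


print(Model().predict([[1, 2, 3, 4]]))  # [2]
```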

We can now containerize our class file using the Seldon Core s2i utils to produce the sklearn_iris image:

s2i build . seldonio/seldon-core-s2i-python3:0.18 sklearn_iris:0.1

We can now deploy it to our Kubernetes cluster with Seldon Core:

$ kubectl apply -f - << END
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: model-namespace
spec:
  name: iris
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: sklearn_iris:0.1
    graph:
      name: classifier
    name: default
    replicas: 1
END

Send API requests to your deployed model

Every model deployed exposes a standardised user interface for sending requests, based on our OpenAPI schema.

This can be accessed through the endpoint http://<ingress>/seldon/<namespace>/<model-name>/api/v1.0/doc/ which will allow you to send requests directly through your browser.

Alternatively, you can send requests programmatically using our Seldon Python Client or a CLI tool such as curl:

$ curl -X POST http://<ingress>/seldon/model-namespace/iris-model/api/v1.0/predictions \
    -H 'Content-Type: application/json' \
    -d '{ "data": { "ndarray": [[1,2,3,4]] } }' | json_pp

{
   "meta" : {},
   "data" : {
      "names" : [
         "t:0",
         "t:1",
         "t:2"
      ],
      "ndarray" : [
         [
            0.000698519453116284,
            0.00366803903943576,
            0.995633441507448
         ]
      ]
   }
}
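Programmatic access boils down to composing the ingress URL and a JSON payload. A sketch using only the standard library that builds (but does not send) the request; the ingress address localhost:8003 is an assumption for illustration:

```python
import json
import urllib.request


def build_prediction_request(ingress, namespace, model_name, data):
    """Compose a prediction request following the URL layout above:
    http://<ingress>/seldon/<namespace>/<model-name>/api/v1.0/predictions
    """
    url = f"http://{ingress}/seldon/{namespace}/{model_name}/api/v1.0/predictions"
    payload = json.dumps({"data": {"ndarray": data}}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )


req = build_prediction_request(
    "localhost:8003", "model-namespace", "iris-model", [[1, 2, 3, 4]]
)
print(req.full_url)
# Sending it would then be: urllib.request.urlopen(req)
```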

Dive into the Advanced Production ML Integrations

Any model that is deployed and orchestrated with Seldon Core provides out-of-the-box machine learning insights for monitoring, managing, scaling and debugging.

Below are some of the core components, together with links to the docs that provide further insight on how to set them up.


• Standard and custom metrics with Prometheus
• Full audit trails with ELK request logging
• Explainers for Machine Learning Interpretability
• Outlier and Adversarial Detectors for Monitoring
• CI/CD for MLOps at Massive Scale
• Distributed tracing for performance monitoring
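The custom-metrics integration, for example, hooks into the model class itself: Seldon's Python wrapper exposes the values returned from an optional metrics() method to Prometheus. A minimal sketch, with illustrative metric keys:

```python
class Model:
    def predict(self, X, features_names=None):
        # Trivial pass-through model for illustration
        return X

    def metrics(self):
        # Called by the Seldon Python wrapper after each request;
        # COUNTER values are accumulated, GAUGE values are set as-is.
        return [
            {"type": "COUNTER", "key": "my_requests_total", "value": 1},
            {"type": "GAUGE", "key": "my_queue_depth", "value": 100},
        ]


print(Model().metrics()[0]["key"])  # my_requests_total
```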

Where to go from here

Getting Started

Seldon Core Deep Dive

Pre-Packaged Inference Servers

Language Wrappers (Production)

Language Wrappers (Incubating)

Ingress

Production

Advanced Inference

Examples

Reference

Developer

About the name "Seldon Core"

The name Seldon (ˈSɛldən) Core was inspired by the Foundation series of sci-fi novels, whose premise centres on a mathematician called "Hari Seldon" who spends his life developing a theory of psychohistory, a new and effective mathematical sociology that allows the future to be predicted extremely accurately over long periods of time (across hundreds of thousands of years).

Commercial Support

We offer commercial support via our enterprise product Seldon Deploy. Please visit https://www.seldon.io/ for details and a trial.

Comments
  • Ingress refactor and Contour support.

    Fixes #1754

    This is a work in progress PR looking to address #1754 and #1778

    Now functionally complete, including drop-in compatibility for existing ingress options. Additionally it supports a new mode of operation which creates independent virtual hosts for each Seldon Deployment.

    TODO

    • [x] Look into TLS options for HTTPProxy resources and how that could be integrated with Seldon
    • [x] Support for annotating HTTPProxy resources with ingress.class, this allows tying resources created by Seldon with a specific Contour deployment.
    • [ ] Testing in real clusters with Istio
    • [x] Testing in real clusters with Ambassador
    • [x] Testing in real clusters with Contour
    • [x] Ingress unit tests
    size/XXL ok-to-test 
    opened by josephglanville 117
  • Allow both http and grpc

    What this PR does / why we need it:

    • Updates python wrapper, operator, executor, prepackaged servers to support both REST and gRPC at same time

    Which issue(s) this PR fixes:

    Fixes #2378 Fixes #2299

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    Python models will need to be rebuilt to take advantage of the exposing of both HTTP and gRPC.
    Upgrading the operator will cause an upgrade to all models running.
    
    size/XXL approved 
    opened by cliveseldon 110
  • Executor

    Initial rewrite of engine in GoLang.

    Has more restricted functionality. See README in /executor.

    Presently handles Seldon REST and gRPC, but with the restriction that the entire graph must be made of only REST or only gRPC components - no mixing.

    Designed so can be easily extended to handle other protocols, e.g. Tensorflow.

    Unlikely to be ready until post-1.0 due to the push of functionality (e.g. metrics, metadata) outside this component.

    size/XXL approved lgtm 
    opened by cliveseldon 101
  • SeldonMetadata and GRPC support

    Standardise SeldonMetadata further and add GRPC support

    reopened: https://github.com/SeldonIO/seldon-core/pull/1969

    ToDo:

    • [x] Add GRPC tests to Executor
    • [x] Add proper integration test (not notebook involved)
    • [X] Update docs
    Added gRPC support to model metadata.
    
    size/XXL approved 
    opened by RafalSkolasinski 99
  • RedHat Community Operator

    Fixes #1477

    • Adds a new folder in operator directory seldon-operator for RedHat community operators updates
    • Updates the operator so it can create all needed resources including webhook configurations and so only needs RBAC plus a deployment to be initially created.
      • For kustomize this is a "lite" config folder
      • For helm createResources setting is provided

    Tested on vanilla kubernetes and openshift clusters.

    Images will need updating after seldon 1.1 release upon which we can publish to community operators.

    size/XXL approved 
    opened by cliveseldon 99
  • Issue/1638 model metadata

    closes https://github.com/SeldonIO/seldon-core/issues/1638

    Deprecates the metadata method in the Python wrapper and introduces an init_metadata method.

    The init_metadata method will be called once after model initialisation to validate and cache the returned metadata. The output of init_metadata is merged with the content of the MODEL_METADATA environment variable, with values in the latter taking priority.

    For prepackaged model servers (right now only sklearn) one can either define the MODEL_METADATA environment variable or add a metadata.yaml file to the s3 bucket. Please see examples.

    size/XXL approved 
    opened by RafalSkolasinski 92
  • Kafka Support in Executor

    • Provide ability to run SeldonDeployments in Kafka mode with addition to crd serverType
    • Executor changes pull and push from topics
    • Updates docs for streaming
    • example in examples/kafka/cifar10

    Also fixes issues with grpc multiplexing and metrics.

    size/XXL approved 
    opened by cliveseldon 91
  • graph level metadata

    Initial implementation of graph-level metadata. Closes https://github.com/SeldonIO/seldon-core/issues/1728

    Implementation around following new structs defined in executor

    type MetadataTensor struct {
    	DataType string `json:"datatype,omitempty"`
    	Name     string `json:"name,omitempty"`
    	Shape    []int  `json:"shape,omitempty"`
    }
    
    type ModelMetadata struct {
    	Name     string           `json:"name,omitempty"`
    	Platform string           `json:"platform,omitempty"`
    	Versions []string         `json:"versions,omitempty"`
    	Inputs   []MetadataTensor `json:"inputs,omitempty"`
    	Outputs  []MetadataTensor `json:"outputs,omitempty"`
    }
    
    type GraphMetadata struct {
    	Name         string                   `json:"name,omitempty"`
    	Models       map[string]ModelMetadata `json:"models,omitempty"`
    	GraphInputs  []MetadataTensor         `json:"graphinputs,omitempty"`
    	GraphOutputs []MetadataTensor         `json:"graphoutputs,omitempty"`
    }
    

    note that struct GraphMetadata defines output format of graph level metadata

    size/XXL approved 
    opened by RafalSkolasinski 85
  • Add KEDA support to seldon-core

    What this PR does / why we need it: Add KEDA support to seldon-core

    Which issue(s) this PR fixes:

    Fixes #2498

    Special notes for your reviewer: As part of this work, I also upgraded golang version to 1.14 as this is required for knative.dev/pkg module used by KEDA

    Does this PR introduce a user-facing change?:

    Added KEDA support to seldon-core
    
    size/XXL approved ok-to-test 
    opened by anggao 78
  • allow extra custom field in model metadata

    What this PR does / why we need it:

    Which issue(s) this PR fixes:

    Closes https://github.com/SeldonIO/seldon-core/issues/2312

    Special notes for your reviewer:

    Does this PR introduce a user-facing change?:

    Extends the model metadata schema to allow a custom `custom` field. The new field is a mapping string -> string and can be used to store arbitrary information as part of model metadata.
    
    size/XXL 
    opened by RafalSkolasinski 74
  • tags backward compatibility in executor

    Closes https://github.com/SeldonIO/seldon-core/issues/1474.

    To avoid missing some important details in between @seldondev messages I will post general updates to the issue itself.

    size/XL approved lgtm 
    opened by RafalSkolasinski 74
  • v2: update triton, grpc payload size, cli raw output processing

    • Updates to a more recent triton server
    • Ensure CLI handles raw payload conversion when there is non-null payload but with all entries null
    • Increase max-grpc message sizes

    Note - MLServer would need an increase for this to fully work. So far have not seen an issue with Triton server and increased payload sizes for gRPC above the 4MB default.

    v2 
    opened by cliveseldon 1
  • V1 Python Upgrade SetupTools

    opened by cliveseldon 0
  • Built-in Kafka health-check

    Initial setup and configuration of Apache Kafka is not usually a trivial task (taking into account the multitude of different providers). I believe administrators of a Core V2 installation could benefit from a built-in health check / status report about the health of the Kafka integration.

    A simple healthy / non-healthy connection status could be quite useful. We could also include the possibility of a built-in test that sends a test message through a test Kafka topic to validate that all is configured properly.

    v2 
    opened by RafalSkolasinski 2
  • Allow model errors after loading to be exposed to scheduler

    Currently we are assuming that after a model is loaded successfully from a control plane load request, the model remains good indefinitely, which is probably not true in all cases:

    • a model might crash on the server for various reasons, not necessarily crashing the entire server
    • with overcommit logic there could be cases where we cannot reload the model for some reason

    This is not good given that these errors are currently silent and not exposed, for example in the scheduler. Inference requests will likely fail as a result.

    V2 has a dataplane status so the user might still be able to get to the true state of the model, but I wonder if we can do better.

    One possibility is to allow agents to report model errors to the scheduler outside of the control plane path.

    v2 
    opened by sakoush 0
  • Scheduler container not starting in local installation

    Hi,

    I am trying to set up Seldon V2 on a VM using Docker Compose by following the steps ( https://docs.seldon.io/projects/seldon-core/en/v2/contents/getting-started/docker-installation/index.html ). When I run make deploy-local, all the images get downloaded and the containers start, except for scv2-scheduler-1, which stays in Created state and never starts, with no logs, eventually giving me an error like this:

    Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: chdir to cwd ("/home/nonroot") set in config.json failed: permission denied: unknown
    make[1]: *** [Makefile:292: start-all] Error 1
    make[1]: Leaving directory '/home/a263257/seldon-repo/seldon-core/scheduler'
    make: *** [Makefile:5: deploy-local] Error 2

    Any suggestions are greatly appreciated.

    Thanks.

    Docker version 20.10.5+dfsg1, build 55c4c88 Docker Compose version v2.6.1 Go version go1.19.4 linux/amd64 Make version GNU Make 4.3

    v2 
    opened by habilmohammed 7
  • migrate from Gorilla Mux

    The gorilla/mux project became read-only at the end of 2022. We are using it in a few places in the scheduler in Seldon Core V2 and should migrate to a maintained solution.

    Depending on our needs, gin-gonic or just the standard library may fit our use case.

    Official announcement: link. The NewStack article on the subject: link.

    v2 
    opened by RafalSkolasinski 1
Releases(v1.15.0)
  • v1.15.0(Dec 6, 2022)

    All notable changes to this project will be documented in this file. Dates are displayed in UTC.

    Generated by auto-changelog.

    v1.15.0

    5 December 2022

    • Images update #4463
    • Bump MLServer version to 1.2.0 #4448
    • add tar to wrapper image as it is missing after move to ubi-minimal #4458
    • 4041 upgrading jaeger #4406
    • Removing dependabot bot #4456
    • factored out _make_rest_metrics_server #4446
    • Factor out _make_run_grpc_server #4351
    • update kind in ansible setup #4440
    • Add missing related images for v2 protocol #4437
    • Improve Kafka config handling in executor #4435
    • Set allowPrivilegeEscalation to false as default value for the service orchestrator #4427
    • Allow urllib3 >= 1.26.5 #4394
    • push of MLServer images for Red Hat certification #4417
    • Enable SSL_SASL for executor request logging #4416
    • scan -sc images only for 1.15.0 #4419
    • add mlserver-sc and mlserver-sc-slim to security scans #4418
    • Update libraries to solve CVEs reported for 1.15.0 build #4405
    • Openshift move to quay #4392
    • Fix Conda download version #4393
    • Fix vim CVE in Alibi images #4343
    • Reverted Conda base image to 4.13.0 until patched in 4.14.0 #4390
    • Ambassador v2 support #4290
    • factored out make_rest_server_debug/prod #4268
    • update k8s versions #4350
    • Use timer instead of time.After to prevent memory leaks in logger #4338
    • Fix resource lock type #4342
    • Embedding intro video in docs #4337
    • update conda in base image and use ubi9 #4329
    • Add reference for V2 Inference Protocol #4325
    • Add progress deadline support for SDeps #4235
    • Change service key to allow container services to always match correctly #4043
    • added comments about routing in predictChildren #4267
    • Add UTF-8 support for Flask jsonify #4271
    • Ensure request is chained before payload is logged #4301
    • Adding licenses updates as part of the post-release process #4289
    • add upgrading note on Flask 2.x upgrade #4288
    • move to Flask 2.x #4286
    • 1.14.1 to master #4287
    • Added fix for clashing zombie webhook #4265
    • Make verbosity configurable and not leak sensitive values #4249
    • Added fix for webhook issues on 1.12.0 #4256
    • Update stalebot.yml #4250
    • Adding prepackaged server separate pod instructions #4238
    • doc: add util comment && identation #4242
    • Adding stalebot for issues and PRs with defaults #4232
    • Fixed trailing dash created from helm split resources #4230
    • Fix Typo in Readme.md #4228
    • enh: Add support to configure PrepackedTriton with no storage initialiser #4216
    • Added fix for removed guard on webhook #4218
    • fixes foldering of the gpt2 minio notebook #4197
    • Allow leader election controls for manager #4211
    • factored out parse_args #4213
    • upgrade pip, conda and setuptools in s2i image #4210
    • Fix logging args.grpc_workers #4212
    • renamed server_[123]func to server[rest|grpc|custom]_func #4214
    • typo fix in logging bind_address of gRPC server #4200
    • fix metadata #4207
    • typo fix in logging number of gRPC threads used #4194
    • typo fix in logging number of gRPC workers #4195
    • fix link to minio example in triton page #4196
    • Added 1.15.0-dev tag #4174
    • add missing yaml styling for snippets #4170
    • update rest_predict_seldon hardcoded version in route #4161
    • update licenses for 1.15.0 release 449510d
    • Release 1.15.0 3502d35
    • new AWS installation guide and updated nav to reflect it 127ce8d
    Source code(tar.gz)
    Source code(zip)
  • v2.0.0(Dec 2, 2022)

    Changelog

    All notable changes to this project will be documented in this file. Dates are displayed in UTC.

    Generated by auto-changelog.

    v2.0.0

    2 December 2022

    • do not include V1 changes in V2 changelog (#4473) #4474
    • fix image build #4470
    • Update docs to reference new SCv2 location #667
    • Rename Go modules for repo migration #666
    • Improve logging for config and log levels #453
    • Remove versions dropdown and align top bar with SCv1 docs #665
    • Fix nil pointer exception on scheduler restart #664
    • Add ReadTheDocs config file #663
    • remove unnecessary event on unload #662
    • Fix http calls mirror pipelines not working #660
    • Fix experiment stop #661
    • Agent grpc server max connections #655
    • adjust workflows for migration #654
    • Fix Server replica Helm templating #648
    • Add new constant rate scenario #652
    • revert pipeline create changes #651
    • use consistent name for consumer groups #649
    • k6 docker file fixes #637
    • Change to use mlserver 1.2.0 #647
    • Add ability to wait on 3 termination for drain #644
    • Add Helm parameterisation for server replicas #643
    • Update trailer check to one if block #642
    • Add huggingface capability to Helm charts #641
    • only add trailers headers if not nil #640
    • set parallel workers to zero for explainers and update notebooks #632
    • Use kafka 3.3.1 by default #631
    • Prevent terminationGracePeriodSeconds being treated as string in helm charts #628
    • Run misspell -w . on docs #626
    • Fix typos in pipeline docs #625
    • fix type of terminationGracePeriodSeconds in helm charts #623
    • Pipeline Readiness #547
    • HPA server autoscale #590
    • MLServer update to 1.2.0.rc5 #617
    • set chart version also for seldon-core-v2-certs #620
    • Add validation for empty pipeline steps #607
    • Adding UnloadEnvoyRequested model replica state #616
    • Add securityContext parameterisation to Kubernetes manifests for OpenShift compatibility #606
    • Run all notebooks and updates to pipeline validation #597
    • Model gateway logs #613
    • Fix k8s version updates #612
    • Respect MLServer content type in pipeline gateway http server responses #600
    • Add omitempty tag to parallel_workers field #603
    • Add resource parameterisation for all components in Helm chart #596
    • remove duplicated seldon-v2-crds file #594
    • Ensure kafka consumer reconnects happen by not ending consumers #595
    • Add pipeline state to k8s resource status #591
    • Fix pipeline http calls to use headers and reintroduce prom metrics removed #580
    • Revert to vanilla notebook to markdown conversion #585
    • Update kind and k8s versions in Ansible setup #586
    • Return model name in pipeline errors #583
    • Add Apache 2 Licence to code files #584
    • Override config file with CLI args for Seldon CLI #579
    • add rolling update md #582
    • Updates for Model rolling updates #566
    • Quickfix/cli pipeline error #572
    • Quickfix/agent scheduler restart #568
    • Issue 561 pipeline err #569
    • Remove redundant pipeline gateway and agent Prometheus metrics #554
    • Ensure model gateway keeps recalling scheduler and does not restart #563
    • Docs fix for readthedocs static images #565
    • Fix static image uris #564
    • Quickfix/fixstop notready #559
    • Updating styles + new images #548
    • Agent startup bug fix #558
    • only output warning in case of error #556
    • Add missing Ansible config vars + fix incorrect docs on metric names #553
    • Remove namespace from Prometheus metric names #550
    • Issue 518 agent stop cmd #523
    • small fixes for huggingface demo #549
    • Fix initial routes for Pipelines and delete of Pipelines from Envoy #543
    • cert download script and docs #544
    • Add LICENSE and script to add copyright to Go files #542
    • fix space in go file name #541
    • Speech to Sentiment Example Updates #540
    • Quickfix/add paper reference #525
    • Huggingface Speech to Sentiment Example #519
    • Bump MLServer version #538
    • small fixes for mlserver, docs and envoy yaml #537
    • [CLI] Support authority headers for control-plane subcommands #531
    • Add missing Envoy patch file for Kustomize generation of Helm charts #536
    • [CLI] Use positional args for resource name in CLI server-status subcommand #533
    • [CLI] Use cURL-style request metadata logging for gRPC #528
    • Add gRPC service name prefix in components Helm chart #530
    • Add initial drift and outlier docs #529
    • [CLI] Support authority headers for inference requests #526
    • Use cURL-style request metadata logging in Seldon CLI #524
    • Add Helm parameterisation for scheduler and Envoy service types #520
    • Add a note about resetting model autoscaling #517
    • Update autoscaling docs #514
    • Issue 507 model autoscaling docs #513
    • Add huggingface as runtime and example #511
    • Install doc updates and developer doc additions #510
    • Add server and certs Helm charts and raw manifests to published assets #508
    • Issue 445 scheduler model autoscale #472
    • Add ssl_verify_path for explainer TLS #495
    • Notebook doc updates #499
    • Run inference servers as non root locally #500
    • Bump Kustomize to v4.5.4 #497
    • Handle scheduler errors in controller and decide if retryable #484
    • undeploy local before deploy local #496
    • Fix Strimzi Helm values ZK indent bug + stale broker service name #492
    • tidy up notebook with more models for triton #494
    • revert REQUESTS_CA_BUNDLE #491
    • improve CLI config load errors #489
    • remove colors from outputs in batch examples #482
    • Update README.md #488
    • Use Helm chart for Kafka cluster setup #477
    • Add batch examples to docs #481
    • simplify overcommit notebook example #476
    • Add log level config to dataflow engine #456
    • add tritonclient example #443
    • inference examples and raw contents fix #468
    • Release 0.2 testing #463
    • Issue 451 metrics fix #455
    • fix kafka namespace: kafka -> seldon-mesh #464
    • Fix missing pipeline ID in data-flow engine consumer groups #462
    • Small docs updates #459
    • Envoy TLS #446
    • Issue 452 Fix nil deference in pipeline inspection in CLI #454
    • docs update #448
    • Model autoscaling (agent) #440
    • Kafka SSL plus refactor of Control Plane SSL #441
    • Add mTLS for data-flow engine #439
    • Issue-433 Upgrade dataflow engine dependencies #438
    • Remove timestamp, better verbose description #437
    • Add agent mTLS #430
    • Allow seldon inspect to output raw or json #432
    • minor fix for k6 tests env #431
    • fix hodometer docker build #428
    • Fix operator Docker build #427
    • fix typo #426
    • fix experiment yaml #424
    • Use separate Go module for generated API client #422
    • Shadows #404
    • Parameterized models #419
    • fix experiment version bug and add notebook #420
    • add missing #418
    • update notebook docs #417
    • Fix lazy reload #416
    • fix x-seldon-route headers in pipeline chains #415
    • Fix issues with versions of pipelines #414
    • Update to mlserver.1.2.0-dev5 #412
    • small docs update #411
    • Fix experiment bug #405
    • Refactor pipeline subscriber for separation of concerns #408
    • Tidy Kafka config handling #407
    • Scale KStream threads with pipeline steps #406
    • Add control plane TLS #397
    • Use Distroless image for dataflow engine #403
    • Re-use topology builder for entire pipeline in dataflow engine #402
    • Docker compose fixes and image size reduction #401
    • fix bugs with container dockerfiles #398
    • Issue-393 Use Distroless images for Go apps #396
    • Fix helm docs and updating linting #392
    • Namespaced controller #380
    • Revert "k8s codegen script and generated client (#377)" #391
    • update cifar10 example #390
    • Fix scheduler PVC volume not writable on GKE #387
    • Issue-373 Remove irrelevant k8s metrics from Hodometer #381
    • k8s codegen script and generated client #377
    • Add per pipeline histogram metrics #382
    • Fix: update k8s CRDs after experiment change #379
    • income example #378
    • Passing parameters to batched requests from data-flow engine #374
    • Enable pipelinegateway multi topics consumer #372
    • Refactor metrics and add separate pipeline and model metrics #371
    • Enable kafka kraft in k8s #370
    • Modelgateway topic to consumer consistent hashing #368
    • Pipeline Experiments #360
    • Read custom server example #361
    • Change admin client create for create topics #359
    • Various fixes for modelgateway usage at scale #358
    • Add mlflow model in k6 examples #349
    • fix sticky session usage with header addition in lua #354
    • Add Makefile support for Compose build param #356
    • Add Hodometer service dependency on scheduler in Compose #355
    • add envs for otel enable in docker compose #347
    • Use right image for hodometer and add helper for pulling images #350
    • Add usage metrics (Hodometer) docs #343
    • Explainers #298
    • add release process description #329
    • fix trigger joins #340
    • add hodometer to the list of images to be built by GA #344
    • Move CHANGELOG.md to top level #342
    • Add hodometer deployments #318
    • v0.1.0 change log #337
    • Fix pipelinegateway panic upon kafka reconnect #339
    • Add sticky sessions for experiments #250
    • Fix shm config for loading python model on triton (k8s) #338
    • Ignore RC builds in generated core-release notes #336
    • Update README.md #330
    • V2 release process #327
    • add missing kustomize patch + version setting helpers #322
    • Remove duplicated Compose image tags in Makefile #325
    • add img overrides for compose #324
    • Ensure X-Request-ID is returned and allow CLI inspect to use #314
    • Add container merge semantics for easier custom servers #315
    • Add server status update batching #307
    • Sherif akoush/demo fixes #312
    • Add build info to Hodometer Docker images #304
    • Add batching for XDS server updates #248
    • Add Kubernetes metrics to Hodometer #299
    • Modelgateway issue #296
    • Experiment store #292
    • Sherif akoush/upgrade mlserver 1.1 #294
    • Sherif akoush/http lazy load fix #293
    • Sherif akoush/lazy load model in restart #291
    • Num modelgateway workers fix #290
    • Robustness fixes #287
    • Modelgateway workers from envar #288
    • Sherif akoush/create snapshot optimisation #285
    • Modelgateway threads #278
    • Fix trigger join #282
    • Pipeline db fixes #283
    • Allow model scaling, k6 constant throughput tests and Prometheus/Grafana in Docker Compose install #262
    • Sherif akoush/improve replica sorting #280
    • do not run action to build/push images on forks #269
    • fix workflow dispatch inputs for image building #268
    • Change Envoy LB Algorithm to Least Requests #265
    • Request metadata #264
    • Use static Kafka consumer in Kstreams #260
    • Scheduler db folder docker #261
    • Tracing config #225
    • Separate event publish from locked updates to data structures #254
    • add mnt folder to git with .keep file #255
    • small update to docs #253
    • Dataflow doc #245
    • Update index.md #249
    • update cifar10 demo #247
    • Metrics dashboard docs #244
    • fix http reverse proxy port issue #243
    • add workflow that builds and push images #234
    • Cli command updates #242
    • Sherif akoush/metrics dashboard #241
    • add longer default timeout in Envoy configuration #239
    • Optimise XDS server route creation #237
    • v2 control plane grpc + various fixes for scalability #229
    • Some text updates, new reference #236
    • Update index.md #235
    • Pipeline Persistence #188
    • Add locks to streams #231
    • small docs title page update #228
    • add locks around stream send #226
    • docs updates #227
    • Add Hodometer stub receiver #211
    • update cli export flags docs #224
    • Experiment status #212
    • fix trigger NullPointer exception #221
    • docs update #222
    • fix external port for kafka when running internally #220
    • Change port to avoid conflict #219
    • read events in go routines #215
    • Update server snapshot creation in scheduler #214
    • Sherif akoush/fix v2client load #210
    • Sherif akoush/add evict metrics #184
    • CIFAR10 Example and CLI Pipeline inspect #207
    • Add short names to CRDs #209
    • update pipeline status and finalizer check #208
    • Cli updates #199
    • Helm chart creation #193
    • Kafka configuration via config files #189
    • Alibi-detect iris drift detection example #191
    • throw errors in CLI on bad yaml #198
    • Add usage metrics collector #181
    • Add model metadata to CLI #187
    • update install docs for Ansible #190
    • Allow max message size in kafka and grpc #186
    • change kafka to bitnami #185
    • Allow for more informative scheduling errors #182
    • Add kafka Produce and Consume Tracing Spans #178
    • Sherif akoush/report same message from agent #183
    • Requests batch processing transformer implementation #135
    • fix docker compose for host network #177
    • Ansible: add jaeger and opentelemetry #172
    • Add server extra capabilities #169
    • Update memory.go #173
    • Parametrize ansible #167
    • Update Configuration.kt #166
    • add inference docs #165
    • initial ansible playbooks #154
    • Further Docs #163
    • Open Telemetry Tracing #160
    • make pipeline different from modelname #162
    • Sherif akoush/k6 dataflow #149
    • Pipeline inputs #158
    • fix locks in pipeline state set #159
    • Add Pipeline Triggers #152
    • tutorial docs section #156
    • Allow local model folders #153
    • Docs update #150
    • Use Gradle directly in dataflow engine Docker build #148
    • allow both mlserver and triton to be started locally #146
    • Add clearer state logging for chainer and joiner #145
    • Add state listeners to kstreams to wait while rebalancing takes place #144
    • Conditional and Error Pipelines #143
    • Pipelines on k8s #142
    • Added install command for local examples notebook #141
    • Dataflow updates #140
    • fix dataflow bugs #139
    • Update dataflow joiner #137
    • Update golangci-lint to 1.45.2 #138
    • Pipelines with Join #136
    • Add features page placeholder #133
    • Add Docker setup for data-flow engine #134
    • Docs - add k8s resources #132
    • Docs Draft Outline #131
    • Fix data-flow Gradle setup #130
    • Add Kafka Streams data-flow engine #119
    • pick free port in test #128
    • add mlserver protos extensions back #127
    • Persisting k6 results to GCS bucket #125
    • Add an outline of software design doc #126
    • rename stream to modelgateway #124
    • Sherif akoush/various fixes for testing #117
    • initial docs setup #123
    • CLI plus updated sample notebooks #120
    • Disable auto-loading of models in MLServer at start-up via env vars #122
    • Pipeline operator #118
    • Pipelines #107
    • Wire up overcommit with scheduler #111
    • proto update for chainer #109
    • Experiments v1 #106
    • chainer protos #108
    • Sherif akoush/simplify locks #103
    • Sherif akoush/remove version code agent #102
    • Stream integration with Kafka #104
    • Add event bus for scheduler-internal events #99
    • Fix maybe parsing methods so not a fatal on not found #96
    • Refactor Agent cmd package and argument parsing #83
    • replace loaded models with versioned models key #88
    • Add memory sorter to default scheduler #91
    • fix rclone host docker compose config #94
    • Sherif akoush/update v2 protos in notebooks #93
    • Sherif akoush/scv2 50/flatten versions (and various other fixes) #86
    • Prometheus Inference Metrics #82
    • Traffic split envoy #79
    • Docker compose updates #84
    • Use Compose for Docker-based Make targets #81
    • Wiring up proxies #78
    • Add Docker Compose manifests #77
    • Sherif akoush/reverse proxy grpc 2 #72
    • K6 Load Tests #69
    • Service Mesh experiments Istio, Traefik, Ambassador #70
    • Sherif akoush/Memory over-commit (reverse proxy) #18
    • Server Custom Resource #41
    • Versioning #40
    • Move Protobuf contracts to top-level #39
    • Initial Operator update for Model resource #20
    • Add scheduler proxy/stub #38
    • Dynamic RClone Configuration #8
    • Format the code for new lines at end of file (gofmt) #11
    • Add github actions for linting and tests #10
    • Add golangci-lint linters and fix existing lint failures #9
    • Add payload logging with Envoy Taps #1
    • Updated to non deprecated grpc settings #29
    • add gRPC inference #28
    • Seldon Core V2 Scheduler Update #27
    • Add V2 APIs and Samples #26
    • New Operator APIs #25
    • Remove initial operator #24
    • SCV2 POC Update #23
    • Seldon Core V2 - Scheduler experiments #22
    • Seldon core v2 (add smoke test) #21
    • Seldon core v2 (further updates) #20
    • Generating changelog for v2.0.0 014a935
    • Seldon V2 APIs initial Draft 02d963c
    • Generating changelog for v2.0.0 d955a61
    • Initial commit for Model reconcile a1dfb6d
    • update status for model d127e19
    Source code(tar.gz)
    Source code(zip)
    certs.yaml(2.85 KB)
    seldon-core-v2-certs-2.0.0.tgz(681 bytes)
    seldon-core-v2-crds-2.0.0.tgz(115.60 KB)
    seldon-core-v2-servers-2.0.0.tgz(469 bytes)
    seldon-core-v2-setup-2.0.0.tgz(7.33 KB)
    seldon-linux-amd64(47.25 MB)
    seldon-v2-components.yaml(29.39 KB)
    seldon-v2-crds.yaml(987.02 KB)
    seldon-v2-servers.yaml(244 bytes)
  • v1.14.1 (Aug 18, 2022)

    v1.14.1

    17 August 2022

    • Added fix for clashing zombie webhook #4265
    • fix workflow on red hat image scan process #4259
    • Make verbosity configurable and not leak sensitive values #4249
    • Added fix for webhook issues on 1.12.0 #4256
    • Update stalebot.yml #4250
    • Adding prepackaged server separate pod instructions #4238
    • doc: add util comment && identation #4242
    • Adding stalebot for issues and PRs with defaults #4232
    • Fixed trailing dash created from helm split resources #4230
    • Fix Typo in Readme.md #4228
    • enh: Add support to configure PrepackedTriton with no storage initialiser #4216
    • Added fix for removed guard on webhook #4218
    • fixes foldering of the gpt2 minio notebook #4197
    • Allow leader election controls for manager #4211
    • factored out parse_args #4213
    • upgrade pip, conda and setuptools in s2i image #4210
    • Fix logging args.grpc_workers #4212
    • renamed server_[123]func to server[rest|grpc|custom]_func #4214
    • typo fix in logging bind_address of gRPC server #4200
    • fix metadata #4207
    • typo fix in logging number of gRPC threads used #4194
    • typo fix in logging number of gRPC workers #4195
    • fix link to minio example in triton page #4196
    • Added 1.15.0-dev tag #4174
    • add missing yaml styling for snippets #4170
    • update rest_predict_seldon hardcoded version in route #4161
    • certified operator: create bundle (step 1) 602b4c2
    • update 1.14.0 community bundle for OpenShift 05ec72b
    • release 1.14.1 eefa9e4
  • v1.14.0 (Jun 17, 2022)

    v1.14.0

    16 June 2022

    • Fixed operator redhat image #4157
    • fix broken mlflow model build #4155
    • Bump MLServer version to 1.1.0 #4148
    • Upgrade to k8s 0.23 APIs, remove v1beta1 as default, upgrade KEDA #4136
    • Create graph-modes.md #4144
    • Fix typo in error message for Anchor tabular #4145
    • fix transport missing in executor #4107
    • fix alibi tests #4142
    • Broken docs test fix removing reference to Tree #4141
    • Fix. Ensemble model. Previouse not saved data in jagear. Working with Jagear and Istio #4139
    • Add optional manual commit to seldon kafka server #4117
    • update kind #4135
    • use alternative multiprocessing library if USE_MULTIPROCESS_PACKAGE i… #4114
    • upgrade alibi explain to 0.7.0 #4112
    • Update cache folder and bump MLServer image #4094
    • Adding protocol info to executor payload logging worker #4077
    • Don't hardcode UID for Triton containers #4099
    • Sorted metric tags to avoid duplicate prom data with gRPC requests #4006
    • respect envSecretRefName coming from helm values #4089
    • minor type fix #4086
    • Huggingface optimum prepackaged server #4081
    • Adding configuration for feature level drift metrics #4079
    • Fixed random seed for anchor explanation #4078
    • Re-setting numpy random seed to zero on every explain request #4076
    • Pass through model name env var for MLServer #4069
    • Update seldon-deployment.rst #4075
    • Adding tests to explicitly state expected behaviour of v2 protocol chaining in REST #4061
    • add prometheus operator docs #4038
    • change versions we test upgrade of operator from #4066
    • lock jager operator helm chart to fix integration tests #4064
    • Protocol specific ready checkers #4028
    • fix(executer): Forward parameters while chaining models via kfserve grpc #4054
    • Outlier example poetry #4055
    • fix removal of request logger to fix CI #4044
    • Enabling optional grpc server on python level only #4027
    • Removed request logger from github security workflows #4039
    • Python request logger example component deprication and removal #4016
    • Extended GPT2 MLServer Pipeline Example to include post-processor #4035
    • Updated CPP example to use latest 3.8 base image #4026
    • allow priorityClassName for manager #4030
    • Updating model inputs for new schema #4032
    • Updating typings for prediction API documentation #4025
    • Fixed Flask breaking version by werkzeug dependency limit to 2.1 #4018
    • Update ab_test_2pods.json #4020
    • Decompress prediction events before logging to kafka topics #4005
    • Updated poetry environment and lockfile for Alibi Detect 0.9.0 #4001
    • update sklearn iris example #3995
    • Release v1.13.1 for OpenShift #3987
    • Bumping rclone image version to 1.57.0 #3990
    • fix example yaml file error in README.md #3994
    • added tag for s2i python image #3992
    • Fixing nbqa linting for latest notebook #3991
    • support traffic settings for shadow deployment with istio #3780
    • Issue #3968: Allow hostNetwork=true for seldon operator #3971
    • Updating explainer docs into 0.6.4 #3976
    • Add TLS to Kafka Consumer and also add Kafka + KEDA + TLS example #3977
    • Merging 1.13.1 changelog and update to 1.14.0-dev images #3962
    • Bumping rclone image version to 1.57.0 (#3990) #3973
    • Updating changelog to 1.13.1 adf6c54
    • Release v1.14.0 158950f
  • v1.13.1 (Feb 22, 2022)

    v1.13.1

    21 February 2022

    • Updated base golang images to 1.17.7 #3951
    • cast float/int 64 to 32 in alibi-detect-server #3958
    • Update security policy to outline current security scans #3959
    • Addresing security vulnerabilities for 1.13.1 release #3949
    • Updating broken link in documentation #3950
    • Fixed Flask dependency by pinning markupsafe and itsdangerous #3948
    • Fixing failing docs CI tests #3915
    • Update images to 1.14.0-dev for next semver release #3939
    • Adding chmod to dockefile example #3937
    • Update Adopters.md #3934
    • Update kfserving-storage-initializer.md #3831
    • Release v1.13.1 c696e99
    • Release v1.13.1 security images d90766d
    • Release v1.13.1 changelog 83f2f63
  • v1.13.0 (Feb 17, 2022)

    v1.13.0

    • Add test for tensorflow prepackaged Seldon protocol with resource requests specified #3928
    • Bump MLServer to 1.0.0 #3927
    • Skip request logging if skip header is present #3925
    • upgrade alibi explain to 0.6.4 #3885
    • Allow v2 as protocol name #3906
    • Bump MLServer image to 1.0.0.rc2 #3916
    • Update gcp.rst #3921
    • Add model_name when chaining requests #3805
    • Fixing Alibi Detect Server response cloud event data is json marshalled string #3907
    • bump alibi-detect to 0.8.1 in adserver #3871
    • Updating inference logic to add node level request-response logging #3874
    • Pass down ports info to MLServer #3898
    • Update autoscale docs #3905
    • Updating helm docs for 1.13.0-dev #3879
    • Updated cert-manager API version #3888
    • Fix seldon manager configmap for alibiexplainer version #3897
    • Adding test skip until fixed via 3857 #3894
    • Redhat 1.12.0 #3878
    • Add support to use PEM string for SSL #3868
    • Changes ndarry to ndarray #3892
    • 3804 Removal of Depricated Java Engine Resources #3845
    • Updating security tests to run on 1.13.0-dev images #3875
    • Upgrade confluent-kafka-go to v1.8.2 #3870
    • Bump upper constrains of MLflow server dependencies #3863
    • Add events to namespaced roles #3855
    • Seldon add ssl #3813
    • Add agrski as approver #3865
    • Operator sets seldondeployment to failed when deployment not progressing #3851
    • Read OIDC resource parameter #3844
    • Update mlflow.md #3843
    • Update overview.md #3842
    • Remove triage label and release notes block from templates #3835
    • Bump seldon-deploy-sdk to 1.4.1.2 in request logger #3838
    • Added missing words #3837
    • reference to kfserving storage initializer from dockerhub #3832
    • Updated license branch from master to main for hashicorp/go-version #3829
    • Update Dockerfile in python docker wrapper docs #3822
    • Fix broken link #3820
    • Exclude caBundle field when cert-manager is enabled #3807
    • Use default PID not UUID for worker ID #3801
    • Updating tag to 1.13.0-dev + adding changelog #3799
    • Add note for MacOS users #3800
    • Updated for changelog generator to use auto-changelog 0ea7e2c
    • Updated changelog e6a60ae
    • Updated changelog with full changes! 56bcce3
  • v1.12.0 (Dec 14, 2021)

    v1.12.0

    Full Changelog

    Fixed bugs:

    • Ensure docker images have the source code included for MPL based licenses #3787
    • Stateful Model Feedback Metrics Server uses wrong image #3784
    • seldon-core-microservice: error: unrecognized arguments: test REST --service-type MODEL --persistence 0 #3775
    • Tensorflow protocol can't support /v1/models/${MODEL_NAME}[/versions/${VERSION}|/labels/${LABEL}] when using istioingress gateway #3767
    • seldonio/mlflowserver:1.12.0-dev now unable to server models #3766
    • TEP PY SITH #3752
    • OOM when stress test the Seldon model, which may be caused by the logging of request and response payloads #3726
    • Unable to install seldon via Helm #3725
    • Conda pip install permission denied in OpenShift #3712
    • Python/Executor path mis-match #3705
    • seldon-core-operator helm chart - in single namespace mode the role seldon1-manager-role does not include poddisruptionbudgets #3692
    • Currently engine related values cannot be ommited from the values.yaml while deploying seldon #3691
    • apiversion is deprecated in notebooks/resources/ambassador-rbac.yaml #3677
    • Error to build env: Command 'conda run -n mlflow pip install -r /microservice/requirements.txt' returned non-zero exit status 1. #3670
    • Seldon Manager Config Map has an extra comma that can cause parsing error #3652
    • Validate and address recent apparent flakiness in v2 server tests #3639
    • Executor returns minimal information on http client failure #3625
    • helm chart not compatible with k8s v1.22 #3618
    • Upgrade Grafana chart in seldon-core-analytics #3558
    • Mlflow getting downgraded #3454
    • Deploying custom MLflow model - stuck at "Readiness probe failed" #3186
    • Java Engine and Go Executor Does not Terminate Graph upon Error #2480

    Closed issues:

    • Exposing GRPC timeouts in Python client #3779
    • remove explainer_examples notebook #3777
    • Submit cloud events uncompressed until consumers supports it #3749
    • When is seldon v1.12 going to drop? #3735
    • Add model chaining example for the V2 protocol #3732
    • Fix alibi-explain-server to use alibi load\_explainer helper function #3708
    • Batch processor enhancemenst through raw data parameter #3702
    • Allow for Annotations and Labels to be injected into helm chart templates #3699
    • Allow Controller Manager to have configurable container security context #3698
    • Batch processor cast integers to floats in payloads #3681
    • Integrated alibi-explain mlserver runtime in Core #3675
    • Add e2e example on training explainers with Poetry-locked environment #3664
    • Move adserver to use Poetry #3660
    • Update Openshift OperatorHub Releases for 1.11.1 #3563
    • Better support Istio Mesh Internal to cluster #3485
    • Enable index filed in batch_processor #3409
    • Add "names" field in batch_processor.py to align with seldon_client.py #3408
    • upgrade seldon-core-analytics helm charts #3403
    • batch processor: add raw inputs support to predict method #2657
    • Publish RedHat images via API #2085
    • Seldon deployment name in case of complex graph #1801

  • v1.11.2 (Oct 19, 2021)

  • v1.11.1 (Oct 4, 2021)

    v1.11.1 (2021-10-04)

    Full Changelog

    Fixed bugs:

    • gRPC broken for non ipv6 systems post 1.9.1 #3616
    • How seldon core analytics is working for model metrics monitoring ??? #3592
    • incomplete meta.requestPath in responses and redundant "tags" field #3477

    Closed issues:

    • Python Wrapper does not adhere to Timeout annotations #3613
    • Are you interested in becoming a listed KEDA end-user? #3612
    • Seldon Core v1.11.0 Release #3598
    • what is the best way for receiving/sending data fast? #3449
    • HPA support for autoscaling/v2beta2 API #3143
  • v1.11.0 (Sep 22, 2021)

    v1.11.0

    Full Changelog

    Fixed bugs:

    • handle bad SemVer for CRD creation #3569
    • leader election RBAC incorrect #3567
    • batch processing with argo failed #3559
    • hello I do not know the composition of json. Can you tell me something about it? #3556
    • docs build failing due to problem with black 20.8b1 wheels #3546
    • Alibi explainer image broken due to numba 0.54 #3540
    • Update Alibi Detect Server to use Alibi Detect v0.7.1 #3481
    • helm 3.5.2 warning on seldon helm charts #2944
    • Support MLFlow models that return pandas DataFrame #2281

    Closed issues:

    • Should allow user customize LeaderElectionID #3576
    • FastAPI integration with Seldon #3575
    • Seldon Deployment Type Definition use duplicate protobuf Ids for a few fields #3574
    • How to the current available resources of the seldon kubernetes cluster through seldon api, including CPU and memory resources? #3543
    • Add a "fix this page" button to the docs #3535
    • Remove duplicate links to self on documentation home page #3534
    • Improvements to Python Server Configuration doc #3532
    • Troubleshooting guide enhancements #3531
    • Create new page on init containers #3530
    • Deprecate page https://docs.seldon.io/projects/seldon-core/en/latest/wrappers/language_wrappers.html in favor of individual wrapper pages #3529
    • Simplify jargon around CRDs and PodTemplateSpec on https://docs.seldon.io/projects/seldon-core/en/latest/graph/inference-graph.html #3528
    • Have JSON and YAML representations exactly match each other on https://docs.seldon.io/projects/seldon-core/en/latest/graph/inference-graph.html #3527
    • Investigate feasibility of "toggle" switches to move between JSON, YAML and different languages etc... #3526
    • Create a diagram to represent inference graphs and config #3525
    • Improvements to Testing Model Endpoints page #3524
    • Remove references to S2I from https://docs.seldon.io/projects/seldon-core/en/latest/python/python_module.html #3523
    • Fix broken links and reword Seldon Python Component doc #3522
    • Improvements to Python Wrapping S2I page #3521
    • Include pre-requisites before installation command #3520
    • Remove duplicate navigation from quickstart page #3519
    • Move "about the name Seldon" from quickstart to somewhere more appropriate #3518
    • Update model servers list on https://docs.seldon.io/projects/seldon-core/en/latest/workflow/overview.html #3517
    • Remove reference to Kubebuilder from https://docs.seldon.io/projects/seldon-core/en/latest/workflow/overview.html #3516
    • Remove details on adding custom metrics from https://docs.seldon.io/projects/seldon-core/en/latest/workflow/overview.html #3515
    • Remove duplicate text from tracing image (also appears in docs) https://docs.seldon.io/projects/seldon-core/en/latest/workflow/overview.html #3514
    • Benchmarking vs Flask for both the benchmarking sections and "benefits vs flask" in overview #3513
    • Additional information on installing prerequisites in https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html #3512
    • Install command doesn't include istio or ambassador so fails if copy/pasted. Ingress is only referenced afterwards #3511
    • Add information on meetups, twitter etc... to https://docs.seldon.io/projects/seldon-core/en/latest/developer/community.html #3510
    • Document storage initializers configuration for private GCP buckets #3509
    • Working group call calendar link has expired #3508
    • Rework entire quickstart page removing duplicate content and adding obvious next steps for each persona #3507
    • "documentation quickstart" is a totally different page, only findable through the original quickstart page. Combine content in to one. #3506
    • Improvements to Testing Model Endpoints page #3505
    • Make Python 3.8 s2i wrapper the default one #3500
    • Add documentation in UPGRADING page that outlines new explainer URI model param limitation #3499
    • Update explainer URL to allow for empty parameter for tensorlfow protocol #3498
    • Predictor server image version only accepts string not numeric value #3493
    • Correctly set GOMAXPROCS for executor and operator #3468
    • Release v1.10.0 #3467
    • Update MLServer image to 0.4.0 #3466
    • Add request logging direct to Kafka #3445
    • Update KIND CI tests to use latest KIND client (and hence Kubernetes 1.20) #3357
    • Update version of Triton image in configmap #3318
    • Add conditions for SeldonDeployments #3265
    • Investigate latest K8S Ingress CRD #2988
    • Adjust operator updates test to current kubernetes (>=1.18) #2966
    • Run black (nbQA) on notebooks as part of fmt/lint #2885
  • v1.10.0 (Aug 18, 2021)

    v1.10.0

    Full Changelog

    Fixed bugs:

    • sklearn iris model incompatible with latest sklearnserver (1.10.0-dev) #3424
    • It is not possible to add a new inference server. #3415
    • REST Executor Returns wrong Error message in DAG #3411
    • Conda base image is not being pushed to docker hub #3405
    • spec.preserveUnknownFields missing while upgrading seldon-core-operator to 1.9.1 from 1.2.2 using helm chart #3393
    • seldon-puid not included in grpc requests #3389
    • Docs lint is failing due to argo moving repo #3386
    • Seldon Deployment: Dryrun using k8s java Api is not validating all fields #3378
    • batch_processor.py: data will be left unprocessed if the line number in the input file can't be aliquoted by the batch_size #3377
    • Cannot apply Seldon Deployment from Kubernetes Python API #3375
    • click dependencies could not be resolved #3373
    • Can't change REST timeout #3368
    • Deployment giving certificate expired or is not yet valid #3366
    • go mod fails because of invalid character in file name #3354
    • Istio virtualservice created does not whitelist V2 Inference Protocol protobuf names #3352
    • Add functionality to support multiprocessing for Python wrapper GRPC #3334
    • If no-engine=true used only REST/HTTP virtuaservice is created with istio and no GRPC #3329
    • Seldon Explainer Container Crashes Due to GCS Permission Error #3324
    • I deploy tensorflow model using tensorflow2.4.1 and occur error:CUDA error (3): initialization error. #3314
    • Status address URL incorrect for no engine #3312
    • cant build simple-cpp example #3251
    • wrong conda version used in the mlflowserver image #3115
    • Failures on send_feedback_raw path when using proto #2606
    • custom_metrics notebook test is flaky #2570

    Closed issues:

    • Create MLflow example using MLServer #3462
    • Run Kubernetes PodSpec validation #3440
    • automatise generation and upload of example models for pre-packaged model servers #3439
    • Request Logger Update #3421
    • Upgrade Alibi Server to 0.6.0 #3401
    • Add MLServer MLFlow Server to Core #3384
    • Release 1.9.2 #3367
    • Create notebook that outlines steps required to extend all existing secrets to be compatible with rclone #3360
    • Research performance improvements for Python Seldon wrapper and research performance between versions of Seldon Core #3359
    • Update Core Builder to use more recent version of Python #3358
    • Create narrative / documentation around security #3345
    • Add GRPC_THREADS for configuring the number of threads in the Python wrapper (and default to 1) #3333
    • Set GUNICORN_THREADS to 1 by default #3332
    • Release 1.9.1 #3319
    • Python GRPC Server does not adhere to Worker/Thread environment variables #3238
    • Update Benchmarking with Argo Worfklows & Vegeta notebook example #3162
    • 1.8.0 Release #3125
    • Missing appVersion inside Chart.yaml #2737
    • Add integration tests to outlier detector and concept drift components #2681
    • Occasional Latency Spike in Python Nodes of Inference Graph #2656
    • Refactor env var retrieval for model_name / image_name in python wrapper so it's centralised in util #2621
    • Python Wrapper should Handle Exceptions correctly #2338
    • Seldon wrapper image with python 3.8 #1230
  • v1.9.1 (Jul 2, 2021)

  • v1.9.0 (Jun 16, 2021)

    v1.9.0

    Full Changelog

    Fixed bugs:

    • seldon-core 1.8.0 helm chart CRD error #3254
    • explainer don't repect the spec.replicas #3241
    • Setting TRACING=0 does not disable Jaeger tracing #3158

    Closed issues:

    • Allow Tempo Server Env Override #3282
    • req logger - create elements section for tensorflow protocol #3279
    • Integrate Iter8 #3278
    • add some unit tests for request logger #3270
    • Update OpenAPI folder definitions #3261
    • parsing of categorical and proba in req logger for ndarray #3255
    • Custom name for Seldon deployment instead of metadataname-graph component names #3253
    • Allow V2 Protocol for Alibi Explain Server #3247
    • Usage of route_raw in seldon core 1.1.0 #3236
    • option to skip verify ssl in req logger #3230
    • GPT2-Triton Example: extand to contain load test example #3216
    • Allow multi-model repositories for Tensorflow Serving #3206
    • Allow for overriding Istio VirtualService hosts #3137
    • Run black (nbQA) on notebooks as part of fmt/lint #2885
    • Update request logger to run with gunicorn #2141
    • Progressive Rollout #1805
  • v1.8.0 (May 20, 2021)

    Full Changelog

    Implemented enhancements:

    • Removal of mapping type from request logger #3013
    • Improve labelling inconsistencies on seldon-managed k8s resources #2757

    Fixed bugs:

    • Java Wrapper /predict API regression #3210
    • Seldon feedback api is not supporting String values #3207
    • SA name hardcoded in seldon-leader-election-role rbac #3168
    • Community Call Calendar in Doc Out of Date #3167
    • V2 Inference Compliance #3156
    • Stuck at "liveness probe failed: HTTP probe failed with statuscode: 403" #3129
    • Failing end to end tests (integration and notebooks) in master #3124
    • GCP Release CRD issue #3114
    • Alibi detect image build fails to build in master #3111
    • Seldon Deployment errors when graph does not have a type field #3105
    • Wrong required variable in documentation #3097
    • storageInitializerImage does not work on Kubernetes 1.18 #3087
    • Deployment issue on AWS #3077
    • Fix python wrapper command line args docs #3069
    • HTTP Port Not Change even after PREDICTIVE_UNIT_SERVICE_PORT set #3035
    • OSS-203: Address CVEs for Java JNI Server Image from Twistlock Reports #2968

    Closed issues:

    • Add Tempo Prepackaged Server #3192
    • Adjust outlier examples to use rclone based storage initializer #3189
    • TEST ISSUE TO TEST SYNC TO GITHUB #3185
    • Seldon graph complexity #3184
    • Update Alibi-Detect to latest in Alibi-Detect server #3148
    • Allow any structure in custom field in metadata #3144
    • The /aggregate endpoint is wrongly called if we have a COMBINER with SEND_FEEDBACK method activated #3139
    • Metadata for Transformer #3132
    • Update kustomize usage in core #3127
    • GCP 1.7.0 Release #3103
    • Add support for transformers with arbitrary request/response format (ie. not SeldonMessage) #3096
    • Redhat release for 1.7.0 #3091
    • Add new example on Triton Jupyter Notebook Example with GPT-2 #3080
    • Add raw_data parameter to predict / transform / etc functions in Seldon Client #3079
    • Update SPACY notebooks to be aliged with latest Seldon Core #3072
    • Documentation around supported Alibi Algorithms #3053
    • Add health/ping to api v1 #3046
    • no matches for kind "SeldonDeployment" in version "machinelearning.seldon.io/v1alpha2" #3037
    • seldon-container-engine keeps restarting because readinessProbe failed #3036
    • Add GPU Drift Detection Example #3033
    • SeldonClient: Token Authentication without HTTPS #3032
    • 1.7.0 Release #3011
    • implement rclone-based storage.py equivalent #2942
    • Explore consistent python environments for users that create explainers #2934
    • Is it a good idea to support predicting multiple instances upon one request? #2929
    • Expose log level setting in Helm chart #2919
    • Add default /health/status implementation for models #2899
    • Remove "PERSISTENCE" Redis functionality and documentation #2888
    • Seldon component exit on failure without passing to next component in the seldon graph #2730
    • Metrics Endpoint does not work with Istio Sidecar #2720
    • Prometheus gauge shown as NaN #2685
    • ability to specify init container on per deployment basis #2611
    • Evaluate alternatives to Storage.py to reduce dependencies and improve support for more data sources #1028
  • v1.7.0(Mar 19, 2021)

    v1.7.0

    Full Changelog

    Fixed bugs:

    • Missing protocol check for KFServing for URL in sdep status #3063
    • environment.yml typo in docs #3052
    • Meta parameter not passed to next model #3050
    • fix integration and notebook tests #3040
    • python microservice refuses to start: setuptools dep conflict #3038
    • HTTP Port Not Change even after PREDICTIVE_UNIT_SERVICE_PORT set #3035
    • Misaligned documentation for SKLearn pre-packaged model server #3029
    • Remove Mutating Webhook if found in latest operator startup #3024
    • Handle default api status in Seldon protocol in executor and python wrapper #3022
    • Update docs to state Gunicorn is a stable feature. #3016
    • Cannot create new SeldonDeployment after seldon-core automatic update from 1.5 to 1.6 #3005
    • Explore re-allowing multiple shadow deployments (for Istio only as Ambassador doesn't support) #2991
    • Files created by controller-gen #2987
    • SeldonPodSpec in SeldonDeployment V1alpha and V1 in seldon v1.4 is not parsing metadata successfully #2983
    • Bug in elasticsearch index of metrics server #2971
    • Address CVEs for MAB Epsilon Greedy & Thompson Sampling Server Image from Twistlock Reports #2969
    • Address CVEs for Alibi Detect Server Image from Twistlock Reports #2967
    • Address CVEs for Alibi Explain Server Image from Twistlock Reports #2965
    • Address CVEs for XGBoost Server Image from Twistlock Reports #2964
    • Address CVEs for SKLearn Server Image from Twistlock Reports #2963
    • Address CVEs for MLFlow Server Image from Twistlock Reports #2962
    • Address CVEs for Storage Initializer Image from Twistlock Reports #2961
    • Address CVEs for Request Logger Image from Twistlock Reports #2960
    • seldon-core-microservice: error: unrecognized arguments: REST #2951
    • Seldon Batch Template Bug? #2943
    • Flaky Operator Unit Test: MLServer Panic #2904
    • Seldon-core-microservice Warning/Error message for changed args #2896
    • cannot overwrite initContainers image: reconcile error #2821
    • Manual scale doesn't work if hpaSpec is set #2816
    • Remove Status section of generated CRD by kubebuilder #2132

    Closed issues:

    • After running "kubectl get seldondeployments" got "No resources found." #3010
    • Torchserve support #3002
    • Make Seldon Client REST requests more efficient #3001
    • Support model repositories with Triton Server #2986
    • Dependabot can't evaluate your Python dependency files #2975
    • seldon-batch-processor Install Instruction Missing #2956
    • Release 1.5.2 #2945
    • switch elastic helm chart to opendistro #2912
    • Make custom metrics work with gunicorn reload #2873
    • Create example using alert-manager for thresholds on Alibi Detect servers #2822
    • Allow annotations on Service created by operator #2590

  • v1.6.0(Feb 3, 2021)

    Changelog

    v1.6.0

    Full Changelog

    Implemented enhancements:

    • Create a prepackaged model server for PyTorch Models #831

    Fixed bugs:

    • IsADirectoryError: [Errno 21] Is a directory: '/mnt/models' #2876
    • error: a container name must be specified for pod #2875
    • MLFlow server-- ModuleNotFoundError: No module named 'prediction' #2874
    • V1 CRD has missing grpcPort and httpPort #2866
    • Broken Link to Documentation Example I'd like to find if it exists #2836
    • Executor does not send feedback to Routers. #2827
    • ArgoCD OutOfSync if SeldonDeployment includes mountpoint #2811
    • Helm failing to fetch https://kubernetes-charts.storage.googleapis.com/ resulting in failing tests #2808
    • send_feedback response is incorrectly managed in seldon_methods.py #2801
    • Upgrading to 1.5.0 causes unexpected error when calling predict endpoint of Python custom model #2786
    • SHAP Breaks Alibi Detect on Python 3.6 due to unpinned Numpy dependency #2767
    • Error when using the R language wrapper #2744
    • Transformers model unable to run with Cuda #2680
    • Allow seldon manager to run as non-root #2631
    • Operator sets HTTPS on the Engine's liveness and ready checks #2586
    • high memory and cpu usage in deployment of xgboost rest #1986

    Security fixes:

    • Resolve CVE for PyYAML - CVE-2020-14343 #2252

    Closed issues:

    • CVE checks update for redhat image scans #2869
    • Does Seldon Batch Processing Work with Azure Blob Storage? #2858
    • Update engine docs as deprecated #2840
    • Support V2 Protocol in outlier and drift detectors #2831
    • add example of batch processor with rclone #2819
    • Add example of custom init container with rclone #2818
    • remove mutating webhook #2817
    • Handle KFServing V2 Protocol in request logger #2791
    • Create 1.5.1 release with cherrypick #2756
    • Use f-strings in MAB study case examples #2729
    • helm chart imagePullSecrets support to bypass ratelimiting #2694
    • Seldon-core-operator Update for handling namespace #2676
    • docs: No Release Highlights since 1.1.0 #2634
    • Deprecate engine (old Java service orchestrator) #2588
    • Add support for Datadog Tracing in the Executor and the Python Wrapper #2436
    • Multi_Architecture Support #2333
    • Make deployment names configurable #2301
    • java-wrapper-0.2.0 jar is not checked for validity #2180
    • Stateful Model Serving by Saving state to Redis #2138
    • Add documentation on how to extend base prepackaged servers with new images (xgboost, sklearn, etc) #2060
    • Add documentation that dives into the initContainer #2055
    • Multiplexing or parallel serving of gRPC / REST in Python Wrapper #1968
    • Allow globally configurable docker registry secret for seldon deployments #1923
    • Remove probesonly flag #1856
    • Use custom errors #1841
    • Allow mixed rest/grpc graphs in new golang based executor #1820

  • v1.5.1(Dec 19, 2020)

    v1.5.1

    Full Changelog

    Fixed bugs:

    • SC Operator continues to reconcile objects that are being (foreground) delete #2781
    • Custom metrics not available in Prometheus #2766
    • seldon-batch-processor on seldon-core-s2i-python37 image is not generating any output #2745
    • transport: is not respected on seldondeployment #2540
    • helm install results in wrong configmap #2528

    Closed issues:

    • Hard requirement in Tensorflow (API) on GRPCIO 1.32.x breaks Seldon Core #2787
    • istio request timeouts #2727
    • Document how to run python wrapper locally for development #2722
    • Swagger API needs to be upgraded following best practices #2669
    • Authentication support for ELK Logging #2300
    • Support for xgboost4j-spark 0.9 #1395

  • v1.5.0(Dec 4, 2020)

    v1.5.0 (2020-11-24)

    Full Changelog

    Fixed bugs:

    • Add a note in istio doc about pod security context for nonroot user #2686
    • KEDA notebook testing is broken #2683
    • [BUG] NodeSelector not working in SeldonDeployment #2682
    • minio setup notebook needs updating #2670
    • Alibi detect does not expose metrics when value is 0 #2668
    • Explainer wrapper should not add model to path for Tensorflow protocol #2664
    • Python processes when running Seldon with Gunicorn #2617
    • Update KEDA example to use v2.0 GA version #2614
    • Tutorial issues - CIFAR10 Drift Detection #2605
    • Conflict between gunicorn, gevent, and TensorFlow #2603
    • Fix notebook failing integration tests for sklearn and xgboost V2 #2589
    • Repeatedly logging [DEBUG] Closing connection. #2568
    • Not able to pass string as input to the predict function. #2553
    • Notebook test test_custom_metrics failing in master #2541
    • grafana-prom-import-dashboards pod always fails in seldon-core-analytics chart #2518
    • requestPath meta missing in new executor #2505

    Closed issues:

    • use service account for argo batch example #2673
    • Update metrics exposed by alibi detect server to include all newer components (threshold, etc) #2667
    • Add namespace to metrics component in seldon core python module #2666
    • Explore send_feedback path for tensorflow protocol #2665
    • More restricted deployment rbac for seldon-core #2662
    • GCP Workload Identity Support for GCS - Prepackaged Model Server #2654
    • document how to use custom init containers #2610
    • Grafana on Ambassador (Public DNS)? #2591
    • new knative filtering #2551
    • Support gRPC and HTTP protocols at the same time #2378
    • Allow Inference Graphs to mix Protocols with the Executor #2299
    • Initial / immediate term base infrastructure for stateful metrics with feedback (custom metrics naming, concurrency coherence, etc) #2272
    • Remove OAuth code from Seldon Client #1677

  • v1.4.0(Oct 26, 2020)

    v1.4.0 (2020-10-26)

    Full Changelog

    Fixed bugs:

    • protocols_example notebook is failing tests #2569
    • KEDA prom auto scale notebook is broken #2563
    • transport: is not respected on seldondeployment #2539
    • Add integration test for outlier detection server #2535
    • LibGL fix to be cherry picked for 1.3.1 #2534
    • ADServer Crashes due to updated dependencies #2533
    • Integration test failed due to alibi_explainer container #2529
    • Missing comma in the operator/config/manager/configmap.yaml #2526
    • alertmanager errors from prometheus #2525
    • Triton Server Image Incorrect #2511
    • Ensure all image errors are caught in the build script #2509
    • Update helm publishing Makefile for seldon-core-kafka chart removal #2502
    • Seldon does not work with Gunicorn async workers #2499
    • CI build/push failures on tfserving-proxy image are not included in exit values #2477
    • Explicitly define default requests and limits for engine container #2475
    • Fix broken documentation links #1760

    Closed issues:

    • add jsondata handling to req logger #2566
    • Seldon Core explainers to use alibi v0.5.5 #2562
    • Seldon Core 1.19 Kubernetes Support #2550
    • Add pidfile config for gunicorn #2546
    • Extend drift detector server (inside alibi detect server) to return metrics #2537
    • Update operator Redhat and OperatorHub integration #2532
    • Add requestPath back to the meta data #2531
    • easy to run out of disk with prometheus #2523
    • Support PDB specifications for SeldonDeployments #2508
    • Add KEDA support for seldon-core #2498
    • prometheus metrics for usage by seldondeployment #2483
    • Add SKLearn and XGBoost examples for MLServer / V2 Dataplane #2479
    • Support seldon-core running on knative serving like kfserving #2476
    • Triton server support with kfserving protocol #2460
    • Question about running grafana-prom for examples/kubeflow #2440
    • Data Science Metrics Core Update v1 #2397
    • configurable metrics port name for analytics #1809

  • v1.3.0(Sep 29, 2020)

    Changelog

    v1.3.0 (2020-09-29)

    Full Changelog

    Fixed bugs:

    • Unreadable notebook - sklearn_spacy_text_classifier_example.ipynb #2486
    • Alibi Detect Server libGL.so failed to find in image #2481
    • Seldon Docs failing on readthedocs #2455
    • 2 notebook tests failing #2454
    • SSL removed when executor multiplexing reverted #2447
    • JX master pipeline is failing to build and push images #2444
    • CI builds sklearn server with wrong sklearn version -> TestNotebooks.test_explainer_examples fails #2443
    • Install seldon-core-operator only working with old version #2438
    • Seldon Python Server memory leak in multithreading mode #2422
    • integration tests flakiness with TestPrepack.test_text_alibi_explainer #2408
    • integration tests fails: cannot import name 'Turkey' #2403
    • [doc] link to #Setup-Cluster is broken #2386
    • Unable to add more than one model in shadow deployment #2383
    • where is tfserving-mnist chart? #2372
    • semverCompare broken in some kubernetes flavours resulting in CRDs not being installed #2367
    • Revert multiplexing in the Executor #2364
    • Seldon pipeline crashes when there are a high volume of requests #2358
    • SeldonCore and random 'upstream connect error or disconnect/reset before headersupstream connect error or disconnect/reset before headers' errors on /predictions #2347
    • TerminationGracePeriodSeconds not respected in CRD #2332
    • remove trailing slash from graph metadata endpoint in docs #2322
    • Go version causes lint issues #2320
    • [doc] where is "seldon wrappers guideline"? #2307
    • Address flaky test test_model_template_app_rest_metrics_endpoint #2293
    • transform_output_raw not working. It gets referred to the transform-input endpoint when analysing logs #2277
    • kfctl 0.5.1 is not available anymore #2258
    • Default user ID is always set to 8888 #2142
    • duplicate tensorflow_model_server command between entrypoint and args using prepackaged inference server #2133
    • Existing Webhook Secret Clashes if own certificate provided when doing upgrade #2101
    • can only join a child process #2094
    • Ensure all model servers have pinned requirements and the full requirements.txt is included in the docs #2065
    • curl response of the example sklearn_iris_jsondata is "Unknown data type returned as payload (must be list or np array):None" #2063
    • upgrades briefly go to a Failed state before Available but work the whole time #2044
    • SeldonDeployment with just a shadow is allowed past validation #2022
    • Seldon Core operator Crashes when deployment with empty predictor is passed #2020
    • OpenAPI Validation for PredictiveUnits limited to 5 levels #1864
    • seldon-core-operator CRD's incompatible with K8s v1.18 #1675

    Closed issues:

    • Add selectorpath for /scale subresource in SeldonDeployment CRD #2485
    • Dependabot can't resolve your Go dependency files #2464
    • Question about seldon-controller-manager setup on kubeadm #2452
    • Dependabot can't resolve your Go dependency files #2445
    • Add Concepts page to Seldon Core docs #2433
    • option to scrape prometheus less often #2401
    • Add Flag in the routing protocol to skip further processing and return #2400
    • Using ArgoCD to deploy Seldon-core-operator shows Webhooks as OutOfSync #2392
    • Add link to alibi notebooks to reference how the explainer models are built #2371
    • complex graph only exposes endpoint for last mentioned container and does not pass forward the output of the parent model #2370
    • Support YAML for SeldonDeployment definition in examples under seldon-core/helm-charts #2362
    • Add TreeShap Explainer example #2361
    • Update operator & executor k8s libraries to 1.18 (or 1.19) #2360
    • Inference Graph Example #2331
    • Setting and using SELDON_ENVIRONMENT for Request Logging to use one ELK Cluster for multiple Environment #2328
    • Setting and using SELDON_ENVIRONMENT for Request Logging to use one ELK Cluster for multiple Environment #2327
    • extend metadata schema to provide a field for custom entries #2312
    • Remove pytest-runner dependency from setup.py #2303
    • Remove dependency on deprecated pytest-runner #2302
    • Authentication support for ELK #2298
    • Add documentation and example for feedback reward leveraging custom metrics #2271
    • publish 1.2.2 RedHat operator #2244
    • MLFlow Model on MinIO Not Loading #2213
    • helm charts documentation #2203
    • Add / Extend docs on seldon-core-microservice #2202
    • Add "seldon.io" prefix path to all kubernetes labels associated with Seldon #2187
    • Change docker build context for executor to speed up build process #2186
    • Upgrade Alibi Integrations #2160
    • SeldonDeployment explainer description #2144
    • Remove storage.py from python module #2140
    • Refactor logging in Executor #2090
    • Make the helm chart generator part of the release script #2072
    • Upgrade k8s client API to 1.18+ #1949
    • silence flask logs from prometheus probing python wrapper #1907
    • Update SeldonDeployment Helm charts #1879
    • Grafana Dashboard not updating the deployments #1854
    • Hyphens in names cause the service orchestrator to start a grpc server #1850
    • SKLearn version support too low #1813
    • Seldon core wrapper support for Spring 2 #1796
    • Align GPU TF Python Image requirements and structure #1789
    • Investigate test_model_template_app_grpc_metrics flakiness #1745
    • support runtime request tags / metrics in thread/process safe way #1735
    • Support NVIDIA/KFServing V2 Data Plane #1648
    • Multiplex Http and gRPC traffic #1628
    • Update Kubeflow Example to 1.0 #1509
    • azure-storage-blob package update to 12.1.0 #1371
    • DownwardAPI fails validation in CRD #926
    • Update CRD to be Structural #641

  • v1.2.3(Aug 14, 2020)

    Changelog

    v1.2.3 (2020-08-14)

    Full Changelog

    Fixed bugs:

    • Shadow model gets no traffic #2225
    • In kubeflow central dashboard we could not see the Manage Contributor menu #2223
    • Tensorflow session hangs in gunicorn worker process #2220
    • seldon operator giving error #2184
    • Python licenses change depending on the environment #2124

    Closed issues:

    • Add source mirroring for MPL licensed dependencies #2263
    • Is it possible to pass init parameters to Predictor class through seldon-core-microservice #2250
    • How to solve race conditions between two requests. #2240
    • Update to use KFserving 0.4.0 artifacts #2236
    • Add CVE checks as part of CI #2183
    • GCP Marketplace Release Update #1804
    • Add kubernetes labels to help with selectors #1405

  • v1.2.2(Jul 30, 2020)

    Changelog

    v1.2.2 (2020-07-28)

    Full Changelog

    Fixed bugs:

    • Alibi Detect Drift does not use batch #2194
    • Explainers are hardwired to seldon protocol #2185
    • Address build stability #2175
    • Seldon-batch-processor Issue #2173
    • Jenkins X Pipelines are not marked as finished #2148
    • Robustness of operator_upgrade notebook #2119
    • Unable to view feedback reward on Grafana dashboard #2115
    • MLFlowServer predict function ignores feature_names parameter #2113
    • Request logger drops incoming requests for traffic coming from a single model #2109
    • deep_mnist example: failed calling webhook "v1alpha2.mseldondeployment.kb.io" #2107
    • Duplicated Mutating Webhooks can Coexist without Notice #2103
    • READWRITEMANY does not work on GCP #2102
    • There is no spam.csv in dir examples/input_tranformer, the example input_tranformer doesn't work #2087
    • seldon-controller-manager crashing #2066
    • No module named 'sklearn.linear_model._logistic' when using the docker image seldonio/sklearnserver_rest:1.2.0 due to scikit-learn==0.20.3 #2059
    • python: Relink error in GPU image #2048
    • Seldon Azure Deep Mnist tutorial CrashLoopBackOff while creating pods #2043
    • Address flakiness of batch processing integration test #1985
    • "Empty Json Parameter in data" for model components in Spam Classifier Example #1938
    • ambassador helm chart deprecation warnings #1928
    • Tags created by components inside combiner don't propagate #1927
    • Duplicate ports defined in seldon-container-engine container #1799
    • idletimeout between envoy and executor #1797
    • Helm Chart - Seldon Core Analytics - extraEnv and VirtualService not working anymore. #1791

    Closed issues:

    • CNCF-Runtime discussion/presentation(?) #2181
    • Make Azure dependency optional #2168
    • How to pass a contract.json as curl request. I keep getting bad data when i send a contract.json file as curl. #2151
    • Serving local (host) model with the prepackaged TensorFlow server #2146
    • Update version of Jaeger in Python wrapper #2143
    • Add to docs clarification on Routing not available in executor #2139
    • Upgrade k8s.io dependencies in the Executor #2134
    • Upgrade knative.dev deps in Operator #2128
    • bump zap from v1.10.0 to v1.15.0 #2127
    • upgrade istio.io dependencies in operator #2126
    • make mock-classifier a RELATED_IMAGE for redhat operator #2118
    • Upgrade controller-runtime in Operator #2116
    • Upgrade Operator dependencies that can be bumped without problems #2098
    • Remove Executor's dependency on client-go #2092
    • Upgrade Operator version in Executor deps #2091
    • Upgrade Executor dependencies that can be bumped without problems #2089
    • Update dependencies of Operator and Executor #2088
    • request logger retries #2079
    • allow loading wrapped model from installed package #2068
    • Is there a way to specify URL of swagger-ui static resources instead of https://cdnsjs.cloudflare.com in a intranet k8s cluster #2067
    • Determine 1st and 2nd Dependencies for Go operator and executor #2061
    • support multiple named tensors in seldon protocol and seldon-core client #2049
    • Drop podinfo volume name backwards compatibility transition in 1.3 release #2024
    • Return pointer instead of value in SeldonApiClient methods #2014
    • Enable production mode in Python server by default #1993
    • update UPGRADING.md with new name of rolling image #1989
    • re-define noEngine annotation #1976
    • Ability to return all outputs from tensorflow serving grpc #1965
    • Allow to specify model version for tensorflow serving #1964
    • Automate license check in CI linting pipeline #1932
    • pass ServiceAccountName in predictor to prepackaged servers initContainer #1865
    • Using "required" field for key values in helm chart #1784
    • Update Ambassador Circuit Breaker Example to have parallel requests #1753
    • Enable production settings in loggers #1737
    • Create a benchmarking framework #1731
    • GRPC Auth problem with GCP IAP #1719
    • Serialization of pre-processing pipeline for CI/CD #1713
    • Add and example Notebook for Istio Setup and Integrations #1712
    • Seldon Build Permission Denied #1689
    • Autogenerate an OpenAPI spec and SDK #1682
    • GPU deadlock for pytorch models using the python wrapper #1662
    • Convert Request Logger Example Scripts into Helm Chart #1511
    • Flask threading bug when using with sockeye and mxnet #1498
    • Improve release notes #1471
    • Migrate tutorials to use kind instead of Minikube #1256

  • v1.2.1(Jul 1, 2020)

    Changelog

    v1.2.1 (2020-07-01)

    Full Changelog

    Fixed bugs:

    • upgrading from 1.1 to 1.2 with existing sdep leads to volume mount error #2017
    • Seldon Operator automatic update from v1.1.0 to v1.2.0 causes seldondeployment failures #2009
    • Changing predictor.replicas causes all deployment pods to be replaced #2008
    • requests per second from seldon_api_executor_cient_requests_count not right? #2004
    • Cannot connect to metrics port 6000 for custom models wrapped with s2i. #1988
    • Helm switch rbac.create=false does not fully work #1982
    • serviceAccount.name not used in Helm chart templates #1977
    • executor service not targeting to executor #1975
    • Issue in "seldon-container-engine" with MLFLOW_SERVER #1922
    • Permission denied while reading ./openapi/seldon.json in seldon-container-engine #1855
    • Allow custom pip dependencies in MLFLOW_SERVER #1547
    • Stop integration tests if setup fails #1417

    Closed issues:

    • upgrading notes for 1.2.1 #2051
    • Dependabot can't resolve your Go dependency files #2003
    • Set executor port from model deployment file #1974
    • Readiness probe failed seldon-container-engine while deploying the pipeline #1963
    • Potential dependency conflicts between seldon-core and opentracing #1867
    • Add documentation that explains how to configure the named port #1853
    • Built-in header based routing #1739
    • Update release process #1732
    • Include prepackaged servers in GA and version using version.txt #1726
    • Can't create resources under v1.18 #1678
    • Use zap.New instead of zap.Logger #1657
    • Wrong package for YAML #1609
    • Batch Processing Exploration for Seldon Core #1413
    • Offline Batch Integration #1391
    • Add Health and Status endpoints to grpc spec #1387

  • v1.2.0(Jun 18, 2020)

    Changelog

    v1.2.0 (2020-06-18)

    Full Changelog

    Implemented enhancements:

    • request logger to support bearer token auth #1773
    • Feature request: Provide a metadata endpoint with custom metadata #319

    Fixed bugs:

    • Golang Service Orchestrator (Executor) fails to replace variables since no longer runs root #1955
    • seldon-single-model chart doesn't install #1942
    • How to access rolling accuracy metrics? #1926
    • curl command on seldon core overview doesn't work #1918
    • can't change analytics nodeExporter port #1916
    • Empty json parameter in data #1912
    • Multipart form data not passed between chain of seldon components #1899
    • Integration tests (test_mlflow) failing on Master #1896
    • Metrics notebook broken (gRPC) #1886
    • Helm chart seldon-manager role missing permissions for createResources: true #1885
    • Document Mistake about Endpoint Testing #1881
    • Routers not working with new Executor #1876
    • Notebook test failures #1869
    • Grafana Dashboard #1840
    • Publish new core builder image #1828
    • Feedback method is not handled in the engine's REST API #1825
    • Cant use the endpoint of kubernetes service created by the model #1824
    • Logs don't have information of which predictor they come from #1817
    • Download specific Helm version in core-builder/Dockerfile #1811
    • Fluentd Helm Chart update breaks logs integration #1780
    • Seldon Core v1.1.0 helm chart has outdated seldon-config configmap #1779
    • Metrics executor local sample REST request fails #1770
    • Add XSS patches to executor #1766
    • Executor sample gRPC local logger fails #1763
    • model server pods not created if controllerId set? #1758
    • Can't have a colon in the registry name with the Seldon-Core Python Package #1756
    • failed calling webhook #1742
    • Executor container not exposing core metrics #1729
    • problem configuring S3 access from MLFlow server #1727
    • seldon.io/rest-read-timeout for ambassador doesn't take effect in executor 1.1.0? #1724
    • createResources flag in helm chart looks backwards #1723
    • Incorrect seldon-core package version in 1.1.0 builder image #1718
    • custom tags in release 1.1.0 not working #1715
    • Scrape related annotations only in deployment and not in services caused prometheus to ignore seldon analytics metric endpoints #1705
    • Update swagger to work in executor and be up to date #1703
    • web client metrics not working #1685
    • com.google.protobuf.InvalidProtocolBufferException: com.google.gson.stream.MalformedJsonException: Unterminated array at line 1 column 32772 path $.data.ndarray[1] at io.seldon.engine.pb.JsonFormat$ParserImpl.merge(JsonFormat.java:1132) ~[classes!/:0.5.1-SNAPSHOT] at io.seldon.engine.pb.JsonFormat$Parser.merge(JsonFormat.java:313) #1680
    • Unable to use HPA with a pre-packaged "implementation:TENSORFLOW_SERVER" model due to "tfserving" container not having requests/limits defined #1676
    • Executor build failures #1663
    • Java wrapper do not accept application/json requests #1603
    • Images created with s2i don't work as explainers when added as containerspec #1549
    • Engine Thread Pool Configuration #1490
    • test_no_permission_bucket_gcs fails when run locally #1364
    • Benchmarking notebook fails with "Deployment not found" error #1177

    Closed issues:

    • From Kubeflow to Seldon Documentation #1961
    • Update Payload Logger to use V1 Cloudevents #1958
    • Make Operator and Executor Dockerfile executable dynamically linked #1951
    • Notebook Test Updates #1948
    • Update Licenses for 1.2 Release #1947
    • add managed-by label #1936
    • extend metadata to cover seldon message #1933
    • Seldon-core-microservice-tester command available for ambassador endpoint #1914
    • seldon-core-manager pod is getting restarted #1910
    • /metrics endpoint to return a json file #1905
    • Is /metrics endpoint accessible #1901
    • How seldon custom metrics work? Can it be seen on grafana? #1898
    • Ensure Upgrading to 1.2.0 complete #1888
    • Multiplex HTTP and gRPC on the operator #1887
    • Add back Java engine's integration tests #1878
    • option for request logging to go to a single namespace #1859
    • track cloudevents version #1857
    • Ensure all documentation links are relative #1851
    • update model metadata notebook example #1829
    • Kind test script Istio install failing with kubectl version v1.18.x #1802
    • Better names for metric ports of SeldonDeployment containers #1798
    • Create new Release Pipeline in Jenkins X that works on separate branch #1793
    • To pass customized input json without the mandatory jsonData parameter #1792
    • Add a section in the python docs to talk about environment.yaml for conda #1788
    • Get timeout error on seldon-container-engine if I set GUNICORN_WORKERS env #1777
    • Add explainer image version to configmap #1776
    • Allow access to headers in python wrapper #1769
    • Multiplex HTTP and gRPC on the executor #1762
    • GCP Marketplace Update CRD Create #1755
    • Investigate test_model_template_app_grpc_metrics flakiness #1745
    • docs: have a single notebook that helm install minio #1740
    • RedHat version of core servers #1734
    • Pachyderm integration #1733
    • provide a graph-level model metadata #1728
    • RedHat Operator Updates for 1.1.0 #1710
    • Add docs for istio annotations #1707
    • log seldon-core and wrapper version after microservice start #1704
    • Running release script will remove chart comments #1690
    • add model metadata support #1638
    • allow to use Java engine if helm installation defaults to executor #1607
    • ensure we're using open source ambassador in examples #1581
    • High number of concurrent users causing Seldon API to timeout #1558
    • Add support to add ambassador circuit-breaker in seldon deployment #1556
    • Move notebook tests to separate test command #1538
    • com.google.protobuf.InvalidProtocolBufferException: null #1532
    • ENGINE_CONTAINER_SERVICE_ACCOUNT_NAME and EXECUTOR_CONTAINER_SERVICE_ACCOUNT_NAME is picking a default value #1508
    • UBI version of the latest Seldon operator image #1482
    • Request metadata: discussion and design #1480
    • Send back PUID on response #1460
    • Support custom client protobufs in grpc server #1420
    • Alibi:Detect Outlier Detection Integration #1393
    • Define future approach to metadata and metrics with new Executor #1362
    • Update outlier detection examples #1288
    • make ambassador shadow consistent with istio #1109
    • can we expose health check api in internal python service similar to what we have in external like /ping #770
    • Decouple tensorflow package from the base python seldon-core image. #564
    • intermittent UNIMPLEMENTED error from ambassador #473
    • conda-forge integration #341
    • Profile the CRD operator #167

  • v1.1.0(Apr 16, 2020)

    v1.1.0 (2020-04-16)

    Fixed bugs:

    • Client Go library for istio not backwards compatible #1695
    • Change executor timeouts #1691
    • Should only set istio route weight when seldon traffic field value > 0 #1658
    • [document] Seldon Core Roadmap is missing #1646
    • Clean up of VirtualServices active for non-istio scenarios #1631
    • Explainer prediction errors when spec-name or predictor-name change #1626
    • Explainer example notebook use v1alpha2 CRDS #1622
    • cifar10 custom logger not recording dataType #1617
    • wrapper compatibility table not visible in sphinx-rendered docs #1612
    • url with tensorflow protocol #1611
    • Ensure SNAPSHOT builds contain a unique identifier (hash, date, etc.) #1605
    • PrePackaged Server deployment failure #1599
    • Duplicated VirtualServices entries #1594
    • Empty explainers on old deployments #1593
    • Inference graph becomes unavailable with no error in logs #1584
    • notebook links dead in mdincluded markdown files #1583
    • Changing names of componentSpecs causes reconcile error #1571
    • Different behaviors between seldon-engine v0.4.1 and v1.0.2 #1569
    • seldon-core-crd helm chart is out of date #1563
    • Add labels from componentSpecs.metadata to deployments #1561
    • Istio creates different path to Ambassador for explainer #1548
    • Deployments created by Seldon are apps/v1 but HPA scaleTargetRef is extensions/v1beta1 #1533
    • Invalid JSON: Unexpected type for Value message when passing tuples in json_data #1527
    • Engine has multiple Makefiles and Dockerfiles #1521
    • Reduce flakiness on rolling update tests #1513
    • minio example #1506
    • missing python dependencies for integration tests #1504
    • Error in SeldonClient in debugger #1499
    • Jenkins X integration broken #1484
    • executor: tags do not propagate through inference graph #1474
    • seldonio/seldon-core-executor image versions need updating #1469
    • Seldon Core Deployment Istio virtualservice is only created after the pod is ready, resulting in the service being unavailable until it propagates #1461
    • Predictive Unit/Deployment Environment Variables not set by operator #1449
    • Errors in seldon_core_setup notebook #1440
    • Errors in helm_examples notebook #1436
    • Errors in explainer_examples notebook #1434
    • Executor version not updated in Kustomize during release #1432
    • Seldon controller crashes when submitting deployer #1418
    • Flakiness on integration tests #1402
    • operator/helm/create_templates.sh generated files can have order of sections different on different machines #1291

    Closed issues:

    • logger to identify single-element numeric reqs or responses as 'number' #1699
    • Add Outlier and Drift examples to Seldon Core #1692
    • Create Alibi Detect Server #1667
    • Request for Java version of Seldon Grpc Client #1656
    • standardise requests in req logger #1644
    • SeldonClient doesn't currently work with explainers on gateway #1627
    • API group should be "machinelearning.seldon.io" not "machinelearning" #1615
    • Remove old docs folder #1610
    • Clarify for how long Java engine orchestrator will be supported #1608
    • Document s2i wrapper compatibility with Seldon Core 1.1 #1601
    • Be able to scale SeldonDeployment objects using kubectl #1598
    • Default Engine Java Opts exposes Vulnerability #1597
    • [document request] Add more technical details about service orchestrator #1596
    • Inquiry - Java JMX Server Security Vulnerability #1595
    • Add Scale SubResource to CRD #1592
    • Make inclusion of metrics in SeldonMessage configurable in 1.1 #1582
    • Passing status code back from executor service orchestrator #1574
    • Separate Jupyter Notebook Tests from E2E tests to improve speed of tests #1567
    • Avoid logging that custom_tags or custom_metrics is not implemented on every request #1565
    • Document compatibility of wrapper versions with seldon engine versions #1564
    • Surface reconcile errors as Events on the SeldonDeployment #1560
    • New protocol and transport fields should be moved to higher level in CRD #1552
    • Upgrade process from v1.0.2 #1551
    • Upgrade Blogs Section in Documentation #1550
    • seldon-core-analytics should not use hostPort #1539
    • The variable envSecretRefName currently has to be set at the deployment level but should have a global default for seldon-core #1530
    • OpenShift 4.3 compatibility for Seldon-Operator 0.1.5 #1524
    • Add doc for all images used by Core #1515
    • request logging prominence in docs #1514
    • Change TFServing image in helm chart to point to specific version instead of latest #1512
    • Add top level registry prefix to Helm Charts for custom registries #1510
    • Allow for default Seldon Core request logger endpoint to be set on helm chart #1501
    • Support multiple Istio gateways in Seldon #1500
    • Change explainer type to pointer in go struct #1485
    • Prediction API should define a dedicated image type #1478
    • Support Seldon Core v1 in Operator Framework Community Operator #1477
    • Expose /metrics prometheus endpoint in python wrapper #1476
    • Add breaking changes for 1.1 to docs #1475
    • Clean up examples #1467
    • failed calling webhook "mutating-create-update-seldondeployment.seldon.io": error #1462
    • Clean up notebook examples #1456
    • Remove benchmark_simple_model notebook #1455
    • executor tracing enhancement #1444
    • Sync s2i wrapper image versions with Seldon Core releases #1439
    • Ambassador setup fails to complete on minikube #1435
    • Seldon v1.0 CRD uses a new (k8s 1.15) feature that is not backward compatible with k8s versions <= 1.14 #1431
    • Cherrypick PRs for 1.0.2 Release #1429
    • Document custom pre-packaged model servers #1416
    • new v1.0.0 CRD's resources validator is not backward compatible #1410
    • Add kubernetes labels to help with selectors #1405
    • Support for Xgboost 0.9 #1394
    • Updated Seldon Data Plane #1389
    • Add annotation support for executor #1384
    • Possible enhance ambassador customization in seldon deployment #1378
    • Seldon controller pod crashes on deploy when no internet connection available #1374
    • request logging for shadows #1207
    • Integration tests for tracing #977
    • Custom errors raised in Python microservice don't make it back to the client #939
    • Prometheus nodes get evicted, limits could be added to avoid this #934
    • Scaling of SeldonDeployments #884
    • tensorflow-gpu #602
    • Reduce size of images created by Python wrapper #312
    • gRPC metrics not exposed to prometheus #302
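    Several items above touch the optional Python-wrapper hooks — e.g. #1565 (avoiding per-request logging when custom_tags/custom_metrics are not implemented) and #1582 (configurable metrics in SeldonMessage). As a minimal sketch, a wrapped model opts into these hooks simply by defining tags() and metrics() methods on the model class; the model logic and values below are illustrative, not taken from the codebase:

    ```python
    class ModelWithMetrics:
        """Sketch of a Seldon Python-wrapper model exposing the optional
        tags()/metrics() hooks. The hook names follow the wrapper's
        convention; the returned values here are illustrative."""

        def predict(self, X, features_names=None):
            # Identity model: echo the input back.
            return X

        def tags(self):
            # Custom tags merged into the response's meta section.
            return {"model_version": "v0.1-demo"}

        def metrics(self):
            # Custom metrics emitted alongside each prediction.
            return [{"type": "COUNTER", "key": "predict_calls_total", "value": 1}]
    ```

    When these methods are absent the wrapper falls back to defaults, which is what made the repeated "not implemented" log lines in #1565 so noisy.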

  • v1.0.2(Feb 18, 2020)

    v1.0.2 (2020-02-17)

    Merged pull requests:

    • Operator 3rd party licences #1367
    • added knobs in operator helm chart to control manager resources #1407
    • Ensure unique names for webhooks #1408
    • Kubeflow manifest changes #1414
    • Update resources to larger defaults for operator #1428
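    Two of the PRs above (#1407, #1428) concern controlling the operator manager's resource requests and limits through the Helm chart. As a rough sketch of what such a values override can look like — the key names below are an assumption for illustration, not copied from the chart's values.yaml:

    ```yaml
    # Hypothetical values.yaml override for the seldon-core-operator chart.
    # Key names are illustrative; check the chart's own values.yaml.
    manager:
      resources:
        requests:
          cpu: 100m
          memory: 200Mi
        limits:
          cpu: 500m
          memory: 500Mi
    ```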
  • v1.0.1(Jan 15, 2020)

    v1.0.1 (2020-01-15)

    Fixed bugs:

    • operator CI build fails #1330
    • Remove log4j from H2O example #1318
    • cert-manager version #1262

    Closed issues:

    • Old pods not removed after rolling update #1325
    • seldonio/seldon-core-s2i-python3-tf-gpu:0.15 image default python version is 2.7.17 #1324
    • send_feedback() got an unexpected keyword argument 'routing' #1321
    • Helm 2 install fails for v1.0 #1299
    • Clean up API types and Webhooks #1294
    • Include seldon proto compilation in GO #1245
    • helm upgrade when the operator configmap has changed? #1135
    • End-to-end tests for Pre-packaged model servers hang if name doesn't match exactly #820
    • S2I builder images should have pinned Python dependencies #340

  • v1.0.0(Dec 18, 2019)

    v1.0.0 (2019-12-18)

    Fixed bugs:

    • Certificate not added for all CRD versions in helm chart #1275
    • Webhook not updated for multi version handling #1274
    • Ambassador 404s on Canary Test #1271
    • Deleting hpaSpec in a seldon deployment doesn't delete the HPA object #1263
    • CVE-2019-5482 and CVE-2019-18224 #1261
    • ambassador version #1260
    • No module named 'tensorflow.examples.tutorials' with tensorflow 2.0 #1248
    • SeldonDeployment stuck on creating when an environment variable is a reference #1211
    • cat.json missing from Explainer notebook #1178
    • Analytics charts broken in K8s v1.16 #1176
    • Helm install issues on k8s 1.16.2 #1095
    • Model initializer does not work with S3 #885

    Closed issues:

    • Namespace operator e2e tests #1281
    • Istio integration requires sidecar injection #1273
    • Serialization exception when using the internal prediction API endpoint #1241
    • seldon-core 1.0 release prep #1239
    • Webhook Selectors only available in k8s >= 1.15 #1233
    • duplication of seldon_core_setup notebook #1232
    • crd issue on 1.16 #1225
    • Centralised logging not working on k8s 1.16 and helm 3 #1224
    • Port binding in Java wrapper clashes with the engine #1223
    • uuids in request logs #1209
    • User "system:serviceaccount:kubeflow:pipeline-runner" cannot create resource "seldondeployments" #1205
    • Add PR comment for build pipeline #1200
    • Fix docs install reference to 0.5.0 #1196
    • Unschedulable: pod has unbound immediate PersistentVolumeClaims #1191
    • Text batch data is not split into multiple requests by request logger #1189
    • latest snapshot not working in GKE #1173
    • local e2e don't build operator image #1171
    • e2e tests broken in master #1164
    • Move Go wrapper to incubating #1157
    • Update examples in line with Helm v3 #1154
    • Update Install docs in line with Helm v3 #1153
    • Add triage label to new issues #1152
    • add prepackaged model server pvc example #1150
    • running end to end tests on local machine #1147
    • Can we attach pvc to model-initializer(storage-initializer) ? #1146
    • Standardise Data Mappings in Seldon Wrapper #1145
    • Wrong types in Seldon Core user methods #1144
    • Create operator client-set #1141
    • Investigate impact of helm v3 in seldon-core #1140
    • Seldon Core Operator Restricted to Single Namespace #1139
    • Need to update integration test script with helm 3.0 version #1138
    • Can not control retry_policy from SeldonDeployment yaml file #1137
    • [Improvement] Obscure service name when deploy my yaml #1128
    • Jenkins X currently creates a new changelog tag / version every time a PR is landed #1124
    • strData requests are not printed by seldon request logger #1121
    • adding canary, shadow or explainer shouldn't affect main predictor #1110
    • Unable to mount model from PVC into tf serving prepackaged model server #1106
    • Remove Travis Integration #1105
    • Move CRD to v1 #1100
    • Upgrade Maven and JDK on CI image #1094
    • Fix sporadic failures with e2e tests #1084
    • Inconsistent return value for explain method in SeldonClient #1083
    • Refactoring handling httpResponse in Java Engine #1075
    • following tutorial got 503 #1073
    • Validation fails if componentSpecs.metadata.creationTimestamp is not specified #1061
    • Move non 1.0 components to incubating folders #990
    • Improve PrePackaged Model Servers #959
    • use fixed version for model initialiser image #957
    • Helm Upgrade Process #890
    • Dynamic Engine version support at Seldon operator #871
    • Integrate GPU Seldon Core Image into Build Scripts #868
    • Custom prepackaged model servers #857
    • Feature request: Python Seldon Client: support sending gRPC data with meta field #821
    • Allow access to puid within the predict API #795
    • Installing seldon-core-operator requires clusterwide RBAC and should be installed by a cluster admin #670
    • Update notebooks to refer to seldon install rather than include code #646

  • v0.5.1(Nov 21, 2019)

    v0.5.1 (2019-11-21)

    Fixed bugs:

    • Operator crash if one container in pod not created properly #1104
    • MLFLOW_SERVER ModuleNotFoundError #828

    Closed issues:

    • Engine using separate pod, ignoring annotations #1120
    • How to pass ModelInitializerContainerImage path #1116
    • how to define MyModel for PyTorch model which has more than one arguments? #1115
    • how can I wrap a PyTorch Image with customed network? #1114
    • Seldon AB testing - getting an error "info": "Parameter 'ratioA' is missing." #1081
    • remove old CRD generation #1079
    • Broken integration tests #1076
    • Cannot create a SeldonDeployment having volumes of type projected and configMap. #1072
    • DOCKER_IMAGE_VERSION in Makefile is only 0.3 and 0.4; public docker images go up to 0.13 #1071
    • SeldonDeployment modelUri support more protocol #1070
    • Your .dependabot/config.yml contained invalid details #1045
    • Add tox #1042
    • Broken imports on 0.5.0.2 release #1040
    • Improve speed of execution for integration tests #1032
    • Integration tests fail intermittently #1031
    • Major version changes on dependencies could cause issues #1029
    • with the latest release of azure-storage-blob 12.0.0, seldon build fails #1027
    • Extend documentation to include optional dependencies and azure blob quickfix #1025
    • Update release.py script to change version in kustomization.yaml #1024
    • Set upper limit for azure-storage-blob version #1023
    • Make GCS support optional #1018
    • Update Talks/Blogs/Videos/Use cases in Docs #1017
    • No available release name found for Seldon-Core-Operator #1014
    • Move API path from v0.1 to v1.0 #991
    • Fully automate CI/CD process and introduce manual trigger for release process #986
    • Use official Helm charts for Grafana and Prometheus #965
    • Set JSON_SORT_KEYS and JSONIFY_PRETTYPRINT_REGULAR to False by default #964
    • Enable Storage.py to be able to download from HDFS URL #963
    • Fix helm version in Chart.yaml #961
    • Modify engine's Proto Value to JSON conversion to avoid int-to-float conversions in REST requests #948
    • Code style standardisation for Seldon Core Python modules #947
    • Automated Test and Build Hooks #933
    • Seldon Image at "seldonio/seldon-core-s2i-python3-tf-gpu:0.12-SNAPSHOT" not able to find GPU devices #914
    • Update Java dependencies #902
    • Create 1.0 GA Document #887
    • Enable Storage.py to be able to download from URL file #883
    • Seldon container engine has not resource limit #769
    • Shadow deployment for Istio #741
    • Java Seldon Engine doesn't pass microservice HTTP exceptions upstream #705
    • Feature_names is redundant when using jsonData in predict. Also meta is redundant in response, as meta can be combined in the jsonData. #665
    • Modify helm chart seldon-core-analytics using chart dependencies #613
    • Update Python SeldonClient to handle JSON payloads #607
    • expand contributing guide #569
    • requirements.txt for Python wrapped models should be a configurable name/path #548
    • Docker Image is too Big #526
    • Deployment of seldon as a new custom resource via fabric8 #486
    • Update developer docs #203
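    Issues #1115 and #1114 above ask how to wrap PyTorch models whose forward pass takes more than one argument. A common pattern (sketched here without PyTorch; the class name and split point are illustrative) is to keep the wrapper's single predict(X, features_names) signature and split the columns of X inside the model class:

    ```python
    class MultiInputModel:
        """Sketch of a Python-wrapper model whose underlying network
        expects two inputs. Column split point is illustrative."""

        def __init__(self, split_at=2):
            # First `split_at` columns feed input A, the rest input B.
            self.split_at = split_at

        def predict(self, X, features_names=None):
            # Split each row into the two argument groups the network expects.
            a = [row[: self.split_at] for row in X]
            b = [row[self.split_at :] for row in X]
            # A real model would call e.g. self.net(a_tensor, b_tensor) here;
            # concatenating back is just a stand-in for that call.
            return [x + y for x, y in zip(a, b)]
    ```

    The design point is that the REST/gRPC payload stays a single tensor, so no change to the wrapper API is needed to serve a multi-input network.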

  • v0.5.0(Nov 1, 2019)

    v0.5.0 (2019-11-01)

    Closed issues:

    • Update logback #1007
    • seldon-core-operator fails to install on Kubernetes 1.16 #1004
    • Custom error raised in Python model was not passed back to the client #974
    • Models with multiple input types are not supported #921

    Merged pull requests:

    • Removed the hash as it was crashing the command when the pipeline was run #1022 (axsaucedo)
    • Add pre-commit hook for black and fix linter #1020 (adriangonz)
    • Added documentation on how to support Models with multiple input types in python wrapper #1015 (axsaucedo)
    • Move from logback to log4j2 #1008 (adriangonz)
    • Adding functionality for running e2e tests in Jenkins X #994 (axsaucedo)