Kubeflow is a machine learning (ML) toolkit that is dedicated to making deployments of ML workflows on Kubernetes simple, portable, and scalable.

Overview

Overview of the Kubeflow Pipelines service

Kubeflow pipelines are reusable end-to-end ML workflows built using the Kubeflow Pipelines SDK.

The Kubeflow Pipelines service has the following goals:

  • End-to-end orchestration: enabling and simplifying the orchestration of end-to-end machine learning pipelines.
  • Easy experimentation: making it easy for you to try numerous ideas and techniques and manage your various trials/experiments.
  • Easy re-use: enabling you to re-use components and pipelines to quickly assemble end-to-end solutions without having to rebuild each time.

Installation

  • Install Kubeflow Pipelines using one of the options described in Installation Options for Kubeflow Pipelines.

  • โญ [Alpha] Starting from Kubeflow Pipelines 1.7, try out Emissary Executor. Emissary executor is Container runtime agnostic meaning you are able to run Kubeflow Pipelines on Kubernetes cluster with any Container runtimes. The default Docker executor depends on Docker container runtime, which will be deprecated on Kubernetes 1.20+.

Documentation

Get started with your first pipeline and read further information in the Kubeflow Pipelines overview.

See the various ways you can use the Kubeflow Pipelines SDK.

See the Kubeflow Pipelines API doc for API specification.

Consult the Python SDK reference docs when writing pipelines using the Python SDK.

Refer to the versioning policy and feature stages documentation for more information about how we manage versions and feature stages (such as Alpha, Beta, and Stable).

Contributing to Kubeflow Pipelines

Before you start contributing to Kubeflow Pipelines, read the guidelines in How to Contribute. To learn how to build and deploy Kubeflow Pipelines from source code, read the developer guide.

Kubeflow Pipelines Community Meeting

The community meeting happens every other Wednesday, 10-11 AM (PST). Calendar Invite or Join Meeting Directly

Meeting notes

Kubeflow Pipelines Slack Channel

#kubeflow-pipelines

Blog posts

Acknowledgments

Kubeflow Pipelines uses Argo under the hood by default to orchestrate Kubernetes resources. The Argo community has been very supportive, and we are very grateful. A Tekton backend is also available; to use it, refer to the Kubeflow Pipelines with Tekton repository.

Issues
  • [Multi User] failed to call `kfp.Client().create_run_from_pipeline_func` in in-cluster Jupyter notebook

    What steps did you take:

    In a multi-user-enabled environment, I created a notebook server in the user's namespace, launched a notebook, and tried to call the Python SDK from there. When I execute the code below:

    pipeline = kfp.Client().create_run_from_pipeline_func(mnist_pipeline, arguments={}, namespace='mynamespace')
    

    What happened:

    The API call was rejected with the following errors:

    ~/.local/lib/python3.6/site-packages/kfp_server_api/rest.py in request(self, method, url, query_params, headers, body, post_params, _preload_content, _request_timeout)
        236 
        237         if not 200 <= r.status <= 299:
    --> 238             raise ApiException(http_resp=r)
        239 
        240         return r
    
    ApiException: (403)
    Reason: Forbidden
    HTTP response headers: HTTPHeaderDict({'content-length': '19', 'content-type': 'text/plain', 'date': 'Tue, 01 Sep 2020 00:58:39 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '8'})
    HTTP response body: RBAC: access denied
    

    What did you expect to happen:

    A pipeline run should be created and executed

    Environment:

    How did you deploy Kubeflow Pipelines (KFP)?

    I installed KFP on IKS with multi-user support.

    • KFP version: v1.1.0
    • KFP SDK version: v1.0.0

    Anything else you would like to add:

    /kind bug

    kind/feature 
    opened by yhwang 127
  • WIP: test oss prow configuration

    opened by Bobgy 83
  • feat(compiler): add dsl operation for parallelism on sub dag level

    Description of your changes: This PR adds parallelism limits for sub-DAGs. It is a continuation of https://github.com/kubeflow/pipelines/pull/4149, which relates to the issue. Checklist:

    • [ ] The title for your pull request (PR) should follow our title convention. Learn more about the pull request title convention used in this repository.

      PR titles examples:

      • fix(frontend): fixes empty page. Fixes #1234 Use fix to indicate that this PR fixes a bug.
      • feat(backend): configurable service account. Fixes #1234, fixes #1235 Use feat to indicate that this PR adds a new feature.
      • chore: set up changelog generation tools Use chore to indicate that this PR makes some changes that users don't need to know.
      • test: fix CI failure. Part of #1234 Use part of to indicate that a PR is working on an issue, but shouldn't close the issue when merged.
    • [ ] Do you want this pull request (PR) cherry-picked into the current release branch?

      If yes, use one of the following options:

      • (Recommended.) Ask the PR approver to add the cherrypick-approved label to this PR. The release manager adds this PR to the release branch in a batch update.
      • After this PR is merged, create a cherry-pick PR to add these changes to the release branch. (For more information about creating a cherry-pick PR, see the Kubeflow Pipelines release guide.)
    lgtm approved size/L cla: yes 
    opened by NikeNano 70
  • Multi-User support for Kubeflow Pipelines

    [April/6/2020] Latest design is in https://docs.google.com/document/d/1R9bj1uI0As6umCTZ2mv_6_tjgFshIKxkSt00QLYjNV4/edit?ts=5e4d8fbb#heading=h.5s8rbufek1ax

    Areas we are working on:

    • [x] [Frontend] Deploy ui artifact service for each namespace https://github.com/kubeflow/pipelines/issues/3554
    • [x] [Frontend/Backend] Deploy visualization service for each namespace https://github.com/kubeflow/pipelines/issues/2899
    • [x] [Backend] Use experiment for resource boundary for child resource CRUD https://github.com/kubeflow/pipelines/issues/2397
      • [x] Experiment https://github.com/kubeflow/pipelines/issues/3273
      • [x] Run https://github.com/kubeflow/pipelines/issues/3336
      • [x] Job https://github.com/kubeflow/pipelines/issues/3344
    • [x] [Frontend/SDK/Backend] Skip specify namespace for CreateRun APIs https://github.com/kubeflow/pipelines/issues/3290
    • [x] [Deployment] Enable MLMD functionality in multi-user mode https://github.com/kubeflow/pipelines/issues/3292
    • [x] [Frontend] Block non public api from frontend (e.g. report api) in multi-user mode https://github.com/kubeflow/pipelines/issues/3293
    • [x] [Frontend/Controller] Launch Tensorboard in user's namespace https://github.com/kubeflow/pipelines/issues/3294
    • [x] [Frontend] Pass namespace as a parameter for experiment API https://github.com/kubeflow/pipelines/issues/3291
    • [x] [Frontend] Pass namespace as a parameter for run API https://github.com/kubeflow/pipelines/pull/3351
    • [x] [Frontend] UI should react when user changes namespace https://github.com/kubeflow/pipelines/issues/3296
    • [x] [SDK] Pass namespace as a parameter for experiment APIs https://github.com/kubeflow/pipelines/pull/3272
    • [x] [Deployment] KFP profile controller that configures KFP required resources in each user's namespaces https://github.com/kubeflow/pipelines/issues/3420
    • [ ] [Test] Postsubmit test for multi user e2e scenario https://github.com/kubeflow/pipelines/issues/3288
    • [ ] [Test] Backend integration tests for multi-user scenarios https://github.com/kubeflow/pipelines/issues/3289
    • [ ] [Test] Network auth integration tests https://github.com/kubeflow/pipelines/issues/3646
    • [x] [Deployment] Make user identity header configurable #3752
    • [x] [Doc] documentation on kubeflow.org #4317

    Release

    • [x] How do we release KFP multi user mode? https://github.com/kubeflow/pipelines/issues/3645
    • [x] Multi user mode early access release #3693
    • [x] [Deployment] Merge changes to upstream kubeflow repo https://github.com/kubeflow/pipelines/issues/3241
    • [x] Integrate with platforms other than GCP https://github.com/kubeflow/manifests/issues/1364

    Areas related to integration with Kubeflow

    • [ ] [Central Dashboard] Manage contributors for all namespaces I own https://github.com/kubeflow/kubeflow/issues/4569
    • [x] [Central Dashboard] Support login to Kubeflow cluster without creating his/her namespace for a non-admin contributor https://github.com/kubeflow/kubeflow/issues/4889
    • [ ] [Profile CRD] Support more than one owner of a profile CR https://github.com/kubeflow/kubeflow/issues/4888
    • [ ] [Profile CRD] Support updating the owner of a profile https://github.com/kubeflow/kubeflow/issues/4890

    =============== original description

    Some users have expressed interest in isolation between the cluster admin and the cluster user: the cluster admin deploys Kubeflow Pipelines as part of Kubeflow in the cluster, and the cluster user can use Kubeflow Pipelines functionality without being able to access the control plane.

    Here are the steps to support this functionality.

    1. Provision the control plane in one namespace, and launch Argo workflow instances in another
      • provision the control plane in the kubeflow namespace, and Argo jobs in namespace FOO (parameterization)
      • the API server should update the incoming workflow definition to namespace FOO. Sample code showing how the API server modifies the workflow
    2. Currently all workflows run under a clusterrole, pipeline-runner (definition), which is specified during compilation (link). Instead, the workflows should run under a role rather than a clusterrole.
      • change pipeline-runner to a role, and specify the namespace during deployment (exposed as a deployment parameter)
      • the API server should update the incoming workflow definition to use the pipeline-runner role.
    3. Cluster users can access the UI through an IAP/SimpleAuth endpoint, instead of port-forwarding.
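    The role change in step 2 can be sketched as a namespaced Role plus RoleBinding. This is a hedged illustration only; the resource names, namespace, and rules below are assumptions, not the project's actual manifests:

    ```yaml
    # Illustrative sketch: a namespaced Role replacing the pipeline-runner
    # ClusterRole, bound to the workflow ServiceAccount in namespace FOO.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pipeline-runner
      namespace: foo              # the user namespace where Argo workflows run
    rules:
      - apiGroups: [""]
        resources: ["pods", "pods/log", "secrets", "persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: ["argoproj.io"]
        resources: ["workflows"]
        verbs: ["get", "list", "watch", "update", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: pipeline-runner-binding
      namespace: foo
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: pipeline-runner
    subjects:
      - kind: ServiceAccount
        name: pipeline-runner
        namespace: foo
    ```

    A Role scopes the permissions to the single namespace, which is the isolation property the issue asks for, whereas the current ClusterRole grants them cluster-wide.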
    help wanted priority/p1 area/frontend area/backend kind/feature area/wide-impact status/triaged 
    opened by IronPan 67
  • Support for non-docker based deployments

    Do you think it would be possible to support non-Docker-based clusters as well? I'm currently checking out the examples and see that they want to mount docker.sock into the container. We might achieve the same results when using crictl. WDYT?

    priority/p1 area/development 
    opened by saschagrunert 63
  • feat(backend): Added multi-user pipelines API. Fixes #4197

    Added namespaced pipelines, with UI and API changes, as well as the ability to share pipelines.

    Fixes: https://github.com/kubeflow/pipelines/issues/4197

    Description of your changes:

    • Added a new field in Pipelines table for namespace.
    • Uploaded Pipelines are by default namespaced.
    • Ability to share Pipelines by selecting "shared" check-mark in the UI.
    • Authorization via SubjectAccessReview for Pipelines, PipelinesVersions, and Upload Pipelines endpoints.

    Authors: @arllanos @maganaluis

    lgtm approved size/XL ok-to-test cla: yes 
    opened by maganaluis 44
  • Configure Renovate

    WhiteSource Renovate

    Welcome to Renovate! This is an onboarding PR to help you understand and configure settings before regular Pull Requests begin.

    :vertical_traffic_light: To activate Renovate, merge this Pull Request. To disable Renovate, simply close this Pull Request unmerged.


    Detected Package Files

    • WORKSPACE (bazel)
    • backend/Dockerfile (dockerfile)
    • backend/Dockerfile.bazel (dockerfile)
    • backend/Dockerfile.cacheserver (dockerfile)
    • backend/Dockerfile.persistenceagent (dockerfile)
    • backend/Dockerfile.scheduledworkflow (dockerfile)
    • backend/Dockerfile.viewercontroller (dockerfile)
    • backend/Dockerfile.visualization (dockerfile)
    • backend/metadata_writer/Dockerfile (dockerfile)
    • backend/src/cache/deployer/Dockerfile (dockerfile)
    • components/gcp/container/Dockerfile (dockerfile)
    • components/kubeflow/deployer/Dockerfile (dockerfile)
    • components/kubeflow/dnntrainer/Dockerfile (dockerfile)
    • components/kubeflow/katib-launcher/Dockerfile (dockerfile)
    • components/kubeflow/kfserving/Dockerfile (dockerfile)
    • components/kubeflow/launcher/Dockerfile (dockerfile)
    • components/local/base/Dockerfile (dockerfile)
    • components/local/confusion_matrix/Dockerfile (dockerfile)
    • components/local/roc/Dockerfile (dockerfile)
    • components/sample/keras/train_classifier/Dockerfile (dockerfile)
    • contrib/components/openvino/model_convert/containers/Dockerfile (dockerfile)
    • contrib/components/openvino/ovms-deployer/containers/Dockerfile (dockerfile)
    • contrib/components/openvino/predict/containers/Dockerfile (dockerfile)
    • contrib/components/openvino/tf-slim/containers/Dockerfile (dockerfile)
    • frontend/Dockerfile (dockerfile)
    • manifests/gcp_marketplace/deployer/Dockerfile (dockerfile)
    • proxy/Dockerfile (dockerfile)
    • samples/contrib/image-captioning-gcp/src/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/inference_server_launcher/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/preprocess/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/train/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/webapp/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/components/webapp_launcher/Dockerfile (dockerfile)
    • samples/contrib/nvidia-resnet/pipeline/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/helloworld-ci-sample/helloworld/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/download_dataset/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/submit_result/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/train_model/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/visualize_html/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/kaggle-ci-sample/visualize_table/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/mnist-ci-sample/tensorboard/Dockerfile (dockerfile)
    • samples/contrib/versioned-pipeline-ci-samples/mnist-ci-sample/train/Dockerfile (dockerfile)
    • test/api-integration-test/Dockerfile (dockerfile)
    • test/frontend-integration-test/Dockerfile (dockerfile)
    • test/frontend-integration-test/selenium-standalone-chrome-gcloud-nodejs.Docker/Dockerfile (dockerfile)
    • test/imagebuilder/Dockerfile (dockerfile)
    • test/images/Dockerfile (dockerfile)
    • test/initialization-test/Dockerfile (dockerfile)
    • test/sample-test/Dockerfile (dockerfile)
    • tools/bazel_builder/Dockerfile (dockerfile)
    • go.mod (gomod)
    • frontend/mock-backend/package.json (npm)
    • frontend/package.json (npm)
    • frontend/server/package.json (npm)
    • package.json (npm)
    • test/frontend-integration-test/package.json (npm)
    • frontend/.nvmrc (nvm)
    • backend/metadata_writer/requirements.txt (pip_requirements)
    • backend/requirements.txt (pip_requirements)
    • backend/src/apiserver/visualization/requirements.txt (pip_requirements)
    • components/kubeflow/katib-launcher/requirements.txt (pip_requirements)
    • contrib/components/openvino/ovms-deployer/containers/requirements.txt (pip_requirements)
    • docs/requirements.txt (pip_requirements)
    • samples/contrib/azure-samples/databricks-pipelines/requirements.txt (pip_requirements)
    • samples/contrib/ibm-samples/ffdl-seldon/source/seldon-pytorch-serving-image/requirements.txt (pip_requirements)
    • samples/core/ai_platform/training/requirements.txt (pip_requirements)
    • samples/core/container_build/requirements.txt (pip_requirements)
    • sdk/python/requirements.txt (pip_requirements)
    • test/kfp-functional-test/requirements.txt (pip_requirements)
    • test/sample-test/requirements.txt (pip_requirements)
    • components/gcp/container/component_sdk/python/setup.py (pip_setup)
    • components/kubeflow/dnntrainer/src/setup.py (pip_setup)
    • samples/core/ai_platform/training/setup.py (pip_setup)
    • sdk/python/setup.py (pip_setup)

    Configuration

    :abcd: Renovate has detected a custom config for this PR. Feel free to ask for help if you have any doubts and would like it reviewed.

    Important: Now that this branch is edited, Renovate can't rebase it from the base branch any more. If you make changes to the base branch that could impact this onboarding PR, please merge them manually.

    What to Expect

    With your current configuration, Renovate will create 57 Pull Requests:

    chore(deps): pin dependencies
    chore(deps): update gcr.io/inverting-proxy/agent docker digest to 9817c74
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-gcr.io-inverting-proxy-agent
    • Merge into: master
    • Upgrade gcr.io/inverting-proxy/agent to sha256:9817c740a3705e4bf889e612c071686a8cb3cfcfe9ad191c570a295c37316ff0
    chore(deps): update github.com/vividcortex/mysqlerr commit hash to 4c396ae
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-vividcortex-mysqlerr-digest
    • Merge into: master
    • Upgrade github.com/VividCortex/mysqlerr to 4c396ae82aacc60540048b4846438cec44a1c222
    chore(deps): update golang.org/x/net commit hash to 5f4716e
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/golang.org-x-net-digest
    • Merge into: master
    • Upgrade golang.org/x/net to 5f4716e94777e714bc2fb3e3a44599cb40817aac
    chore(deps): update google.golang.org/genproto commit hash to 646a494
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/google.golang.org-genproto-digest
    • Merge into: master
    • Upgrade google.golang.org/genproto to 646a494a81eaa116cb3e3978e5ac1278e35abfdd
    chore(deps): update docker patch updates docker tags (patch)
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-patch-docker-updates
    • Merge into: master
    • Upgrade golang to 1.13.15-stretch
    • Upgrade tensorflow/tensorflow to 2.0.4-py3
    • Upgrade tensorflow/tensorflow to 2.2.2
    chore(deps): update go.mod dependencies (patch)
    fix(deps): update npm dependencies (patch)
    chore(deps): update alpine docker tag to v3.13
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-alpine-3.x
    • Merge into: master
    • Upgrade alpine to 3.13
    chore(deps): update gcr.io/cloud-marketplace-tools/k8s/deployer_helm/onbuild docker tag to v0.10.10
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-gcr.io-cloud-marketplace-tools-k8s-deployer_helm-onbuild-0.x
    • Merge into: master
    • Upgrade gcr.io/cloud-marketplace-tools/k8s/deployer_helm/onbuild to 0.10.10
    chore(deps): update go.mod dependencies (minor)
    chore(deps): update golang docker tag
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-golang-1.x
    • Merge into: master
    • Upgrade golang to 1.15.7
    • Upgrade golang to 1.15.7-alpine3.12
    • Upgrade golang to 1.14.14-stretch
    chore(deps): update node.js
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/node-12.x
    • Merge into: master
    • Upgrade node to 12.20.1
    • Upgrade node to 12.20.1-alpine
    chore(deps): update npm dependencies (minor)
    chore(deps): update nvcr.io/nvidia/tensorflow docker tag to v19.10
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-nvcr.io-nvidia-tensorflow-19.x
    • Merge into: master
    • Upgrade nvcr.io/nvidia/tensorflow to 19.10-py3
    chore(deps): update python docker tag to v3.9
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-python-3.x
    • Merge into: master
    • Upgrade python to 3.9-slim
    • Upgrade python to 3.9
    chore(deps): update tensorflow/tensorflow docker tag
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/docker-tensorflow-tensorflow-2.x
    • Merge into: master
    • Upgrade tensorflow/tensorflow to 2.2.2-py3
    • Upgrade tensorflow/tensorflow to 2.4.1
    chore(deps): update dependency @testing-library/react to v11
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/testing-library-react-11.x
    • Merge into: master
    • Upgrade @testing-library/react to 11.2.3
    chore(deps): update dependency @types/jest to v26
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/jest-26.x
    • Merge into: master
    • Upgrade @types/jest to 26.0.20
    chore(deps): update dependency @types/react to v17
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-17.x
    • Merge into: master
    • Upgrade @types/react to 17.0.0
    chore(deps): update dependency @types/react-dom to v17
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-dom-17.x
    • Merge into: master
    • Upgrade @types/react-dom to 17.0.0
    chore(deps): update dependency @types/react-router-dom to v5
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-router-dom-5.x
    • Merge into: master
    • Upgrade @types/react-router-dom to 5.1.7
    chore(deps): update dependency @types/react-test-renderer to v17
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-test-renderer-17.x
    • Merge into: master
    • Upgrade @types/react-test-renderer to 17.0.0
    chore(deps): update dependency @types/tar-stream to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/tar-stream-2.x
    • Merge into: master
    • Upgrade @types/tar-stream to 2.2.0
    chore(deps): update dependency jest to v26
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/major-jest-monorepo
    • Merge into: master
    • Upgrade jest to 26.6.3
    chore(deps): update dependency prettier to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/prettier-2.x
    • Merge into: master
    • Upgrade prettier to 2.2.1
    • Upgrade @types/prettier to 2.1.6
    chore(deps): update dependency react-scripts to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-scripts-4.x
    • Merge into: master
    • Upgrade react-scripts to 4.0.1
    chore(deps): update dependency standard-version to v9
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/standard-version-9.x
    • Merge into: master
    • Upgrade standard-version to 9.1.0
    chore(deps): update dependency supertest to v6
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/supertest-6.x
    • Merge into: master
    • Upgrade supertest to 6.1.3
    chore(deps): update dependency ts-jest to v26
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/ts-jest-26.x
    • Merge into: master
    • Upgrade ts-jest to 26.5.0
    chore(deps): update dependency ts-node to v9
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/ts-node-9.x
    • Merge into: master
    • Upgrade ts-node to 9.1.1
    chore(deps): update dependency typescript to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/typescript-4.x
    • Merge into: master
    • Upgrade typescript to 4.1.3
    chore(deps): update dependency webpack to v5
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/webpack-5.x
    • Merge into: master
    • Upgrade webpack to 5.19.0
    chore(deps): update dependency webpack-bundle-analyzer to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/webpack-bundle-analyzer-4.x
    • Merge into: master
    • Upgrade webpack-bundle-analyzer to 4.4.0
    chore(deps): update module argoproj/argo to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-argoproj-argo-2.x
    • Merge into: master
    • Upgrade github.com/argoproj/argo to 5f5150730c644865a5867bf017100732f55811dd
    chore(deps): update module cenkalti/backoff to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-cenkalti-backoff-4.x
    • Merge into: master
    • Upgrade github.com/cenkalti/backoff to v4.1.0
    chore(deps): update module grpc-ecosystem/grpc-gateway to v2
    chore(deps): update module k8s.io/client-go to v12
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/k8s.io-client-go-12.x
    • Merge into: master
    • Upgrade k8s.io/client-go to v12.0.0
    chore(deps): update module masterminds/squirrel to v1
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-masterminds-squirrel-1.x
    • Merge into: master
    • Upgrade github.com/Masterminds/squirrel to d1a9a0e53225d7810c4f5e1136db32f4e360c5bb
    chore(deps): update module mattn/go-sqlite3 to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-mattn-go-sqlite3-2.x
    • Merge into: master
    • Upgrade github.com/mattn/go-sqlite3 to v2.0.6
    chore(deps): update module minio/minio-go to v7
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-minio-minio-go-7.x
    • Merge into: master
    • Upgrade github.com/minio/minio-go to v7.0.7
    chore(deps): update module robfig/cron to v3
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/github.com-robfig-cron-3.x
    • Merge into: master
    • Upgrade github.com/robfig/cron to v3.0.1
    fix(deps): update dependency @google-cloud/storage to v5
    fix(deps): update dependency crypto-js to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/crypto-js-4.x
    • Merge into: master
    • Upgrade crypto-js to ^4.0.0
    • Upgrade @types/crypto-js to 4.0.1
    fix(deps): update dependency d3 to v6
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/d3-6.x
    • Merge into: master
    • Upgrade d3 to 6.5.0
    • Upgrade @types/d3 to 6.3.0
    fix(deps): update dependency d3-dsv to v2
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/d3-dsv-2.x
    • Merge into: master
    • Upgrade d3-dsv to 2.0.0
    • Upgrade @types/d3-dsv to 2.0.1
    fix(deps): update dependency http-proxy-middleware to v1
    fix(deps): update dependency js-yaml to v4
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/js-yaml-4.x
    • Merge into: master
    • Upgrade js-yaml to 4.0.0
    • Upgrade @types/js-yaml to 4.0.0
    fix(deps): update dependency markdown-to-jsx to v7
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/markdown-to-jsx-7.x
    • Merge into: master
    • Upgrade markdown-to-jsx to 7.1.1
    fix(deps): update dependency mocha to v8
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/mocha-8.x
    • Merge into: master
    • Upgrade mocha to 8.2.1
    fix(deps): update dependency re-resizable to v6
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/re-resizable-6.x
    • Merge into: master
    • Upgrade re-resizable to 6.9.0
    fix(deps): update dependency react-ace to v9
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-ace-9.x
    • Merge into: master
    • Upgrade react-ace to 9.3.0
    fix(deps): update dependency react-dropzone to v11
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/react-dropzone-11.x
    • Merge into: master
    • Upgrade react-dropzone to 11.2.4
    fix(deps): update dependency react-router-dom to v5
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/major-reactrouter-monorepo
    • Merge into: master
    • Upgrade react-router-dom to 5.2.0
    fix(deps): update dependency webdriverio to v6
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/major-webdriverio-monorepo
    • Merge into: master
    • Upgrade webdriverio to 6.12.1
    fix(deps): update mui monorepo (major)
    fix(deps): update react monorepo to v17 (major)
    • Schedule: ["before 3am on Monday"]
    • Branch name: renovate/major-react-monorepo
    • Merge into: master
    • Upgrade react to 17.0.1
    • Upgrade react-dom to 17.0.1
    • Upgrade react-test-renderer to 17.0.1

    :children_crossing: Branch creation will be limited to maximum 2 per hour, so it doesn't swamp any CI resources or spam the project. See docs for prhourlylimit for details.


    :question: Got questions? Check out Renovate's Docs, particularly the Getting Started section. If you need any further assistance then you can also request help here.


    This PR has been generated by WhiteSource Renovate. View repository job log here.

    lgtm approved size/M ok-to-test cla: yes 
    opened by renovate-bot 42
  • KFP sdk client authentication error

    /kind bug

    What steps did you take and what happened: I enabled authentication with Azure AD on AKS and installed Kubeflow with kfctl_istio_dex.v1.1.0.yaml, but skipped Dex from the manifest since Azure AD is an OIDC provider. The load balancer is exposed over HTTPS with a TLS 1.3 self-signed cert.

    OIDC Auth Service Configuration:

    • client_id=XXXX
    • oidc_provider=https://login.microsoftonline.com/XXXX/v2.0
    • oidc_redirect_uri=https://XXXX/login/oidc
    • oidc_auth_url=https://login.microsoftonline.com/XXXX/oauth2/v2.0/authorize
    • application_secret=XXXX
    • skip_auth_uri=
    • namespace=istio-system
    • userid-header=kubeflow-userid
    • userid-prefix=

    Issue: Using the KFP client to upload a pipeline (client.pipeline_uploads.upload_pipeline()) with the client config below throws an error.

    client = kfp.Client(host='https://<LoadBalancer IP Address>/pipeline', existing_token=<token>)

    Error HTTPSConnectionPool(host='<Host_IP>', port=443): Max retries exceeded with url: /pipeline/apis/v1beta1/pipelines/upload?name=local_exp-6714175b-6d59-40d0-9019-5b4ee58dc483 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1076)')))

    Is there a way to override cert verification?

    or

    Using the KFP client to upload a pipeline (client.pipeline_uploads.upload_pipeline()) with the client config below redirects to a Google auth error.

    client = kfp.Client(host='https://<LoadBalancer IP Address>/pipeline', client_id=<client_id>, other_client_id=<client_id>, other_client_secret=<application_secret>, namespace='kfauth')

    Environment:

    • Kubeflow version: v1.1.0
    • kfctl version: kfctl_v1.1.0-0-g9a3621e_linux.tar.gz
    • kfp version: 1.0.1
    • python version: 3.6.8
    • kfp-server-api version: 1.0.1
    • Kubernetes platform: Azure Kubernetes Service
    • Kubernetes version: 1.17.11
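    On the cert-verification question above: rather than disabling verification, the KFP SDK's Client accepts an `ssl_ca_cert` argument pointing at a PEM CA bundle. A hedged configuration sketch; the helper name and all placeholder values are illustrative:

    ```python
    # Hedged sketch: trust a self-signed gateway cert via kfp.Client's
    # ssl_ca_cert argument instead of disabling TLS verification.
    # The helper name and placeholder arguments are illustrative.
    import kfp


    def make_client(host, token, ca_bundle):
        return kfp.Client(
            host=host,              # e.g. 'https://<LoadBalancer IP Address>/pipeline'
            existing_token=token,   # bearer token obtained from the OIDC flow
            ssl_ca_cert=ca_bundle,  # path to a PEM file containing the self-signed CA
        )
    ```

    With the CA bundle supplied, upload calls such as client.pipeline_uploads.upload_pipeline() should no longer fail with CERTIFICATE_VERIFY_FAILED.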

    CC: @Bobgy

    area/sdk/client kind/feature 
    opened by sudivate 42
  • fix(backend): remove Bazel from building the API. Part of #3250

    Description of your changes: Remove Bazel from the API generation; this is part of https://github.com/kubeflow/pipelines/issues/3250. The suggested solution is based on work from https://github.com/kubeflow/pipelines/pull/4393.

    The suggested solution uses Docker so that users don't have to install all the necessary tools and environments locally, but I am not sure it is the best solution. I am posting this as an early draft in order to discuss possible solutions.

    Checklist:

    lgtm approved size/XXL cla: yes 
    opened by NikeNano 41
  • Update kubeflow/manifests to ship correct version of KFP in 0.7? 1.31


    We are trying to finalize Kubeflow 0.7 by end of month.

    Which version of KFP should be shipped in 0.7?

    We are currently shipping KFP 0.1.23, which is about a month old. There was a fairly recent release, 0.1.31: https://github.com/kubeflow/pipelines/releases

    @IronPan @jessiezcc Should we ship 0.1.31 in 0.7? Are there additional improvements that we would like to ship in 0.7? If so, do we have an ETA for when they will land?

    priority/p0 kind/feature area/pipelines 
    opened by jlewi 39
  • [Feature] Supports parameterized S3Artifactory for Pipeline and ContainerOp in kfp package


    Motivation

    I am running a Kubeflow Pipelines deployment with my custom Helm chart and a MinIO S3 gateway to my custom bucket. This bucket has a different name from the default one in kfp, so I need a way to parameterize the S3 artifact configs.

    Status

    • Waiting for Review

    Features

    • kfp can now declare a custom artifact location inside a pipeline or ContainerOp.
    from kfp import dsl
    from kubernetes.client.models import V1SecretKeySelector
    
    
    @dsl.pipeline( name='foo', description='hello world')
    def foo_pipeline(namespace: str):
    
        # configures artifact location
        artifact_location = dsl.ArtifactLocation.s3(
                                bucket="foobar",
                                endpoint="minio-service.%s:9000" % namespace,  # parameterized namespace
                                insecure=True,
                                access_key_secret=V1SecretKeySelector(name="minio", key="accesskey"),
                                secret_key_secret={"name": "minio", "key": "secretkey"}  # accepts dict also
        )
    
        # set pipeline level artifact location
        conf = dsl.get_pipeline_conf().set_artifact_location(artifact_location)
        
        # use pipeline level artifact location (i.e. minio-service)
        op1 = dsl.ContainerOp(name='foo', image='bash:latest')
    
        # use containerop level artifact location (i.e. aws)
        op2 = dsl.ContainerOp(
                            name='foo', 
                            image='bash:latest',
                            # configures artifact location
                            artifact_location=dsl.ArtifactLocation.s3(
                                bucket="foobar",
                                endpoint="s3.amazonaws.com",
                                insecure=False,
                                access_key_secret=V1SecretKeySelector(name="s3-secret", key="accesskey"),
                                secret_key_secret=V1SecretKeySelector(name="s3-secret", key="secretkey"))
        )
    
    

    TL;DR of changes

    • argo-models is now a dependency in setup.py (argo v2.2.1)
    • Added static class ArtifactLocation
      • to help generate artifact location for s3
      • to help generate artifact for workflow templates
    • Updated PipelineConf to support artifact location
    • Updated k8s helper and related, to support openapi objects (I accidentally used openapi generator instead of swagger codegen for argo-models)
    • Added unit test for ArtifactLocation
    • Fixed unit test for kfp.aws (found that it had a bug and was not imported into the unit test)

    This change is Reviewable

    lgtm approved size/XL ok-to-test 
    opened by eterna2 39
  • feat(frontend): Display multi-level dropdown on KFPv2 Run Comparison page


    This PR is based on #7933 and is therefore in draft mode until that PR is merged.

    Description of your changes:

    • Display the multi-level dropdown on the KFPv2 Run Comparison page by converting RunArtifacts[] to DropdownItem[]
      • This is done for the Confusion Matrix, HTML, and Markdown tabs, since these use a two-panel setup
    • Add a visualization placeholder area

    Screencasts:

    This screenshot shows the multi-level dropdown with options loaded from selected runs, and the visualization placeholder.

    https://user-images.githubusercontent.com/7987279/175758740-90c5c7c5-9d82-4b1a-b2fe-f94fff21c7a2.mp4

    Notes:

    • A future PR will implement the two-panel setup to compare both of the selected artifacts. This PR does not implement the actual display of the Confusion Matrix, HTML, or Markdown, so the placeholder here stays even after selecting an artifact.

    Checklist:

    do-not-merge/work-in-progress size/XL 
    opened by zpChris 2
  • [backend] the cache server does not check whether the cached artifacts have been deleted


    Environment

    Kubeflow 1.5.1

    Steps to reproduce

    The cache server wrongly states that a step is cached even if its artifacts have been deleted from S3. We propose https://github.com/kubeflow/pipelines/pull/7938, which checks whether the folder still exists on S3. If the folder does not exist, we no longer pretend the step is cached; instead we delete the cachedb entry and let the pipeline step run again.
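    The proposed check can be sketched as follows; the names (resolve_cache_hit, cache_db, artifact_exists) are illustrative stand-ins, not the actual KFP cache-server API:

    ```python
    def resolve_cache_hit(cache_db, key, artifact_exists):
        """Return the cached artifact prefix only if it still exists in storage.

        cache_db        -- dict mapping cache keys to artifact prefixes
                           (stand-in for the cachedb table)
        artifact_exists -- callable(prefix) -> bool, e.g. an S3 list-objects probe
        """
        prefix = cache_db.get(key)
        if prefix is None:
            return None          # no entry: run the step
        if not artifact_exists(prefix):
            del cache_db[key]    # artifacts deleted: evict stale entry, rerun
            return None
        return prefix            # valid hit: reuse the cached artifacts
    ```

    The key design point is that a stale entry is evicted rather than merely ignored, so subsequent runs do not keep probing object storage for an entry that can never be a valid hit again.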


    Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.

    kind/bug area/backend 
    opened by juliusvonkohout 0
  • WIP: fix(backend): Caching should not be wrongly shown as successful if the artifacts have been deleted from S3


    opened by juliusvonkohout 2
Releases(1.8.2)
  • 1.8.2(Jun 10, 2022)

  • 2.0.0-alpha.2(May 6, 2022)

  • 2.0.0-alpha.1(Apr 5, 2022)

  • 2.0.0-alpha.0(Mar 16, 2022)

  • 1.8.1(Feb 26, 2022)

  • 1.8.1-rc.0(Feb 22, 2022)

  • 1.8.0(Feb 16, 2022)

  • 1.8.0-rc.3(Feb 10, 2022)

  • 1.8.0-rc.2(Feb 8, 2022)

  • 1.8.0-rc.1(Jan 11, 2022)

  • 1.8.0-rc.0(Jan 10, 2022)

  • 1.8.0-alpha.0(Dec 7, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api==1.7.1 --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.7.1(Oct 31, 2021)

  • 1.7.0(Aug 25, 2021)

  • 1.7.0-rc.4(Aug 19, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api==1.7.0rc4 --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.7.0-rc.3(Aug 6, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api==1.7.0rc3 --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.7.0-rc.2(Jul 22, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api==1.7.0rc2 --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.7.0-alpha.2(Jul 3, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api==1.7.0a2 --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.7.0-alpha.1(Jun 28, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api==1.7.0a1 --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.5.1(Jun 21, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api==1.5.0 --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.6.0(May 24, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api==1.6.0 --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.6.0-rc.0(May 13, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api --pre --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.5.0(Apr 20, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.5.0-rc.3(Apr 9, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the kfp-server-api package (Python 3.6 or above) by running:

    python3 -m pip install kfp-server-api --pre --upgrade
    

    See the Change Log.

    Note: the kfp Python SDK is not included; it is released separately.

    Source code(tar.gz)
    Source code(zip)
  • 1.5.0-rc.2(Apr 2, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the Python SDK (Python 3.6 or above) by running:

    python3 -m pip install kfp kfp-server-api --pre --upgrade
    

    See the Change Log.

    Source code(tar.gz)
    Source code(zip)
  • 1.5.0-rc.1(Apr 1, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the Python SDK (Python 3.6 or above) by running:

    python3 -m pip install kfp kfp-server-api --pre --upgrade
    

    See the Change Log.

    Source code(tar.gz)
    Source code(zip)
  • 0.5.2(Mar 29, 2021)

    This is a release of the Kubeflow Pipelines SDK to address a potential security vulnerability. There is no corresponding backend release.

    Install the KFP SDK via:

    pip install kfp==0.5.2
    
    Source code(tar.gz)
    Source code(zip)
  • 1.4.1(Feb 26, 2021)

  • 1.4.0(Feb 18, 2021)

  • 1.4.0-rc.1(Feb 1, 2021)

    To deploy Kubeflow Pipelines in an existing cluster, follow the instructions here.

    Install the Python SDK (Python 3.6 or above) by running:

    python3 -m pip install kfp kfp-server-api --pre --upgrade
    

    See the Change Log.

    Source code(tar.gz)
    Source code(zip)
Owner: Kubeflow
Kubeflow is an open, community-driven project to make it easy to deploy and manage an ML stack on Kubernetes.