CodeFlare - Scale complex AI/ML pipelines anywhere

Overview


Scale complex AI/ML pipelines anywhere

CodeFlare is a framework to simplify the integration, scaling and acceleration of complex multi-step analytics and machine learning pipelines on the cloud.

Its main features are:

  • Pipeline execution and scaling: CodeFlare Pipelines facilitates the definition and parallel execution of pipelines. It unifies pipeline workflows across multiple frameworks while providing nearly optimal scale-out parallelism on pipelined computations.
  • Deploy and integrate anywhere: CodeFlare simplifies deployment and integration by enabling a serverless user experience through integration with Red Hat OpenShift and IBM Cloud Code Engine, and by providing adapters and connectors that make it simple to load data and connect to data services.

Release status

This project is under active development. See the Documentation for design descriptions and the latest version of the APIs.

Quick start

Run in your laptop

Installing locally

CodeFlare can be installed from PyPI.

Prerequisites:

We recommend installing Python 3.8.6 using pyenv. You can find recommended steps for setting up the Python environment here.

Install from PyPI:

pip3 install --upgrade pip          # CodeFlare requires pip >21.0
pip3 install --upgrade codeflare

Alternatively, you can also build locally with:

git clone https://github.com/project-codeflare/codeflare.git
cd codeflare
pip3 install --upgrade pip
pip3 install .

Using Docker

You can try CodeFlare by running the docker image from Docker Hub:

  • projectcodeflare/codeflare:latest has the latest released version installed.

The command below starts the latest image in a clean environment:

docker run --rm -it -p 8888:8888 projectcodeflare/codeflare:latest

It should produce output similar to the example below, where you can find the URL for running CodeFlare from a Jupyter notebook in your local browser.

[I ... ServerApp] Jupyter Server ... is running at:
...
[I ... ServerApp]     http://127.0.0.1:8888/lab

Using Binder service

You can try out some of CodeFlare's features using the My Binder service.

Click on the link below to try CodeFlare in a sandbox environment, without having to install anything.

Binder

Pipeline execution and scaling

CodeFlare Pipelines reimagines pipelines, providing a more intuitive API for data scientists to create AI/ML pipelines, data workflows, pre-processing and post-processing tasks, and more, all of which can scale seamlessly from a laptop to a cluster.

See the API documentation here, and reference use case documentation in the Examples section.

A set of reference examples are provided as executable notebooks.

To run examples, if you haven't done so yet, clone the CodeFlare project with:

git clone https://github.com/project-codeflare/codeflare.git

Example notebooks require JupyterLab, which can be installed with:

pip3 install --upgrade jupyterlab

Use the command below to run locally:

jupyter-lab codeflare/notebooks/<example_notebook>

The step above should automatically open a browser window and connect to a running Jupyter server.

If you are using one of the recommended cloud-based deployments (see below), the examples can be found in the codeflare/notebooks directory of the container image and can be executed directly from the Jupyter environment.

As a first example of the API usage, see the sample pipeline.
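
For orientation, here is a minimal sketch of the pipeline API, assembled from the code snippets that appear in the issues further down this page (import paths are assumed from those snippets and may differ across versions):

import ray
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

# Module paths as they appear in the issue reports below (assumed here).
import codeflare.pipelines.Datamodel as dm
import codeflare.pipelines.Runtime as rt
from codeflare.pipelines.Runtime import ExecutionType

ray.init()

# Nodes wrap sklearn estimators; edges define the data flow between them.
pipeline = dm.Pipeline()
node_scaler = dm.EstimatorNode('scaler', MinMaxScaler())
node_knn = dm.EstimatorNode('knn', KNeighborsClassifier())
pipeline.add_edge(node_scaler, node_knn)

# Inputs are attached per node as (X, y) pairs.
X, y = make_classification(n_samples=100, n_features=4)
pipeline_input = dm.PipelineInput()
pipeline_input.add_xy_arg(node_scaler, dm.Xy(X, y))

# FIT executes the pipeline graph in parallel on Ray.
pipeline_output = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)
knn_output = pipeline_output.get_xyrefs(node_knn)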

For an example of how CodeFlare Pipelines can be used to scale out common machine learning problems, see the grid search example. It shows how hyperparameter optimization for a reference pipeline can be scaled and accelerated with both task and data parallelism.

Deploy and integrate anywhere

Unleash the power of pipelines by seamlessly scaling on the cloud. CodeFlare can be deployed on any Kubernetes-based platform, including IBM Cloud Code Engine and Red Hat OpenShift Container Platform.

  • IBM Cloud Code Engine for detailed instructions on how to run CodeFlare on a serverless platform.
  • Red Hat OpenShift for detailed instructions on how to run CodeFlare on OpenShift Container Platform.

Contributing

Join us in making CodeFlare better! We encourage you to take a look at our Contributing page.

Blog

CodeFlare-related blogs are published on our Medium publication.

License

CodeFlare is an open-source project with an Apache 2.0 license.

Comments
  • Error running notebook

    Error running notebook "RaySystemError: System error: buffer source array is read-only"

    Describe the bug I'm trying to run the example notebooks (in codeflare/notebooks) and came across this error. The error persisted through attempts to restart my kernel, restart my entire machine, and re-clone the repo. Any help, or an explanation of the root cause, is much appreciated!

    To Reproduce Steps to reproduce the behavior:

    1. Go to notebooks/plot_nca_classification.ipynb
    2. Run the 2nd code block. It uses Ray and CodeFlare.
    3. This line produces the error: knn_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_knn)[0])
    4. See error: RaySystemError: System error: buffer source array is read-only

    Full stack trace:

    RaySystemError: System error: buffer source array is read-only
    traceback: Traceback (most recent call last):
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 268, in deserialize_objects
        obj = self._deserialize_object(data, metadata, object_ref)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 191, in _deserialize_object
        return self._deserialize_msgpack_data(data, metadata_fields)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 169, in _deserialize_msgpack_data
        python_objects = self._deserialize_pickle5_data(pickle5_data)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 157, in _deserialize_pickle5_data
        obj = pickle.loads(in_band, buffers=buffers)
      File "sklearn/neighbors/_dist_metrics.pyx", line 223, in sklearn.neighbors._dist_metrics.DistanceMetric.__setstate__
      File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
      File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
    ValueError: buffer source array is read-only
    
    
    ---------------------------------------------------------------------------
    RaySystemError                            Traceback (most recent call last)
    /tmp/ipykernel_1251/3313313255.py in <module>
          9 test_input.add_xy_arg(node_scalar, dm.Xy(X_test, y_test))
         10 
    ---> 11 knn_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_knn)[0])
         12 knn_score = ray.get(rt.execute_pipeline(knn_pipeline, ExecutionType.SCORE, test_input)
         13                     .get_xyrefs(node_knn)[0].get_yref())
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/codeflare/pipelines/Runtime.py in select_pipeline(pipeline_output, chosen_xyref)
        381         curr_xyref = xyref_queue.get()
        382         curr_node_state_ptr = curr_xyref.get_curr_node_state_ref()
    --> 383         curr_node = ray.get(curr_node_state_ptr)
        384         prev_xyrefs = curr_xyref.get_prev_xyrefs()
        385 
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
         87             if func.__name__ != "init" or is_client_mode_enabled_by_default:
         88                 return getattr(ray, func.__name__)(*args, **kwargs)
    ---> 89         return func(*args, **kwargs)
         90 
         91     return wrapper
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/worker.py in get(object_refs, timeout)
       1621                     raise value.as_instanceof_cause()
       1622                 else:
    -> 1623                     raise value
       1624 
       1625         if is_individual_id:
    
    

    Expected behavior Selecting the pipeline and evaluating its score via a 'SCORE' pipeline should succeed.

    Desktop

    • OS: Ubuntu 20.04 via WSL2 on Windows.
    • Python 3.8.6

    Thank you for any help! I am a University of Illinois at Urbana-Champaign grad student trying to make the most of your work!

    opened by KastanDay 6
  • Replace SimpleQueue

    Replace SimpleQueue

    Overview

    Currently, lineage uses SimpleQueue to realize pipelines, but SimpleQueue is available only in Python >=3.8. This reduces adoption; moving to Queue will give us broader Python version coverage.
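
    For illustration, a minimal sketch of the intended drop-in change (variable names hypothetical; the actual lineage code lives in the runtime):

    from queue import Queue  # available on every Python version CodeFlare targets

    # queue.Queue exposes the same put()/get() calls the lineage traversal needs;
    # compared to SimpleQueue it adds locking and task tracking (task_done()/join())
    # at a small overhead, in exchange for broader Python version coverage.
    xyref_queue = Queue()
    xyref_queue.put("some-xyref")  # placeholder item, for illustration
    item = xyref_queue.get()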

    Acceptance Criteria

    • [x] Replace SimpleQueue with Queue
    • [x] Ensure tests pass

    Questions

    • What are the drawbacks of using Queue vs SimpleQueue?

    Assumptions

    Reference

    • https://towardsdatascience.com/dive-into-queue-module-in-python-its-more-than-fifo-ce86c40944ef
    enhancement 
    opened by raghukiran1224 5
  • CodeFlare resiliency tool: initial commit

    CodeFlare resiliency tool: initial commit

    What does this PR do? This is a first step towards improving resiliency and performance in Ray without modifying the source code. This PR includes a new tool that helps configure a Ray cluster conveniently. The tool helps in fetching and parsing Ray configurations and generating resiliency profiles (e.g., strict, relaxed, recommended). Currently, we are working on deciding configuration options for each resiliency profile manually by evaluating them on various Ray workloads. We'll update this PR accordingly.

    Description of Changes The changes in this PR are currently independent of the main CodeFlare code. We intend to put this tool in a new folder called utils in the CodeFlare root directory.

    opened by JainTwinkle 2
  • Predicted output is not properly assigned to get_yref(), instead is assigned to get_Xref()

    Predicted output is not properly assigned to get_yref(), instead is assigned to get_Xref()

    Describe the bug After running a PREDICT, the y_pred cannot be obtained via get_yref(); instead, it can be obtained via get_Xref(). Semantically, this seems weird.

    To Reproduce Steps to reproduce the behavior:

    1. Go to https://github.ibm.com/codeflare/ray-pipeline/blob/complex-example-1/notebooks/plot_feature_selection_pipeline.ipynb
    2. Scroll down to `y_pred = ray.get(predict_clf_output[0].get_yref())`
    3. If you change that statement to `y_pred = ray.get(predict_clf_output[0].get_Xref())`, the output would match the original sklearn pipeline at the top.

    Expected behavior The predicted output should be obtained from calling get_yref().


    bug good first issue cfp-runtime cfp-datamodel 
    opened by raghukiran1224 2
  • sample pipeline jupyter notebook on binder errors-out

    sample pipeline jupyter notebook on binder errors-out

    Describe the bug The sample pipeline Jupyter notebook errors out due to an undefined variable

    To Reproduce Steps to reproduce the behavior:

    1. Go to binder
    2. Click on sample pipeline jupyter notebook
    3. Run

    Expected behavior Jupyter notebook on binder should run without exception

    Additional context error while executing cell:

    pipeline_output = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)
    node_0_output = pipeline_output.get_xyrefs(node_0)

    In [74]:

    outputs[0]

    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-74-a45df8d4a457> in <module>
    ----> 1 outputs[0]

    NameError: name 'outputs' is not defined
    
    opened by asm582 2
  • Jupyter notebook plot_scalable_poly_kernels dies when run on binder

    Jupyter notebook plot_scalable_poly_kernels dies when run on binder

    Describe the bug Jupyter notebook kernel dies

    To Reproduce Steps to reproduce the behavior:

    1. Go to Binder
    2. Click on plot_scalable_poly_kernels
    3. Run the notebook

    Expected behavior The jupyter notebook should run without error

    opened by asm582 2
  • Grid search jupyter notebook on binder missing graphviz library

    Grid search jupyter notebook on binder missing graphviz library

    Graphviz is missing from the Binder service

    To Reproduce Steps to reproduce the behavior:

    1. Go to binder service
    2. Run Grid search notebook

    Additional context

    The error below is caused by execution of this cell:

    non_param_graph = cf_utils.pipeline_to_graph(pipeline)
    non_param_graph
    

    ExecutableNotFound: failed to execute ['dot', '-Kdot', '-Tsvg'], make sure the Graphviz executables are on your systems' PATH

    bug wontfix 
    opened by asm582 2
  • Refactor configuration utility tool; added support for latest Ray version

    Refactor configuration utility tool; added support for latest Ray version

    Related PRs Extending #37

    What does this PR do?

    This PR extends the Ray resiliency config tool. The PR does the following:

    1. The Ray config utility script now supports configurations from Ray v1.6, 1.7, and 1.8.

    2. The tool now saves config files into their respective version directories. This is more organized than saving files from all Ray versions into a single folder. For example, the tool now saves output config files in the following manner by default:

    ├── configs
    │   ├── 1.0.0
    │   │   ├── Ray 1.0.0 related config files
    │   ├── 1.1.0
    │   │   └── Ray 1.1.0 related config files

    3. The configuration parsing code is more generalized than before. Removed some hard-coded conditions and added functions to make the code less cluttered.

    4. Added a new field called config_string in the output config file. This field stores the original string from which we parsed the default value of the configuration. The config_string stores the string whenever the default value is not a simple value but a conditional statement. This field will help explain how the associated environment variable's value determines the default value. For example, for the raylet_start_wait_time_s configuration, the signature/input is the following:

    RAY_CONFIG(uint32_t, raylet_start_wait_time_s,
               std::getenv("RAY_preallocate_plasma_memory") != nullptr &&
                       std::getenv("RAY_preallocate_plasma_memory") == std::string("1")
                   ? 120
                   : 10)
    

    And the script dumps the following YAML entry in the .conf file:

    raylet_start_wait_time_s:
      config_string: 'std::getenv("RAY_preallocate_plasma_memory") != nullptr && std::getenv("RAY_preallocate_plasma_memory") == std::string("1") ? 120 : 10'
      default: '10'
      env: RAY_preallocate_plasma_memory
      type: uint32_t
      value_for_this_mode: '10'
    

    The new field, config_string, is informational and gives an idea of how the associated environment variable will be processed.

    5. The config tool now uses a YAML format variable instead of a hardcoded string for the system-config map YAML (system_cm.yaml)
    opened by JainTwinkle 1
  • Fix corner case with a singleton node

    Fix corner case with a singleton node

    Related Issue

    Supports #27

    Related PRs

    Reopen PR 31 after PR 27 merged with develop.

    What does this PR do?

    Description of Changes

    • Checked that node exists in pipeline post_graph
    • Added ExecutionType.TRANSFORM
    • Added a unit test

    bug 
    opened by yuanchi2807 1
  • Fix yref assignment for pipeline PREDICT and SCORE

    Fix yref assignment for pipeline PREDICT and SCORE

    Related Issue

    Supports #22

    Related PRs

    This PR is not dependent on any other PR

    What does this PR do?

    Description of Changes

    Assign PREDICT and SCORE results to yref as appropriate in Runtime.py. Updated unit tests and notebook examples.


    bug 
    opened by yuanchi2807 1
  • Pipeline with a single dangling estimator node triggers an exception

    Pipeline with a single dangling estimator node triggers an exception

    Describe the bug Possibly a corner case?

    ray-pipeline/codeflare/pipelines/Datamodel.py in get_pre_edges(self, node)
        640         """
        641         pre_edges = []
    --> 642         pre_nodes = self.pre_graph[node]
        643         # Empty pre
        644         if not pre_nodes:

    KeyError: <codeflare.pipelines.Datamodel.EstimatorNode object at 0x7fa2d8920f10>

    To Reproduce

    ## initialize codeflare pipeline by first creating the nodes
    pipeline = dm.Pipeline()
    node_a = dm.EstimatorNode('a', MinMaxScaler())
    node_b = dm.EstimatorNode('b', StandardScaler())
    node_c = dm.EstimatorNode('c', MaxAbsScaler())
    node_d = dm.EstimatorNode('d', RobustScaler())
    
    node_e = dm.AndNode('e', FeatureUnion())
    node_f = dm.AndNode('f', FeatureUnion())
    node_g = dm.AndNode('g', FeatureUnion())
    
    ## codeflare nodes are then connected by edges
    pipeline.add_edge(node_a, node_e)
    pipeline.add_edge(node_b, node_e)
    pipeline.add_edge(node_c, node_f)
    ## node_d does not have a downstream node
    # pipeline.add_edge(node_d, node_f)
    pipeline.add_edge(node_e, node_g)
    pipeline.add_edge(node_f, node_g)
    
    pipeline_input = dm.PipelineInput()
    xy = dm.Xy(X,y)
    pipeline_input.add_xy_arg(node_a, xy)
    pipeline_input.add_xy_arg(node_b, xy)
    pipeline_input.add_xy_arg(node_c, xy)
    pipeline_input.add_xy_arg(node_d, xy)
    
    ## execute the codeflare pipeline
    pipeline_output = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)
    
    


    bug cfp-runtime 
    opened by raghukiran1224 1
  • Ray cluster on OpenShift fails due to missing file

    Ray cluster on OpenShift fails due to missing file

    Describe the bug Cannot bring up Ray cluster as defined in the OCP tutorial

    To Reproduce Steps to reproduce the behavior:

    1. Go to https://codeflare.readthedocs.io/en/latest/getting_started/starting.html#Openshift-Ray-Cluster-Operator
    2. Run pip3 install --upgrade codeflare
    3. Create namespace oc create namespace codeflare
    4. Running ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml fails:
    $ ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml
    Provided cluster configuration file (ray/python/ray/autoscaler/kubernetes/example-full.yaml) does not exist
    

    Expected behavior Bring up Ray cluster on OCP

    Desktop (please complete the following information):

    • OS: MacOS

    Additional context OCP Cluster running on IBM Cloud.

    $ oc cluster-info
    Kubernetes master is running at https://c100-e.jp-tok.containers.cloud.ibm.com:31129
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

    CodeFlare commit hash commit a2b290a115b0cc1317270cef6059d5281215842e

    opened by cmisale 0
  • Data splitter

    Data splitter

    Overview

    As a CFP user, I would like to split a dataset (e.g., an np array or a pandas dataframe) into smaller objects that can then be fed into other nodes/pipelines. This is especially useful when we have compute-intensive tasks and would like to parallelize them easily.
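
    For illustration, a plain-numpy sketch of the requested behavior (hypothetical; not an existing CodeFlare API):

    import numpy as np

    # A splitter would partition a dataset into chunks that downstream
    # nodes can consume in parallel.
    X = np.arange(100).reshape(50, 2)
    partitions = np.array_split(X, 4)  # four roughly equal row-wise chunks
    assert sum(len(p) for p in partitions) == 50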

    Acceptance Criteria

    • [x] Design for splitter, should be simple and intuitive
    • [ ] Implementation as an extension to the Node construct
    • [x] Tests

    Questions

    • What type of semantics does the splitter node define?

    Assumptions

    Reference

    good first issue help wanted cfp-datamodel user-story Prio1 
    opened by raghukiran1224 1
  • Support better integration between Ray and Spark in passing ObjectRef without actually moving data

    Support better integration between Ray and Spark in passing ObjectRef without actually moving data

    Overview

    As a Codeflare user, I want to use Ray and Spark alternately to execute my end-to-end ML jobs. Some steps might be executed more efficiently using Ray, while others using Spark. The plasma store in Ray seems to provide an efficient way to share ObjectRef between Ray and Spark. Currently, the RayDP project supports going from Spark to Ray in a limited way, by running Spark as a Ray actor. However, ObjectRef cannot be shared easily in both directions, Spark-2-Ray and Ray-2-Spark.
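
    For context, a minimal sketch of the Ray side of this flow using only the standard Ray API (the hand-off to Spark is the open part):

    import pandas as pd
    import ray

    ray.init()

    # A remote task materializes a Pandas dataframe in the local plasma store.
    @ray.remote
    def make_partition():
        return pd.DataFrame({"a": [1, 2, 3]})

    ref = make_partition.remote()  # an ObjectRef; the data itself stays in plasma
    df = ray.get(ref)              # any Ray worker can dereference it locally

    # The open question above is handing `ref` to Spark executors, which today
    # (under RayDP) cannot access the Ray plasma store.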

    Acceptance Criteria

    • A Pandas dataframe created by remote tasks in local Ray plasma stores can be passed with ObjectRef to the Spark driver to create a Spark dataframe containing a list of ObjectRef.
    • Once that is done, on the Spark side, the executors of Spark can then access the original Pandas dataframe locally.
    • From Spark to Ray: Spark preserves groupby() partition semantics and writes these partitions to plasma store, instead of using hashPartition().

    Questions

    • In RayDP, only the driver node knows about and can access Ray. The executors of PySpark don't have access to Ray. This prevents the PySpark executors from accessing the Ray plasma store. As a result, it is not possible to seamlessly pass ObjectRef between Ray workers and Spark executors.

    Assumptions

    • Ray and Spark can share data seamlessly by exchanging ObjectRef among Ray workers and Spark executors.

    Reference

    [Reference] I have opened an issue on the RayDP repo: https://github.com/oap-project/raydp/issues/164

    ray-related 
    opened by klwuibm 3
  • Nested pipelines

    Nested pipelines

    Overview

    As a CF pipelines user, I would like support for nested pipelines, where a node of a pipeline can be a pipeline itself.

    Acceptance Criteria

    • [ ] Nested pipeline API
    • [ ] Nested pipeline implementation
    • [ ] ADR for supporting nested pipelines
    • [ ] Tests

    Questions

    • Given that pipelines are not estimators by themselves, how can we support nesting easily?

    Assumptions

    Reference

    cfp-runtime cfp-datamodel user-story Prio1 
    opened by raghukiran1224 0
  • Investigate and measure zero copy for pipelines

    Investigate and measure zero copy for pipelines

    Overview

    As a CF pipelines user, I would like to understand the memory consumption when pipelines are executed. Given that pipelines accept np arrays, will Ray's zero-copy sharing help?
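
    For background, a minimal sketch of Ray's zero-copy behavior for numpy arrays (standard Ray API):

    import numpy as np
    import ray

    ray.init()

    arr = np.zeros((1000, 1000))
    ref = ray.put(arr)   # the array is stored once in the plasma object store
    view = ray.get(ref)  # readers get a zero-copy, read-only view onto plasma
    assert not view.flags.writeable
    # These read-only views are also the likely root cause of the
    # "buffer source array is read-only" error reported in the first issue above.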

    Acceptance Criteria

    • [ ] Memory growth as pipelines are executed
    • [ ] Clear documentation on this
    • [ ] A potential story explaining this in more detail

    Questions

    Assumptions

    Reference

    help wanted cfp-runtime Prio1 benchmark 
    opened by raghukiran1224 0
  • Select best/k-best pipelines

    Select best/k-best pipelines

    Overview

    As a CF pipelines user, I would like the ability to select the best or k-best pipelines from a parameter grid search output.
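
    A hypothetical sketch of the requested selection helper (illustrative only; not an existing CodeFlare API):

    # Hypothetical helper: pick the top-k (pipeline, score) pairs by score.
    def k_best(scored_pipelines, k=1):
        return sorted(scored_pipelines, key=lambda ps: ps[1], reverse=True)[:k]

    best_pipeline, best_score = k_best([("p1", 0.91), ("p2", 0.87)], k=1)[0]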

    Acceptance Criteria

    • [ ] Best pipeline selection
    • [ ] K-best pipeline selection
    • [ ] Tests and compatibility with sklearn outputs

    Questions

    Assumptions

    Reference

    enhancement good first issue help wanted cfp-runtime 
    opened by raghukiran1224 0
Releases (0.1.2.dev0)
  • 0.1.2.dev0(Jul 9, 2021)

    To address the Python version needs of IBM Cloud Watson Studio, we removed the dependency on SimpleQueue and used Queue instead. This removes CodeFlare Pipelines' dependency on Python >=3.8; it now works with >=3.7.

    Shout out to @aviolante for helping with this fix!

    Installation can now be done from PyPI using pip3 install codeflare; the default version is updated to 0.1.2.

    Source code(tar.gz)
    Source code(zip)