FlingBot: The Unreasonable Effectiveness of Dynamic Manipulation for Cloth Unfolding

Overview

FlingBot: The Unreasonable Effectiveness of Dynamic Manipulation for Cloth Unfolding

Huy Ha, Shuran Song

Columbia University, New York, NY, United States

Conference on Robot Learning 2021 (Oral Presentation)

Project Page | Video | Arxiv

High-velocity dynamic actions (e.g., fling or throw) play a crucial role in our everyday interaction with deformable objects by improving our efficiency and effectively expanding our physical reach range. Yet, most prior works have tackled cloth manipulation using exclusively single-arm quasi-static actions, which requires a large number of interactions for challenging initial cloth configurations and strictly limits the maximum cloth size by the robot's reach range. In this work, we demonstrate the effectiveness of dynamic flinging actions for cloth unfolding with our proposed self-supervised learning framework, FlingBot. Our approach learns how to unfold a piece of fabric from arbitrary initial configurations using a pick, stretch, and fling primitive for a dual-arm setup from visual observations. The final system achieves over 80% coverage within 3 actions on novel cloths, can unfold cloths larger than the system's reach range, and generalizes to T-shirts despite being trained on only rectangular cloths. We also finetuned FlingBot on a real-world dual-arm robot platform, where it increased the cloth coverage over 4 times more than the quasi-static baseline did. The simplicity of FlingBot combined with its superior performance over quasi-static baselines demonstrates the effectiveness of dynamic actions for deformable object manipulation.

This repository contains code for training and evaluating FlingBot in both simulation and the real world on a dual-UR5 robot arm setup. It targets Ubuntu 18.04 and has been tested on machines with an Nvidia GeForce GTX 1080 Ti and a GeForce RTX 2080 Ti.

If you find this codebase useful, consider citing:

@inproceedings{ha2021flingbot,
	title={FlingBot: The Unreasonable Effectiveness of Dynamic Manipulation for Cloth Unfolding},
	author={Ha, Huy and Song, Shuran},
	booktitle={Conference on Robot Learning (CoRL)},
	year={2021}
}

If you have any questions, please contact me at huy [at] cs [dot] columbia [dot] edu.

Table of Contents

  • Simulation
  • Real World

Simulation

Setup

This section walks you through setting up the CUDA-accelerated cloth simulation environment. To start, install Blender, Docker, and nvidia-docker.

Python Dependencies

We have prepared a conda environment YAML file that contains all the Python dependencies.

conda env create -f flingbot.yml

Compiling the simulator

This codebase uses a CUDA-accelerated cloth simulator that can load arbitrary meshes to train a cloth unfolding policy. The simulator is a fork of PyFlex from Softgym, and requires a GPU to run. We have provided a Dockerfile in this repo for compiling and using this simulation environment for training in Docker.

docker build -t flingbot .

To launch the docker container, go to this repo's root directory, then run

export FLINGBOT_PATH=${PWD}
nvidia-docker run \
	-v $FLINGBOT_PATH:/workspace/flingbot\
	-v /path/to/your/anaconda3:/path/to/your/anaconda3\
	--gpus all --shm-size=64gb  -d -e DISPLAY=$DISPLAY -e QT_X11_NO_MITSHM=1 -it flingbot

You might need to change --shm-size appropriately for your system.

Add conda to PATH, then activate flingbot

export PATH=/path/to/your/anaconda3/bin:$PATH
conda init bash
source ~/.bashrc
conda activate flingbot

Then, at the root of this repo inside the docker container, compile the simulator with

. ./prepare.sh && ./compile.sh

NOTE: Always make sure you're in the correct conda environment before running these two shell scripts.

The compilation will result in a .so shared object file. ./prepare.sh sets the environment variables needed for compilation and also tells the python interpreter to look into the build directory containing the compiled .so file.

After this .so object file is created, you should be able to run experiments outside of Docker as well as inside it. In my experience, as well as others' in the community, Docker is best used only for compilation and usually fails for running experiments. If you experience this, try taking the compiled .so file and running the python commands in the sections that follow outside of Docker. Make sure you set the environment variables correctly using ./prepare.sh; not setting $PYTHONPATH correctly will result in ModuleNotFoundError: No module named 'pyflex' when you try to import pyflex.
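To quickly confirm that the compiled simulator is importable from your current shell, a minimal check like the sketch below can help. The PyFlex/bindings/build path matches the layout seen in the error logs further down this page; adjust it to your checkout.

    # Minimal import check for the compiled pyflex module.
    import os
    import sys

    build_dir = os.path.join(os.getcwd(), 'PyFlex', 'bindings', 'build')
    if build_dir not in sys.path:
        # prepare.sh normally handles this via $PYTHONPATH
        sys.path.append(build_dir)

    try:
        import pyflex  # noqa: F401
        print('pyflex imported successfully from', build_dir)
    except ImportError as exc:
        print('pyflex import failed -- re-run . ./prepare.sh or recompile:', exc)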

You can check out Softgym's Docker guide and Daniel Seita's blog post on installing PyFlex with Docker for more information.

Evaluate FlingBot

In the repo's root, download the pretrained weights

wget https://flingbot.cs.columbia.edu/data/flingbot.pth

As described in the paper, we evaluate FlingBot on 3 different held-out evaluation datasets. First are normal cloths, which contain rectangular cloths from the same distribution as the training dataset. Second are large cloths, which are also rectangular like the normal cloths, but larger than the system's reach range. Third are shirts, which are completely unseen during training.

To download the evaluation datasets

wget https://flingbot.cs.columbia.edu/data/flingbot-normal-rect-eval.hdf5
wget https://flingbot.cs.columbia.edu/data/flingbot-large-rect-eval.hdf5
wget https://flingbot.cs.columbia.edu/data/flingbot-shirt-eval.hdf5

To evaluate FlingBot on one of the evaluation datasets, pass the corresponding path

python run_sim.py --eval --tasks flingbot-normal-rect-eval.hdf5 --load flingbot.pth --num_processes 1 --gui

You can remove --gui to run headless, and use more parallel environments with --num_processes 16. Since the simulator is hardware accelerated, the maximum --num_processes you can set is limited by how much memory your GPU has. You can also add --dump_visualizations to get videos of the episodes.

The output of evaluation is a directory whose name is prefixed with the checkpoint name (i.e., flingbot_eval_X in the example), which contains a replay_buffer.hdf5. You can print the summary statistics and dump visualizations with

python visualize.py flingbot_eval_X/replay_buffer.hdf5
cd flingbot_eval_X
python -m http.server 8080

The last command starts a webserver rooted at flingbot_eval_X so you can view the visualizations on your web browser at localhost:8080.
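If you prefer to poke at the raw episode data yourself, the replay buffer is a standard HDF5 file and can be opened with h5py. The snippet below is only a rough sketch: the exact group and attribute names inside replay_buffer.hdf5 are not documented here, so use visualize.py for the official summary statistics.

    # Rough sketch: inspect what the evaluation replay buffer contains.
    import h5py

    with h5py.File('flingbot_eval_X/replay_buffer.hdf5', 'r') as f:
        keys = list(f.keys())
        print(len(keys), 'top-level entries, e.g.:', keys[:5])
        first = f[keys[0]]
        print('attributes:', dict(first.attrs))
        if isinstance(first, h5py.Group):
            print('datasets:', list(first.keys()))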

Train FlingBot

In the repo's root, download the training tasks

wget https://flingbot.cs.columbia.edu/data/flingbot-rect-train.hdf5

Then train the model from scratch with

python run_sim.py --tasks_path flingbot-rect-train.hdf5 --num_processes 16 --log flingbot-train-from-scratch --action_primitives fling

Make sure to change --num_processes appropriately for your GPU memory capacity. You can also set --action_primitives to any subset of ['fling', 'stretchdrag', 'drag', 'place']. For instance, to train an unfolding policy which uses fling and drag at the same time, use --action_primitives fling drag.

Cloth renderer

In our paper, we use Blender to render cloths with domain randomization to help with sim2real transfer. However, training with Blender is much slower due to the overhead of launching a full rendering engine as a subprocess.

We also provide the option of rendering with OpenGL within PyFlex via --render_engine opengl. We recommend using this option if domain randomization is not necessary.

Note that the results reported in our paper were obtained with --render_engine blender.

We prefer using the Eevee engine over Cycles in Blender, since we require faster training time but do not need ray-traced images. However, because Eevee does not support headless rendering, you will need a virtual desktop environment if you plan to run the codebase on a headless server.

Generating new tasks

You can also generate new cloth unfolding task datasets. To generate a normal rect dataset

python environment/tasks.py --path new-normal-rect-tasks.hdf5 --num_processes 16 --num_tasks 200 --cloth_type square --min_cloth_size 64 --max_cloth_size 104

where the min and max cloth sizes are measured in number of particles per edge. Since the default particle radius is 0.00625m, 64-104 particles per edge gives a 0.4m-0.65m edge length. Similarly, to generate a large rect dataset

python environment/tasks.py --path new-large-rect-tasks.hdf5 --num_processes 16 --num_tasks 200 --cloth_type square --min_cloth_size 64 --max_cloth_size 120 --strict_min_edge_length 112

where a strict_min_edge_length of 112 ensures that at least one edge is longer than the system's physical reach range of 112 * 0.00625m = 0.7m. This physical limit is imposed with a max lift height and max stretch distance. To regenerate an unfolding task dataset with shirts similar to ours, download our version of Cloth3D, in which all jackets have been filtered out and all meshes have been appropriately processed

wget  https://flingbot.cs.columbia.edu/data/cloth3d.tar.gz
tar xvzf cloth3d.tar.gz

Then run the following command

python environment/tasks.py --path new-shirt-tasks.hdf5 --num_processes 16 --num_tasks 200 --cloth_type mesh --cloth_mesh_path cloth3d/val

You can replace --cloth_mesh_path with any directory containing only quad meshes. To achieve the best simulation quality, make sure edge lengths in the meshes are roughly the particle radius (0.00625m by default).
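If you want to sanity-check a mesh before adding it to a task dataset, a rough helper like the one below reports its edge lengths against the 0.00625m default particle radius. It assumes simple quad .obj files, and the file name passed at the bottom is only a placeholder.

    # Rough check of quad-mesh edge lengths against the default particle radius.
    import numpy as np

    PARTICLE_RADIUS = 0.00625  # meters

    def quad_obj_edge_lengths(path):
        verts, edges = [], set()
        with open(path) as f:
            for line in f:
                tok = line.split()
                if not tok:
                    continue
                if tok[0] == 'v':
                    verts.append([float(x) for x in tok[1:4]])
                elif tok[0] == 'f':
                    idx = [int(t.split('/')[0]) - 1 for t in tok[1:5]]
                    for a, b in zip(idx, idx[1:] + idx[:1]):  # the 4 edges of the quad
                        edges.add((min(a, b), max(a, b)))
        verts = np.array(verts)
        return np.array([np.linalg.norm(verts[a] - verts[b]) for a, b in edges])

    lengths = quad_obj_edge_lengths('cloth3d/val/some_shirt.obj')  # placeholder file name
    print('mean edge length: %.4fm (target ~%.5fm)' % (lengths.mean(), PARTICLE_RADIUS))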

Real World

Our real-world system uses 2 UR5 (CM3) arms, one equipped with an OnRobot RG2 gripper and one with a Schunk WSG50. We modify the Schunk WSG50 fingertip with a rubber tip for better cloth pinch grasping. We use 2 RGB-D cameras: an Azure Kinect v3 for the top-down camera and an Intel RealSense D415. You can set up the IP addresses of the robots and cameras inside real_world/setup.py.

Real world setup

Cloth dataset

The real world testing cloth dataset is stored in a dictionary called cloths at the top of real_world/realWorldEnv.py. To use your own cloths, add a dictionary item to cloths which contains flattened_area (as measured with compute_coverage()), cloth_size and mass. One entry has been left in there as an example. In each experiment, you can select the cloth you're currently using with CURRENT_CLOTH.
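For reference, a hypothetical entry could look like the sketch below. The key names follow the description above, but the values are placeholders you should replace with your own measurements.

    # Illustrative cloths entry in real_world/realWorldEnv.py (placeholder values).
    cloths = {
        'small_towel': {
            'flattened_area': 0.18,      # m^2, measured with compute_coverage()
            'cloth_size': (0.40, 0.45),  # meters; assumed (width, height) convention
            'mass': 0.2,                 # kg
        },
    }
    CURRENT_CLOTH = 'small_towel'  # cloth used for the current experiment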

Workspace background segmentation

In FlingBot, we use domain randomization on the background texture in Blender to help with sim2real transfer. However, a simpler solution is to train a simulation model on only black backgrounds, then filter out the background in the real world. Since object-background segmentation is a solved problem, this hack feels acceptable in the context of getting high-speed flinging to work. To use this, set replace_background=True in RealWorldEnv's constructor. This allows a simulation model trained with the OpenGL rendering engine (where background domain randomization is absent) to also transfer to the real world. Note that for the results we report in the paper, we use replace_background=False and Blender as our rendering engine.
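A minimal sketch of the idea, assuming a metric depth image from the top-down camera and a hand-tuned table depth (both thresholds below are placeholders, not values from the repo):

    # Sketch: paint everything at or behind the table plane black, so a model
    # trained on black backgrounds in simulation sees a similar input in real.
    import numpy as np

    def black_out_background(rgb, depth, table_depth=1.0, margin=0.02):
        foreground = depth < (table_depth - margin)  # cloth/arms are closer than the table
        out = rgb.copy()
        out[~foreground] = 0
        return out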

Stretching primitive variables

To avoid the use of expensive and inaccessible force sensors, we implement our stretching primitive using vision only. The stretching primitive and its variables are defined in real_world/stretch.py. The closed-loop procedure takes an RGB-D image at every iteration and determines whether the cloth has been stretched, halting if it has and continuing to stretch otherwise. The stretching detector works by finding the horizontal line at the top of the cloth mask and checking whether it is straight (a rough sketch of this check follows the variable list below).

You can set the 3 stretching variables as follows:

  • FOREGROUND_BACKGROUND_DIST: the depth distance from the front camera such that both arms and the cloth are closer than this distance after a grasp.
  • GRIPPER_LINE: The y (vertical) coordinate in the front camera image where the middle of the gripper is after a grasp.
  • CLOTH_LINE: The y (vertical) coordinate in the front camera image where the cloth is (just below the tips of the gripper) after a grasp.

While setting up these stretching variables, you should set debug=True in is_cloth_stretched().

This stretching primitive assumes the front camera is approximately 0.8m away from the center of the arms, with a frontal view of the arms. Make sure the front camera is always centered this way. If your front camera's distance from the two arms is different, adjust FOREGROUND_BACKGROUND_DIST accordingly.
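The sketch below illustrates the straightness check described above, assuming depth images from the front camera. The actual implementation lives in real_world/stretch.py; the tolerance value here is a made-up placeholder.

    # Sketch of the vision-only stretch check: is the top edge of the cloth
    # mask (just below the grippers) a straight horizontal line?
    import numpy as np

    def is_cloth_stretched(depth, foreground_background_dist, cloth_line,
                           straightness_tol=3):
        mask = depth < foreground_background_dist   # arms + cloth are in the foreground
        cloth_mask = mask[cloth_line:, :]           # region just below the gripper tips
        has_cloth = cloth_mask.any(axis=0)
        if not has_cloth.any():
            return False
        top_rows = np.argmax(cloth_mask, axis=0)[has_cloth]  # topmost cloth pixel per column
        return top_rows.max() - top_rows.min() <= straightness_tol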

Running FlingBot in the real world

Camera calibration

FlingBot's policy outputs actions encoded as pixels. To find and grasp the 3D position in the real-world setup corresponding to a chosen pixel in the RGB-D image, calibrate the camera pose relative to the arms' bases with

python calibrate_camera.py

The outputs of this script are a depth scale and the poses of the camera relative to the left and right arms, saved as camera_depth_scale.txt, top_down_left_ur5_cam_pose.txt, and top_down_right_ur5_cam_pose.txt, respectively.

I recommend rerunning this script every time you run real-world experiments.
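As a rough illustration of how these calibration files tie a predicted pixel to a grasp, the sketch below back-projects a pixel with a pinhole model and transforms it into an arm's base frame. The file names come from this README, but the 4x4-pose assumption and the intrinsics arguments are mine.

    # Sketch: map a chosen pixel (u, v) plus raw depth to a 3D point in the
    # right arm's base frame using the saved calibration outputs.
    import numpy as np

    depth_scale = float(np.loadtxt('camera_depth_scale.txt'))
    cam_to_right_base = np.loadtxt('top_down_right_ur5_cam_pose.txt')  # assumed 4x4 pose

    def pixel_to_right_base(u, v, raw_depth, fx, fy, cx, cy):
        z = raw_depth * depth_scale                       # metric depth
        pt_cam = np.array([(u - cx) * z / fx,             # pinhole back-projection
                           (v - cy) * z / fy,
                           z, 1.0])
        return (cam_to_right_base @ pt_cam)[:3]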

Loading a simulation trained model

To load a simulation model whose path is at ./flingbot.pth and run the real world setup,

python run_real_world.py --load flingbot.pth --warmup 128 --log logs/flingbot-real-world

where --warmup specifies the number of real-world data points to collect before the model begins finetuning. As with the simulation commands, you'll need to run . ./prepare.sh before the real-world commands.

Comments
  • ray.get([e.reset.remote() for e in envs]) does not work

    I set up the environment according to the instructions without any errors. But when I try to train the model, ray.get does not work anymore.

    input: python run_sim.py --tasks flingbot-rect-train.hdf5 --num_processes 2 --log flingbot-train-from-scratch --action_primitives fling

    The code is the following (to debug, I printed some information):

        #run_sim.py
        print(envs)
        observations = ray.get([e.reset.remote() for e in envs])
    
    

    the output of the terminal:

    2022-07-09 01:18:03,451 INFO services.py:1476 -- View the Ray dashboard at http://127.0.0.1:8265
    SEEDING WITH 0
    [Policy] Action primitives:
            fling
    Replay Buffer path: flingbot-train-from-scratch/replay_buffer.hdf5
    [Actor(SimEnv, d24c6cc9f6d506fd4331284101000000), Actor(SimEnv, 4b8d98d4a8025b3e5e0e3ccf01000000)]
    
    

    The code did not run further; it is stuck in an endless wait without outputting any more information.

    opened by happydog-gu 1
  • Could not find a package configuration file provided by "pybind11"

    Hello,

    I followed the instructions exactly in README.

    Entered the docker environment using this command:

    sudo docker exec -t -i 0e912bc162b8 /bin/bash
    

    When I tried to compile the .so inside the docker, I got the following error:

    CMake Error at CMakeLists.txt:5 (find_package):
      By not providing "Findpybind11.cmake" in CMAKE_MODULE_PATH this project has
      asked CMake to find a package configuration file provided by "pybind11",
      but CMake did not find one.
    
      Could not find a package configuration file provided by "pybind11" with any
      of the following names:
    
        pybind11Config.cmake
        pybind11-config.cmake
    
      Add the installation prefix of "pybind11" to CMAKE_PREFIX_PATH or set
      "pybind11_DIR" to a directory containing one of the above files.  If
      "pybind11" provides a separate development package or SDK, be sure it has
      been installed.
    
    
    -- Configuring incomplete, errors occurred!
    See also "/workspace/flingbot/PyFlex/bindings/build/CMakeFiles/CMakeOutput.log".
    See also "/workspace/flingbot/PyFlex/bindings/build/CMakeFiles/CMakeError.log".
    make: *** No targets specified and no makefile found.  Stop.
    

    Interestingly, I was able to compile successfully outside of docker (in the conda environment). But when I run the sample command, I got the following error:

    Traceback (most recent call last):
      File "run_sim.py", line 1, in <module>
        from utils import (
      File "/home/workspace/flingbot/utils.py", line 2, in <module>
        from environment import SimEnv, TaskLoader
      File "/home/workspace/flingbot/environment/__init__.py", line 1, in <module>
        from .simEnv import SimEnv
      File "/home/workspace/flingbot/environment/simEnv.py", line 3, in <module>
        from .utils import (
      File "/home/workspace/flingbot/environment/utils.py", line 7, in <module>
        import pyflex
    ImportError: /home/workspace/flingbot/PyFlex/bindings/build/pyflex.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cudaSetupArgument
    
    opened by genkv 1
  • How can I run the environment outside of Docker?

    How can I run the environment outside of Docker? After prepare.sh and compile.sh are run in Docker and the .so file is created, the environment cannot be run outside of Docker. Are additional steps needed?

    opened by licheng198 0
  • RayActorError: The actor died unexpectedly before finishing this task.

    ID: fffffffffffffffffd5f641d1e7ed592e00f065c01000000
    Worker ID: 20556ec92d7abbb0b2bf7fb0d363e865ab7587196b608cb3605c3f2f
    Node ID: 75d7f57c65e988cab741a0c5412548fb08f8fcb7e118eb82fc38fc4d
    Worker IP address: 172.17.0.2  Worker port: 37185  Worker PID: 1224
    Worker exit type: SYSTEM_ERROR
    Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.

    Traceback (most recent call last):
      File "run_sim.py", line 46, in <module>
        envs, task_loader = setup_envs(dataset=dataset_path, **vars(args))
      File "/workspace/flingbot/utils.py", line 158, in setup_envs
        ray.get([e.setup_ray.remote(e) for e in envs])
      File "/home/li/anaconda3/envs/flingbot/lib/python3.6/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
        return func(*args, **kwargs)
      File "/home/li/anaconda3/envs/flingbot/lib/python3.6/site-packages/ray/_private/worker.py", line 2277, in get
        raise value
    ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
        class_name: SimEnv
        actor_id: fd5f641d1e7ed592e00f065c01000000
        pid: 1224
        namespace: 79413bf6-9d63-48f7-baa3-20f27c337fe9
        ip: 172.17.0.2
    The actor is dead because its worker process has died.
    Worker exit type: SYSTEM_ERROR
    Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
    The actor never ran - it was cancelled before it started running.

    opened by licheng198 2
  • ImportError: /home/hc/dextairity/PyFlex/bindings/build/pyflex.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cudaSetupArgument

    Hi guys, When I run python test_sim.py, I get the following error:

    Traceback (most recent call last): File "test_sim.py", line 2, in <module> from sim_env import SimEnv File "/home/hc/dextairity/sim_env.py", line 7, in <module> import pyflex ImportError: /home/hc/dextairity/PyFlex/bindings/build/pyflex.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cudaSetupArgument

    Can anyone help me??

    opened by canhe173 3
  • AttributeError: module 'typing' has no attribute '_SpecialForm'

    When I run python run_sim.py --eval --tasks flingbot-normal-rect-eval.hdf5 --load flingbot.pth --num_processes 1 --gui there is a problem :

    Traceback (most recent call last): File "run_sim.py", line 1, in from utils import ( File "/media/randy/299D817A2D97AD94/fty/flingbot/utils.py", line 2, in from environment import SimEnv, TaskLoader File "/media/randy/299D817A2D97AD94/fty/flingbot/environment/init.py", line 1, in from .simEnv import SimEnv File "/media/randy/299D817A2D97AD94/fty/flingbot/environment/simEnv.py", line 3, in from .utils import ( File "/media/randy/299D817A2D97AD94/fty/flingbot/environment/utils.py", line 1, in from torch import cat, tensor File "/home/randy/anaconda3/envs/flingbot/lib/python3.6/site-packages/torch/init.py", line 643, in from .functional import * # noqa: F403 File "/home/randy/anaconda3/envs/flingbot/lib/python3.6/site-packages/torch/functional.py", line 6, in import torch.nn.functional as F File "/home/randy/anaconda3/envs/flingbot/lib/python3.6/site-packages/torch/nn/init.py", line 1, in from .modules import * # noqa: F403 File "/home/randy/anaconda3/envs/flingbot/lib/python3.6/site-packages/torch/nn/modules/init.py", line 2, in from .linear import Identity, Linear, Bilinear, LazyLinear File "/home/randy/anaconda3/envs/flingbot/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 6, in from .. import functional as F File "/home/randy/anaconda3/envs/flingbot/lib/python3.6/site-packages/torch/nn/functional.py", line 11, in from .._jit_internal import boolean_dispatch, _overload, BroadcastingList1, BroadcastingList2, BroadcastingList3 File "/home/randy/anaconda3/envs/flingbot/lib/python3.6/site-packages/torch/_jit_internal.py", line 34, in from typing_extensions import Final File "/home/randy/anaconda3/envs/flingbot/lib/python3.6/site-packages/typing_extensions.py", line 159, in class _FinalForm(typing._SpecialForm, _root=True): AttributeError: module 'typing' has no attribute '_SpecialForm'

    How can I solve this problem?

    opened by TriBall3 2
  • when i run `python run_sim.py', the worker died or was killed by an unexpected system error

    When I run python run_sim.py --eval --tasks flingbot-normal-rect-eval.hdf5 --load flingbot.pth --num_processes 1 --gui, the error shows:

    ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
    2021-11-22 15:10:23,194 WARNING worker.py:1228 -- A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker.
    RayTask ID: ffffffffffffffff341cd030556402df7c59625701000000
    Worker ID: 4f72e151e496fac468e1c730556e291e00ec1cfb29882f51097186fd
    Node ID: d4a9eb590967aeb63fe838e2eca52cf666565bf009207c0ec4a730e6
    Worker IP address: 192.168.1.106  Worker port: 41747  Worker PID: 18687

    I don't know why this issue occurs, could you please help me?

    opened by robint-XNF 4
Owner
Columbia Artificial Intelligence and Robotics Lab
We develop algorithms that enable intelligent systems to learn from their interactions with the physical world to execute complex tasks and assist people