
Overview

ManiSkill-Learn

ManiSkill-Learn is a framework for training agents on the SAPIEN Open-Source Manipulation Skill Challenge (ManiSkill Challenge), a large-scale learning-from-demonstrations benchmark for object manipulation. In this challenge, an agent aims to generalize its manipulation skills to unseen objects of the same category, given demonstrations and visual inputs.

An important feature of this package is that it supports visual inputs, especially point cloud inputs. Such visual data is widely obtainable and applicable in real-world settings such as self-driving and robotics. Point cloud features also contain explicit and accurate positional information, which can be challenging to infer from RGB-D images alone.

ManiSkill-Learn implements various point cloud-based network architectures (e.g. PointNet, PointNet + Transformer) adapted to various learning-from-demonstrations algorithms (e.g. Behavior Cloning (BC) and offline/batch RL methods such as BCQ, CQL, and TD3+BC). It is easy to design new network architectures and new learning-from-demonstrations algorithms, change the observation processing framework, and generate new demonstrations.

Getting Started

Installation

ManiSkill-Learn requires Python >= 3.6 and PyTorch >= 1.5.0. We suggest using Python 3.8 and PyTorch 1.9.0 with CUDA 11.1. The evaluation system of the ManiSkill challenge uses Python 3.8.10. To create a matching Anaconda environment, run conda create -n "myenv" python=3.8.10.

To get started, enter the parent directory of where you installed ManiSkill (mani_skill) and clone this repo, then install with pip under the same environment where mani_skill is installed.

cd {parent_directory_of_mani_skill}
conda activate mani_skill #(activate the anaconda env or virtual env where mani_skill is installed)
git clone https://github.com/haosulab/ManiSkill-Learn.git
cd ManiSkill-Learn/
pip install -e .

Due to variations in CUDA versions, we did not include pytorch or torchvision in requirements.txt. Therefore, please install torch and torchvision before using this package. A version reference can be found at this URL.
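
For example, for CUDA 11.1 the following command installs the suggested versions (illustrative only; please double-check the exact versions against the official PyTorch installation instructions for your CUDA setup):

pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html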

Simple Workflow

Download Data - Quick Example

Enter this repo, then download example demonstration data from Google Drive and store the data under ./example_mani_skill_data.

Evaluation on Simple Pretrained Models

This section provides a simple example of our evaluation pipeline. To evaluate the example pre-trained models, please check the script scripts/simple_mani_skill_example/eval_bc_example.sh. For example, you can run

python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py \
--gpu-ids=0 --cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0" \
"eval_cfg.save_video=True" "eval_cfg.num=10" "eval_cfg.use_log=True" \
--work-dir=./test/OpenCabinetDrawer_1045_link_0-v0_pcd \
--resume-from=./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt --evaluation

to test the performance of PointNet + Transformer on environment OpenCabinetDrawer_1045_link_0-v0 in ManiSkill.

Train a Simple Agent with Behavior Cloning (BC)

This section provides a simple example of our training pipeline. To train with example demonstration data, please check the scripts in scripts/simple_mani_skill_example/. For example, you can run

python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py \
--gpu-ids=0 --cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0" \
--work-dir=./work_dirs/OpenCabinetDrawer_1045_link_0-v0 --clean-up

to train PointNet + Transformer with demonstration data generated on OpenCabinetDrawer_1045_link_0-v0 in ManiSkill.

Results can vary somewhat between different runs (which is very common in RL), so you might want to train multiple times and choose the best model.

Download Full Demonstration Data and Pretrained Models

The simple examples above only consider training and evaluation on one single object instance, but our challenge aims at training on many object instances and generalizing the learned manipulation skills to novel object instances.

All the data used in this repository is stored on Google Drive.

You can download the full point cloud demonstration dataset (which is stored here) along with models (which is stored here) pre-trained on all of our training object instances. The demonstration data and pre-trained models need to be stored under ./full_mani_skill_data to run the provided scripts in scripts/full_mani_skill_example.

If you want to render point cloud demonstrations from state demonstrations by yourself, you can download the demonstration data with environment state here and put them in ./full_mani_skill_state_data/. Please read Demonstrations and Generating Custom Point Cloud Demonstrations.

You can run python tools/check_md5.py to check whether you have downloaded all the files correctly.

Download Data from ScienceDB.cn

If you cannot access Google easily, you can download data from here.

Full Training and Evaluation Pipeline

After you download the full data, you are ready to start the ManiSkill challenge. For the full training and evaluation pipeline, see Workflow.

Submit a Model

After you train a full model, you can submit it to our ManiSkill challenge. Please see Challenge Submission in Workflow for more details.

Demonstrations

To download our demonstration dataset, please see Download Full Demonstration Data and Pretrained Models.

This section introduces important details about our point cloud demonstration data. The format of our demonstrations is explained in Demonstrations Format. The provided point cloud demonstrations are downsampled and processed using an existing processing function, which is explained in more detail in Observation Processing. If you want to generate point cloud demonstrations using custom post-processing functions, please refer to Generating Custom Point Cloud Demonstrations.

We did not generate RGB-D demonstrations since downsampling an RGB-D image can easily lose important information, while downsampling a point cloud is much easier. If you want to generate them, please see Generating RGB-D Demonstrations.

Demonstrations Format

The demonstrations are stored in HDF5 format. The point cloud demonstrations have the following structure:

>>> from h5py import File
>>> f = File('./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_pcd.h5', 'r')
# f is a h5py.Group with keys traj_0 ... traj_n
>>> f['traj_0'].keys()
<KeysViewHDF5 ['actions', 'dones', 'next_obs', 'obs', 'rewards']>
# Let the length of the trajectory 0 be l,
# then f['traj_0']['actions'] is h5py.Dataset with shape == [l, action_dim]
# f['traj_0']['dones'] is a h5py.Dataset with shape == (l,), and the last element is True
# f['traj_0']['rewards'] is a h5py.Dataset with shape == (l,)
# f['traj_0']['obs'] and f['traj_0']['next_obs'] are h5py.Group,
# both have the following structure:
  {
  state: h5py.Dataset, shape (l, state_shape); agent's state, including pose, velocity, angular velocity of the moving platform of the robot, joint angles and joint velocities of all robot joints, positions and velocities of end-effectors
  pointcloud: h5py.Group
    {
    xyz: h5py.Dataset, shape (l, n_points, 3); position for each point, recorded in world frame
    rgb: h5py.Dataset, shape (l, n_points, 3); RGB values for each point
    seg: h5py.Dataset, shape (l, n_points, n_seg); some task-relevant segmentation masks, e.g. handle of a cabinet door
    }
  }
# For the description of segmentation masks (`seg`) for each task,
# please refer to the ManiSkill repo.
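
For example, continuing the h5py session above, a single trajectory can be read into numpy arrays as follows (a minimal sketch using plain h5py; datasets are loaded with the standard [()] indexing):

>>> traj = f['traj_0']
>>> actions = traj['actions'][()]                 # numpy array, shape (l, action_dim)
>>> xyz = traj['obs']['pointcloud']['xyz'][()]    # numpy array, shape (l, n_points, 3)
>>> seg = traj['obs']['pointcloud']['seg'][()]    # numpy array, shape (l, n_points, n_seg)
>>> actions.shape[0] == xyz.shape[0]
True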

You may notice that the structure of obs and next_obs is different from the raw observation structure of the ManiSkill environments. The raw observation structure of ManiSkill environments can be obtained by running the script below:

>>> import mani_skill.env, gym
>>> env = gym.make('OpenCabinetDoor-v0')
>>> env.set_env_mode(obs_mode='pointcloud')
>>> obs = env.reset()
# obs has the following structure:
{
  'agent': ... , # a vector that describes the agent's state, including pose, velocity, angular velocity of the moving platform of the robot, joint angles and joint velocities of all robot joints, positions and velocities of end-effectors
  'pointcloud': {
    'rgb': ... , # (N, 3) array, RGB values for each point
    'xyz': ... , # (N, 3) array, position for each point, recorded in world frame
    'seg': ... , # (N, k) array, some task-relevant segmentation masks, e.g. handle of a cabinet door
  }
}

Our demonstration data has the following differences. The differences are due to the observation post-processing function in mani_skill_learn/env/observation_process.py called by the Gym environment wrapper (mani_skill_learn/env/wrappers.py), which is explained in more detail in Observation Processing.

  • The demonstrations are downsampled and processed from the raw point cloud when utilizing the wrapper (see Observation Processing). Therefore, our demonstrations have far fewer points (n_points = 1200) than the raw point cloud (N = 160*400*3).
  • The state key in our demonstrations corresponds to the agent key in the raw observation of the ManiSkill environments. Keep in mind that this vector only represents the robot state of the agent and does not contain ground-truth information about the environment (i.e. it does not contain information such as object location or the distance from the robot to the object; such information should be inferred from the point cloud).

Even though our demonstration format differs from the raw observation structure of ManiSkill environments, we do NOT need to do anything special in this repo: the environments are wrapped by the SapienRLWrapper class in mani_skill_learn/env/wrappers.py, and the wrapper always calls the observation processing function whenever we advance a step in the environment and obtain the corresponding observations (i.e. observation, reward, done, info = env.step(action)) during model inference.

Observation Processing

For demonstration generation, online data collection, and model inference, the observation processing is done in the environment wrappers in mani_skill_learn/env/wrappers.py. For ManiSkill environments, the wrapper calls process_sapien_rl_base in mani_skill_learn/env/observation_process.py to downsample and process the raw point cloud. In our implementation, we selectively downsample from 160*400*3 points to 1200 points. Specifically, for each dimension of the segmentation mask, we first sample 50 points where the mask is True (if there are fewer than 50 such points, we keep all of them). We then randomly sample from the rest of the points where at least one of the segmentation masks is true, such that we obtain a total of 800 points (if there are fewer than 800, we keep all of them). Finally, we randomly sample from the points where none of the segmentation masks are true and where the points are not on the ground (i.e. have positive z-coordinate), such that we obtain a total of 1200 points.
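
The snippet below is a minimal numpy sketch of this selective downsampling scheme. It is for illustration only and is not the exact implementation in process_sapien_rl_base; the function name and arguments are made up.

import numpy as np

def selective_downsample(xyz, rgb, seg, n_per_mask=50, n_fg=800, n_total=1200):
    # xyz: (N, 3) positions, rgb: (N, 3) colors, seg: (N, K) boolean masks.
    rng = np.random.default_rng()
    chosen = []
    # 1. For each mask dimension, keep up to n_per_mask points where the mask is True.
    for k in range(seg.shape[1]):
        idx = np.flatnonzero(seg[:, k])
        if len(idx) > n_per_mask:
            idx = rng.choice(idx, n_per_mask, replace=False)
        chosen.append(idx)
    chosen = np.unique(np.concatenate(chosen))
    # 2. Top up to n_fg points from the remaining points covered by at least one mask.
    fg_rest = np.setdiff1d(np.flatnonzero(seg.any(axis=1)), chosen)
    need = max(n_fg - len(chosen), 0)
    if len(fg_rest) > need:
        fg_rest = rng.choice(fg_rest, need, replace=False)
    chosen = np.concatenate([chosen, fg_rest])
    # 3. Top up to n_total points from background points that are above the ground (z > 0).
    bg = np.flatnonzero(~seg.any(axis=1) & (xyz[:, 2] > 0))
    need = max(n_total - len(chosen), 0)
    if len(bg) > need:
        bg = rng.choice(bg, need, replace=False)
    chosen = np.concatenate([chosen, bg])
    return xyz[chosen], rgb[chosen], seg[chosen]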

Point clouds in the demonstration data we provide have already been processed by process_sapien_rl_base in our repo. If you want to write a custom processing function and use any demonstration data during training, you need to re-render the demonstrations first. See Generating Custom Point Cloud Demonstrations.

Generating Custom Point Cloud Demonstrations

To generate custom point cloud demonstrations using custom post-processing functions (by replacing process_sapien_rl_base, see Observation Processing), we have provided demonstrations containing all the necessary information to precisely reconstruct the environment at each time step. You should have already downloaded these files into ./full_mani_skill_state_data/.

The demonstration has the following format:

>>> from h5py import File
>>> f = File(path, 'r')
>>> f.keys()
dict_keys(['actions', 'dones', 'env_levels', 'env_scene_states', 
'env_states', 'episode_dones', ... , 'next_env_scene_states', 
'next_env_states', 'next_obs', 'obs', 'rewards'])

The API for rendering and converting the above demonstration to point cloud data is provided in tools/convert_state.py.

An example script for running convert_state.py is provided in scripts/simple_mani_skill_example/convert_state_to_pcd.sh. The script generates point clouds for only one of the environments, so you need to repeat it for all environments (see available_environments.txt in the ManiSkill repo).

Chunking Demonstration Dataset

The total size of the generated point cloud demonstrations for all environments of a task is larger than 10 GB. After loading and converting into the internal dataset format, the total memory consumed during agent training will be 60-120 GB. If your computer does not have enough memory, you can use a chunked dataset. Please refer to Demonstration Loading and Replay Buffer for more details.

Generating RGB-D Demonstrations

We did not provide pre-generated RGB-D demonstrations because, unlike point cloud demonstrations, they cannot be easily downsampled without losing important information, which means they would have a much larger size, on the scale of terabytes (300 trajs/env * 170 training envs * about 30 steps per traj * 160 * 400 * 3 * 4 * 4 bytes/float ≈ 4.7 TB). If you would like to train models using RGB-D demonstrations, you can use tools/convert_state.py with --obs_mode=rgbd to generate them. In addition, you also need to implement custom network architectures that process RGB-D images (see Network Architectures below).

Workflow

Training

We have provided example training scripts in scripts/full_mani_skill_example/. If you want to directly evaluate a pretrained model (such as those we provided), see Evaluation.

To train an agent, run tools/run_rl.py with appropriate arguments. Among the arguments, config requires a user to specify a path to the config file. cfg-options overrides some configs in an existing config file. The config file specifies information regarding the algorithm, the hyperparameters, the network architectures, the training process, and the evaluation process. For more details about the config, see Detailed Functionalities and Config Settings. Example config files for learning-from-demonstrations algorithms are in configs/.
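
For example, a full-training run on the drawer task could be launched along the lines below. This is only a sketch: the provided scripts in scripts/full_mani_skill_example/ additionally configure how the full demonstration dataset is loaded (see Demonstration Loading and Replay Buffer).

python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py \
--gpu-ids=0 --cfg-options "env_cfg.env_name=OpenCabinetDrawer-v0" \
--work-dir=./work_dirs/OpenCabinetDrawer-v0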

Results can vary somewhat between different runs (which is very common in RL), so you might want to train multiple times and choose the best model.

If you are interested in designing any custom functionalities, such as writing custom Reinforcement Learning algorithms / training pipelines or designing custom network architectures, please also see Detailed Functionalities and Config Settings for specific information.

If you are interested in designing custom computer vision architectures which map point cloud input to action output, please also pay attention to Demonstrations Format, Observation Processing, and Network Architectures.

Evaluation

Evaluating an agent can be done during training or using an existing model.

To evaluate an agent during training, run tools/run_rl.py with appropriate configs (see the Evaluation config settings below).

To evaluate an agent solely through an existing model, set appropriate eval_cfg and run tools/run_rl.py with --evaluation and --resume-from options.

Example evaluation scripts using existing pretrained models can be found in scripts/full_mani_skill_example/.

Challenge Submission

To create a submission for the ManiSkill challenge, please read the instructions in mani-skill-submission-example first.

If you want to include functionalities from our ManiSkill-Learn repo in your submission, an example user_solution.py and environment.yml can be found in the submission_example directory. In this case, please first move these files directly under the repo root (i.e. move them to {this_repo}/user_solution.py and {this_repo}/environment.yml). Also please ensure that the correct versions of torch and torchvision are in environment.yml.

Before submitting to our server, you can utilize the ManiSkill (local) evaluation tool to ensure that things are set up correctly (please refer to "Evaluation" section in the ManiSkill repo).

Detailed Functionalities and Config Settings

This section introduces different modules and functionalities in ManiSkill-Learn, along with how the config settings are parsed in our scripts. This section is especially useful if you want to implement any new functionalities or make changes to the default config settings we have given.

General Config Settings and Config in Command Line

At a high level, the configs for learning-from-demonstrations algorithms are in configs/. The configs/_base_/ subdirectory contains configs that can be reused across different environments, such as network architectures and evaluation configs. To train or evaluate an agent, run tools/run_rl.py with appropriate arguments and path to the config file (--config=path_to_config_file).

The config files are processed as nested dictionaries when running tools/run_rl.py, with the following rules: when a config file is imported as the _base_ config in another config file and there is a conflict in the value of the same dictionary key between the two files, the value in the latter file overrides the value in the _base_ file. If the latter file contains dictionary keys that the _base_ config doesn't have, the keys are merged together.

One can also specify cfg-options on the command line when running tools/run_rl.py. In this case, the configs passed on the command line override the existing configs in the config files.
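
As a minimal illustration of these merge and override rules (the file names and values below are hypothetical):

# configs/_base_/eval_example.py (hypothetical base config)
eval_cfg = dict(type='Evaluation', num=10, save_video=False)

# configs/bc/experiment_example.py (hypothetical config that imports the base)
_base_ = ['../_base_/eval_example.py']
eval_cfg = dict(num=100)  # overrides num; type and save_video are merged in from the base unchanged

# Command-line overrides take the highest priority, e.g.:
#   python -m tools.run_rl configs/bc/experiment_example.py --cfg-options "eval_cfg.num=50"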

General Training / Evaluation Process Management

During training, tools/run_rl.py calls mani_skill_learn/apis/train_rl.py to manage the training process, evaluation during training, and result logging. mani_skill_learn/apis/train_rl.py calls the implementations in mani_skill_learn/methods/ for algorithm-specific model forward functions and parameter update functions. These functions then invoke files under mani_skill_learn/networks/policy_network/ for forwarding policy networks and mani_skill_learn/networks/value_network/ for forwarding value networks (if the algorithm has them). When evaluating during training, mani_skill_learn/apis/train_rl.py calls the evaluator in mani_skill_learn/env/evaluation.py.

When evaluating pre-trained models, tools/run_rl.py directly calls the evaluator in mani_skill_learn/env/evaluation.py.

Learning-from-Demonstrations (LfD) Algorithms

We implemented several learning-from-demonstration algorithms in mani_skill_learn/methods/: BC (Behavior Cloning), BCQ (Batch-Constrained Q-Learning), CQL (Conservative Q-Learning), and TD3+BC.

Besides learning-from-demonstrations algorithms, we also provide implementations of online model-free agents such as SAC and TD3 (which don't use any demonstration data) in mani_skill_learn/methods/mfrl/.

The algorithm hyperparameters, along with the policy and value network architectures, are specified in the agent entry in the config files. The train_mfrl_cfg entry specifies training parameters. Specifically,

  • total_steps is the total number of training gradient steps.
  • For replay buffer related configs, i.e. init_replay_buffers and init_replay_with_split, please see Demonstration Loading and Replay Buffer. The reason we call the data structure for storing loaded demonstration data "replay buffer" is that we can share the LfD algorithm interface with the online algorithm interface; in pure imitation learning / offline RL settings, since we are not collecting online data and adding it to the replay buffer, when we load the demonstrations, the replay buffer becomes the entire dataset.
  • n_steps is set to 0 for pure imitation learning / offline settings. If one wants to combine offline algorithms with online data collection, then one needs to set some nonzero n_steps similar to the configs in configs/mfrl/. In addition, rollout_cfg needs to be set, agent.policy needs to be implemented explicitly for algorithms, and you might also want to change the implementation of replay buffer (since the original demonstrations might be overwritten when the number of samples reaches the buffer capacity). However, keep in mind that online data collection is expensive in point cloud setting.
  • n_updates refers to the number of agent updates for every sampled batch.
  • num_trajs_per_demo_file refers to the number of trajectories to load per demonstration file. An illustrative train_mfrl_cfg is sketched below.
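
A sketch of a train_mfrl_cfg entry for pure offline training; the values follow the example BC config for OpenCabinetDrawer_1045_link_0-v0 and should be adjusted for your own runs.

train_mfrl_cfg = dict(
    on_policy=False,
    total_steps=50000,     # total number of training gradient steps
    warm_steps=0,
    n_steps=0,             # 0 means no online data collection (pure imitation / offline RL)
    n_updates=500,         # number of agent updates for every sampled batch
    n_eval=50000,          # number of gradient steps between two evaluations
    n_checkpoint=50000,    # number of gradient steps between checkpoints
    init_replay_buffers='./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_pcd.h5',
)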

Network Architectures

Our network architectures are implemented in mani_skill_learn/networks. The mani_skill_learn/networks/backbones directory contains point cloud-based architectures such as PointNet and PointNet + Transformer. These specific architectures are built from configs during the policy and value network building processes.

We did not implement RGB-D based architectures because the RGB-D demonstrations would be a lot larger than the point cloud demonstrations (since downsampling an RGB-D image can easily lose important information, while downsampling a point cloud is a lot easier). Thus you need to implement custom image-based architectures if you want to train a model using RGB-D demonstrations.

Architecture-specific configurations are specified in the agent/{policy or value network}/nn_cfg entries of the config files. Note that some algorithms have multiple policy or value networks.

During model training, the inputs to the model (sampled from the replay buffer, see init_replay_buffers) have the following format (which is different from the raw observation structure in ManiSkill environments, recall Demonstrations Format):

let b = batch_size
dict(
  actions: (b, action_dim)
  dones: (b,)
  rewards: (b,)
  obs: dict(
    state: (b, state_shape)
    pointcloud: dict(
      xyz: (b, n_points, 3)
      rgb: (b, n_points, 3)
      seg: (b, n_points, n_seg)
    )
  )
  next_obs has the same format as obs 
)

During model inference, the inputs to the model have the following format (since we are not training the model, we don't have information such as next_obs):

let b = batch_size
dict(
  obs: dict(
    state: (b, state_shape)
    pointcloud: dict(
      xyz: (b, n_points, 3)
      rgb: (b, n_points, 3)
      seg: (b, n_points, n_seg)
    )
  )
  actions: (b, action_dim)
)

Environments

Source files for the environment utilities are stored in mani_skill_learn/env/.

Configs

In the config file, environment-related configurations can be set similar to the format below:

env_cfg = dict(
  type='gym',
  unwrapped=False,
  stack_frame=1,
  obs_mode='pointcloud',
  reward_type='dense',
  env_name={env_name},
)

Here stack_frame refers to the number of observation frames to stack (similar to frame stacking in Atari, except that we are now working with point clouds). env_name refers to the environment name. The other arguments are typically left unchanged.

Getting Environment Info

The environment info can be obtained through the script below. Note that the environment is wrapped and the point cloud observation has been post-processed.

>>> import gym
>>> import mani_skill.env
>>> from mani_skill_learn.env import get_env_info
>>> env = gym.make(env_name)
>>> env.set_env_mode(obs_mode='pointcloud')
>>> obs_shape, action_shape, action_space = \
get_env_info({'env_name': env_name, 'obs_mode': 'pointcloud', 'type': 'gym'})
>>> obs_shape
{'pointcloud': {'rgb': (1200, 3), 'xyz': (1200, 3), 'seg': (1200, 3)}, 'state': some_int1}
>>> action_shape
some_int2
>>> action_space
Box(-1.0, 1.0, (some_int2,), float32)

Demonstration Loading and Replay Buffer

We have provided functionalities for loading demonstrations into the replay buffer in mani_skill_learn/env/replay_buffer.py. During agent training, the replay buffer is loaded through buffer.restore in mani_skill_learn/apis/train_rl.py. The raw utilities for HDF5 file operations are in mani_skill_learn/utils/fileio/h5_utils.py.

Config for the replay buffer is set in both the train_mfrl_cfg and replay_cfg entries. In train_mfrl_cfg, init_replay_buffers refers to the path to a single demonstration file, a list of demonstration files, or the directory of a chunked dataset (see the paragraphs below for explanations) to be loaded into the replay buffer. init_replay_with_split takes a list of exactly two strings, where the first string is the directory of the demonstration dataset, and the second string is the path to the YAML file containing a list of training door/drawer/chair/bucket environment ids (refer to scripts/full_mani_skill_example/ for examples). This argument loads, from the directory specified by the first string, all demonstration files that correspond to the environment ids in the second file. One should only use one of the init_replay_buffers and init_replay_with_split arguments. You may set init_replay_buffers='' to ignore it.

For replay_cfg, the capacity config refers to the limit on the number of (observation, action, reward, next_observation, info) samples. The type config specifies the type of replay buffer: ReplayMemory if loading all demonstration data into the replay buffer at once and/or doing any online sampling; ReplayDisk if loading only a chunk of demonstration data at a time in order to save memory (in this case, we need to set a smaller capacity that is divisible by the batch size).

If your computer does not have enough memory to load all demonstrations at once, you can generate a chunked dataset using tools/split_datasets.py. The demonstrations from different environments will be randomly shuffled and stored in several files under the specified folder. To load a chunked dataset for agent training, you need to set replay_cfg.type='ReplayDisk' and train_mfrl_cfg.init_replay_buffers='the folder that stores the chunked dataset'. For more details, check out configs/bc/mani_skill_point_cloud_transformer_disk.py. Example scripts are in scripts/simple_mani_skill_example/run_with_chunked_dataset.sh and scripts/full_mani_skill_example/run_with_chunked_dataset.sh.
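
A sketch of the two loading modes described above (paths are placeholders and the capacity values are illustrative):

# Load all demonstration data into memory at once.
replay_cfg = dict(type='ReplayMemory', capacity=1000000)
train_mfrl_cfg = dict(
    init_replay_buffers='./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_pcd.h5',
    # ... other training parameters ...
)

# Stream a chunked dataset from disk to save memory; use a smaller capacity
# that is divisible by the batch size (e.g. 1280 for batch_size=128).
replay_cfg = dict(type='ReplayDisk', capacity=1280)
train_mfrl_cfg = dict(
    init_replay_buffers='./path/to/chunked_dataset_folder',
    # ... other training parameters ...
)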

Evaluation

Most of the evaluation configs are specified in the eval_cfg entry in the config files. Inside the entry, the num parameter specifies the number of trajectories to evaluate per environment. The num_procs parameter controls the parallelization.

In addition, the n_eval parameter in the train_mfrl_cfg entry specifies the number of gradient steps between two evaluations during agent training (if n_eval = None, no evaluation is performed during training). Set save_video=True if you want to see videos generated by the trained agent; note that saving videos may slow down evaluation.
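
For reference, an evaluation config in the style of the provided examples looks roughly like this (the values are illustrative):

eval_cfg = dict(
    type='Evaluation',
    num=10,               # number of trajectories to evaluate per environment
    num_procs=1,          # number of parallel evaluation processes
    use_hidden_state=False,
    start_state=None,
    save_traj=True,
    save_video=True,      # saving videos slows down evaluation
    use_log=True,
    env_cfg=dict(
        type='gym',
        unwrapped=False,
        stack_frame=1,
        obs_mode='pointcloud',
        reward_type='dense',
        env_name='OpenCabinetDrawer_1045_link_0-v0',
    ),
)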

Acknowledgements

Some functions (e.g. the config system and checkpointing) are adapted from MMCV.

Citation

@article{mu2021maniskill,
  title={ManiSkill: Learning-from-Demonstrations Benchmark for Generalizable Manipulation Skills},
  author={Mu, Tongzhou and Ling, Zhan and Xiang, Fanbo and Yang, Derek and Li, Xuanlin and Tao, Stone and Huang, Zhiao and Jia, Zhiwei and Su, Hao},
  journal={arXiv preprint arXiv:2107.14483},
  year={2021}
}

License

ManiSkill-Learn is released under the Apache 2.0 license, while some specific operations in this library are released under other licenses.

Comments
  • I can run the program but the output video isn't visible

    I installed maniskill and maniskill-learn according to readme and run the example: python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py
    --gpu-ids=3 --cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0"
    "eval_cfg.save_video=True" "eval_cfg.num=1" "eval_cfg.use_log=True"
    --work-dir=./test/OpenCabinetDrawer_1045_link_0-v0_pcd
    --resume-from=./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt --evaluation The program can run,but the video of test looks black:

    image

    Hope to get your help, thank you! The program log is as follows:

    INFO - 2021-08-30 09:39:12,600 - utils - Note: detected 72 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable. INFO - 2021-08-30 09:39:12,600 - utils - Note: NumExpr detected 72 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8. Size of image in the rendered video (160, 400, 3) /bin/sh: 1: /home/qjn/miniconda/envs/mani_skill/bin/nvcc: not found OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:23 - Environment info:

    sys.platform: linux Python: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] CUDA available: True GPU 0,1,2,5: Quadro RTX 8000 GPU 3,4: NVIDIA GeForce RTX 2080 Ti CUDA_HOME: /home/qjn/miniconda/envs/mani_skill NVCC: Num of GPUs: 6 GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 PyTorch: 1.8.0+cu111 PyTorch compiling details: PyTorch built with:

    • GCC 7.3
    • C++ Version: 201402
    • Intel(R) oneAPI Math Kernel Library Version 2021.3-Product Build 20210617 for Intel(R) 64 architecture applications
    • Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)
    • OpenMP 201511 (a.k.a. OpenMP 4.5)
    • NNPACK is enabled
    • CPU capability usage: AVX2
    • CUDA Runtime 11.1
    • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
    • CuDNN 8.0.5
    • Magma 2.5.2
    • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

    TorchVision: 0.9.0+cu111 OpenCV: 4.5.3 mani_skill_learn: 1.0.0

    OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:23 - Config: log_level = 'INFO' stack_frame = 1 num_heads = 4 agent = dict( type='BC', batch_size=128, policy_cfg=dict( type='ContinuousPolicy', policy_head_cfg=dict(type='DeterministicHead', noise_std=1e-05), nn_cfg=dict( type='PointNetWithInstanceInfoV0', stack_frame=1, num_objs='num_objs', pcd_pn_cfg=dict( type='PointNetV0', conv_cfg=dict( type='ConvMLP', norm_cfg=None, mlp_spec=['agent_shape + pcd_xyz_rgb_channel', 256, 256], bias='auto', inactivated_output=True, conv_init_cfg=dict(type='xavier_init', gain=1, bias=0)), mlp_cfg=dict( type='LinearMLP', norm_cfg=None, mlp_spec=[256, 256, 256], bias='auto', inactivated_output=True, linear_init_cfg=dict(type='xavier_init', gain=1, bias=0)), subtract_mean_coords=True, max_mean_mix_aggregation=True), state_mlp_cfg=dict( type='LinearMLP', norm_cfg=None, mlp_spec=['agent_shape', 256, 256], bias='auto', inactivated_output=True, linear_init_cfg=dict(type='xavier_init', gain=1, bias=0)), transformer_cfg=dict( type='TransformerEncoder', block_cfg=dict( attention_cfg=dict( type='MultiHeadSelfAttention', embed_dim=256, num_heads=4, latent_dim=32, dropout=0.1), mlp_cfg=dict( type='LinearMLP', norm_cfg=None, mlp_spec=[256, 1024, 256], bias='auto', inactivated_output=True, linear_init_cfg=dict( type='xavier_init', gain=1, bias=0)), dropout=0.1), pooling_cfg=dict(embed_dim=256, num_heads=4, latent_dim=32), mlp_cfg=None, num_blocks=6), final_mlp_cfg=dict( type='LinearMLP', norm_cfg=None, mlp_spec=[256, 256, 'action_shape'], bias='auto', inactivated_output=True, linear_init_cfg=dict(type='xavier_init', gain=1, bias=0))), optim_cfg=dict(type='Adam', lr=0.0003, weight_decay=5e-06))) eval_cfg = dict( type='Evaluation', num=1, num_procs=1, use_hidden_state=False, start_state=None, save_traj=True, save_video=True, use_log=True, env_cfg=dict( type='gym', unwrapped=False, stack_frame=1, obs_mode='pointcloud', reward_type='dense', env_name='OpenCabinetDrawer_1045_link_0-v0')) train_mfrl_cfg = dict( on_policy=False, total_steps=50000, warm_steps=0, n_steps=0, n_updates=500, n_eval=50000, n_checkpoint=50000, init_replay_buffers= './example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_pcd.h5') env_cfg = dict( type='gym', unwrapped=False, stack_frame=1, obs_mode='pointcloud', reward_type='dense', env_name='OpenCabinetDrawer_1045_link_0-v0') replay_cfg = dict(type='ReplayMemory', capacity=1000000) work_dir = './test/OpenCabinetDrawer_1045_link_0-v0_pcd/BC' resume_from = './example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt'

    OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:23 - Set random seed to None OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:24 - State shape:{'pointcloud': {'rgb': (1200, 3), 'xyz': (1200, 3), 'seg': (1200, 3)}, 'state': 38}, action shape:Box(-1.0, 1.0, (13,), float32) OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:24 - We do not use distributed training, but we support data parallel in torch OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:24 - Save trajectory at ./test/OpenCabinetDrawer_1045_link_0-v0_pcd/BC/test/trajectory.h5. OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:24 - Begin to evaluate OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:39 - Episode 0: Length 200 Reward: -2865.0203219550845 OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:40 - memory:5.53G gpu_mem_ratio:3.5% gpu_mem:1.65G gpu_mem_this:0.00G gpu_util:4% OpenCabinetDrawer_1045_link_0-v0 - INFO - 2021-08-30 09:39:40 - Num of trails: 1.00, Length: 200.00+/-0.00, Reward: -2865.02+/-0.00, Success or Early Stop Rate: 0.00

    opened by QiuJunning 27
  • Unsuccessful SAC training for demo generation

    Hi, we made a new environment with just a floating gripper, and we are trying to generate demo data with a SAC agent. However, we found that for opening-door task, when the handle is "horizontal", SAC agent fails to succeed after training (as seen in video below, on env 1001 and 1002). I have also attached the command I was using. I switched the seed from 0 to 10 and still didn't work. Please advise if we have done anything wrong. Thanks.

    python -m tools.run_rl configs/sac/sac_mani_skill_state_1M_train.py --seed=10 --cfg-options \"env_cfg.env_name={}\" \"rollout_cfg.type=Rollout\" \"rollout_cfg.num_procs=1\" \"eval_cfg.num_procs=1\" --gpu-ids=1".format(gripper_env)

    [video]

    opened by harryzhangOG 9
  • Pointcloud data is not exactly the same as state data?

    I downloaded the point cloud data from here and the state data from here and tried the following:

    import h5py
    
    pc_data = h5py.File('full_mani_skill_data/OpenCabinetDrawer/OpenCabinetDrawer_1000_link_0-v0.h5', 'r')
    state_data = h5py.File('full_mani_skill_state_data/OpenCabinetDrawer_state/OpenCabinetDrawer_1000_link_0-v0.h5', 'r')
    
    print((pc_data['traj_10']['actions'][:] == state_data['traj_10']['actions'][:]).all())
    

    And this prints False. Why is this the case? Were the two datasets generated separately?

    opened by arjung128 8
  • Some questions about the environment

    Thanks for the pretrained models. I found that these pretrained models were obtained on OpenCabinetDoor-v0, PushChair-v0, MoveBucket-v0 and OpenCabinetDrawer-v0. There are several different cabinet doors in OpenCabinetDoor-v0; are the pretrained models obtained on different doors, or on just one door?

    What is the relationship between OpenCabinetDoor-v0 and OpenCabinetDoor_1000-v0? Does OpenCabinetDoor-v0 include OpenCabinetDoor_1000-v0?

    And what is the relationship between OpenCabinetDoor_1000-v0 and OpenCabinetDoor_1000_link_0-v0?

    In the article you say "For each task, the average test success rates are calculated over the 10 test environments and 50 evaluation trajectories per environment.” If I want to get the results of the test by myself, which ten environments do I need to evaluate?

    opened by zhanghuzhenyu 5
  • AttributeError: 'sapien.core.pysapien.VulkanScene' object has no attribute 'set_ambient_light'

    when I run the following example python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py
    --gpu-ids=0 --cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0"
    "eval_cfg.save_video=True" "eval_cfg.num=10" "eval_cfg.use_log=True"
    --work-dir=./test/OpenCabinetDrawer_1045_link_0-v0_pcd
    --resume-from=./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt --evaluation

    I got this error opencv-contrib not installed, some features will be disabled. Please install with pip3 install opencv-contrib-python INFO - 2022-08-24 15:26:27,224 - utils - Note: NumExpr detected 16 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8. Traceback (most recent call last): File "/home/osama/anaconda3/envs/mani_skill/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/osama/anaconda3/envs/mani_skill/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/osama/Desktop/RI/ManiSkill/ManiSkill-Learn/tools/run_rl.py", line 265, in main() File "/home/osama/Desktop/RI/ManiSkill/ManiSkill-Learn/tools/run_rl.py", line 154, in main evaluator = build_evaluation(eval_cfg) File "/home/osama/Desktop/RI/ManiSkill/ManiSkill-Learn/mani_skill_learn/env/builder.py", line 32, in build_evaluation return build_from_cfg(cfg, EVALUATIONS, default_args) File "/home/osama/Desktop/RI/ManiSkill/ManiSkill-Learn/mani_skill_learn/utils/meta/registry.py", line 132, in build_from_cfg return obj_cls(**args) File "/home/osama/Desktop/RI/ManiSkill/ManiSkill-Learn/mani_skill_learn/env/evaluation.py", line 36, in init self.env = build_env(env_cfg) File "/home/osama/Desktop/RI/ManiSkill/ManiSkill-Learn/mani_skill_learn/env/env_utils.py", line 95, in build_env return build_from_cfg(cfg, ENVS, default_args) File "/home/osama/Desktop/RI/ManiSkill/ManiSkill-Learn/mani_skill_learn/utils/meta/registry.py", line 132, in build_from_cfg return obj_cls(**args) File "/home/osama/Desktop/RI/ManiSkill/ManiSkill-Learn/mani_skill_learn/env/env_utils.py", line 60, in make_gym_env env = gym.make(env_name, **tmp_kwargs) File "/home/osama/anaconda3/envs/mani_skill/lib/python3.8/site-packages/gym/envs/registration.py", line 145, in make return registry.make(id, **kwargs) File "/home/osama/anaconda3/envs/mani_skill/lib/python3.8/site-packages/gym/envs/registration.py", line 90, in make env = spec.make(kwargs) File "/home/osama/anaconda3/envs/mani_skill/lib/python3.8/site-packages/gym/envs/registration.py", line 60, in make env = cls(_kwargs) File "/home/osama/Desktop/RI/ManiSkill/mani_skill/env/open_cabinet_door_drawer.py", line 364, in init super().init( File "/home/osama/Desktop/RI/ManiSkill/mani_skill/env/open_cabinet_door_drawer.py", line 27, in init super().init( File "/home/osama/Desktop/RI/ManiSkill/mani_skill/env/base_env.py", line 93, in init obs = self.reset(level=0) File "/home/osama/Desktop/RI/ManiSkill/mani_skill/env/open_cabinet_door_drawer.py", line 39, in reset super().reset(*args, **kwargs) File "/home/osama/Desktop/RI/ManiSkill/mani_skill/env/base_env.py", line 160, in reset self._setup_renderer() File "/home/osama/Desktop/RI/ManiSkill/mani_skill/env/base_env.py", line 188, in _setup_renderer self._scene.renderer_scene.set_ambient_light( AttributeError: 'sapien.core.pysapien.VulkanScene' object has no attribute 'set_ambient_light'

    Note that opencv-contrib-python is installed.

    opened by samo133 4
  • Some Points on the Arm are Masked as Points on the Cabinet

    [image]

    I found some points on the arm are masked as points on the cabinet when running open-cabinet-door and open-cabinet-drawer environment. After running env.get_obs(), downloading the xyz and seg in the pointcloud and visualizing it in matplotlib, I got the image above. Anyone who would like to check the visualization could run the code in the issue.zip. issue.zip

    opened by Derick317 4
  • Exact State of Environment in Demonstrations

    Do we know the exact state of the environment in these demonstrations (i.e. position / orientation of articulated object)? Can this environment state scene be loaded into the visualizer? Or do we simply know the level of the environment used in the demonstrations (https://github.com/haosulab/ManiSkill-Learn/issues/8)?

    opened by arjung128 4
  • Problems occurred when I train SAC, even when I train example BC

    Two days ago I could train SAC and other networks properly on ManiSkill, but some problems suddenly occurred today. I reinstalled the ManiSkill benchmark and ManiSkill-Learn, but the problems are still here. nvidia-smi works, and the file /usr/share/vulkan/icd.d/nvidia_icd.json exists.

    Here is the problem: When running /ManiSkill-Learn/scripts/train_rl_agent/run_SAC.sh

    Traceback (most recent call last): File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/home/weikangwan/ManiSkill-Learn/mani_skill_learn/env/parallel_runner.py", line 43, in run func = self.cls(*self.args, **self.kwargs) File "/home/weikangwan/ManiSkill-Learn/mani_skill_learn/env/rollout.py", line 18, in init self.env = build_env(env_cfg) File "/home/weikangwan/ManiSkill-Learn/mani_skill_learn/env/env_utils.py", line 95, in build_env return build_from_cfg(cfg, ENVS, default_args) File "/home/weikangwan/ManiSkill-Learn/mani_skill_learn/utils/meta/registry.py", line 132, in build_from_cfg return obj_cls(**args) File "/home/weikangwan/ManiSkill-Learn/mani_skill_learn/env/env_utils.py", line 60, in make_gym_env env = gym.make(env_name, **tmp_kwargs) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/gym/envs/registration.py", line 145, in make return registry.make(id, **kwargs) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/gym/envs/registration.py", line 90, in make env = spec.make(**kwargs) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/gym/envs/registration.py", line 59, in make cls = load(self.entry_point) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/gym/envs/registration.py", line 18, in load mod = importlib.import_module(mod_name) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/importlib/init.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1014, in _gcd_import File "", line 991, in _find_and_load File "", line 975, in _find_and_load_unlocked File "", line 671, in _load_unlocked File "", line 848, in exec_module File "", line 219, in _call_with_frames_removed File "/home/weikangwan/ManiSkill/mani_skill/env/open_cabinet_door_drawer.py", line 5, in from mani_skill.env.base_env import BaseEnv File "/home/weikangwan/ManiSkill/mani_skill/env/base_env.py", line 59, in _renderer = sapien.VulkanRenderer(default_mipmap_levels=1) RuntimeError: vk::PhysicalDevice::createDeviceUnique: ErrorInitializationFailed

    and when running python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py \

    --gpu-ids=0 --cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0"
    --work-dir=./work_dirs/OpenCabinetDrawer_1045_link_0-v0 --clean-up

    Traceback (most recent call last): File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/weikangwan/ManiSkill-Learn/tools/run_rl.py", line 265, in main() File "/home/weikangwan/ManiSkill-Learn/tools/run_rl.py", line 254, in main main_mfrl_brl(cfg, args, rollout, evaluator, logger) File "/home/weikangwan/ManiSkill-Learn/tools/run_rl.py", line 96, in main_mfrl_brl agent.to('cuda') File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/torch/nn/modules/module.py", line 852, in to return self._apply(convert) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply module._apply(fn) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply module._apply(fn) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply module._apply(fn) [Previous line repeated 5 more times] File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/torch/nn/modules/module.py", line 552, in _apply param_applied = fn(param) File "/home/weikangwan/anaconda3/envs/mani_skill/lib/python3.8/site-packages/torch/nn/modules/module.py", line 850, in convert return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) RuntimeError: CUDA error: out of memory CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.

    So how can I solve these problems?

    opened by shen-hhao 4
  • Pre-trained OpenCabinetDrawer RL models

    When I evaluate the example pre-trained model using the provided script:

    python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py \
    --gpu-ids=0 --cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0" \
    "eval_cfg.save_video=True" "eval_cfg.num=10" "eval_cfg.use_log=True" \
    --work-dir=./test/OpenCabinetDrawer_1045_link_0-v0_pcd \
    --resume-from=./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt --evaluation
    

    During training:

    • Is this model trained on a single object instance, or multiple object instances within a category?
    • As for the variation in the environment, is the only variation (a) robot position, and (b) environment parameters e.g. robot friction during training?

    At test time:

    • Is this being tested on an unseen object instance from the same category as training?

    Thanks!

    opened by arjung128 3
  • Demonstration Action Not Successful

    Hi, I wrote a toy code snippet to test the demo data to see if they are perfect but have found much of the data does not contain successful demonstrations (see the video below). The code to reproduce is:

    import gym
    import mani_skill.env
    import numpy as np
    from sapien.core import Pose
    import h5py
    from h5py import File
    
    f = File('/home/knox/OpenCabinetDoor/OpenCabinetDoor_1061_link_0-v0.h5', 'r')
    acts = f['traj_0']['actions'][()]
    env = gym.make("OpenCabinetDoor_1061_link_0-v0")
    env.set_env_mode(obs_mode='pointcloud', reward_type='sparse')
    obs = env.reset(level=0)
    for i in range(100):
        env.step(np.zeros(13))
        if i > 50:
            env.step(acts[i-50, :])
        env.render('human')
    env.close()
    

    I don't know if I am doing anything fundamentally wrong or whether there are some internal problems with the data. Please advise. Thanks.

    https://user-images.githubusercontent.com/43732483/136081245-464af68a-0d13-4740-9b6f-5cf4cb86f8b7.MOV


    opened by harryzhangOG 3
  • Can you provide pretrained results of other algorithms, like BCQ and TD3+BC, and pretrained result of BC + PointNet?

    Can you provide pretrained results of other algorithms, like BCQ and TD3+BC, and pretrained result of BC + PointNet? I want to evaluate the accuracy of the algorithm in the article.

    opened by zhanghuzhenyu 2
  • How to get reproducible deterministic evaluation results?

    I evaluate the example pre-trained models on 100 trajectories. I set the seed to 0. I run the following command twice:

    python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py --gpu-ids=0 --evaluation \
    --work-dir=./test/OpenCabinetDrawer_1045_link_0-v0_pcd \
    --resume-from=./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt \
    --cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0" \
    "eval_cfg.save_video=False" \
    "eval_cfg.num=100" \
    "eval_cfg.num_procs=10" \
    "eval_cfg.use_log=True" \
    --seed=0
    

    For the first run, the Success or Early Stop Rate is 0.81. For the second run, the result is 0.84. It seems that the generated seed (using the following code) is different although I set the seed to 0 explicitly. https://github.com/haosulab/ManiSkill-Learn/blob/9742da932448a5234222cf94381ca0f861dc83fd/mani_skill_learn/env/evaluation.py#L72-L74 So how can I control the determinism through the seed?

    In addition, I have a question about the ManiSkill environment. I notice that there are shadows of objects and robots in the rendered images in the first version of your arXiv paper, like this: image

    But the world frame image I get is like this (I change the resolution to 256*256). How to make the image more realistic like the image shown above? word_frame39_5

    opened by Alxead 3
Owner: Hao Su's Lab, UCSD