Code for paper 'Hand-Object Contact Consistency Reasoning for Human Grasps Generation' at ICCV 2021

Overview

GraspTTA

Hand-Object Contact Consistency Reasoning for Human Grasps Generation (ICCV 2021). report

Project Page with Videos

Teaser

Demo

Quick Results Visualization

We provide generated grasps on the out-of-domain HO-3D dataset (saved at ./diverse_grasp/ho3d). You can visualize the results with:

python vis_diverse_grasp.py --obj_id=6

The visualization will look like this:

Visualization

Generate diverse grasps on out-of-domain HO-3D dataset (the model is trained on ObMan dataset)

You can also generate the grasps yourself:

  • First, download the pretrained weights, unzip them, and put them into checkpoints.

  • Second, download the MANO model files (mano_v1_2.zip) from the MANO website. Unzip it and put mano/models/MANO_RIGHT.pkl into models/mano.

  • Third, download the HO-3D object models, unzip them, and put them into models/HO3D_Object_models.

  • The structure should look like this:

GraspTTA/
  checkpoints/
    model_affordance_best_full.pth
    model_cmap_best.pth
  models/
    HO3D_Object_models/
      003_cracker_box/
        points.xyz
        textured_simple.obj
        resampled.npy
        ...
    mano/
      MANO_RIGHT.pkl
  • Then, install V-HACD, which is used to simulate grasp displacement. Change this line to your own path.
  • Finally, run run.sh to install the remaining dependencies and start generating grasps.
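Once everything is in place, a quick shell check (not part of the repo; the paths come from the tree above) can confirm the layout before running run.sh:

```shell
# Verify the expected checkpoint/model layout before generating grasps.
ROOT=.   # adjust if your GraspTTA clone lives elsewhere
for f in \
    "$ROOT/checkpoints/model_affordance_best_full.pth" \
    "$ROOT/checkpoints/model_cmap_best.pth" \
    "$ROOT/models/mano/MANO_RIGHT.pkl" \
    "$ROOT/models/HO3D_Object_models"
do
    if [ -e "$f" ]; then echo "ok      $f"; else echo "MISSING $f"; fi
done
```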

Generate grasps on custom objects

  • First, resample 3000 points on the object surface as the input to the network. You can use this function.
  • Second, write your own dataloader and the related code in gen_diverse_grasp_ho3d.py.
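If you don't have the repo's resampling utility handy, area-weighted sampling of mesh triangles is one common way to get the 3000-point input. The following NumPy-only sketch is an illustration, not the repo's own function; the name `resample_surface` and the raw vertex/face inputs are assumptions:

```python
import numpy as np

def resample_surface(vertices, faces, n_points=3000, seed=0):
    """Sample n_points uniformly (by area) from a triangle mesh surface."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas via the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    probs = areas / areas.sum()
    # Pick a triangle per sample, weighted by area.
    tri = rng.choice(len(faces), size=n_points, p=probs)
    # Uniform barycentric coordinates inside each chosen triangle.
    u, w = rng.random((2, n_points))
    flip = u + w > 1.0
    u[flip], w[flip] = 1.0 - u[flip], 1.0 - w[flip]
    return v0[tri] + u[:, None] * (v1[tri] - v0[tri]) + w[:, None] * (v2[tri] - v0[tri])

# Example: 3000 points on a unit square made of two triangles.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2], [0, 2, 3]])
points = resample_surface(verts, faces)
print(points.shape)  # (3000, 3)
```

Libraries such as trimesh offer equivalent surface sampling if you prefer not to roll your own.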

Training code

Update coming soon.

Citation

@inproceedings{jiang2021graspTTA,
  title={Hand-Object Contact Consistency Reasoning for Human Grasps Generation},
  author={Jiang, Hanwen and Liu, Shaowei and Wang, Jiashun and Wang, Xiaolong},
  booktitle={Proceedings of the International Conference on Computer Vision},
  year={2021}
}

Acknowledgments

We thank:

  • MANO provided by Omid Taheri.
  • This implementation of PointNet.
  • This implementation of CVAE.
Comments
  • About Training


    Hello. I am a student studying Hand-Object mesh.

    First of all, congratulations on your research success.

    I have a few questions and I would appreciate it if you could answer them.

    1. When will the training code be released?
    2. It seems that Penetration, Vertex, Translation, Pose, and KLD losses are used when training the CVAE in the paper. Could you please tell me what the loss weights were?

    We will wait for your reply. Thank you.

    opened by chanwo0kim 6
  • Reproduce test results on ho3d


    Hi, thanks for the great work! I'm reproducing the HO-3D numbers in Table 1, and I have several questions.

    1. For each of the ten objects, how many samples should I use during evaluation? 100000?
    2. Am I correct to re-initialize the cVAE decoder weights at the beginning of each sequence (object) and keep optimizing them over the whole sequence?
    3. Will the batch size, the order of samples, and the augmentation of samples affect the evaluation results?

    Thanks!

    opened by zehongs 3
  • Issue with grasp visualization


    I am trying to visualize the diverse grasps. However, the visualized result seems wrong for some reason; I just followed the README and have no idea why this happens. I would appreciate it if someone could help!

    opened by tianhaowuhz 2
  • buggy implementation of intersection volume metric?


    Hi, I think the intersection volume metric is not correct. The function intersect_vox is supposed to compute the intersection volume. However, vox = mesh.voxelized() and vox.points only give surface points, so they do not represent a solid volume, and mesh.contains(points) then yields an underestimate of the intersection volume.

    https://github.com/hwjiang1510/GraspTTA/blob/cecb9642e6d63670d4e954cf420d03f1a93b5a90/metric/intersect.py#L7-L12


    opened by zehongs 1
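    The fix the issue above suggests is to count points filling the interior of both solids, not just surface points. The NumPy-only sketch below illustrates the idea with analytic inside-tests for two axis-aligned boxes as stand-ins for the hand and object meshes (for real meshes you would substitute `mesh.contains` or a filled voxelization); function names here are illustrative, not from the repo:

    ```python
    import numpy as np

    def box_contains(points, lo, hi):
        """Inside test for an axis-aligned box (stand-in for mesh.contains)."""
        return np.all((points >= lo) & (points <= hi), axis=1)

    def solid_intersection_volume(lo_a, hi_a, lo_b, hi_b, pitch=0.05):
        """Count voxel centers inside BOTH solids; volume = count * pitch**3."""
        lo = np.minimum(lo_a, lo_b)
        hi = np.maximum(hi_a, hi_b)
        # Voxel-center grid covering the union of the two bounding boxes.
        axes = [np.arange(l + pitch / 2, h, pitch) for l, h in zip(lo, hi)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
        inside = box_contains(grid, lo_a, hi_a) & box_contains(grid, lo_b, hi_b)
        return inside.sum() * pitch ** 3

    # Two unit cubes overlapping by half along x: true intersection volume 0.5.
    vol = solid_intersection_volume(
        np.zeros(3), np.ones(3),
        np.array([0.5, 0.0, 0.0]), np.array([1.5, 1.0, 1.0]))
    print(round(vol, 3))  # 0.5
    ```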
  • Hope for Grasp TTA training code


    Hello! I recently noticed your paper "Grasp TTA", which has strong reference value for our current research, so I hope to reproduce its results. However, the model training code has not been released yet. We hope to obtain that part of the code, and we will cite your paper in future articles. Thank you!

    opened by zhengyanzhao1997 0
  • Inaccurate Penetration Volume for GT in ObMan test set


    I admire this research work very much, but I encountered some problems when evaluating the ObMan test set.

    When I evaluated the ground truth of the ObMan test set, I found that the Penetration Volume computed on my side does not match your article: the value in your article is 1.70 cm³, while the value I computed is 2.68 cm³.

    I would like to confirm the details with you. The ObMan test set has a total of 6285 samples, each with a one-to-one hand-object correspondence, and I used the intersect_vox function from your evaluation code to evaluate the ground truth. Could you tell me whether I missed something that led to the different value?

    opened by lihaoming45 8
  • Inaccurate grasp predictions


    Hello, first I want to mention that this is some great work!

    I was using the trained model to generate grasps for my own object point clouds generated from simulation. Surprisingly, the generated hand vertices were very distant from the object. I am not sure of the reason for this; is there any requirement on the input object point cloud's origin and axis orientation before using the network that I may have missed?

    I am attaching an image of the predicted grasp for one of the input object point clouds I used:

    opened by abhinavkk 8
Owner: Hanwen Jiang