TACTO: A Fast, Flexible and Open-source Simulator for High-Resolution Vision-based Tactile Sensors

Overview

TACTO Simulator

This package provides a simulator for vision-based tactile sensors, such as DIGIT. It provides models for integration with PyBullet, as well as a renderer for touch readings.

NOTE: the simulator is not meant to provide physically accurate contact dynamics (e.g., deformation, friction); for these it relies on existing physics engines.

For updates and discussions, please join the #TACTO channel of the www.touch-sensing.org community.

Installation

The preferred way of installation is through PyPI:

pip install tacto

Alternatively, you can manually clone the repository and install the package using:

git clone https://github.com/facebookresearch/tacto.git
cd tacto
pip install -e .

Content

This package contains several components:

  1. A renderer to simulate readings from vision-based tactile sensors.
  2. An API to simulate vision-based tactile sensors in PyBullet.
  3. Mesh models and configuration files for the DIGIT and OmniTact sensors.

Usage

Additional packages (torch, gym, pybulletX) are required to run the following examples. You can install them with pip install -r requirements/examples.txt.

For a basic example of how to use TACTO in conjunction with PyBullet, look at [TBD].

For an example of how to use just the renderer engine, look at examples/demo_render.py.

For advanced examples of how to use the simulator with PyBullet, look at the examples folder.
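The overall flow looks roughly like the sketch below. This is a minimal sketch distilled from the demo scripts, not a verbatim example: the URDF paths are placeholders, and the exact Sensor constructor arguments may differ between versions, so consult the examples folder for the authoritative usage.

import pybullet as p
import tacto

# Connect to PyBullet and load a DIGIT sensor body (path is a placeholder).
p.connect(p.DIRECT)
digit_id = p.loadURDF("meshes/digit.urdf", useFixedBase=True)

# Create the tacto sensor and attach a virtual camera to the DIGIT base link.
sensor = tacto.Sensor(width=120, height=160)
sensor.add_camera(digit_id, [-1])

# Register an object with tacto so contacts with it can be rendered.
obj_id = p.loadURDF("objects/sphere_small.urdf", basePosition=[0, 0, 0.01])
sensor.add_object("objects/sphere_small.urdf", obj_id, globalScaling=1.0)

# Step the simulation and read tactile frames.
for _ in range(100):
    p.stepSimulation()
    color, depth = sensor.render()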

[Demo videos in the original repository: DIGIT, Allegro hand, OmniTact, grasping, and rolling.]

NOTE: the renderer requires a screen. For headless rendering, use "EGL" mode with a GPU and CUDA drivers, or "OSMESA" mode with CPU. See PyRender for more details.
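For instance, a common way to force a headless backend (this relies on PyRender's environment-variable mechanism, not a TACTO-specific API) is to set PYOPENGL_PLATFORM before any rendering code is imported:

import os

# Select a headless PyOpenGL backend BEFORE importing tacto/pyrender.
# "egl" requires a GPU with working EGL drivers; "osmesa" runs on the CPU.
os.environ["PYOPENGL_PLATFORM"] = "egl"  # or "osmesa"

import tacto  # the import must happen after the variable is set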

License

This project is licensed under the MIT license, as found in the LICENSE file.

Citing

If you use this project in your research, please cite:

@Article{Wang2020TACTO,
  author  = {Wang, Shaoxiong and Lambeta, Mike and Chou, Po-Wei and Calandra, Roberto},
  title   = {TACTO: A Fast, Flexible and Open-source Simulator for High-Resolution Vision-based Tactile Sensors},
  journal = {arXiv},
  year    = {2020},
  url     = {https://arxiv.org/abs/2012.08456},
}
Comments
  • tacto headless setup errors

    Note: I uncommented the following code: https://github.com/facebookresearch/tacto/blob/master/tacto/renderer.py#L21, which is equivalent to your suggestion to use EGL mode.

    Below is the error I get when attempting to run tacto headless:

    Traceback (most recent call last):
      File "demo_pybullet_digit.py", line 23, in main
        digits = tacto.Sensor(**cfg.tacto, background=bg)
      File "/viscam/u/small02/tacto/tacto/sensor.py", line 80, in __init__
        self.renderer = Renderer(width, height, background, config_path)
      File "/viscam/u/small02/tacto/tacto/renderer.py", line 77, in __init__
        self._init_pyrender()
      File "/viscam/u/small02/tacto/tacto/renderer.py", line 115, in _init_pyrender
        self.r = pyrender.OffscreenRenderer(self.width, self.height)
      File "/sailhome/small02/miniconda2/envs/tacto/lib/python3.8/site-packages/pyrender/offscreen.py", line 31, in __init__
        self._create()
      File "/sailhome/small02/miniconda2/envs/tacto/lib/python3.8/site-packages/pyrender/offscreen.py", line 149, in _create
        self._platform.init_context()
      File "/sailhome/small02/miniconda2/envs/tacto/lib/python3.8/site-packages/pyrender/platforms/egl.py", line 177, in init_context
        assert eglInitialize(self._egl_display, major, minor)
      File "/sailhome/small02/miniconda2/envs/tacto/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 402, in __call__
        return self( *args, **named )
      File "/sailhome/small02/miniconda2/envs/tacto/lib/python3.8/site-packages/OpenGL/error.py", line 228, in glCheckError
        raise GLError(
    OpenGL.error.GLError: GLError(
    	err = 12289,
    	baseOperation = eglInitialize,
    	cArguments = (
    		<OpenGL._opaque.EGLDisplay_pointer object at 0x7f00379bbdc0>,
    		c_long(0),
    		c_long(0),
    	),
    	result = 0
    )
    
    Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
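    For context, eglInitialize failing with error 12289 (EGL_NOT_INITIALIZED) usually means no usable EGL device was found. One thing worth trying (a sketch, not a confirmed fix) is to pin the EGL device pyrender selects, since its EGL platform reads the EGL_DEVICE_ID environment variable:

        import os

        # Both variables must be set before pyrender is imported.
        os.environ["PYOPENGL_PLATFORM"] = "egl"
        os.environ["EGL_DEVICE_ID"] = "0"  # try other indices on multi-GPU nodes

        import pyrender
        r = pyrender.OffscreenRenderer(120, 160)  # should now initialize headlessly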
    
    
    opened by deevanshimall 12
  • No tactile feedback for some objects

    Hi, I am trying to use tacto for a door-opening task, but I don't get tactile feedback when grasping the handle.

    [images: handle grasp and the corresponding sensor reading]

    For other objects like this banana it works well:

    [images: banana grasp and the corresponding sensor reading]

    Do you have any idea what could cause this problem? I made sure that the door object is added to the DIGIT sensor. This is the mesh of the door:

    [screenshot: door mesh]

    I took the DIGIT fingers from demo_pybullet_grasp.py and added them to our Panda gripper.

    Thank you!

    opened by lukashermann 9
  • Best way to add an object generated using `p.createMultiBody` in pybullet to Tacto

    Hi,

    I have object meshes from .obj files that I add to my pybullet simulator using p.createMultiBody.

    What would be the best steps to follow in order to add this object to the Tacto simulator? I am not sure if there's something similar to add_object(): https://github.com/facebookresearch/tacto/blob/a21d0c4626d74a546d94859226c2fea348babb6a/tacto/sensor.py#L127

    Thank you, Oliver
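    For reference, the linked add_object() takes a URDF path plus the PyBullet body id. A minimal workaround sketch (assuming that add_object(urdf_fn, obj_id, globalScaling) signature; this is not a confirmed API for createMultiBody objects) is to wrap each .obj in a throwaway single-link URDF pointing at the same mesh, then register the multibody id with it:

        # Throwaway single-link URDF pointing at the same .obj mesh.
        URDF_TEMPLATE = """<?xml version="1.0"?>
        <robot name="obj">
          <link name="base">
            <visual><geometry><mesh filename="{mesh}"/></geometry></visual>
            <collision><geometry><mesh filename="{mesh}"/></geometry></collision>
          </link>
        </robot>"""

        def register_obj(sensor, mesh_path, body_id):
            # Write the wrapper URDF next to the mesh, then hand it to tacto
            # so its renderer can load the geometry for the existing body id.
            urdf_path = mesh_path.replace(".obj", "_tacto.urdf")
            with open(urdf_path, "w") as f:
                f.write(URDF_TEMPLATE.format(mesh=mesh_path))
            sensor.add_object(urdf_path, body_id, globalScaling=1.0)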

    opened by Olimoyo 4
  • digit touch forces on heavy mass objects

    Hello,

    I have a few DIGIT-related questions:

    1. I understand that DIGIT is just another mesh object that is loaded into Bullet like any other collision object, but are there default units/size/mass for the loaded DIGIT object? And is there a function to change these default values?

    2. Let's say I have a heavy object B and I am using DIGIT to generate touch forces on it. I do get some force values, but they don't seem realistic given the relative masses of the default-loaded DIGIT and object B. I currently pass the globalScaling parameter to loadURDF when loading DIGIT to do some relative scaling. How do you recommend scaling DIGIT relative to object B to generate realistic touch forces? Is it necessary? Or do you recommend using multiple DIGITs?

    Please let me know if my question is unclear. Thanks.

    opened by deevanshimall 4
  • How to enable shadows in Tacto?

    Hi, I read your paper and was impressed by the realistic readings generated by Tacto. You mentioned in the paper that shadows can easily be enabled in Tacto, so I tried changing line 515 in renderer.py to: color, depth = self.r.render(self.scene, flags=RenderFlags.SHADOWS_ALL). However, pyrender 0.1.45 does not seem to support shadows for point lights; an exception is thrown:

    NotImplementedError: Shadows not implemented for point lights
    

    Could you help me find the way to enable shadows in Tacto? Thank you!
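    For what it's worth, pyrender 0.1.45 does implement shadow maps for directional and spot lights, just not for point lights. A minimal sketch of a possible workaround (an assumption about the rendering setup, not a confirmed TACTO change) is to light the scene with a DirectionalLight and request only directional shadows:

        import numpy as np
        import pyrender
        from pyrender import RenderFlags

        scene = pyrender.Scene()
        # Directional lights support shadow maps in pyrender; point lights do not.
        scene.add(pyrender.DirectionalLight(color=np.ones(3), intensity=3.0))
        cam_pose = np.eye(4)
        cam_pose[2, 3] = 0.5  # pull the camera back along +z
        scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=cam_pose)

        r = pyrender.OffscreenRenderer(240, 320)
        color, depth = r.render(scene, flags=RenderFlags.SHADOWS_DIRECTIONAL)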

    opened by chenwh14 4
  • Fail to create a GelSight around a cylinder

    Hi, everyone. I am trying to create a GelSight sensor around a cylinder. I tried to write a sensor configuration file following the OmniTact example, but the result turns out to be a blank image. Here is the link to my sensor configuration file.

    [images: a simple demonstration of the sensor, and the resulting (blank) GelSight image]

    What could lead to that error? Any information will be highly appreciated!

    opened by jc-bao 3
  • New pypi release

    Hi all, the currently released PyPI Tacto package dates from December 16, 2020. Would it be possible to build and upload a newer version of Tacto to PyPI so that it includes the fixes that have been introduced since then? Best, Oier

    opened by mees 3
  • PyTables error in experiments/grasp_stability/grasping_data_collection.py

    I ran setup.py and installed all additional requirements before running each package, but I am getting this error. I tried pip install tables, which did not fix the issue. Both draw.py and the file above use deepdish, which depends on PyTables:

      File "/Users/shivanimall/tacto/experiments/grasp_stability/draw.py", line 62, in <module>
        mean, std = load(field, Ns, epoch=epoch)
      File "/Users/shivanimall/tacto/experiments/grasp_stability/draw.py", line 27, in load
        log = dd.io.load(fn)
      File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/deepdish/io/__init__.py", line 14, in _f
        raise ImportError("You need PyTables for this function")
    ImportError: You need PyTables for this function
    
    opened by deevanshimall 3
  • control digit touch movements on an object

    Hello,

    I wanted to share some background on what I am using tacto for, hoping to get your thoughts/suggestions on the best way to go about this:

    I modified the robot in https://github.com/facebookresearch/tacto/blob/master/experiments/grasp_stability/robot.py to have only one fingertip (i.e., I deleted the left finger in setup/sawyer_wsg50.urdf). Instead of grasping or reaching as in https://github.com/facebookresearch/tacto/blob/master/experiments/grasp_stability/grasp_data_collection.py#L246, I am trying to perform a touch on a (static) object using DIGIT, with very high accuracy for the touch location/position on the object.

    My end goal is to make the finger_right_tip on the end effector reach a desired base position on the static object with close accuracy (at least up to one decimal place), so that I can obtain RGB and depth-map touch readings for the desired touch/collision point. Currently, using the go function in robot.py does not give me this accuracy, which is central to my use case. Another caveat I found: using the robot arm requires accounting for exact rotation/Euler values to reach a desired location.

    Some thoughts I have:

    1. Change/simplify the robot structure.
    2. Use a different type of end effector.
    3. Currently I keep the object static and move the DIGIT, but maybe I can do the reverse: control the object's movements (via grasp/lift) and bring it into contact with a static DIGIT sensor.
    4. The surface area of the DIGIT tip also matters, since it determines the maximum covered region around a given touch location (I want it small, for accuracy).

    Maybe this is physics-related, but is there a simple way/API in tacto to perform a touch at different desired points on an object?

    Please let me know if my question is unclear. Thank you so much!

    opened by deevanshimall 2
  • [FeatureRequest] Use PyBullet Client ID to enable multiple pybullet instances

    Currently the Sensor class doesn't accept a cid as an input parameter [0]. For cases where one wants to have multiple pybullet instances/threads, using the cid would be a simple way to allow it.

    [0] https://github.com/facebookresearch/tacto/blob/master/tacto/sensor.py#L52
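    A minimal sketch of what this could look like (cid is a hypothetical parameter here, not the current API): every pybullet call accepts a physicsClientId keyword, so Sensor would only need to store the client id and forward it:

        import pybullet as p

        class Sensor:
            def __init__(self, width=120, height=160, cid=0):
                # Client id of the pybullet instance this sensor belongs to.
                self.cid = cid

            def get_link_pose(self, body_id, link_id):
                # Forwarding the id keeps multiple servers/threads from colliding.
                state = p.getLinkState(body_id, link_id, physicsClientId=self.cid)
                return state[0], state[1]  # world position and orientation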

    good first issue 
    opened by mees 2
  • Wrong image normalization for grasp stability

    Hi, if I understand correctly, the RGB images of tacto are normalized with mean=0.5, std=0.5: https://github.com/facebookresearch/tacto/blob/main/experiments/grasp_stability/train.py#L236. However, since a pretrained resnet18 is used, I think the ImageNet statistics should be used for normalization: transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]).
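    For concreteness, a sketch of the suggested preprocessing (standard torchvision usage with the well-known ImageNet statistics) would be:

        from torchvision import transforms

        # Match the statistics the pretrained resnet18 saw during ImageNet
        # training, instead of mean=0.5, std=0.5.
        preprocess = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])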

    opened by mees 1
  • GelSight Mini support

    Hi authors, thank you for sharing this awesome repo!

    I'm currently using GelSight Mini from GelSight Inc., and am wondering if I can somehow use this simulator to simulate deformations of the GelSight Mini sensor. For that, I'd like to ask two questions:

    1. Do you have any plan to support the GelSight Mini sensor?
    2. How can I simulate a GelSight Mini-like sensor by modifying the current code?
      • It might not be as accurate as the DIGIT sensor, but I imagine this simulator has parameters that enable it to simulate other gel-based sensors, such as parameters that define sensor size, geometry, pose, etc. I want to know if there are workarounds.

    Thanks!

    opened by keiohta 0
  • Franka Panda support

    Thank you for making this wonderful simulator!

    I'm just wondering if it would be possible to support Franka Panda arm simulation? I'm using the Panda instead of the Sawyer, and I would really like to use the Panda in simulation. Thanks!

    opened by HaoLiRobo 2
  • I wonder if the synthetic images are significantly different from the real-world images and could lead to depth calculation failure.

    Hi, thanks for the fantastic work!

    However, when I use the synthetic images to train an image-to-depth neural network, it performs very badly on real-world images.

    I found that the synthetic images (shown below) are significantly different from the real images; is this the reason for the depth calculation failure? When I test the neural network on synthetic images, it works fine. However, on real-world images it just predicts no-depth images (fully black images).

    As shown below, the contact in the synthetic images looks more three-dimensional.

    [images: synthetic vs. real sensor images]

    I am new to the image processing area, so I may be missing some key preprocessing steps that would make the real-world images more like the synthetic ones.

    Is my DIGIT sensor not fabricated well?

    Or is some preprocessing needed to make the real-world images more like the synthetic ones?

    Any advice from the community will be extremely helpful to me.

    opened by pekkykang 0
  • Unreasonable white pixels in render images

    Hello! When I render color and depth with EGL, some images have white pixels, as shown below. This doesn't happen when using OpenGL. Does anyone happen to know why this occurs? [images: color_7, depth_7]

    opened by estellexky 0
  • [Request/Support] Using Grasping Stability NN in real world

    Hi @wx405557858, my team and I have bought the DIGIT sensors (directly from GelSight). I have read in your paper about the grasp stability NN that you trained.

    We intend to use the tactile feedback from the sensors to establish the stability of the grasp (we use real-life YCB Dataset objects). We have some ideas in mind, but we would also like to test the NN you trained in the real world. We would probably implement it within ROS, so anything callable from Python should work. Could you kindly walk me through, at the code level:

    - How do I run inference with the NN?
    - Would it be possible to have the script to run it, and the weights? (We would probably do fine-tuning on top of them.)
    - How/where do I provide the DIGIT tactile images as input?
    - What exactly is the provided output?

    Regards, Roberto

    opened by robertokcanale 0