The Generic Manipulation Driver Package - Implements a ROS interface over the Robotics Toolbox for Python

Overview

Armer Driver

A QUT Centre for Robotics open-source project. License: MIT.


Armer documentation can be found here


Armer aims to provide an interface layer between the hardware drivers of a robotic arm and the user, giving the user control in several ways:

In addition to this multi-method control layer, Armer is designed to be a compatibility layer, allowing the user to run the same code across different robotic platforms. Armer supports control of both physical and simulated arms, giving users the ability to develop even without access to a physical manipulator.

Below is a GIF of three different simulated arms moving with the same Cartesian velocity commands.

Requirements

Several ROS action servers, topics and services are set up by Armer to enable this functionality. A summary of these can be found here.

Armer is built on the Robotics Toolbox for Python (RTB) and requires a URDF-loaded RTB model to calculate the required movement kinematics. RTB comes with the browser-based simulator Swift, which Armer uses as an out-of-the-box simulator.

Due to these supporting packages, using Armer with a manipulator has several requirements:

Software requirements

Robot specific requirements

  • ROS drivers with joint velocity controllers
  • Robotics Toolbox model

Installation

Copy and paste the following code snippet into a terminal to create a new catkin workspace and install Armer to it. Note: this script will also configure the workspace to be sourced every time a bash terminal is opened.

sudo apt install python3-pip 
mkdir -p ~/armer_ws/src && cd ~/armer_ws/src 
git clone https://github.com/qcr/armer.git && git clone https://github.com/qcr/armer_msgs 
cd .. && rosdep install --from-paths src --ignore-src -r -y 
catkin_make 
echo "source ~/armer_ws/devel/setup.bash" >> ~/.bashrc 
source ~/armer_ws/devel/setup.bash
echo "Installation complete!"

Supported Arms

Armer relies on the manipulator's ROS driver to communicate with the low-level hardware, so the ROS drivers must be started alongside Armer.

Currently, Armer provides hardware packages that bundle Armer and the target manipulator's drivers together. If your arm model has a hardware package, control should be a fairly plug-and-play experience. (An experience we are still working on, so please let us know if it isn't.) Below are the GitHub pages for arms with hardware packages. Install directions can be found on their respective pages.

For more information on setting up manipulators not listed here see the Armer documentation, Supported Arms.

Usage

The Armer interface can be launched with the following command:

roslaunch armer_{ROBOT_MODEL} robot_bringup.launch config:={PATH_TO_CONFIG_YAML_FILE} sim:={true/false}

After launching, an arm can be controlled in several ways. Some quick tutorials can be referenced below:

For more information and examples see the Armer documentation
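As a quick illustration of one control method, Cartesian velocity commands can be streamed as geometry_msgs/TwistStamped messages. This is a minimal sketch: the topic name /arm/cartesian/velocity is assumed to match the default /arm namespace used in the example elsewhere in this README, so check it against your configuration.

```python
import rospy
from geometry_msgs.msg import TwistStamped

rospy.init_node('armer_velocity_example')
# Topic name assumes the default '/arm' namespace; adjust to your config.
pub = rospy.Publisher('/arm/cartesian/velocity', TwistStamped, queue_size=1)

rate = rospy.Rate(100)  # stream commands at a steady rate
start = rospy.get_time()
while not rospy.is_shutdown() and rospy.get_time() - start < 2.0:
    msg = TwistStamped()
    msg.header.stamp = rospy.Time.now()
    msg.twist.linear.z = 0.05  # move the end effector up at 5 cm/s
    pub.publish(msg)
    rate.sleep()
```

The same script should drive any supported arm, physical or simulated, which is the compatibility-layer behaviour shown in the GIF above.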

Comments
  • Plot signal from _traj_move

    Plot signal from _traj_move

    The output signal from _traj_move may be a sawtooth if it overshoots between points in the trajectory.

    We should plot the signal to ensure that this is not the case.

    Further, we should investigate mechanisms for interpolating between points to account for this issue.
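    One candidate mechanism, sketched below with a hypothetical helper (this is not Armer's actual implementation): linearly interpolate extra configurations in joint space between consecutive trajectory points, so the commanded signal steps smoothly through each segment instead of jumping past a waypoint.

```python
import numpy as np

def densify(traj, steps_between=10):
    """Insert linearly interpolated joint configurations between each
    consecutive pair of trajectory points (hypothetical helper, not
    Armer's actual implementation)."""
    traj = np.asarray(traj, dtype=float)
    out = []
    for q0, q1 in zip(traj[:-1], traj[1:]):
        # endpoint=False so q1 is not duplicated by the next segment
        for s in np.linspace(0.0, 1.0, steps_between, endpoint=False):
            out.append((1.0 - s) * q0 + s * q1)
    out.append(traj[-1])
    return np.array(out)

coarse = [[0.0, 0.0], [1.0, 2.0], [0.5, 1.0]]
dense = densify(coarse, steps_between=10)
print(dense.shape)  # (21, 2): 2 segments x 10 steps + final point
```

    Plotting the densified signal against the raw _traj_move output would make any remaining sawtooth visible.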

    bug 
    opened by suddrey-qut 2
  • startup race condition on joint positions

    startup race condition on joint positions

    If a latched pose goal is received before the first joint state message is registered by the robot, the following error is generated:

    [ERROR] [1636937448.492012]: Exception in your execute callback: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 0 is different from 7)
    Traceback (most recent call last):
      File "/opt/ros/noetic/lib/python3/dist-packages/actionlib/simple_action_server.py", line 289, in executeLoop
        self.execute_callback(goal)
      File "/home/robot/armer_ws/src/armer/armer/robots/ROSRobot.py", line 372, in pose_cb
        if self.__traj_move(traj, goal.speed if goal.speed else 0.2):
      File "/home/robot/armer_ws/src/armer/armer/robots/ROSRobot.py", line 677, in __traj_move
        current_twist = jacob0 @ current_jv
    ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 0 is different from 7)
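    A simple guard against this race, as a sketch only (the /joint_states topic name and this placement are assumptions, not Armer's actual fix): block until the first joint state arrives before starting the action servers, so no goal can be processed against an empty (size 0) joint vector.

```python
import rospy
from sensor_msgs.msg import JointState

# Sketch: wait for the first joint state before accepting goals.
rospy.init_node('armer_startup_guard_example')
first_state = rospy.wait_for_message('/joint_states', JointState, timeout=30.0)
rospy.loginfo('Received %d joint positions; safe to accept goals',
              len(first_state.position))
```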

    opened by suddrey-qut 1
  • Panda arm shake with current _traj_move and step methods

    Panda arm shake with current _traj_move and step methods

    The current version of the Armer package (stable with rtb 0.11.0-future and swift-sim 0.10.0) has some shake when run on a real Panda arm. Investigation into control code to be tested for review. See work in progress (wip) branch with issue ID for tracking.

    bug 
    opened by DasGuna 1
  • UR5 goes to negatives of x and y world coordinates

    UR5 goes to negatives of x and y world coordinates

    When using the following example script, the UR5 in the S11 labs reports its end position as x=-0.300 and y=-0.200 instead of the expected x=0.300 and y=0.200

    import rospy
    import actionlib
    from armer_msgs.msg import MoveToPoseAction, MoveToPoseGoal
    from geometry_msgs.msg import PoseStamped
    
    rospy.init_node('armer_example', disable_signals=True)
    pose_cli = actionlib.SimpleActionClient('/arm/cartesian/pose', MoveToPoseAction)
    pose_cli.wait_for_server()
    
    target = PoseStamped()
    # target.header.frame_id = 'world'
    target.pose.position.x = 0.300
    target.pose.position.y = 0.200
    target.pose.position.z = 0.290
    target.pose.orientation.x = -1.00
    target.pose.orientation.y =  0.00
    target.pose.orientation.z =  0.00
    target.pose.orientation.w =  0.00
    
    goal = MoveToPoseGoal()
    goal.pose_stamped=target
    pose_cli.send_goal(goal)
    pose_cli.wait_for_result()
    

    This was running on a new nuc with a clean installation.

    Running the same code in Swift produces a different pose, the reverse of the physical pose (as expected).

    Armer version: 6b69d0549b8c3123716dcbc6b30c53fe555605e9
    armer_ur: 2f67b9d8a60e08ea3e77f7b625546341cc6dbf19
    RTB: 0.11.0
    Swift: 0.10
    SpatialGeometry: 0.2
    SpatialMath: 0.11

    bug 
    opened by qutfaith 1
  • Armer requires escalation to SIGTERM to quit in example.

    Armer requires escalation to SIGTERM to quit in example.

    Ctrl-C does not cause Armer to quit cleanly.

    (robostackenv) stevem@steveQUT:~/armer_ws$ roslaunch armer armer.launch
    ... logging to /home/stevem/.ros/log/e526e96c-dfb2-11eb-8d88-b88584a53a39/roslaunch-steveQUT-364430.log
    Checking log directory for disk usage. This may take a while.
    Press Ctrl-C to interrupt
    WARNING: disk usage in log directory [/home/stevem/.ros/log] is over 1GB.
    It's recommended that you use the 'rosclean' command.

    started roslaunch server http://steveQUT:39161/

    SUMMARY

    PARAMETERS

    • /armer/config: /home/stevem/arme...
    • /rosdistro: noetic
    • /rosversion: 1.15.9

    NODES / armer (armer/armer)

    ROS_MASTER_URI=http://localhost:11311

    process[armer-1]: started with pid [364444]
    /home/stevem/mambaforge/envs/robostackenv/lib/python3.8/site-packages/swift/out
    ^C[armer-1] killing on exit
    ^[[A^C^C^C[armer-1] escalating to SIGTERM
    [armer-1] escalating to SIGKILL
    Shutdown errors:

    • process[armer-1, pid 364444]: required SIGKILL. May still be running.
    shutting down processing monitor...
    ... shutting down processing monitor complete
    done
    opened by stevencolinmartin 1
  • ServoToPoseAction is not available

    ServoToPoseAction is not available

    https://github.com/qcr/armer/blob/ceeb2dc2b8a20ff88a217d9a4c125a4c195f9a8b/examples/panda_example.py#L12

    Hi, the panda example fails because armer_msgs doesn't have all messages and actions defined. Best, Jan

    opened by Dyymon 0
  • Auto generated modules documentation cannot generate due to incompatible numpy issue

    Auto generated modules documentation cannot generate due to incompatible numpy issue

    RTB requires a different version of numpy than is available by default through apt. The resulting version conflict prevents Sphinx from importing numpy and generating the module docs.

    opened by qutfaith 0
  • Explore Collision Avoidance Strategies

    Explore Collision Avoidance Strategies

    This is a two part problem:

      • Servoing does not incorporate collision avoidance
      • Trajectory planners in the Robotics Toolbox do not currently support collision avoidance.
    enhancement 
    opened by suddrey-qut 0
  • Import world configurations

    Import world configurations

    World configuration files are a must for performing collision avoidance w.r.t. fixed features of the world.

    Would be nice if we could import MoveIt workspaces to reduce overhead when swapping.

    enhancement 
    opened by suddrey-qut 0
Owner
QUT Centre for Robotics (QCR)
A collection of the open source projects released by the QUT Centre for Robotics (QCR).