ORB-SLAM2

Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities

Overview

Authors: Raul Mur-Artal, Juan D. Tardos, J. M. M. Montiel and Dorian Galvez-Lopez (DBoW2)

13 Jan 2017: OpenCV 3 and Eigen 3.3 are now supported.

22 Dec 2016: Added AR demo (see section 7).

ORB-SLAM2 is a real-time SLAM library for Monocular, Stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D case with true scale). It is able to detect loops and relocalize the camera in real time. We provide examples to run the SLAM system in the KITTI dataset as stereo or monocular, in the TUM dataset as RGB-D or monocular, and in the EuRoC dataset as stereo or monocular. We also provide a ROS node to process live monocular, stereo or RGB-D streams. The library can be compiled without ROS. ORB-SLAM2 provides a GUI to switch between SLAM Mode and Localization Mode; see section 9 of this document.

Related Publications:

[Monocular] Raúl Mur-Artal, J. M. M. Montiel and Juan D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015. (2015 IEEE Transactions on Robotics Best Paper Award). PDF.

[Stereo and RGB-D] Raúl Mur-Artal and Juan D. Tardós. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017. PDF.

[DBoW2 Place Recognizer] Dorian Gálvez-López and Juan D. Tardós. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188-1197, 2012. PDF

1. License

ORB-SLAM2 is released under a GPLv3 license. For a list of all code/library dependencies (and associated licenses), please see Dependencies.md.

For a closed-source version of ORB-SLAM2 for commercial purposes, please contact the authors: orbslam (at) unizar (dot) es.

If you use ORB-SLAM2 (Monocular) in an academic work, please cite:

@article{murTRO2015,
  title={{ORB-SLAM}: a Versatile and Accurate Monocular {SLAM} System},
  author={Mur-Artal, Ra\'ul and Montiel, J. M. M. and Tard\'os, Juan D.},
  journal={IEEE Transactions on Robotics},
  volume={31},
  number={5},
  pages={1147--1163},
  doi = {10.1109/TRO.2015.2463671},
  year={2015}
 }

If you use ORB-SLAM2 (Stereo or RGB-D) in an academic work, please cite:

@article{murORB2,
  title={{ORB-SLAM2}: an Open-Source {SLAM} System for Monocular, Stereo and {RGB-D} Cameras},
  author={Mur-Artal, Ra\'ul and Tard\'os, Juan D.},
  journal={IEEE Transactions on Robotics},
  volume={33},
  number={5},
  pages={1255--1262},
  doi = {10.1109/TRO.2017.2705103},
  year={2017}
 }

2. Prerequisites

We have tested the library on Ubuntu 12.04, 14.04 and 16.04, but it should be easy to compile on other platforms. A powerful computer (e.g. an i7) will ensure real-time performance and provide more stable and accurate results.

C++11 or C++0x Compiler

We use the new thread and chrono functionalities of C++11.

Pangolin

We use Pangolin for visualization and the user interface. Download and install instructions can be found at: https://github.com/stevenlovegrove/Pangolin.

OpenCV

We use OpenCV to manipulate images and features. Download and install instructions can be found at: http://opencv.org. At least version 2.4.3 is required. Tested with OpenCV 2.4.11 and OpenCV 3.2.

Eigen3

Required by g2o (see below). Download and install instructions can be found at: http://eigen.tuxfamily.org. At least version 3.1.0 is required.

DBoW2 and g2o (Included in Thirdparty folder)

We use modified versions of the DBoW2 library to perform place recognition and of the g2o library to perform non-linear optimizations. Both modified libraries (which are BSD-licensed) are included in the Thirdparty folder.

ROS (optional)

We provide some examples to process the live input of a monocular, stereo or RGB-D camera using ROS. Building these examples is optional. If you want to use ROS, version Hydro or newer is needed.
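
On recent Ubuntu releases, most of these dependencies (apart from Pangolin, which is built from source) can be installed from the standard package archives. A sketch, assuming the usual Ubuntu package names:

sudo apt-get install build-essential cmake git
sudo apt-get install libopencv-dev libeigen3-dev
sudo apt-get install libglew-dev   # needed to build Pangolin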

3. Building ORB-SLAM2 library and examples

Clone the repository:

git clone https://github.com/raulmur/ORB_SLAM2.git ORB_SLAM2

We provide a script build.sh to build the Thirdparty libraries and ORB-SLAM2. Please make sure you have installed all required dependencies (see section 2). Execute:

cd ORB_SLAM2
chmod +x build.sh
./build.sh

This will create libORB_SLAM2.so in the lib folder and the executables mono_tum, mono_kitti, rgbd_tum, stereo_kitti, mono_euroc and stereo_euroc in the Examples folder.
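
If build.sh fails partway, the steps it performs can be run by hand; a sketch mirroring the script's structure (paths relative to the repository root):

cd Thirdparty/DBoW2
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j

cd ../../g2o
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j

cd ../../../Vocabulary
tar -xf ORBvoc.txt.tar.gz

cd ..
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j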

4. Monocular Examples

TUM Dataset

  1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.

  2. Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder.

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER
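
For example, for the freiburg1_xyz sequence uncompressed into ~/Downloads (the path is illustrative):

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml ~/Downloads/rgbd_dataset_freiburg1_xyz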

KITTI Dataset

  1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php

  2. Execute the following command. Change KITTIX.yaml to KITTI00-02.yaml, KITTI03.yaml or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change PATH_TO_DATASET_FOLDER to the uncompressed dataset folder. Change SEQUENCE_NUMBER to 00, 01, 02, .., 11.

./Examples/Monocular/mono_kitti Vocabulary/ORBvoc.txt Examples/Monocular/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER

EuRoC Dataset

  1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets

  2. Execute the first of the following commands for V1 and V2 sequences, or the second for MH sequences. Change PATH_TO_SEQUENCE_FOLDER and SEQUENCE according to the sequence you want to run.

./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.txt Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_FOLDER/mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/SEQUENCE.txt 
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.txt Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE/cam0/data Examples/Monocular/EuRoC_TimeStamps/SEQUENCE.txt 

5. Stereo Examples

KITTI Dataset

  1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php

  2. Execute the following command. Change KITTIX.yaml to KITTI00-02.yaml, KITTI03.yaml or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change PATH_TO_DATASET_FOLDER to the uncompressed dataset folder. Change SEQUENCE_NUMBER to 00, 01, 02, .., 11.

./Examples/Stereo/stereo_kitti Vocabulary/ORBvoc.txt Examples/Stereo/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER

EuRoC Dataset

  1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets

  2. Execute the first of the following commands for V1 and V2 sequences, or the second for MH sequences. Change PATH_TO_SEQUENCE_FOLDER and SEQUENCE according to the sequence you want to run.

./Examples/Stereo/stereo_euroc Vocabulary/ORBvoc.txt Examples/Stereo/EuRoC.yaml PATH_TO_SEQUENCE/mav0/cam0/data PATH_TO_SEQUENCE/mav0/cam1/data Examples/Stereo/EuRoC_TimeStamps/SEQUENCE.txt
./Examples/Stereo/stereo_euroc Vocabulary/ORBvoc.txt Examples/Stereo/EuRoC.yaml PATH_TO_SEQUENCE/cam0/data PATH_TO_SEQUENCE/cam1/data Examples/Stereo/EuRoC_TimeStamps/SEQUENCE.txt

6. RGB-D Example

TUM Dataset

  1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.

  2. Associate RGB images and depth images using the Python script associate.py. We already provide associations for some of the sequences in Examples/RGB-D/associations/. You can generate your own associations file by executing:

python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt

  3. Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder. Change ASSOCIATIONS_FILE to the path to the corresponding associations file.

./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUMX.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE
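
For reference, each line of the associations file pairs an RGB frame with the depth frame closest in time, in the form rgb_timestamp rgb_file depth_timestamp depth_file; a sketch with illustrative timestamps:

1305031102.175304 rgb/1305031102.175304.png 1305031102.160407 depth/1305031102.160407.png
1305031102.211214 rgb/1305031102.211214.png 1305031102.194330 depth/1305031102.194330.png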

7. ROS Examples

Building the nodes for mono, monoAR, stereo and RGB-D

  1. Add the path including Examples/ROS/ORB_SLAM2 to the ROS_PACKAGE_PATH environment variable. Open your .bashrc file and add the following line at the end, replacing PATH with the folder where you cloned ORB_SLAM2:

export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM2/Examples/ROS

  2. Execute the build_ros.sh script:

chmod +x build_ros.sh
./build_ros.sh
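
If the build fails with a rosbuild directory-check error, ROS_PACKAGE_PATH is usually not pointing at the checkout being built. A quick sanity check before running the script (rospack should resolve ORB_SLAM2 to the clone you intend to build):

echo $ROS_PACKAGE_PATH
rospack find ORB_SLAM2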

Running Monocular Node

For a monocular input from topic /camera/image_raw, run node ORB_SLAM2/Mono. You will need to provide the vocabulary file and a settings file. See the monocular examples above.

rosrun ORB_SLAM2 Mono PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE
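
If your camera driver publishes on a different topic, standard ROS remapping can be appended to the command; for example, for a driver publishing on /usb_cam/image_raw (the topic name is illustrative):

rosrun ORB_SLAM2 Mono PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE /camera/image_raw:=/usb_cam/image_raw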

Running Monocular Augmented Reality Demo

This is a demo of augmented reality where you can use an interface to insert virtual cubes in planar regions of the scene. The node reads images from topic /camera/image_raw.

rosrun ORB_SLAM2 MonoAR PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE

Running Stereo Node

For a stereo input from topics /camera/left/image_raw and /camera/right/image_raw, run node ORB_SLAM2/Stereo. You will need to provide the vocabulary file and a settings file. If you provide rectification matrices (see the Examples/Stereo/EuRoC.yaml example), the node will rectify the images online; otherwise images must be pre-rectified.

rosrun ORB_SLAM2 Stereo PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE ONLINE_RECTIFICATION

Example: Download a rosbag (e.g. V1_01_easy.bag) from the EuRoC dataset (http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets). Open 3 terminal tabs and run one of the following commands in each tab:

roscore
rosrun ORB_SLAM2 Stereo Vocabulary/ORBvoc.txt Examples/Stereo/EuRoC.yaml true
rosbag play --pause V1_01_easy.bag /cam0/image_raw:=/camera/left/image_raw /cam1/image_raw:=/camera/right/image_raw

Once ORB-SLAM2 has loaded the vocabulary, press space in the rosbag tab. Enjoy! Note: a powerful computer is required to run the most demanding sequences of this dataset.

Running RGB-D Node

For an RGB-D input from topics /camera/rgb/image_raw and /camera/depth_registered/image_raw, run node ORB_SLAM2/RGBD. You will need to provide the vocabulary file and a settings file. See the RGB-D example above.

rosrun ORB_SLAM2 RGBD PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE
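
For a Kinect driven by freenect_launch, depth registration should be enabled so that the registered depth topic the node subscribes to is actually published; a sketch, assuming the depth_registration argument of freenect.launch:

roslaunch freenect_launch freenect.launch depth_registration:=true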

8. Processing your own sequences

You will need to create a settings file with the calibration of your camera. See the settings files provided for the TUM and KITTI datasets for monocular, stereo and RGB-D cameras. We use the calibration model of OpenCV. See the examples to learn how to create a program that makes use of the ORB-SLAM2 library and how to pass images to the SLAM system. Stereo input must be synchronized and rectified. RGB-D input must be synchronized and depth registered.
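
The entry point is the ORB_SLAM2::System class used by all the provided examples. A minimal monocular sketch modeled on Examples/Monocular/mono_tum.cc (the settings file name, image paths, frame count and frame rate are illustrative):

#include <string>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "System.h"

int main()
{
    // Vocabulary, settings file, sensor type, and whether to open the viewer.
    ORB_SLAM2::System SLAM("Vocabulary/ORBvoc.txt", "my_camera.yaml",
                           ORB_SLAM2::System::MONOCULAR, true);

    for (int i = 0; i < 1000; i++)
    {
        cv::Mat im = cv::imread("frames/" + std::to_string(i) + ".png",
                                CV_LOAD_IMAGE_UNCHANGED);
        if (im.empty())
            break;

        // Timestamp in seconds; TrackMonocular returns the current camera
        // pose Tcw as a 4x4 cv::Mat (empty if tracking is lost).
        double tframe = i / 30.0;
        cv::Mat Tcw = SLAM.TrackMonocular(im, tframe);
    }

    // Stop all threads and save the keyframe trajectory.
    SLAM.Shutdown();
    SLAM.SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
    return 0;
}

For stereo and RGB-D input the calls are TrackStereo and TrackRGBD respectively, with the sensor type changed accordingly.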

9. SLAM and Localization Modes

You can switch between SLAM Mode and Localization Mode using the GUI of the map viewer.

SLAM Mode

This is the default mode. The system runs three threads in parallel: Tracking, Local Mapping and Loop Closing. The system localizes the camera, builds a new map and tries to close loops.

Localization Mode

This mode can be used when you have a good map of your working area. In this mode the Local Mapping and Loop Closing threads are deactivated. The system localizes the camera in the map (which is no longer updated), using relocalization if needed.
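
The same switch is also exposed programmatically through the System class; a small sketch (method names as declared in include/System.h):

// Pause Local Mapping and Loop Closing; the tracker localizes
// against the frozen map, relocalizing if needed.
SLAM.ActivateLocalizationMode();

// ... keep feeding frames with TrackMonocular/TrackStereo/TrackRGBD ...

// Resume full SLAM (map building and loop closing).
SLAM.DeactivateLocalizationMode();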

Comments
  • How to obtain 3D point cloud of the map?

    Hello,

    I am a newbie and I am sorry if this is the wrong place to ask this question. I want to use the 3D point cloud obtained from ORB-SLAM in order to recognize structures like walls, floors, ceilings, etc. After this, I want to generate some augmentations. I just want to know how to obtain the pose of the camera and the 3D positions of all landmarks (the point cloud). Also, are all of these expressed in the world coordinate system?

    Thank you very much. Cheers!

    opened by akashshar 25
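
    A possible starting point, sketched against the public System API (GetTrackedMapPoints and MapPoint::GetWorldPos are declared in the headers; the snippet assumes a System instance named SLAM):

        // Tcw returned by the Track* calls is the world-to-camera pose (4x4),
        // so poses and map points share the world frame fixed at initialization.
        std::vector<ORB_SLAM2::MapPoint*> points = SLAM.GetTrackedMapPoints();
        for (ORB_SLAM2::MapPoint* p : points)
        {
            if (p == nullptr)
                continue;
            cv::Mat pos = p->GetWorldPos(); // 3x1 position in the world frame
        }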
  • ORB-SLAM2 with Kinect

    Hey everyone, I am trying to use a Kinect with ORB-SLAM2. I already went through the dataset examples, and they worked fine. However, when trying to use it with the Kinect, it doesn't work: the camera window displays "Waiting for Images". I already checked rostopic list, which gives: /camera/depth_registered/image_raw, /camera/rgb/image_raw, /rosout, /rosout_agg

    The two first topics are the ones that should be used with ORB-SLAM2, right?

    In addition, I already tried launching libfreenect: roslaunch freenect_launch freenect.launch

    I get a bunch more rostopics and I am even able to view rgb video stream from image_view: rosrun image_view image_view image:=/camera/rgb/image_raw However, I cannot view the depth image from /camera/depth_registered/image_raw, but I can view the one in the topic: rosrun image_view image_view image:=/camera/depth/image

    I tried to change the ros_rgbd.cc file From: message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "camera/depth_registered/image_raw", 1); To: message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "/camera/depth/image", 1);

    However, I didn't think it would work anyway (and it didn't). Neither of the above solved the problem.

    When executing the rqt_graph, it seems like ORB_SLAM2 creates a node called RGBD that is getting the topic /camera/rgb/image_raw. The rqt_graph can be seen below: https://www.dropbox.com/s/fri1ik0xtzw7a9e/rosgraph.png?dl=0 (seems like the link can only be accessed by copying and pasting into the browser)

    Would any of you have a suggestion of what I can do?

    opened by marcelinomalmeidan 25
  • ROS build error for ORB_SLAM2

    I want to build the ROS node. I have set the environment variable as follows and also added it to .bashrc. Still I get the errors shown below. Anyone have an idea?

    export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:home/ujjval/ORB_SLAM2/Examples/ROS/ORB_SLAM2

    ujjval@ujjval-VPCEH18FG:~/ORB_SLAM2$ ./build_ros.sh
    Building ROS nodes
    mkdir: cannot create directory ‘build’: File exists
    [rosbuild] Building package ORB_SLAM2
    [rosbuild] Error from directory check: /opt/ros/kinetic/share/ros/core/rosbuild/bin/check_same_directories.py /home/ujjval/ORB_SLAM2/Examples/ROS/ORB_SLAM2 1
    Traceback (most recent call last):
      File "/opt/ros/kinetic/share/ros/core/rosbuild/bin/check_same_directories.py", line 46, in <module>
        raise Exception
    Exception
    CMake Error at /opt/ros/kinetic/share/ros/core/rosbuild/private.cmake:102 (message):
      [rosbuild] rospack found package "ORB_SLAM2" at "", but the current directory is "/home/ujjval/ORB_SLAM2/Examples/ROS/ORB_SLAM2". You should double-check your ROS_PACKAGE_PATH to ensure that packages are found in the correct precedence order.
    Call Stack (most recent call first):
      /opt/ros/kinetic/share/ros/core/rosbuild/public.cmake:177 (_rosbuild_check_package_location)
      CMakeLists.txt:4 (rosbuild_init)

    -- Configuring incomplete, errors occurred!
    See also "/home/ujjval/ORB_SLAM2/Examples/ROS/ORB_SLAM2/build/CMakeFiles/CMakeOutput.log".
    make: *** No targets specified and no makefile found. Stop.

    opened by ujur007 24
  • Discuss map save & load

    When I needed to save and load maps, I saw many ORB-SLAM2 forks doing it in many ways, some not working, most saving unnecessary data.

    I went on a deep journey and got a decent working minimum binary file size, avoiding saving a lot of members: some ephemeral, some rebuildable. But I did it on a modified version of ORB-SLAM2, working only on monocular.

    If someone wants to discuss and code a neat serialization class for ORB-SLAM2 to share, this is the place to start.

    opened by AlejandroSilvestri 20
  • OpenCV error when running ros examples  in Ubuntu16.04 + ROS (kinetic version)

    Hi, I tried to run the ROS examples in Ubuntu 16.04 + ROS Kinetic, running the following commands in separate tabs:

    roscore
    rosrun ORB_SLAM2 Stereo ORB_SLAM2/Vocabulary/ORBvoc.txt ORB_SLAM2/Examples/Stereo/EuRoC.yaml true
    rosbag play --pause Downloads/MH_01_easy.bag /cam0/image_raw:=/camera/left/image_raw /cam1/image_raw:=/camera/right/image_raw

    I got this problem:

    OpenCV Error: Bad argument (Invalid pointer to file storage) in cvGetFileNodeByName, file /tmp/binarydeb/ros-kinetic-opencv3-3.1.0/modules/core/src/persistence.cpp, line 709
    terminate called after throwing an instance of 'cv::Exception'
    what(): /tmp/binarydeb/ros-kinetic-opencv3-3.1.0/modules/core/src/persistence.cpp:709: error: (-5) Invalid pointer to file storage in function cvGetFileNodeByName

    How can I solve this problem? Hope you can reply; thank you very much.

    opened by gblack007 18
  • ORB extraction

    I found 2 issues in ORB extraction when I used it in another project.

    1. It will crash in DistributeOctTree (nIni = 0) if the image width is smaller than the height.
    2. The results differ across repeated feature extractions on the same image, but I can't find anywhere that random variation is used.
    opened by jianchong-chen 17
  • Unable to build the ros examples!

    Hi,

    I'm having an issue with building the ROS examples. The first part of the installation works perfectly, i.e. I can successfully build the Thirdparty libraries and the examples. But when I try to build the ROS examples, I get this error:

    [rosbuild] Building package ORB_SLAM2
    [rosbuild] Error from directory check: /opt/ros/indigo/share/ros/core/rosbuild/bin/check_same_directories.py /home/ankit/orb_slam_catkin_ws/ORB_SLAM2/Examples/ROS/ORB_SLAM2 1
    Traceback (most recent call last):
      File "/opt/ros/indigo/share/ros/core/rosbuild/bin/check_same_directories.py", line 46, in <module>
        raise Exception
    Exception
    CMake Error at /opt/ros/indigo/share/ros/core/rosbuild/private.cmake:102 (message):
      [rosbuild] rospack found package "ORB_SLAM2" at "", but the current directory is "/home/ankit/orb_slam_catkin_ws/ORB_SLAM2/Examples/ROS/ORB_SLAM2". You should double-check your ROS_PACKAGE_PATH to ensure that packages are found in the correct precedence order.
    Call Stack (most recent call first):
      /opt/ros/indigo/share/ros/core/rosbuild/public.cmake:177 (_rosbuild_check_package_location)
      CMakeLists.txt:4 (rosbuild_init)

    -- Configuring incomplete, errors occurred!

    I guess this could be something to do with OpenCV, though I'm not sure! P.S. I have all the dependencies installed successfully.

    Thanks

    opened by ankitvora7 17
  • Map Viewer Top View

    Hello, for the Map Viewer, would there be a way to set the view to a top view, like in this video (https://www.youtube.com/watch?feature=player_embedded&v=LnbAI-o7YHk)? Thank you for your help!

    opened by ghost 16
  • Not solved yet. Please help! How can I upload the camera data and apply monocular in real time?

    Hi,

    I'm a newbie at SLAM and the Linux system. I followed the tutorial and ran the example TUM dataset in Ubuntu as a test; it works well. Now I want to feed my camera's raw data in real time to run the monocular algorithm. How can I do this? Which interface should I choose?

    And how can I make use of my phone's sensors to make a better adjustment?

    Thanks a lot for your help.

    opened by scp096 14
  • How to get the scale factor in Monocular ORB-SLAM?

    I am trying to run monocular ORB-SLAM with a preloaded map of the same area, and if it recognizes the place it should merge the maps together. Since monocular SLAM can only determine the pose up to scale, ORB-SLAM gives a different result (different scale) each time, which affects the comparison with the preloaded map. Is there any possibility to change the scale factor of one map according to another map? Does anyone here have experience with map merging using ORB-SLAM? I appreciate any kind of help!

    opened by Zhujiazhen 12
  • too many key frames!!!

    I'm using a calibrated Kinect v2 with ORB-SLAM2, but when I use it, even a small move of 4 cm produces many keyframes (about 14). Why? BTW, the dense point cloud is ugly.

    opened by EXing 12
  • How to apply ORB_SLAM2 for the KITTI dataset sequences in color (RGB)?

    I am in doubt about how to use ORB_SLAM2 correctly with the odometry dataset in color (65 GB) located at this link.

    Mainly because we have P0, P1, P2 and P3 in the calibration file. P2 and P3 are the projection matrices for the color cameras, using the left gray camera as reference. More details of the setup used in the KITTI dataset are located at this link.

    An example of one calib file containing these matrices is shown below. We can observe that they have the same focal lengths and centers, but when we compare the last column of P2 (left color camera) and P0 (left gray camera), P2 has other values that are not used in the config file of ORB_SLAM2. We only use the first value, which corresponds to Camera.bf (stereo baseline times fx).

    So I think I would have to apply some transformations and maybe in the groundtruth file, but I don't know how.

    P0: 7.070912000000e+02 0.000000000000e+00 6.018873000000e+02 0.000000000000e+00 0.000000000000e+00 7.070912000000e+02 1.831104000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00

    P1: 7.070912000000e+02 0.000000000000e+00 6.018873000000e+02 -3.798145000000e+02 0.000000000000e+00 7.070912000000e+02 1.831104000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00

    P2: 7.070912000000e+02 0.000000000000e+00 6.018873000000e+02 4.688783000000e+01 0.000000000000e+00 7.070912000000e+02 1.831104000000e+02 1.178601000000e-01 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 6.203223000000e-03

    P3: 7.070912000000e+02 0.000000000000e+00 6.018873000000e+02 -3.334597000000e+02 0.000000000000e+00 7.070912000000e+02 1.831104000000e+02 1.930130000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 3.318498000000e-03

    Thanks in advance!!

    opened by larissasantesso 0
  • ORB SLAM 2 error while executing mono_tum: no such file or directory

    I had successfully built ORB-SLAM2, and I want to run the Monocular example. I used the command:

    ./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml /home/$USER/Downloads/rgbd_dataset_freiburg1_xyz

    to run the mono_tum executable on the given data set, but I get the following error:

    no such file or directory: ./Examples/Monocular/mono_tum

    What should I do?

    opened by AhmedJabareen96 1
  • ROS2 support ?

    Hello! Does ORB_SLAM2 support ROS2, or just ROS1?

    If yes, how do I build the nodes? I receive this:

    al@al:~/ORB_SLAM2$ ./build_ros.sh 
    Building ROS nodes
    mkdir: cannot create directory ‘build’: File exists
    CMake Warning (dev) in CMakeLists.txt:
      No project() command is present.  The top-level CMakeLists.txt file must
      contain a literal, direct call to the project() command.  Add a line of
      code such as
    
        project(ProjectName)
    
      near the top of the file, but after cmake_minimum_required().
    
      CMake is pretending there is a "project(Project)" command on the first
      line.
    This warning is for project developers.  Use -Wno-dev to suppress it.
    
    CMake Error at CMakeLists.txt:2 (include):
      include could not find load file:
    
        /core/rosbuild/rosbuild.cmake
    
    
    CMake Error at CMakeLists.txt:4 (rosbuild_init):
      Unknown CMake command "rosbuild_init".
    

    I know that this is because I use colcon build instead of catkin, but maybe it could be fixed?

    opened by zoldaten 0
  • Publish map points to a ROS topic

    I would like to publish the map points to a /Mono/PointCloud2 topic, but I don't know where to find them.

    I have been looking at here but when I try to install the node, it crashes.

    I could get the map points in the interface ORB_SLAM2 gives us.

    I know the MapPoints are generated here and the header is here, but i would like to create a publisher to publish them in a /Mono/PointCloud2 topic.

    Can anyone help me, please?

    thank you all :D

    opened by barovjt 0
  • Pointcloud transformation

    Hi! I want to add an INS to tf which will transform the point cloud from ORB-SLAM2 + ROS to real-world coordinates. My idea is to do a transformation so that the INS is at the rotational center of the robot.

    My tf_tree looks like this: INS -> map -> base_link -> camera_link

    My problem: despite the set transformation, the point cloud is saved in the ORB-SLAM coordinate system, not in the real coordinate system with the INS. For my future work I have to get the point cloud with world coordinates in ECEF.

    Has anyone ever had this problem or can give me some advice?

    opened by MagdaZal 1
  • [OSX] Build failed: no template named 'SkewSymmetricMatrix3'

    Hi, I have installed Eigen 3.4.0. However, there is an error when I build this project with build.sh. Error log:

    /usr/local/include/eigen3/Eigen/src/Core/SkewSymmetricMatrix3.h:54:13: error: no template named 'SkewSymmetricMatrix3'
        typedef SkewSymmetricMatrix3 PlainObject;
                ^
    /usr/local/include/eigen3/Eigen/src/Core/SkewSymmetricMatrix3.h:125:44: error: no template named 'SkewSymmetricWrapper'; did you mean '::Eigen::SkewSymmetricBase'?
        using SkewSymmetricProductReturnType = SkewSymmetricWrapper<const EIGEN_CWISE_BINARY_RETURN_TYPE(
                                               ^
    /usr/local/include/eigen3/Eigen/src/Core/SkewSymmetricMatrix3.h:34:7: note: '::Eigen::SkewSymmetricBase' declared here
    class SkewSymmetricBase : public EigenBase
          ^

    opened by dandingol03 0