A planar RGB-D SLAM system that exploits Manhattan World structure to estimate the camera pose trajectory, while also providing a sparse reconstruction containing points, lines and planes, and a dense surfel-based reconstruction.

Overview

ManhattanSLAM

Authors: Raza Yunus, Yanyan Li and Federico Tombari

ManhattanSLAM is a real-time SLAM library for RGB-D cameras that computes the camera pose trajectory, a sparse 3D reconstruction (containing point, line and plane features) and a dense surfel-based 3D reconstruction. Further details can be found in the related publication. The code is based on ORB-SLAM2.


Related Publication:

Raza Yunus, Yanyan Li and Federico Tombari, ManhattanSLAM: Robust Planar Tracking and Mapping Leveraging Mixture of Manhattan Frames, in 2021 IEEE International Conference on Robotics and Automation (ICRA). PDF.

1. License

ManhattanSLAM is released under a GPLv3 license. For a list of all code/library dependencies (and associated licenses), please see Dependencies.md.

If you use ManhattanSLAM in an academic work, please cite:

@inproceedings{yunus2021manhattanslam,
    author = {R. Yunus and Y. Li and F. Tombari},
    title = {ManhattanSLAM: Robust Planar Tracking and Mapping Leveraging Mixture of Manhattan Frames},
    year = {2021},
    booktitle = {2021 IEEE International Conference on Robotics and Automation (ICRA)},
}

2. Prerequisites

We have tested the library on Ubuntu 16.04, but it should be easy to compile on other platforms. A powerful computer (e.g. an i7) will ensure real-time performance and provide more stable and accurate results. The following are the dependencies of ManhattanSLAM, with the versions we have tested:

  • OpenCV: 3.3.0
  • PCL: 1.7.2
  • Eigen3: 3.3
  • DBoW2: Included in Thirdparty folder
  • g2o: Included in Thirdparty folder
  • Pangolin
  • tinyply

3. Building and testing

Clone the repository:

git clone https://github.com/razayunus/ManhattanSLAM

There is a script build.sh to build the Thirdparty libraries and ManhattanSLAM. Please make sure you have installed all required dependencies (see section 2). Execute:

cd ManhattanSLAM
chmod +x build.sh
./build.sh

This will create libManhattanSLAM.so in the lib folder and the executable manhattan_slam in the Example folder.

To test the system:

  1. Download a sequence from one of the following datasets and uncompress it: TUM RGB-D, ICL-NUIM or TAMU RGB-D.

  2. Associate RGB images and depth images using the python script associate.py. You can generate an associations file by executing:

python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt
  3. Execute the following command. Change Config.yaml to ICL.yaml for ICL-NUIM sequences, TAMU.yaml for TAMU RGB-D sequences, or TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences of TUM RGB-D, respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder and ASSOCIATIONS_FILE to the path of the corresponding associations file.
./Example/manhattan_slam Vocabulary/ORBvoc.txt Example/Config.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE
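
Besides the provided manhattan_slam example, the built libManhattanSLAM.so can be linked into your own application. The following is a minimal sketch of feeding a single RGB-D pair to the system; it mirrors the ORB-SLAM2-style System interface used in the RealSense example posted in the comments below, so the namespace, constructor arguments and method names are assumptions that may differ from the actual System.h.

#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <System.h>

int main(int argc, char **argv) {
    // Usage: ./my_app path_to_vocabulary path_to_settings rgb.png depth.png
    if (argc != 5) return 1;

    // Starts the tracking, mapping and viewer threads (ORB-SLAM2-style constructor assumed).
    ORB_SLAM2::System SLAM(argv[1], argv[2], /*useViewer=*/true);

    // Load one RGB-D pair; a real application would loop over an associations file.
    cv::Mat rgb = cv::imread(argv[3], cv::IMREAD_UNCHANGED);
    cv::Mat depth = cv::imread(argv[4], cv::IMREAD_UNCHANGED);
    SLAM.Track(rgb, depth, /*timestamp=*/0.0);

    // Stop all threads and save the estimated trajectories in TUM format.
    SLAM.Shutdown();
    SLAM.SaveTrajectoryTUM("CameraTrajectory.txt");
    SLAM.SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
    return 0;
}

Such a program would be linked against lib/libManhattanSLAM.so together with the dependencies listed in section 2.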

Comments
  • Problem when running on the TUM dataset

    ./Example/manhattan_slam Vocabulary/ORBvoc.txt Example/TUM1.yaml /media/roma-z/Huge/00-SLAM_DataSets/TUM/rgbd_dataset_freiburg1_xyz/ /media/roma-z/Huge/00-SLAM_DataSets/TUM/rgbd_dataset_freiburg1_xyz/associations.txt

    ManhattanSLAM Copyright (C) 2021 Raza Yunus, Technical University of Munich (TUM). This program comes with ABSOLUTELY NO WARRANTY; This is free software, and you are welcome to redistribute it under certain conditions. See LICENSE.txt.

    Loading ORB Vocabulary. This could take a while... Vocabulary loaded!

    img_width = 640 img_height = 480 mUndistX size = 0x5584c0465ab0 mUndistY size = 0x5584c0465b10

    Camera Parameters:

    • fx: 517.306
    • fy: 516.469
    • cx: 318.643
    • cy: 255.314
    • k1: 0.262383
    • k2: -0.953104
    • k3: 1.16331
    • p1: -0.005358
    • p2: 0.002628
    • fps: 30
    • color order: RGB (ignored if grayscale)

    ORB Extractor Parameters:

    • Number of Features: 1000
    • Scale Levels: 8
    • Scale Factor: 1.2
    • Initial Fast Threshold: 20
    • Minimum Fast Threshold: 7

    Depth Threshold (Close/Far Points): 3.09294

    ==========test1============13131.2

    ==========test2============2500

    ==========test3============13131.2

    ==========test4============13131.2 Segmentation fault (core dumped)

    I found that other people also have this issue when running ManhattanSLAM or ORB-SLAM2/3 on the TUM dataset. Can you suggest a solution?

    opened by zhuhu00 7
  • Problem with tinyply version

    Hi, sorry to trouble you, but what is your tinyply version? I have version 2.3 on my laptop, and it shows this error:

    make[2]: *** No rule to make target '/usr/local/lib/libtinyply.so', needed by '../lib/libManhattanSLAM.so'.  Stop.
    CMakeFiles/Makefile2:110: recipe for target 'CMakeFiles/ManhattanSLAM.dir/all' failed
    
    opened by Gatsby23 1
  • Core dump when running on TUM dataset

    It gives the following error output: ManhattanSLAM Copyright (C) 2021 Raza Yunus, Technical University of Munich (TUM). This program comes with ABSOLUTELY NO WARRANTY; This is free software, and you are welcome to redistribute it under certain conditions. See LICENSE.txt.

    Loading ORB Vocabulary. This could take a while... Vocabulary loaded!

    [INFO] Run the Tracking function. img_width = 640 img_height = 480 mUndistX size = 480 x 640 mUndistY size = 480 x 640

    Camera Parameters:

    • fx: 535.4
    • fy: 539.2
    • cx: 320.1
    • cy: 247.6
    • k1: 0
    • k2: 0
    • p1: 0
    • p2: 0
    • fps: 30
    • color order: RGB (ignored if grayscale)

    ORB Extractor Parameters:

    • Number of Features: 1000
    • Scale Levels: 8
    • Scale Factor: 1.2
    • Initial Fast Threshold: 20
    • Minimum Fast Threshold: 7

    Depth Threshold (Close/Far Points): 2.98842 [INFO]: Run the System Track() function. [INFO]: Run The Frame() function. manhattan_slam: /usr/local/include/eigen3/Eigen/src/Core/PlainObjectBase.h:306: void Eigen::PlainObjectBase::resize(Eigen::Index) [with Derived = Eigen::Matrix<double, 6, 1>; Eigen::Index = long int]: Assertion ((SizeAtCompileTime == Dynamic && (MaxSizeAtCompileTime==Dynamic || size<=MaxSizeAtCompileTime)) || SizeAtCompileTime == size) && size>=0' failed. Aborted (core dumped)

    I use GDB to debug, it gives message as follow: `Excess command line arguments ignored. (Example/TUM3.yaml ...) GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1 Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: http://www.gnu.org/software/gdb/bugs/. Find the GDB manual and other documentation resources online at: http://www.gnu.org/software/gdb/documentation/. For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from ./Example/manhattan_slam...done. "/home/dongying/study/ManhattanSLAM/Vocabulary/ORBvoc.txt" is not a core dump: File format not recognized (gdb) r Vocabulary/ORBvoc.txt Example/TUM3.yaml /home/dongying/dataset/rgbd_dataset_freiburg3_sitting_xyz /home/dongying/dataset/rgbd_dataset_freiburg3_sitting_xyz/associate.txt Starting program: /home/dongying/study/ManhattanSLAM/Example/manhattan_slam Vocabulary/ORBvoc.txt Example/TUM3.yaml /home/dongying/dataset/rgbd_dataset_freiburg3_sitting_xyz /home/dongying/dataset/rgbd_dataset_freiburg3_sitting_xyz/associate.txt [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

    ManhattanSLAM Copyright (C) 2021 Raza Yunus, Technical University of Munich (TUM). This program comes with ABSOLUTELY NO WARRANTY; This is free software, and you are welcome to redistribute it under certain conditions. See LICENSE.txt.

    Loading ORB Vocabulary. This could take a while... Vocabulary loaded!

    [INFO] Run the Tracking function. img_width = 640 img_height = 480 mUndistX size = 480 x 640 mUndistY size = 480 x 640

    Camera Parameters:

    • fx: 535.4
    • fy: 539.2
    • cx: 320.1
    • cy: 247.6
    • k1: 0
    • k2: 0
    • p1: 0
    • p2: 0
    • fps: 30
    • color order: RGB (ignored if grayscale)

    ORB Extractor Parameters:

    • Number of Features: 1000
    • Scale Levels: 8
    • Scale Factor: 1.2
    • Initial Fast Threshold: 20
    • Minimum Fast Threshold: 7

    Depth Threshold (Close/Far Points): 2.98842 [New Thread 0x7fffb25e5700 (LWP 1822)] [New Thread 0x7fffb11dd700 (LWP 1823)] [New Thread 0x7fffb09dc700 (LWP 1824)] [INFO]: Run the System Track() function. [New Thread 0x7fffaac3c700 (LWP 1825)] [New Thread 0x7fffaa43b700 (LWP 1826)] [New Thread 0x7fffa9c3a700 (LWP 1827)] [New Thread 0x7fffa9439700 (LWP 1828)] [New Thread 0x7fffa8c38700 (LWP 1829)] [New Thread 0x7fff9bfff700 (LWP 1830)] [New Thread 0x7fff9b7fe700 (LWP 1831)] [New Thread 0x7fff9affd700 (LWP 1832)] [New Thread 0x7fff9a7fc700 (LWP 1833)] [New Thread 0x7fff99ffb700 (LWP 1834)] [New Thread 0x7fff997fa700 (LWP 1835)] [New Thread 0x7fff98ff9700 (LWP 1836)] [New Thread 0x7fff77fff700 (LWP 1837)] [New Thread 0x7fff777fe700 (LWP 1838)] [New Thread 0x7fff76ffd700 (LWP 1839)] [New Thread 0x7fff767fc700 (LWP 1840)] [INFO]: Run The Frame() function. [New Thread 0x7fff75ffb700 (LWP 1841)] [New Thread 0x7fff757fa700 (LWP 1842)] [New Thread 0x7fff74ff9700 (LWP 1843)] [New Thread 0x7fff52524700 (LWP 1844)] [New Thread 0x7fff51d23700 (LWP 1845)] [Thread 0x7fff75ffb700 (LWP 1841) exited] [New Thread 0x7fff75ffb700 (LWP 1846)] [New Thread 0x7fff51522700 (LWP 1847)] [Thread 0x7fff757fa700 (LWP 1842) exited] [Thread 0x7fff74ff9700 (LWP 1843) exited]

    [Screenshot: 2022-04-22 09-47-15]

    I tried switching between different versions of Eigen (3.3.0, 3.3.9), but it had no effect. Could you give me some advice to help me solve this problem? Thank you very much! Looking forward to your reply.
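
    For reference, the assertion text above comes from Eigen itself: calling resize() on a fixed-size matrix with any size other than its compile-time size aborts with exactly this message, so the crash means a fixed-size Eigen::Matrix<double, 6, 1> is being resized to a different size somewhere in the pipeline. A minimal standalone reproduction of the assertion, for illustration only:

    #include <Eigen/Core>

    int main() {
        Eigen::Matrix<double, 6, 1> v;  // fixed-size vector, SizeAtCompileTime == 6
        v.resize(6);                    // allowed: same as the compile-time size
        v.resize(3);                    // fails the PlainObjectBase::resize assertion quoted above
        return 0;                       // (requires assertions enabled, i.e. NDEBUG not defined)
    }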

    opened by DongyingYu 0
  • Error when no keypoints or lines are detected / no relocalization after tracking loss

    Hi razayunus, first of all, congratulations on your great work. I have successfully migrated the project to Visual Studio 2019. The examples (TUM RGB-D, ICL-NUIM and TAMU RGB-D) work perfectly and in real time (0.05 s mean tracking time) on my computer. My configuration is: OpenCV 3.4.5, PCL 1.8.1, Eigen3 3.3.9, Pangolin, DBoW2 (included in Thirdparty folder), g2o (included in Thirdparty folder).

    Subsequently, I tried to use the application live in an indoor room, using the Intel RealSense D435 as the RGB-D camera. It works correctly, except that when the application does not detect any keypoints or lines (e.g. when the camera is covered with a hand), an error occurs and the app closes: “Error: keypoint list is empty OpenCV: terminate handler is called! The last OpenCV error is: OpenCV(3.4.5) Error: Assertion failed (type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U)) in cv::batchDistance”. On the other hand, when the system loses tracking (it occurs very rarely), it is not able to relocalize afterwards.
    Do you have any idea why these two things occur?
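
    One possible mitigation, offered only as a sketch and not something ManhattanSLAM itself provides, is to skip nearly textureless frames before they reach SLAM.Track(), so that an empty descriptor matrix never gets passed to cv::batchDistance. The helper below uses OpenCV's FAST detector; the function name and thresholds are arbitrary choices.

    #include <vector>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/features2d.hpp>

    // Hypothetical helper: returns true if the RGB frame has enough FAST corners to be worth tracking.
    static bool frameHasTexture(const cv::Mat &im_RGB, int minKeypoints = 20) {
        cv::Mat gray;
        cv::cvtColor(im_RGB, gray, cv::COLOR_RGB2GRAY);
        std::vector<cv::KeyPoint> kps;
        cv::FAST(gray, kps, /*threshold=*/7);
        return static_cast<int>(kps.size()) >= minKeypoints;
    }

    // Inside the capture loop of the example below:
    //     if (!frameHasTexture(im_RGB)) continue;  // skip covered/blank frames
    //     SLAM.Track(im_RGB, im_D, time_stamp);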

    My configuration file for the Intel D435 is:

    Camera.fx: 617.73
    Camera.fy: 618.19
    Camera.cx: 317.76
    Camera.cy: 248.28
    Camera.k1: 0
    Camera.k2: 0
    Camera.p1: 0
    Camera.p2: 0
    Camera.k3: 0
    Camera.width: 640
    Camera.height: 480
    Camera.fps: 30.0
    Camera.bf: 30.5
    Camera.RGB: 1
    ThDepth: 40.0
    DepthMapFactor: 1000.0
    ORBextractor.nFeatures: 1000
    ORBextractor.scaleFactor: 1.2
    ORBextractor.nLevels: 8
    ORBextractor.iniThFAST: 20
    ORBextractor.minThFAST: 7
    Viewer.KeyFrameSize: 0.05
    Viewer.KeyFrameLineWidth: 1
    Viewer.GraphLineWidth: 0.9
    Viewer.PointSize: 2
    Viewer.CameraSize: 0.08
    Viewer.CameraLineWidth: 3
    Viewer.ViewpointX: 0
    Viewer.ViewpointY: -0.7
    Viewer.ViewpointZ: -1.8
    Viewer.ViewpointF: 500
    Plane.AssociationDisRef: 0.05
    Plane.AssociationAngRef: 0.985 # 10 degree
    Plane.VerticalThreshold: 0.08716 # 85 degree
    Plane.ParallelThreshold: 0.9962 # 5 degree
    Plane.AngleInfo: 0.5
    Plane.DistanceInfo: 50
    Plane.Chi: 100
    Plane.VPChi: 50
    Plane.ParallelInfo: 0.5
    Plane.VerticalInfo: 0.5
    Plane.DistanceThreshold: 0.04
    Plane.MFVerticalThreshold: 0.01
    Surfel.distanceFar: 30.0
    Surfel.distanceNear: 0.5
    SavePath.Keyframe: "KeyFrameTrajectory.txt"
    SavePath.Frame: "CameraTrajectory.txt"

    And the D435-I example code is:

    /**
    * This file is part of ORB-SLAM2.
    * Copyright (C) 2014-2016 Raúl Mur-Artal (University of Zaragoza)
    * For more information see https://github.com/raulmur/ORB_SLAM2
    *
    * ORB-SLAM2 is free software: you can redistribute it and/or modify
    * it under the terms of the GNU General Public License as published by
    * the Free Software Foundation, either version 3 of the License, or
    * (at your option) any later version.
    *
    * ORB-SLAM2 is distributed in the hope that it will be useful,
    * but WITHOUT ANY WARRANTY; without even the implied warranty of
    * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    * GNU General Public License for more details.
    *
    * You should have received a copy of the GNU General Public License
    * along with ORB-SLAM2. If not, see http://www.gnu.org/licenses/.
    */

    #include <iostream>
    #include <algorithm>
    #include <chrono>
    #include <vector>
    #include <thread>
    #include <atomic>
    #include <cstdio>  // std::getchar
    // Header names were stripped in the original post; the includes above (and the
    // using-directive below, needed by the unqualified cerr/endl/vector/sort) are reconstructed.
    using namespace std;

    #include <opencv2/core/core.hpp>

    #include <librealsense2/rs.hpp>

    #include <System.h>

    void stop_falg_detection();

    // A flag to indicate whether a key has been pressed.
    std::atomic_bool stop_flag(false);

    int main(int argc, char** argv) try {

    if (argc != 3) {
        cerr << endl << "Usage: ./rgbd_realsense path_to_vocabulary path_to_settings" << endl;
        return EXIT_SUCCESS;
    }
    
    std::cout << "Querying Realsense device info..." << std::endl;
    
    // Create librealsense context for managing devices
    rs2::context ctx;
    auto devs = ctx.query_devices();  // Get device list
    int device_num = devs.size();
    std::cout << "Device number: " << device_num << std::endl; // Device amount
    
    // Query the info of first device
    rs2::device dev = devs[0];  // If no device is connected, a rs2::error exception will be raised
    // Device serial number (different for each device, can be used for selecting a device when having multiple devices)
    std::cout << "Serial number: " << dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER) << std::endl;
    
    rs2::config cfg;
    // By default it will configure all devices; you can specify the device index you want to configure (query by serial number)
    // Config color stream: 640*480, frame format: RGB8, FPS: 30
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30);  // RGB8 corresponds to CV_8UC3 in OpenCV
    // Config depth stream: 640*480, frame format: Z16, FPS: 30
    cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30); // Z16 corresponds to CV_16U in OpenCV
    
    std::cout << "Config RGB frame format to 8-channal RGB" << std::endl;
    std::cout << "Config RGB and depth FPS to 30" << std::endl;
    
    rs2::pipeline pipe;
    pipe.start(cfg);
    rs2::align align_to_color(RS2_STREAM_COLOR);
    // Block program until frames arrive
    rs2::frameset data = pipe.wait_for_frames();
    
    rs2::depth_frame depth = data.get_depth_frame();
    rs2::video_frame color = data.get_color_frame();
    
    rs2::stream_profile depth_profile = depth.get_profile();
    rs2::stream_profile color_profile = color.get_profile();
    
    // Get RGB camera intrinsics
    // Note that the change of config will cause the change of intrinsics
    rs2::video_stream_profile cvsprofile(color_profile);
    rs2::video_stream_profile dvsprofile(depth_profile);
    rs2_intrinsics color_intrinsics = cvsprofile.get_intrinsics();
    rs2_intrinsics depth_intrinsics = dvsprofile.get_intrinsics();
    
    const int color_width = color_intrinsics.width;
    const int color_height = color_intrinsics.height;
    const int depth_width = 640; //depth_intrinsics.width;
    const int depth_height = 480;// depth_intrinsics.height;
    
    std::cout << "RGB Frame width: " << color_width << std::endl;
    std::cout << "RGB Frame height: " << color_height << std::endl;
    std::cout << "Depth Frame width: " << depth_width << std::endl;
    std::cout << "Depth Frame height: " << depth_height << std::endl;
    std::cout << "RGB camera intrinsics:" << std::endl;
    std::cout << "fx: " << color_intrinsics.fx << std::endl;
    std::cout << "fy: " << color_intrinsics.fy << std::endl;
    std::cout << "cx: " << color_intrinsics.ppx << std::endl;
    std::cout << "cy: " << color_intrinsics.ppy << std::endl;
    std::cout << "RGB camera distortion coeffs:" << std::endl;
    std::cout << "k1: " << color_intrinsics.coeffs[0] << std::endl;
    std::cout << "k2: " << color_intrinsics.coeffs[1] << std::endl;
    std::cout << "p1: " << color_intrinsics.coeffs[2] << std::endl;
    std::cout << "p2: " << color_intrinsics.coeffs[3] << std::endl;
    std::cout << "k3: " << color_intrinsics.coeffs[4] << std::endl;
    //std::cout << "RGB camera distortion model: " << color_intrinsics.model << std::endl;
    
    std::cout << "* Please adjust the parameters in config file accordingly *" << std::endl;
    
    // Create SLAM system. It initializes all system threads and gets ready to process frames.
    ORB_SLAM2::System SLAM(argv[1], argv[2], true);
    
    // Vector for tracking time statistics
    vector<float> vtimes_track;
    
    std::thread stop_detect_thread = std::thread(stop_falg_detection);
    
    std::cout << std::endl << "-------" << std::endl;
    std::cout << "Start processing realsense stream ..." << std::endl;
    std::cout << "Use 'p + enter' to end the system" << std::endl;
    
    while (!stop_flag) {
        data = pipe.wait_for_frames();
        data = align_to_color.process(data);
        depth = data.get_depth_frame();
        color = data.get_color_frame();
    
        double time_stamp = data.get_timestamp();
    
        cv::Mat im_D(cv::Size(depth_width, depth_height), CV_16U, (void*)depth.get_data(), cv::Mat::AUTO_STEP);
        cv::Mat im_RGB(cv::Size(color_width, color_height), CV_8UC3, (void*)color.get_data(), cv::Mat::AUTO_STEP);
    
        std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
    
        // Pass the image to the SLAM system
        SLAM.Track(im_RGB, im_D, time_stamp);
    
        std::chrono::steady_clock::time_point t2 = std::chrono::steady_clock::now();
        double ttrack = std::chrono::duration_cast<std::chrono::duration<double>>(t2 - t1).count();
        vtimes_track.push_back(ttrack);
    }
    
    stop_detect_thread.join();
    
    // Stop all threads
    SLAM.Shutdown();
    
    // Tracking time statistics
    sort(vtimes_track.begin(), vtimes_track.end());
    float time_total = 0;
    for (size_t i = 0; i < vtimes_track.size(); i++) {
        time_total += vtimes_track[i];
    }
    
    std::cout << "-------" << std::endl << std::endl;
    std::cout << "median tracking time: " << vtimes_track[vtimes_track.size() / 2] << std::endl;
    std::cout << "mean tracking time: " << time_total / vtimes_track.size() << std::endl;
    
    // Save camera trajectory
    SLAM.SaveTrajectoryTUM("CameraTrajectory.txt");
    SLAM.SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
    
    return EXIT_SUCCESS;
    

    } catch (const rs2::error& e) {
        // Capture device exception
        std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
        return EXIT_FAILURE;
    } catch (const std::exception& e) {
        std::cerr << "Other error : " << e.what() << std::endl;
        return EXIT_FAILURE;
    }

    void stop_falg_detection() {
        char c;
        while (!stop_flag) {
            c = std::getchar();
            if (c == 'p') {
                stop_flag = true;
            }
        }
    }

    Thanks for everything

    opened by Wavelet303 2