Visualize Data From Stray Scanner https://keke.dev/blog/2021/03/10/Stray-Scanner.html

Overview

StrayVisualizer

A set of scripts to work with data collected using Stray Scanner.

Staircase pointcloud

Usage

Installing Dependencies

Install dependencies with pip install -r requirements.txt.

Example Datasets

If you don't have your own dataset, download one of these example datasets:

  • wget https://stray-data.nyc3.digitaloceanspaces.com/datasets/ZB1.tar.gz
  • wget https://stray-data.nyc3.digitaloceanspaces.com/datasets/ZB2.tar.gz
  • wget https://stray-data.nyc3.digitaloceanspaces.com/datasets/ZB3.tar.gz

Assuming you selected ZB1.tar.gz, you can extract the dataset using the command tar -xvf ZB1.tar.gz.

Visualizing the data

Run python stray_visualize.py <path-to-dataset>.

Available command line options are:

  • --point-clouds shows pointclouds.
  • --confidence=<value> filters points by depth confidence. The depth output has three confidence levels: 0, 1, and 2; higher is more confident. Only points with confidence greater than or equal to the given value are shown.
  • --frames shows the camera pose coordinate frames.
  • --every=<n> determines how often the coordinate frame is drawn. Default is to draw every 60th frame.
  • --trajectory shows a black line for the trajectory of the camera.
  • --integrate will run the data through the Open3D RGB-D integration pipeline and visualize the resulting mesh.
  • --voxel-size=<size> sets the voxel size in meters for RGB-D integration.
  • --mesh-filename=<file> saves the mesh from RGB-D integration into the given file. By default, no mesh is saved.
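The confidence filtering behind --confidence can be sketched like this (a minimal illustration, not the script's actual code; the arrays are made-up stand-ins for one frame's depth map in meters and its per-pixel confidence map):

```python
import numpy as np

# Hypothetical stand-ins for one frame's depth map (meters) and its
# per-pixel confidence map (values 0, 1, or 2; higher is more confident).
depth = np.array([[1.2, 0.8], [2.5, 3.1]])
confidence = np.array([[0, 2], [1, 2]])

# Keep only depth values whose confidence is >= the threshold,
# mirroring what --confidence=<value> does.
threshold = 2
valid = confidence >= threshold
filtered_depth = depth[valid]  # points below the threshold are dropped
```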

Creating a Video From the Depth Maps

python make_video.py <dataset-path> will combine depth maps into a video. It will create a file depth_video.mp4 inside the dataset folder.
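A rough sketch of the kind of conversion involved (not the script's actual implementation; it assumes each depth frame is a NumPy array and simply normalizes it to an 8-bit grayscale image suitable for video encoding):

```python
import numpy as np

def depth_to_frame(depth):
    """Normalize one depth map to a uint8 grayscale frame for video encoding."""
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    if span == 0:
        # A constant depth map maps to an all-black frame.
        return np.zeros(d.shape, dtype=np.uint8)
    normalized = (d - d.min()) / span  # scale to [0, 1]
    return (normalized * 255).round().astype(np.uint8)

frame = depth_to_frame(np.array([[0.5, 1.0], [1.5, 2.0]]))
```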

Running 3D Reconstruction on Collected Data

For convenience, the convert_to_open3d.py script is provided to convert the Stray Scanner format to the Open3D reconstruction system format.

Usage: python convert_to_open3d.py --dataset <path-to-dataset> --out <where-to-save-converted-dataset>.

You can run their reconstruction pipeline using python <path-to-open3d-repo>/examples/python/reconstruction_system/run_system.py <config.json> --make --register --refine --integrate as described here. <config.json> is a configuration file created for convenience by convert_to_open3d.py into the newly created dataset folder. It contains absolute paths, so if you move your dataset, be sure to update the configuration.

Beware that the Open3D reconstruction system takes up quite a lot of memory and compute. On Mac, you might need to add "python_multi_threading": false into the config file to avoid crashing.
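The flag can be added to the generated configuration with a small snippet like this (a sketch; the config path and contents here are placeholders, not the actual file convert_to_open3d.py writes):

```python
import json
import os
import tempfile

# Placeholder standing in for the config.json written by convert_to_open3d.py;
# the path and contents are made up for illustration.
config_path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(config_path, "w") as f:
    json.dump({"path_dataset": "/path/to/converted/dataset"}, f)

# Add the flag that disables multi-threading, as suggested above for Mac.
with open(config_path) as f:
    config = json.load(f)
config["python_multi_threading"] = False
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```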

Reporting Issues

If you find any issues with this project or bugs in the Stray Scanner app, you can open an issue on this repository.

Comments
  • ValueError: Expected `quat` to have shape (4,) or (N x 4), got (2,).

    Hi, I had a problem when I ran "python stray_visualize.py --point-clouds".

    When I ran it on the example dataset ZB2, I got this error:

    Traceback (most recent call last):
      File "/home/user/Desktop/new1/sv5/stray_visualize.py", line 206, in <module>
        main()
      File "/home/user/Desktop/new1/sv5/stray_visualize.py", line 190, in main
        data = read_data(flags)
      File "/home/user/Desktop/new1/sv5/stray_visualize.py", line 53, in read_data
        T_WC[:3, :3] = Rotation.from_quat(quaternion).as_matrix()
      File "rotation.pyx", line 624, in scipy.spatial.transform.rotation.Rotation.from_quat
      File "rotation.pyx", line 513, in scipy.spatial.transform.rotation.Rotation.__init__
    ValueError: Expected `quat` to have shape (4,) or (N x 4), got (2,).

    I have no idea why this error occurred. Thank you for reading my issues, hope you reply.

    opened by daekyounglee 5
  • timestamp in odometry.csv

    Hello. Thank you for sharing.

    I wonder what the timestamp of odometry.csv means.

    I thought this was a Unix timestamp, so I converted it to a human-readable date, but although the recording was made with the Stray Scanner app on July 27, 2022, the converted date came out as January 1, 1970.

    How do I get the current unix timestamp or date time in real-time through the Stray scanner app?

    opened by Eunju-Jeong178 4
  • Asking about the registration of rgb and depth images,

    Hi Kenneth,

    I have been looking for an app like this to record RGB + depth images from the iPhone/iPad LiDAR sensor. Very cool! :) Could I ask a few questions about the data acquisition process and the data itself?

    1) Alignment (registration) between RGB and depth images

    It seems like the RGB and depth image resolutions are fixed at 1920x1440 / 256x192 in ARKit mode (right?). If we downsample the RGB image by 1/7.5, the sizes match exactly, showing good RGB and depth alignment.

    I am very curious: is the alignment (registration) between the RGB and depth images already done (i.e., no camera extrinsic calibration between the RGB and depth cameras is needed)? Do the RGB and depth images share the same camera intrinsics stored in camera_matrix.csv, so that RGB's K is the matrix in the camera_matrix.csv file and depth's K is RGB's K / 7.5?

    Is this the correct understanding?

    2) iPad Pro vs. iPhone 12 Pro Max

    It seems like the same RGB and depth image resolutions are used on both the iPad Pro and iPhone 12 Pro Max (AFAIK). Is there any difference in the RGB, depth, and confidence data when recording datasets on the two devices?

    3) Adjustment of recording FPS

    Is it possible to adjust the FPS setting for the RGB, depth, and confidence images? The default FPS (60 FPS, right?) sometimes seems too fast, so the data size grows quickly. It would be really nice to have a setting that lets the user set the FPS before data acquisition in the app.

    4) camera_matrix.csv

    As far as I know, Apple ARKit estimates and provides K (the camera matrix) for every frame. How is the saved K determined? From the very first frame or the last frame?

    5) Depth image from depthMap

    As far as I know, we can access the depth map from the LiDAR sensor through didUpdate frame: ARFrame (Swift). In this app, is the saved depth map frame.sceneDepth?.depthMap or frame.smoothedSceneDepth?.depthMap? Also, I am curious what the difference between them is.

    6) Depth map format (npy)

    Is there any specific reason to save the depth map in .npy format? I think we could just save the depth map directly as .png, similar to the confidence image.

    I really appreciate your time and consideration of my several questions. Please let me know if you have any questions.

    Thanks, Pyojin Kim.

    opened by PyojinKim 4
  • .mp4 encoding losing frames

    I have a recording which has 631 frames (depth, confidence, odometry, etc.), but the .mp4 of colour images only has 627 frames encoded. It is relatively common for FFmpeg to drop frames when encoding to .mp4 using the -r flag rather than the -vf flag, and that is in fact what is happening here: the video is encoded with a variable frame rate and frames are dropped. This makes the mp4 not very useful for aligning the RGB and depth.

    One solution is to change the ffmpeg command used for encoding the RGB video.

    A better solution would be to output the frames as images rather than as a video in the first place.

    One last point: it would be MUCH more convenient if files from the app could be exported (as a zip) to the Files app on iOS, so we could simply tap the zip and AirDrop it over to the computer instead of dealing with iTunes.

    Hope this feedback is helpful.

    Cheers, Jono

    opened by JonathonLuiten 2
  • Resolution of iphone 12 pro's lidar

    Hi, I have recently used Stray Scanner. I found that the resolution of the depth image is 256x192. Is that right? I want to know the real resolution of the iPhone 12 Pro's LiDAR, but I can't find the parameter through search engines or Apple's official documents. Can you tell me? Looking forward to your reply.

    opened by chenzhao2018 2
  • About Install Stray Command Line Tool and Stray Studio

    Thanks so much for the great Stray command line toolkit. I installed it using the following shell command: "curl --proto '=https' --tlsv1.2 -sSf https://stray-builds.ams3.digitaloceanspaces.com/cli/install.sh | bash". After installing, nothing works: I can't use the stray command, and I don't know why or how to make it work, so I'm looking for your kind help. My system is Linux. Is it because I don't have a STRAY_LICENSE_KEY, and if so, how can I get the key? Or is there some other reason? I want to use the Stray command toolkit to build a 3D reconstruction pipeline based on RGB-D sequences acquired with an iPhone 12 Pro.

    opened by jackchinor 1
  • How to calculate odometry from Stray Scanner?

    Hi, I have used the Stray Scanner app and I'm really curious about the odometry information. I tried to develop a small app that reads the quaternion (simd_quatf(frame.camera.transform)) and the position to get the camera extrinsics, but when I feed my app's dataset into StrayVisualizer, it does not display correctly. Do I have to scale or otherwise transform the odometry? If scaling is needed, what ratio should I use? Could you share how the odometry is calculated for each frame? Thank you, hope you reply.

    opened by anhpham96 1
  • Fix reading of `odometry.csv`

    The first two items from odometry.csv should be discarded, since the file is organized as

    timestamp, frame, x, y, z, qx, qy, qz, w
    

    The indices of each camera position should be [2:5], and the indices of each quaternion should be [5:].
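    For illustration, slicing one row with those indices looks like this (a minimal sketch; the row values are made up):

```python
import numpy as np

# A made-up odometry.csv row: timestamp, frame, x, y, z, qx, qy, qz, qw
row = np.array([1.5, 0.0, 0.1, 0.2, 0.3, 0.0, 0.0, 0.0, 1.0])

position = row[2:5]   # x, y, z
quaternion = row[5:]  # qx, qy, qz, qw -- shape (4,), as scipy's Rotation.from_quat expects
```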

    Signed-off-by: Gaoyang Zhang [email protected]

    opened by blurgyy 1
  • 3D reconstruction

    I have gotten quite poor results using the full Open3D reconstruction process, while using StrayVisualizer's --integrate works much better. The main difference is I am only using 1/10th of the frames for Open3D, since using all frames takes forever.

    Have you been able to achieve good results with Open3D? I'm not sure if there are some parameters I need to adjust.

    opened by tkzeng 1
  • AttributeError: 'FFmpegWriter' object has no attribute '_proc'

    Hi,

    there was a problem when I tried to create a video from the depth maps

    As I am not using an iPhone 12 yet, I downloaded the example datasets you uploaded (ZB1, ZB2, ZB3),

    and when I tried to run the code "python make_video.py ", I ran into the error :

    Traceback (most recent call last):
      File "make_video.py", line 39, in <module>
        main()
      File "make_video.py", line 36, in main
        writer.close()
      File "/opt/anaconda3/envs/stray_scanner/lib/python3.7/site-packages/skvideo/io/abstract.py", line 474, in close
        if self._proc is None:  # pragma: no cover
    AttributeError: 'FFmpegWriter' object has no attribute '_proc'

    I have no idea what to do. I checked the version of scikit-video and tried re-installing it, but that didn't work.

    opened by KIMSUEUN73 1
  • "Failed to interpret file %s as a pickle"

    Hi,

    I am trying to create a depth video from my LiDAR scan. I am using Python 3.7.7 (otherwise I can't install Open3D) and numpy 1.20.3.

    If I use the make_video.py I get the error:

    Traceback (most recent call last):
      File "make_video.py", line 39, in <module>
        main()
      File "make_video.py", line 22, in main
        depth = np.load(path)
      File "/usr/local/lib/python3.7/site-packages/numpy/lib/npyio.py", line 445, in load
        raise ValueError("Cannot load file containing pickled data "
    ValueError: Cannot load file containing pickled data when allow_pickle=False
    

    So if I add depth = np.load(path, allow_pickle=True), there is a new error I don't quite get.

    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/numpy/lib/npyio.py", line 448, in load
        return pickle.load(fid, **pickle_kwargs)
    _pickle.UnpicklingError: invalid load key, '\x00'.
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "make_video.py", line 39, in <module>
        main()
      File "make_video.py", line 22, in main
        depth = np.load(path, allow_pickle=True)
      File "/usr/local/lib/python3.7/site-packages/numpy/lib/npyio.py", line 451, in load
        "Failed to interpret file %s as a pickle" % repr(file)) from e
    OSError: Failed to interpret file '/Users/jonasvonarb/Downloads/8a42b3d040/depth/.DS_Store' as a pickle
    

    Do you have an idea how to solve this?

    opened by jonasvonarb 1
  • Problem with convert to open3D

    Hi, I am very new to Open3D and iOS programming, and I've encountered a problem with the convert-to-Open3D program. I was able to run it without any problems, but after it said done, I saw that the depth folder was empty. So I copied the depth folder directly from the original raw data, but then I was unable to run the 3D reconstruction. It shows the following error:

    I suspect the error is due to the size difference between the RGB image and the depth data, but I am not sure. Do you have any recommendations for a fix, or other ways to import this data into Open3D?

    OpenCV is detected. Using ORB + 5pt algorithm
    making fragments from RGBD sequence.
    Fragment 000 / 019 :: RGBD matching between frame : 0 and 1
    [Open3D Error] (class std::shared_ptr __cdecl open3d::geometry::RGBDImage::CreateFromColorAndDepth(const class open3d::geometry::Image &,const class open3d::geometry::Image &,double,double,bool)) D:\a\Open3D\Open3D\cpp\open3d\geometry\RGBDImageFactory.cpp:40: Unsupported image format.

    Traceback (most recent call last):
      File "C:\Users\elsto\Documents\GitHub\CityU-FYP-Metaversa-3DModel\Trial\Open3D-0.16.0\examples\python\reconstruction_system\run_system.py", line 130, in <module>
        make_fragments.run(config)
      File "C:\Users\elsto\Documents\GitHub\CityU-FYP-Metaversa-3DModel\Trial\Open3D-0.16.0\examples\python\reconstruction_system\make_fragments.py", line 203, in run
        process_single_fragment(fragment_id, color_files, depth_files,
      File "C:\Users\elsto\Documents\GitHub\CityU-FYP-Metaversa-3DModel\Trial\Open3D-0.16.0\examples\python\reconstruction_system\make_fragments.py", line 174, in process_single_fragment
        make_posegraph_for_fragment(config["path_dataset"], sid, eid, color_files,
      File "C:\Users\elsto\Documents\GitHub\CityU-FYP-Metaversa-3DModel\Trial\Open3D-0.16.0\examples\python\reconstruction_system\make_fragments.py", line 94, in make_posegraph_for_fragment
        info] = register_one_rgbd_pair(s, t, color_files, depth_files,
      File "C:\Users\elsto\Documents\GitHub\CityU-FYP-Metaversa-3DModel\Trial\Open3D-0.16.0\examples\python\reconstruction_system\make_fragments.py", line 50, in register_one_rgbd_pair
        source_rgbd_image = read_rgbd_image(color_files[s], depth_files[s], True,
      File "C:\Users\elsto\Documents\GitHub\CityU-FYP-Metaversa-3DModel\Trial\Open3D-0.16.0\examples\python\open3d_example.py", line 213, in read_rgbd_image
        rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
    RuntimeError: [Open3D Error] (class std::shared_ptr __cdecl open3d::geometry::RGBDImage::CreateFromColorAndDepth(const class open3d::geometry::Image &,const class open3d::geometry::Image &,double,double,bool)) D:\a\Open3D\Open3D\cpp\open3d\geometry\RGBDImageFactory.cpp:40: Unsupported image format.

    opened by pangchunhei 1
Owner
Kenneth Blomqvist