Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems

Overview

ACSC

Automatic extrinsic calibration for non-repetitive scanning solid-state LiDAR and camera systems.

[Figure: calibration pipeline]

System Architecture

[Figure: system architecture]

1. Dependencies

Tested with Ubuntu 16.04 64-bit and Ubuntu 18.04 64-bit.

  • ROS (tested with kinetic / melodic)

  • Eigen 3.2.5

  • PCL 1.8

  • python 2.X / 3.X

  • python-pcl

  • opencv-python (>= 4.0)

  • scipy

  • scikit-learn

  • transforms3d

  • pyyaml

  • mayavi (optional, for debug and visualization only)

2. Preparation

2.1 Download and installation

Use the following commands to download this repo.

Notice: the submodules should also be cloned.

git clone --recurse-submodules https://github.com/HViktorTsoi/ACSC

Compile and install the normal-diff segmentation extension.

cd /path/to/your/ACSC/segmentation

python setup.py install
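
If the build succeeds, the extension can be imported from Python. As an optional sanity check (not part of the original instructions; calibration.py imports the same module internally), run the following in a Python shell:

import segmentation_ext
print(segmentation_ext.__file__)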

We developed a practical ROS tool for convenient calibration data collection, which automatically organizes the data into the format described in 3.1. We strongly recommend using this tool to simplify the calibration process.

If you don't have ROS or prefer not to use the provided tool, you can instead manually organize the images and point clouds into the format described in 3.1.

First, enter the directory of the collection tool and run the following commands:

cd /path/to/your/ACSC/ros/livox_calibration_ws

catkin_make

source ./devel/setup.zsh # or source ./devel/setup.sh

File explanation

  • ros/: The data collection tool directory (A ros workspace);

  • configs/: The directory used to store configuration files;

  • calibration.py: The main code for solving extrinsic parameters;

  • projection_validation.py: The code for visualization and verification of calibration results;

  • utils.py: utilities.

2.2 Preparing the calibration board

[Figure: checkerboard calibration target]

We use a common checkerboard as the calibration target.

Notice: to ensure a high success rate of calibration, it is best to meet the following requirements when making and placing the calibration board:

  1. The size of the black/white square in the checkerboard should be >= 8cm;

  2. The checkerboard should be printed out on white paper, and pasted on a rectangular surface that will not deform;

  3. There should be no extra borders around the checkerboard;

  4. The checkerboard should be placed on a thin monopod, or suspended in the air with a thin wire. During the calibration process, the support should be kept as stable as possible (because the point cloud needs to be integrated over time);

  5. When placing the checkerboard on the base, the lower edge of the board should be parallel to the ground;

  6. There should be no obstructions within a 3 m radius of the calibration board.

Checkerboard placement

[Figure: checkerboard placement]

Sensor setup

[Figure: sensor setup]

3. Extrinsic Calibration

3.1 Data format

The images and LiDAR point clouds data need to be organized into the following format:

|- data_root
|-- images
|---- 000000.png
|---- 000001.png
|---- ......
|-- pcds
|---- 000000.npy
|---- 000001.npy
|---- ......
|-- distortion
|-- intrinsic

Among them, the images directory contains the camera images of the checkerboard at different placements;

The pcds directory contains the point clouds corresponding to the images; each point cloud is a numpy array of shape N x 4, where each row holds the x, y, z and reflectance values of a point;

The distortion and intrinsic files hold the distortion parameters and intrinsic parameters of the camera, respectively (described in detail in 3.3).
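
If you prepare the data without the ROS tool, the sketch below shows one way to write a single frame into this layout. The save_frame helper is hypothetical (not part of ACSC); it only assumes numpy and opencv-python, which are already dependencies, and Python 3:

# Hypothetical helper, not part of ACSC: saves one frame in the layout described above.
import os
import numpy as np
import cv2

def save_frame(data_root, frame_id, image, xyz, reflectance):
    """image: HxWx3 uint8 array; xyz: Nx3 array; reflectance: length-N array."""
    os.makedirs(os.path.join(data_root, 'images'), exist_ok=True)
    os.makedirs(os.path.join(data_root, 'pcds'), exist_ok=True)

    # Build the N x 4 point array (x, y, z, reflectance) described in 3.1.
    pc = np.hstack([np.asarray(xyz, dtype=np.float32),
                    np.asarray(reflectance, dtype=np.float32).reshape(-1, 1)])

    name = '{:06d}'.format(frame_id)  # zero-padded index, e.g. 000000
    cv2.imwrite(os.path.join(data_root, 'images', name + '.png'), image)
    np.save(os.path.join(data_root, 'pcds', name + '.npy'), pc)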

Sample Data

The sample solid-state LiDAR point clouds, images and camera intrinsic data (375.6 MB) can be downloaded from:

Google Drive | BaiduPan (Code: fws7)

If you are testing with the provided sample data, you can directly jump to 3.4.

3.2 Data collection for your own sensors

First, make sure you can receive the data topics from the Livox LiDAR (sensor_msgs.PointCloud2) and the camera (sensor_msgs.Image);

Run the launch file of the data collection tool:

mkdir /tmp/data

cd /path/to/your/ACSC/ros/livox_calibration_ws
source ./devel/setup.zsh # or source ./devel/setup.sh

roslaunch calibration_data_collection lidar_camera_calibration.launch \                                                                                
config-path:=/home/hvt/Code/livox_camera_calibration/configs/data_collection.yaml \
image-topic:=/camera/image_raw \
lidar-topic:=/livox/lidar \
saving-path:=/tmp/data

Here, config-path is the path of the configuration file; usually we use configs/data_collection.yaml and leave it at its default values;

The image-topic and lidar-topic are the topic names on which camera images and LiDAR point clouds are received, respectively;

The saving-path is the directory where the calibration data is temporarily stored.

After launching, you should be able to see the following two interfaces: the real-time camera image and the bird's-eye projection of the LiDAR point cloud.

If either of these two interfaces is not displayed properly, please check your image-topic and lidar-topic to see whether the data can be received normally.

[Figure: data collection GUI]

Place the checkerboard and observe its position on the LiDAR bird's-eye view interface to ensure that it is within the FOV of both the LiDAR and the camera.

Then press <Enter> to record the data; you need to wait a few seconds while the point cloud is collected and integrated. Once the screen prompts that the data recording is complete, change the position of the checkerboard and record the next set of data.

To ensure the robustness of the calibration results, the placement of the checkerboard should meet the following requirements:

  1. The checkerboard should be at least 2 meters away from the LiDAR;

  2. The checkerboard should be placed in at least 6 positions, which are the left, middle, and right sides of the short distance (about 4m), and the left, middle, and right sides of the long distance (8m);

  3. In each position, the calibration plate should have 2~3 different orientations.

When all calibration data has been collected, press Ctrl+C in the terminal to close the calibration tool.

At this point, you should see a newly generated data folder under the saving-path we specified, with images saved in images and point clouds saved in pcds:

[Figure: structure of the collected data directory]

3.3 Camera intrinsic parameters

There are many tools for camera intrinsic calibration; here we recommend using the Camera Calibrator App in MATLAB or the Camera Calibration Tools in ROS to calibrate the camera intrinsics.

Write the camera intrinsic matrix

fx s x0
0 fy y0
0  0  1

into the intrinsic file under data-root. The format should be as shown below:

[Figure: format of the intrinsic file]

Write the camera distortion vector

k1  k2  p1  p2  k3

into the distortion file under data-root. The format should be as shown below:

[Figure: format of the distortion file]
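
For reference, both files can be written with numpy. The snippet below is only an illustration: it assumes a plain, whitespace-separated text layout, so compare its output with the intrinsic and distortion files shipped in the sample data before relying on it:

# Illustration only; the authoritative layout is the one shown in the figures above.
import numpy as np

# Placeholder values -- substitute the results of your own intrinsic calibration.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])      # fx s x0 / 0 fy y0 / 0 0 1
dist = np.array([0.0, 0.0, 0.0, 0.0, 0.0])   # k1 k2 p1 p2 k3

np.savetxt('/tmp/data/intrinsic', K)
np.savetxt('/tmp/data/distortion', dist.reshape(1, -1))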

3.4 Extrinsic Calibration

When you have completed all the steps in 3.1 ~ 3.3, the data-root directory should contain the following content:

[Figure: expected contents of data-root]

If any files are missing, please confirm whether all the steps in 3.1~3.3 are completed.

Modify the calibration configuration file in the configs directory; here we take sample.yaml as an example:

  1. Modify root under data to the root directory of the data collected in 3.1~3.3. In our example, root should be /tmp/data/1595233229.25;

  2. Modify the chessboard parameters under data: change W and H to the number of inner corners of the checkerboard you use (note that it is not the number of squares, but the number of inner corners; for instance, for the checkerboard in 2.2, W=7, H=5), and modify GRID_SIZE to the side length of a single white/black square of the checkerboard (in meters). A scripted way to apply these edits is sketched below.
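
If you prefer to apply these edits programmatically, here is a minimal sketch using pyyaml (already a dependency). The nesting of W, H and GRID_SIZE under data/chessboard is an assumption based on the description above, so verify the key layout against configs/sample.yaml first:

# Minimal sketch; verify the actual key layout in configs/sample.yaml before relying on it.
import yaml

with open('./configs/sample.yaml') as f:
    cfg = yaml.safe_load(f)

cfg['data']['root'] = '/tmp/data/1595233229.25'  # data root from 3.1~3.3
cfg['data']['chessboard']['W'] = 7               # inner corners along the long side
cfg['data']['chessboard']['H'] = 5               # inner corners along the short side
cfg['data']['chessboard']['GRID_SIZE'] = 0.08    # square side length in meters (assumed key location)

with open('./configs/my_sensor.yaml', 'w') as f:
    yaml.safe_dump(cfg, f)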

Then, run the extrinsic calibration code:

python calibration.py --config ./configs/sample.yaml

After calibration, the extrinsic parameter matrix will be written to the parameter/extrinsic file under data-root.

4. Validation of results

After the extrinsic calibration of step 3, run projection_validation.py to check whether the calibration is accurate:

python projection_validation.py --config ./configs/sample.yaml

It will display the point cloud reprojected onto the image using the solved extrinsic parameters, the RGB-colorized point cloud, and a visualization of the detected 3D corners reprojected onto the image.

Note that, the 3D point cloud colorization results will only be displayed if mayavi is installed.
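
Conceptually, the reprojection maps each LiDAR point into the camera frame with the solved extrinsic matrix and then projects it with the camera intrinsics. The sketch below is a simplified stand-in for that check, not the project's actual implementation; in particular, it assumes parameter/extrinsic stores a 4x4 LiDAR-to-camera matrix readable with numpy:

# Simplified reprojection check (not the actual projection_validation.py code).
import numpy as np
import cv2

data_root = '/tmp/data/1595233229.25'
K = np.loadtxt(data_root + '/intrinsic')            # 3x3 camera matrix
dist = np.loadtxt(data_root + '/distortion')        # k1 k2 p1 p2 k3
T = np.loadtxt(data_root + '/parameter/extrinsic')  # 4x4 extrinsic (assumed layout)

pc = np.load(data_root + '/pcds/000000.npy')        # N x 4: x, y, z, reflectance
img = cv2.imread(data_root + '/images/000000.png')

# Transform the points into the camera frame and keep those in front of the camera.
pts_cam = (T[:3, :3] @ pc[:, :3].T + T[:3, 3:4]).T
pts_cam = pts_cam[pts_cam[:, 2] > 0]

# Project with the intrinsics and distortion, then draw the projected points.
uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, dist)
for u, v in uv.reshape(-1, 2).astype(int):
    if 0 <= u < img.shape[1] and 0 <= v < img.shape[0]:
        cv2.circle(img, (int(u), int(v)), 1, (0, 255, 0), -1)

cv2.imwrite('/tmp/reprojection_check.png', img)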

Reprojection of Livox Horizon Point Cloud

Reprojection Result of Livox Mid100 Point Cloud

Reprojection Result of Livox Mid40 Point Cloud

Colorized Point Cloud

Detected Corners

Appendix

I. Tested sensor combinations

No. | LiDAR          | Camera                     | Chessboard Pattern
1   | LIVOX Horizon  | MYNTEYE-D 120              | 7x5, 0.08m
2   | LIVOX Horizon  | MYNTEYE-D 120              | 7x5, 0.15m
3   | LIVOX Horizon  | AVT Mako G-158C            | 7x5, 0.08m
4   | LIVOX Horizon  | Pointgrey CM3-U3-31S4C-CS  | 7x5, 0.08m
5   | LIVOX Mid-40   | MYNTEYE-D 120              | 7x5, 0.08m
6   | LIVOX Mid-40   | MYNTEYE-D 120              | 7x5, 0.15m
7   | LIVOX Mid-40   | AVT Mako G-158C            | 7x5, 0.08m
8   | LIVOX Mid-40   | Pointgrey CM3-U3-31S4C-CS  | 7x5, 0.08m
9   | LIVOX Mid-100  | MYNTEYE-D 120              | 7x5, 0.08m
10  | LIVOX Mid-100  | MYNTEYE-D 120              | 7x5, 0.15m
11  | LIVOX Mid-100  | AVT Mako G-158C            | 7x5, 0.08m
12  | LIVOX Mid-100  | Pointgrey CM3-U3-31S4C-CS  | 7x5, 0.08m
13  | RoboSense ruby | MYNTEYE-D 120              | 7x5, 0.08m
14  | RoboSense ruby | AVT Mako G-158C            | 7x5, 0.08m
15  | RoboSense ruby | Pointgrey CM3-U3-31S4C-CS  | 7x5, 0.08m
16  | RoboSense RS32 | MYNTEYE-D 120              | 7x5, 0.08m
17  | RoboSense RS32 | AVT Mako G-158C            | 7x5, 0.08m
18  | RoboSense RS32 | Pointgrey CM3-U3-31S4C-CS  | 7x5, 0.08m

II. Paper

ACSC: Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems

@misc{cui2020acsc,
      title={ACSC: Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems}, 
      author={Jiahe Cui and Jianwei Niu and Zhenchao Ouyang and Yunxiang He and Dian Liu},
      year={2020},
      eprint={2011.08516},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

III. Known Issues

Updating...

Comments
  • Camera calibration fails with my own recorded pcds files

    With the pcds files I recorded myself, the error below is reported and the camera calibration fails, but with your pcds files it runs fine. What could be the reason?

    python3 calibration.py --config ./configs/sample.yaml

    Calculating frame: 0 / 6
    multiprocessing.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
        result = (True, func(*args, **kwds))
      File "calibration.py", line 780, in corner_detection_task
        ROI_pc = locate_chessboard(pc)
      File "calibration.py", line 389, in locate_chessboard
        pc = utils.voxelize(pc, voxel_size=configs['calibration']['RG_VOXEL'])
      File "/home/zehao/catkin_ws/src/ACSC/utils.py", line 129, in voxelize
        cloud.from_array(pc.astype(np.float32))
      File "pcl/pxi/PointCloud_PointXYZI_180.pxi", line 158, in pcl._pcl.PointCloud_PointXYZI.from_array
    AssertionError
    """

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "calibration.py", line 940, in <module>
        calibration(keep_list=None)
      File "calibration.py", line 870, in calibration
        corners_world, final_cost, corners_image = detection_result[idx].get()
      File "/usr/lib/python3.6/multiprocessing/pool.py", line 644, in get
        raise self._value
    AssertionError

    opened by alsosos 18
  • setup.py install fails

    Great work! When configuring locally, running python setup.py install produces the following problem:

    -- looking for PCL_COMMON
    -- looking for PCL_KDTREE
    -- looking for PCL_OCTREE
    -- looking for PCL_SEARCH
    -- looking for PCL_IO
    -- looking for PCL_SAMPLE_CONSENSUS
    -- looking for PCL_FILTERS
    -- looking for PCL_GEOMETRY
    -- looking for PCL_FEATURES
    -- looking for PCL_SEGMENTATION
    -- looking for PCL_SURFACE
    -- looking for PCL_REGISTRATION
    -- looking for PCL_RECOGNITION
    -- looking for PCL_KEYPOINTS
    -- looking for PCL_VISUALIZATION
    -- looking for PCL_PEOPLE
    -- looking for PCL_OUTOFCORE
    -- looking for PCL_TRACKING
    -- looking for PCL_APPS
    -- Could NOT find PCL_APPS (missing: PCL_APPS_LIBRARY)
    -- looking for PCL_MODELER
    -- looking for PCL_IN_HAND_SCANNER
    -- looking for PCL_POINT_CLOUD_EDITOR
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/ACSC/segmentation/build/temp.linux-x86_64-3.7
    make[2]: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libproj.so', needed by '../lib.linux-x86_64-3.7/segmentation_ext.so'. Stop.
    CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/segmentation_ext.dir/all' failed
    make[1]: *** [CMakeFiles/segmentation_ext.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    What exactly is the cause of -- Could NOT find PCL_APPS (missing: PCL_APPS_LIBRARY)?

    opened by liuyuancv 6
  • About the calibration chessboard

    Hi @HViktorTsoi Thank you very much for the project. I have a question related to the checkerboard. In the README, you have mentioned that "There should be no extra borders around the checkerboard." Can you elaborate on the reason why it is important to remove the extra borders of the checkerboard?

    opened by xmba15 4
  • ValueError: vector::_M_default_append on segmentation_ext.region_growing_kernel

    We have been using this tool on multiple projects, and it has been working splendidly. Recently, we switched to new laptops that have Ubuntu 20.04 and Python 3.8. I have gotten all libraries and am able to import them into the Python interpreter; however, there seems to be an issue, either with the new libraries or with the size of our dataset (we are now using 6K images for calibration).

    Everything works as usual up until the point in the calibration script (calibration.py) where the "pc" variable is assigned to utils.voxelize(pc, voxel_size=configs['calibration']['RG_VOXEL']). This downsamples a 1,173,359 point cloud to 90,052 points. As soon as segmentation_ext.region_growing_kernel is run, calibration.py spawns 5 additional threads, and the following error is immediately thrown:

    Calculating frame: 0 / 22
    multiprocessing.pool.RemoteTraceback: 
    """
    Traceback (most recent call last):
      File "/home/visionarymind/anaconda3/lib/python3.8/multiprocessing/pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "calibration.py", line 780, in corner_detection_task
        ROI_pc = locate_chessboard(pc)
      File "calibration.py", line 392, in locate_chessboard
        segmentation = segmentation_ext.region_growing_kernel(
    ValueError: vector::_M_default_append
    """
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "calibration.py", line 943, in <module>
        calibration(keep_list=None)
      File "calibration.py", line 873, in calibration
        corners_world, final_cost, corners_image = detection_result[idx].get()
      File "/home/visionarymind/anaconda3/lib/python3.8/multiprocessing/pool.py", line 771, in get
        raise self._value
    ValueError: vector::_M_default_append
    

    I have heard of this happening before with very large datasets, but a 90k point cloud should not be a problem. Would you have any idea how to get around this? It happens even if we set up a Conda environment with Python 2.7 and allow it to solve all dependencies.

    Perhaps you could offer a pre-configured Conda environment YAML file that we could use to ensure all the right libraries are installed? I do not think this is a problem with library contention, but I want to make sure. I have already spent nearly a week attempting to get this working with variant library setups.

    opened by VisionaryMind 1
  • Error when installing mayavi on Python 2.7

    Hello, were you using Python 2.7 when you used mayavi to visualize the 3D point cloud? When I install mayavi with pip, I run into the following problem:

    Requirement already satisfied: mayavi in ./anaconda3/envs/acsc2/lib/python2.7/site-packages/mayavi-4.5.0-py2.7-linux-x86_64.egg (4.5.0)
    Requirement already satisfied: apptools in ./anaconda3/envs/acsc2/lib/python2.7/site-packages (from mayavi) (5.1.0)
    ERROR: Package 'apptools' requires a different Python: 2.7.18 not in '>=3.6'

    Have you encountered this kind of problem? If not, do you have a tutorial for installing mayavi on Python 2.7? Thank you!

    opened by Simpleforever 1
  • sample.yaml DEBUG

    Hello, thanks for your work, but I have a problem. I want to visualize the 3D detection result, intensity distribution and camera calibration, so I changed DEBUG to 2 in sample.yaml, but after that calibration.py doesn't work. Could I get your help?

    opened by sjw9805 0
  • The size of the black/white square in the checkerboard

    Thanks for your great work! Can the black/white square size of the calibration board be less than 8 cm? For example, with 5 cm squares, what impact would it have on the algorithm?

    opened by wwtinwhu 0
  • Problem when running python calibration.py --config ./configs/sample.yaml

    When I run this command, it says:

    Traceback (most recent call last):
      File "calibration.py", line 27, in <module>
        import segmentation_ext
    ImportError: dynamic module does not define module export function (PyInit_segmentation_ext)

    Could you help me with this? I really appreciate it. Thank you.

    opened by LihaoQiu 0
  • [initCompute] Failed to allocate 4372933213416 indices.

    An error "[initCompute] Failed to allocate 221603688728963936 indices, MemoryError: std::bad_alloc" is reported when I run the calibration code. The versions of PCL, Python and so on are consistent with the project. (screenshot: 2022-11-06 10-47-20)

    opened by cjfcjf7 0
  • After running python projection_validation.py --config ./configs/sample.yaml, the terminal prints a few lines and then stops producing output

    Output:

    Localization done. min cost=7.189008491164129

    Localization done. min cost=10.974828754350037

    Localization done. min cost=7.696183526641846

    Localization done. min cost=5.486117499095273

    Localization done. min cost=8.745949616974988

    Localization done. min cost=6.122541215581003

    Localization done. min cost=4.3667670171396775

    Localization done. min cost=10.33130107718595

    Localization done. min cost=7.663290353740171

    Localization done. min cost=4.713727093038523

    Localization done. min cost=10.030486626383299

    Localization done. min cost=8.256724505393125

    Localization done. min cost=19.50139201376781

    Localization done. min cost=11.691634027177853

    Localization done. min cost=8.065487603386272

    Localization done. min cost=22.969422421889817

    Localization done. min cost=10.21475545242713

    Localization done. min cost=5.366466291371865

    Localization done. min cost=11.561646084577959

    Localization done. min cost=9.697920791823435

    Localization done. min cost=18.51885867076172

    python-pcl and the other libraries are installed correctly, but my PCL version is 1.9.1 and VTK version is 8.1; I am not sure whether this has any effect.

    opened by Mikehuntisbald 0
  • about ROI

    Hello, firstly I appreciate your project. However, I have a problem: when I calibrate using this module, I don't know how to get the values for the ROIs files. Could I get your help?

    opened by sjw9805 0
  • a large FOV calibration

    The fisheye's FOV is 180 degrees, and we plan to replace it with a 197-degree camera later. I wonder if this method would still work with such a large FOV? Your prompt reply will be very much appreciated.

    opened by WeijieChen99 0
  • AttributeError: 'module' object has no attribute 'findChessboardCornersSB'

    Traceback (most recent call last):
      File "calibration.py", line 943, in <module>
        calibration(keep_list=None)
      File "calibration.py", line 873, in calibration
        corners_world, final_cost, corners_image = detection_result[idx].get()
      File "/usr/lib/python2.7/multiprocessing/pool.py", line 572, in get
        raise self._value
    AttributeError: 'module' object has no attribute 'findChessboardCornersSB'

    opened by lvxuanxuan123 0