
Overview



COMPOTE: Calibration Of Multi-focus PlenOpTic camEra.

COMPOTE is a set of tools to pre-calibrate and calibrate (multi-focus) plenoptic cameras (e.g., a Raytrix R12), based on libpleno.

Quick Start

Pre-requisites

The COMPOTE applications have a light dependency list:

  • Boost version 1.54 and up, portable C++ source libraries,
  • libpleno, an open-source C++ library for plenoptic cameras,

and were compiled and tested on:

  • Ubuntu 18.04.4 LTS, GCC 7.5.0, with Eigen 3.3.4, Boost 1.65.1, and OpenCV 3.2.0.

Compilation & Test

If you are comfortable with Linux and CMake and have already installed the prerequisites above, the following commands should compile the applications on your system.

mkdir build && cd build
cmake ..
make -j6
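
If CMake cannot locate libpleno or the other dependencies on its own, you can point it at their install locations with the standard CMAKE_PREFIX_PATH variable (a sketch; the actual install path on your system is an assumption here):

# hypothetical install prefix for a locally built libpleno
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=/path/to/libpleno/install ..
make -j6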

To test the calibrate application, you can use the example script from the build directory:

./../example/run_calibration.sh

Applications

Configuration

All applications use .js (JSON) configuration files. The paths to these configuration files are given on the command line using the boost program options interface.

Options:

short  long          default                 description
-h     --help                                Print help messages
-g     --gui         true                    Enable GUI (image viewers, etc.)
-v     --verbose     true                    Enable output with extra information
-l     --level       ALL (15)                Select level of output to print (can be combined): NONE=0, ERR=1, WARN=2, INFO=4, DEBUG=8, ALL=15
-i     --pimages                             Path to images configuration file
-c     --pcamera                             Path to camera configuration file
-p     --pparams     "internals.js"          Path to camera internal parameters configuration file
-s     --pscene                              Path to scene configuration file
-f     --features    "observations.bin.gz"   Path to observations file
-e     --extrinsics  "extrinsics.js"         Path to save extrinsics parameters file
-o     --output      "intrinsics.js"         Path to save intrinsics parameters file

For instance, to run calibration (here, -l 7 combines ERR=1, WARN=2, and INFO=4 output):

./calibrate -i images.js -c camera.js -p params.js -f observations.bin.gz -s scene.js -g true -l 7

Configuration file examples are given for the dataset R12-A in the folder examples/.

Pre-calibration

precalibrate uses white raw images taken at different apertures to calibrate the Micro-Images Array (MIA) and computes the internal parameters used to initialize the camera and to detect the Blur Aware Plenoptic (BAP) features.

Requirements: minimal camera configuration, white images. Output: radii statistics (.csv), internal parameters, initial camera parameters.
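
As a sketch, a pre-calibration run could look like the following, using the shared options table above (the file names are hypothetical):

# hypothetical configuration file names
./precalibrate -i images.js -c camera.js -g true -l 7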

Features Detection

detect extracts the newly introduced Blur Aware Plenoptic (BAP) features from checkerboard images.

Requirements: calibrated MIA, internal parameters, checkerboard images, and scene configuration. Output: micro-image centers and BAP features.
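
A feature detection run might then look as follows (again a sketch using the shared options; file names are hypothetical):

# hypothetical configuration file names
./detect -i images.js -c camera.js -p params.js -s scene.js -l 7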

Camera Calibration

calibrate runs the calibration of the plenoptic camera (set I=0 to model it as an array of pinholes, or I>0 for the multi-focus case). It generates the intrinsic and extrinsic parameters.

Requirements: calibrated MIA, internal parameters, features, and scene configuration. If none are given, all steps are re-run. Output: error statistics, calibrated camera parameters, camera poses.

Extrinsics Estimation & Calibration Evaluation

extrinsics runs the optimization of extrinsics parameters given a calibrated camera and generates the poses.

Requirements: internal parameters, features, calibrated camera and scene configuration. Output: error statistics, estimated poses.
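
An extrinsics estimation run could be invoked along these lines (a sketch; the file names are hypothetical and the flags follow the shared options table):

# hypothetical configuration file names
./extrinsics -i images.js -c camera.js -p params.js -f observations.bin.gz -s scene.js -e extrinsics.js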

COMPOTE also provides two applications to run statistics evaluation on the optimized poses obtained with a constant-step linear translation along the z-axis:

  • linear_evaluation gives the absolute errors (mean + std) and the relative errors (mean + std) of translation of the optimized poses,
  • linear_raytrix_evaluation takes a .xyz point cloud obtained by the Raytrix calibration software and gives the absolute errors (mean + std) and the relative errors (mean + std) of translation.

Note: these apps are legacy and have been moved and generalized into the [BLADE] app's evaluate.

Blur Proportionality Coefficient Calibration

blurcalib runs the calibration of the blur proportionality coefficient kappa linking the spread parameter of the PSF with the blur radius. It updates the internal parameters with the optimized value of kappa.

Requirements: internal parameters, features and images. Output: internal parameters.
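
A blur calibration run could then be launched as follows (a sketch with hypothetical file names, using the shared options table):

# hypothetical configuration file names
./blurcalib -i images.js -c camera.js -p params.js -f observations.bin.gz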

Datasets

Datasets R12-A, R12-B and R12-C can be downloaded from here. The dataset R12-D, and the simulated unfocused plenoptic camera dataset UPC-S are also available from here.

Citing

If you use COMPOTE or libpleno in an academic context, please cite the following publication:

@inproceedings{labussiere2020blur,
  title     = {Blur Aware Calibration of Multi-Focus Plenoptic Camera},
  author    = {Labussi{\`e}re, Mathieu and Teuli{\`e}re, C{\'e}line and Bernardin, Fr{\'e}d{\'e}ric and Ait-Aider, Omar},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages     = {2545--2554},
  year      = {2020}
}

License

COMPOTE is licensed under the GNU General Public License v3.0. Enjoy!

