SOLD² - Self-supervised Occlusion-aware Line Description and Detection
This repository contains the implementation of the paper: SOLD²: Self-supervised Occlusion-aware Line Description and Detection, J.-T. Lin*, R. Pautrat*, V. Larsson, M. Oswald and M. Pollefeys (Oral at CVPR 2021).
SOLD² is a deep line segment detector and descriptor that can be trained without hand-labelled line segments and that can robustly match lines even in the presence of occlusion.
Demos
Matching in the presence of occlusion:
Matching with a moving camera:
Usage
Installation
We recommend running this code in a dedicated Python environment (e.g. venv or conda). The following command installs the necessary requirements with pip:
pip install -r requirements.txt
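If you want to set up an isolated environment first, a minimal venv setup could look like this (a sketch; conda works equally well, and the environment name is just an example):

```bash
# Create and activate a fresh virtual environment (name is arbitrary)
python3 -m venv sold2_env
source sold2_env/bin/activate

# Install the dependencies listed in the repository
pip install -r requirements.txt
```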
Set your dataset and experiment paths (where you will store your datasets and the checkpoints of your experiments) by modifying the file config/project_config.py. Both variables DATASET_ROOT and EXP_PATH have to be set.
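As a sketch, the two required settings might look like the following (only the variable names come from the instructions above; the actual file may define additional settings or organize them differently):

```python
# Sketch of the two required settings in config/project_config.py
# (paths are placeholders; adapt them to your machine)
DATASET_ROOT = "/data/sold2/datasets"    # where the datasets are stored
EXP_PATH = "/data/sold2/experiments"     # where checkpoints and logs are written
```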
You can download the version of the Wireframe dataset that we used during our training and testing here. This repository also includes some files to train on the Holicity dataset to add more outdoor images, but note that we did not extensively test this dataset and the original paper was based on the Wireframe dataset only.
Training your own model
All training parameters are located in the configuration files in the folder config. Training SOLD² from scratch requires several steps, some of which can take several days depending on the size of your dataset.
Step 1: Train on a synthetic dataset
The following command will create the synthetic dataset and start training the model on it:
python experiment.py --mode train --dataset_config config/synthetic_dataset.yaml --model_config config/train_detector.yaml --exp_name sold2_synth
Step 2: Export the raw pseudo ground truth on the Wireframe dataset with homography adaptation
Note that this step can take from one to several days depending on your machine and on the size of the dataset. You can set the batch size to the maximum your GPU can handle.
python experiment.py --exp_name wireframe_train --mode export --resume_path <path to your previously trained sold2_synth> --model_config config/train_detector.yaml --dataset_config config/wireframe_dataset.yaml --checkpoint_name <name of the best checkpoint> --export_dataset_mode train --export_batch_size 4
You can do the same for the test set:
python experiment.py --exp_name wireframe_test --mode export --resume_path <path to your previously trained sold2_synth> --model_config config/train_detector.yaml --dataset_config config/wireframe_dataset.yaml --checkpoint_name <name of the best checkpoint> --export_dataset_mode test --export_batch_size 4
Step 3: Compute the ground truth line segments from the raw data
cd postprocess
python convert_homography_results.py <name of the previously exported file (e.g. "wireframe_train.h5")> <name of the new data with extracted line segments (e.g. "wireframe_train_gt.h5")> ../config/export_line_features.yaml
cd ..
We recommend testing the results on a few samples of your dataset to check the quality of the output, and modifying the hyperparameters if need be. For example, detect_thresh=0.5 and inlier_thresh=0.99 proved to be successful for the Wireframe dataset in our case.
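For reference, the corresponding entries in config/export_line_features.yaml would then be set as below (a sketch showing only these two fields; the file contains further parameters, and the inline comments are our interpretation of the options):

```yaml
# Values that worked well for the Wireframe dataset in our experiments
detect_thresh: 0.5    # line detection threshold
inlier_thresh: 0.99   # inlier threshold for keeping a candidate segment
```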
Step 4: Train the detector on the Wireframe dataset
We found it easier to pretrain the detector alone first, before fine-tuning it with the descriptor part. Uncomment the lines 'gt_source_train' and 'gt_source_test' in config/wireframe_dataset.yaml and fill them with the paths to the h5 files generated in the previous step.
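For example (a sketch; adapt the paths to wherever you stored the h5 files exported in Step 3):

```yaml
# Relevant lines of config/wireframe_dataset.yaml after Step 3
gt_source_train: wireframe_train_gt.h5   # pseudo ground truth for the training split
gt_source_test: wireframe_test_gt.h5     # pseudo ground truth for the test split
```

You can then launch the detector training: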
python experiment.py --mode train --dataset_config config/wireframe_dataset.yaml --model_config config/train_detector.yaml --exp_name sold2_wireframe
Alternatively, you can also fine-tune the already trained synthetic model:
python experiment.py --mode train --dataset_config config/wireframe_dataset.yaml --model_config config/train_detector.yaml --exp_name sold2_wireframe --pretrained --pretrained_path <path to the pre-trained sold2_synth> --checkpoint_name <name of the best checkpoint>
Lastly, you can resume a training that was stopped:
python experiment.py --mode train --dataset_config config/wireframe_dataset.yaml --model_config config/train_detector.yaml --exp_name sold2_wireframe --resume --resume_path <path to the model to resume> --checkpoint_name <name of the last checkpoint>
Step 5: Train the full pipeline on the Wireframe dataset
You first need to modify the field 'return_type' in config/wireframe_dataset.yaml to 'paired_desc'. The following command will then train the full model (detector + descriptor) on the Wireframe dataset:
python experiment.py --mode train --dataset_config config/wireframe_dataset.yaml --model_config config/train_full_pipeline.yaml --exp_name sold2_full_wireframe --pretrained --pretrained_path <path to the pre-trained sold2_wireframe> --checkpoint_name <name of the best checkpoint>
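For reference, the modified field in config/wireframe_dataset.yaml would look like this (a sketch showing only the field mentioned above; the comment reflects our reading of the option name):

```yaml
return_type: paired_desc   # switch the data loader to paired samples for descriptor training
```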
Pretrained models
We provide the checkpoints of two pretrained models:
- sold2_synthetic.tar: SOLD² detector trained on the synthetic dataset only.
- sold2_wireframe.tar: full version of SOLD² trained on the Wireframe dataset.
How to use it
We provide a notebook showing how to use the trained model of SOLD². Additionally, you can use the model to export line features (segments and descriptor maps) as follows:
python export_line_features.py --img_list <path to a txt file containing the paths to all the images> --output_folder <path to the output folder> --checkpoint_path <path to your best checkpoint>
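If you still need to create such an image list, a small helper like this one can generate it (a sketch assuming one image path per line; the folder path and file extension are placeholders):

```python
# Sketch: write all image paths of a folder into a txt file, one per line,
# to be passed to --img_list. Folder path and extension are placeholders.
from pathlib import Path

image_dir = Path("/path/to/your/images")
with open("image_list.txt", "w") as f:
    for img_path in sorted(image_dir.glob("*.jpg")):
        f.write(f"{img_path}\n")
```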
You can tune some of the line detection parameters in config/export_line_features.yaml, in particular 'detect_thresh' and 'inlier_thresh', to adapt them to your trained model and type of images.
Results
Comparison of repeatability and localization error to the state of the art on the Wireframe dataset for an error threshold of 5 pixels in structural and orthogonal distances:
| Method | Structural Rep-5 | Structural Loc-5 | Orthogonal Rep-5 | Orthogonal Loc-5 |
| --- | --- | --- | --- | --- |
| LCNN | 0.434 | 2.589 | 0.570 | 1.725 |
| HAWP | 0.451 | 2.625 | 0.537 | 1.725 |
| DeepHough | 0.419 | 2.576 | 0.618 | 1.720 |
| TP-LSD TP512 | 0.563 | 2.467 | 0.746 | 1.450 |
| LSD | 0.358 | 2.079 | 0.707 | 0.825 |
| Ours with NMS | 0.557 | 1.995 | 0.801 | 1.119 |
| Ours | 0.616 | 2.019 | 0.914 | 0.816 |
Matching precision-recall curves on the Wireframe and ETH3D datasets:
Bibtex
If you use this code in your project, please consider citing the following paper:
@InProceedings{Pautrat_Lin_2021_CVPR,
author = {Pautrat, Rémi* and Lin, Juan-Ting* and Larsson, Viktor and Oswald, Martin R. and Pollefeys, Marc},
title = {SOLD²: Self-supervised Occlusion-aware Line Description and Detection},
booktitle = {Computer Vision and Pattern Recognition (CVPR)},
year = {2021},
}