Overview


Latest news:

RADIal is available now! Check the download section. However, because we are currently working on data anonymization, we provide for now only a low-resolution preview video stream. The full resolution will be provided once the anonymization is completed, planned for February 2022.

RADIal dataset

RADIal stands for “Radar, Lidar et al.” It is a collection of 2 hours of raw data from synchronized automotive-grade sensors (camera, laser, high-definition radar) in various environments (city street, highway, countryside road), and comes with GPS and the vehicle’s CAN traces.

RADIal contains 91 sequences of 1 to 4 minutes each, for a total of 2 hours. These sequences are categorized as highway, countryside and city driving; their distribution is indicated in the figure below. Each sequence contains the raw sensor signals recorded at their native frame rates. There are approximately 25,000 frames in which the three sensors are synchronized, out of which 8,252 are labelled, with a total of 9,550 vehicles.

Sensor specifications

Central to the RADIal dataset, our high-definition radar is composed of NRx=16 receiving antennas and NTx=12 transmitting antennas, leading to NRx·NTx=192 virtual antennas. This virtual-antenna array reaches a high azimuth angular resolution while also estimating objects’ elevation angles. As the radar signal is difficult to interpret for annotators and practitioners alike, a 16-layer automotive-grade laser scanner (LiDAR) and a 5 Mpix RGB camera are also provided. The camera is placed below the interior mirror behind the windshield, while the radar and the LiDAR are installed in the middle of the front ventilation grille, one above the other. The three sensors have parallel horizontal lines of sight, pointing in the driving direction. Their extrinsic parameters are provided together with the dataset. RADIal also offers synchronized GPS and CAN traces, which give access to the geo-referenced position of the vehicle as well as its driving information such as speed, steering-wheel angle and yaw rate. The sensors’ specifications are detailed in the table below.
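As a rough illustration of how NRx·NTx virtual antennas arise in a MIMO radar, the sketch below builds a virtual array from hypothetical, made-up Tx/Rx element positions — it is not the actual RADIal antenna layout, which is not specified here.

```python
import numpy as np

# MIMO virtual-array sketch. The element spacings below are illustrative
# placeholders, NOT the actual RADIal antenna layout: each (Tx, Rx) pair
# contributes one virtual element located at the sum of the two positions.
n_tx, n_rx = 12, 16

tx_pos = np.arange(n_tx) * n_rx   # hypothetical Tx positions (units of lambda/2)
rx_pos = np.arange(n_rx) * 1.0    # hypothetical Rx positions (units of lambda/2)

# 12 Tx x 16 Rx pairs -> 192 virtual elements; with these spacings they form
# a filled uniform linear array, which is what gives the fine azimuth resolution.
virtual = (tx_pos[:, None] + rx_pos[None, :]).ravel()

print(virtual.size, np.unique(virtual).size)  # 192 192
```

With other (real-world) spacings some virtual elements may coincide; the filled-array choice here is only to make the 192-element count visible.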

Dataset structure

RADIal is a single folder containing all the recorded sequences. Each sequence is a folder containing:

  • A preview video of the scene (low resolution);
  • The camera data compressed in MJPEG format (to be released by February 2022);
  • The laser scanner point cloud data saved in a binary file;
  • The ADC radar data saved in binary files. There are 4 files in total, one per radar chip, each chip comprising 4 Rx antennas;
  • The GPS data saved in ASCII format;
  • The CAN traces of the vehicle saved in binary format;
  • A log file that provides the timestamp of each individual sensor event.
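To make the per-chip layout concrete, here is a minimal, hypothetical sketch of decoding one chip's ADC file. The int16 sample type and the (chirp, sample, antenna) ordering are assumptions for illustration only — the authoritative parsing lives in the provided DBReader library.

```python
import numpy as np

# Hypothetical decoding of one radar chip's ADC binary file. Toy sizes;
# the real sample type and ordering are defined by DBReader, not here.
n_chirps, n_samples, n_rx_per_chip = 8, 16, 4

# Stand-in for the raw bytes of one chip's file.
raw = np.arange(n_chirps * n_samples * n_rx_per_chip, dtype=np.int16).tobytes()

# One ADC cube per chip; the 4 chips together cover the 16 Rx antennas.
adc = np.frombuffer(raw, dtype=np.int16).reshape(n_chirps, n_samples, n_rx_per_chip)
print(adc.shape)  # (8, 16, 4)
```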

We provide a Python library, DBReader, to read the data. Because all the radar data are recorded in a raw format, that is, the signal after Analog-to-Digital Conversion (ADC), we also provide an optimized Python library, SignalProcessing, to process the radar signal and generate the power spectra, the point cloud or the range-azimuth map.
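As a generic illustration of the first step such a pipeline performs (not the actual SignalProcessing implementation, whose chirp parameters are not given here), an FMCW range power spectrum is obtained by an FFT over the ADC samples of one chirp:

```python
import numpy as np

np.random.seed(0)
n_samples = 512        # ADC samples per chirp (assumed toy value)
target_bin = 40        # synthetic point target placed at range bin 40

# After dechirping, a point target is a complex sinusoid whose frequency
# encodes its range; add a little receiver noise on top.
t = np.arange(n_samples)
adc = np.exp(2j * np.pi * target_bin * t / n_samples)
adc += 0.05 * (np.random.randn(n_samples) + 1j * np.random.randn(n_samples))

# Windowed FFT over fast time -> range power spectrum; the peak sits at
# the target's range bin.
spectrum = np.abs(np.fft.fft(adc * np.hanning(n_samples))) ** 2
print(int(np.argmax(spectrum)))  # 40
```

A second FFT across chirps would give Doppler, and one across virtual antennas azimuth — which is how the range-azimuth maps mentioned above are formed.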

Labels

Out of the 25,000 synchronized frames, 8,252 frames are labelled. Labels for vehicles are stored in a separate csv file. Each label contains the following information:

  • numSample: index of the current synchronized sample across all the sensors; that is, this label can be projected into each individual sensor using a common dataset_index value. Note that there may be more than one line with the same numSample, one line per label;
  • [x1_pix, y1_pix, x2_pix, y2_pix]: 2D coordinates of the vehicle's bounding box in camera coordinates;
  • [laser_X_m, laser_Y_m, laser_Z_m]: 3D coordinates of the vehicle in the laser-scanner coordinate system. Note that this 3D point is the middle of either the back or front visible face of the vehicle;
  • [radar_X_m, radar_Y_m, radar_R_m, radar_A_deg, radar_D, radar_P_db]: 2D coordinates (bird's-eye view) of the vehicle in the radar coordinate system, either in Cartesian (X, Y) or polar (R, A) coordinates. radar_D is the Doppler value and radar_P_db is the power of the reflected signal;
  • dataset: name of the sequence it belongs to;
  • dataset_index: frame index in the current sequence;
  • Difficult: either 0 or 1.

Note that -1 in all fields denotes a frame without any label.
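A minimal sketch of consuming such a labels file, including the all-minus-one convention for unlabelled frames. The two rows and the sequence name below are made up, and only a subset of the columns listed above is shown:

```python
import csv
import io

# Made-up rows mimicking the labels csv described above (subset of columns).
# A row whose fields are all -1 marks a frame without any labelled vehicle.
text = """numSample,x1_pix,y1_pix,x2_pix,y2_pix,dataset,dataset_index,Difficult
42,100,120,180,200,seq_001,7,0
-1,-1,-1,-1,-1,-1,-1,-1
"""
rows = list(csv.DictReader(io.StringIO(text)))

# Keep only frames that actually contain a labelled vehicle.
labelled = [r for r in rows if int(r["numSample"]) != -1]
print(len(labelled))  # 1
```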

Labels for the free driving space are provided as a segmentation mask saved in a png file.

Download instructions

To download the raw dataset, please follow these instructions.

$ wget -c -i download_urls.txt -P your_target_path
$ unzip 'your_target_path/*.zip' -d your_target_path
$ rm -Rf your_target_path/*.zip

You will then have to use the SignalProcessing library to generate the data for each modality, depending on your needs.

We also provide a "ready to use" dataset that can be loaded with the PyTorch data-loader example provided in the Loader folder.

$ wget https://www.dropbox.com/s/bvbndch5rucyp97/RADIal.zip
Comments
  • Radar details


    Excellent dataset and nice code, thank you.

    Could you provide more information about the radar system you are using? I would like to do some radar signal processing before applying classification algorithms. Specifically, I would need to know how the antenna arrays are placed (the relative distance between them in wavelengths).

    opened by IgnacioRoldan 5
  • Raw Data Preprocessing


Hi, thank you for sharing the code! It is really great work! I've been following your work recently and found that you've provided the raw dataset and some preprocessing code in the SignalProcessing folder. But I failed to preprocess the raw data into the "ready to use" format that you provide at https://drive.google.com/file/d/1PpAcL5r2PRYMxDb46ASps9YqToL7uJcE/view. I'm wondering if you could provide the complete code for processing the raw data into the "ready to use" dataset? Thanks a lot!

    opened by illaMcbender 4
  • Problem downloading the dataset


    Thanks for your great work!

Could you provide the RADIal dataset via BaiduPan as well? The downloads always get interrupted after one hour.

    Thanks!

    opened by sutiankang 3
  • Unexpected keyword argument 'offset_radar' of DBReader


Hello, when running signalprocessing/radar_processing, I encountered the error shown in the attached screenshot.

Could you please explain the meaning of "offset_radar" and "offset_scala"?

    opened by zhai-EE 2
  • Is hard cases trained with the same configuration as the pre-trained model?


The pre-trained model that you provide in the link is for easy cases, right? Did you train the hard cases with the same configuration as the pre-trained model, with the only adjustment being to set 'difficult=True' when loading the dataset?

    opened by FrkWang 2
  • Problem to use the odometry


Hello, thank you for sharing the dataset! When I used example_sync_reader.ipynb to check the data, I ran into the issue shown in the attached screenshot. (P.S. I used the same measurement as the original file: RECORD@2020-11-22_12.28.47.) Any idea how to solve it?

    BR Yi

    opened by chasedreams-jy 2
  • radar parameters


Hello, could you please provide the initial parameters of the radar, such as bandwidth, chirp time, starting frequency, etc.? I want to process the data further. Thank you.

    opened by lightzwc 2
  • Prepared dataset radar doppler clarification


Hello, when looking at the prepared dataset, specifically at the Doppler data in radar_PCL, I cannot find an explanation of how to interpret this data, nor an example of its usage — it is not used in the loader/example_dataset.ipynb file, for instance. In the loader/dataset.py file the radar_PCL data is defined as "range,azimuth,elevation,power,doppler,x,y,z,v". The Doppler value is an 8-bit value associated with each data point. How is the Doppler value to be interpreted, and is there an example which I have missed?

    opened by samberngit 2
  • signal process code part: 404 not found


Hi there, thanks for your work. This is really a great project for the sensor-fusion area. I wonder when we can have the radar signal-processing code, as it shows 404 right now. Thank you so much.

    opened by yyxr75 2
  • Interpreting radar RA map dimensions


    Dear authors,

Thanks for sharing such an interesting dataset. I am trying to play with it. However, I am having trouble understanding the RA map — what is the size of each range bin in the RA map, and the size of each azimuth bin?

I generated the RA map with the SignalProcessing toolkit. When examining the RA map, should I expect the provided labels (from labels.csv) to overlay with the bright spots in the RA map? I expect so, but it is not the case for the sample I checked — there might be something wrong with the way I interpret the RA bins, though.

    Thanks in advance!

    opened by laixuezhongwen 1
  • Downloading Error Issue


    I have downloaded RADIal.zip from location "https://drive.google.com/drive/folders/1RFRoJJd9ghfjRHjA8t_POwGzYdoUA1IY".

Even after the download completes, the extraction error shown in the above snapshot is thrown. Kindly look into this issue.

    opened by ChidanandKumarKS 1
  • Calibration problem for 3D bbox annotation


    Hi,

    First of all, thank you for your great contribution.

    I'm trying 3D annotation for 3D Object detection with Radial dataset.

    However, when projecting the 3D bbox annotated in 3D lidar space onto the image plane, the calibration is hardly correct.

    Can you give me some tips or exact internal/external calibration values?

    opened by shawrby 0
  • Issues about the extrinsic parameters of sensors in the "ready to use" dataset

Could you please provide more detailed extrinsic parameters of the sensors in the "ready to use" dataset so that we can calibrate them better?

    Thanks a lot!

    opened by lbhwyy 0
  • Are you using a 4D millimeter wave radar?


I can see in Figure 1 (a) of your published paper that the radar point cloud projected into the image has height information, but most existing vehicle-borne 3D millimeter-wave radars do not have height-measurement capability, so I want to ask: do you use a 4D millimeter-wave radar?

    opened by chenhengwei1999 0
  • Question about evaluation


    Thanks for your great work!

I have a problem with the run_evaluation and run_Full_evaluation functions. When I use the same weights and dataloader for evaluation, I find that the mAP, mAR and mIoU metrics are quite different between the two. Can you tell me what the problem is?

    Thanks!

    opened by sutiankang 0
  • Problem of distributed training


    Thanks for the great paper, dataset and code!

I tried to train the model with the ready-to-use data on a single GPU, and it took roughly half a day. So I added some distributed-training components; the training time decreased, but so did the AP/AR/IoU values. Have you tested distributed training? How do you correctly set the parameters to ensure a shorter training time and proper AP/AR/IoU values?

    Thank you!

    opened by eagles1812 2
  • Raw-data extraction


    Hi,

I'm trying to recreate the ready-to-use dataset from the raw data. In the script shared in an earlier issue, a so-called label_candidates_0deg.npy is loaded.

What is the function of this file and what does it look like? Or can someone explain to me how I can recreate the data using the SignalProcessing library in this project?

    Thanks in advance!

    opened by JeroenBax 0
Owner
valeo.ai
We are an international team based in Paris, conducting AI research for Valeo automotive applications, in collaboration with world-class academics.