Do pedestrians pay attention? Eye contact detection for autonomous driving

Overview

Official implementation of the paper "Do pedestrians pay attention? Eye contact detection for autonomous driving".

[Demo image: predicted poses and eye-contact predictions overlaid on pedestrians crossing a street]

Image taken from https://jooinn.com/people-walking-on-pedestrian-lane-during-daytime.html. Results obtained with the model trained on JackRabbot, nuScenes, JAAD, and KITTI. The model file is available at models/predictor and can be reused for testing with the predictor.

Abstract

In urban or crowded environments, humans rely on eye contact for fast and efficient communication with nearby people. Autonomous agents also need to detect eye contact to interact with pedestrians and safely navigate around them. In this paper, we focus on eye contact detection in the wild, i.e., real-world scenarios for autonomous vehicles with no control over the environment or the distance of pedestrians. We introduce a model that leverages semantic keypoints to detect eye contact and show that this high-level representation (i) achieves state-of-the-art results on the publicly-available dataset JAAD, and (ii) conveys better generalization properties than leveraging raw images in an end-to-end network. To study domain adaptation, we create LOOK: a large-scale dataset for eye contact detection in the wild, which focuses on diverse and unconstrained scenarios for real-world generalization. The source code and the LOOK dataset are publicly shared towards an open science mission.

Table of contents

- Requirements
- Predictor
- Create the datasets for training and evaluation
- Training your models on LOOK / JAAD / PIE
- Evaluate your trained models
- Annotate new images
- Credits
- Cite our work

Requirements

Use Python >= 3.6.9 and < 3.9. Run pip3 install -r requirements.txt to install the dependencies.
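
To keep the dependencies isolated, a standard virtual-environment setup works (generic Python tooling, not specific to this repo; the environment name is arbitrary):

python3 -m venv look-env
source look-env/bin/activate
pip3 install -r requirements.txt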

Predictor

Get predictions from our pretrained model using any image with the predictor. The script extracts the human keypoints on the fly using OpenPifPaf. The predictor supports eye contact detection using human keypoints only. You need to specify the following arguments for the script to run correctly:

| Parameter | Description |
| --- | --- |
| --glob | Glob expression to be used. Example: *.png |
| --images | Path to the input images. If glob is enabled, pass the path to the directory containing the query images |
| --looking_threshold | Threshold above which a prediction counts as eye contact. Default: 0.5 |
| --transparency | Transparency of the output poses. Default: 0.4 |
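
For example, to run the predictor on every PNG in a directory (the paths below are placeholders; quote the glob so your shell does not expand it):

python predict.py --images path/to/image/directory --glob "*.png"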

To reproduce the result of the top image, run the predictor on a GPU:

python predict.py --images images/people-walking-on-pedestrian-lane-during-daytime-3.jpg

To run the predictor on a CPU instead:

python predict.py --images images/people-walking-on-pedestrian-lane-during-daytime-3.jpg --device cpu --disable-cuda
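
If you would rather call the pieces from Python, the sketch below illustrates the general idea behind the approach: extract human keypoints with OpenPifPaf and feed them, normalized, to a small classifier. This is only an illustrative sketch, assuming openpifpaf >= 0.12 (its Predictor API is real); the EyeContactMLP head and the normalization shown here are hypothetical stand-ins, not the repository's actual model. Use predict.py for real predictions.

```python
# Illustrative sketch only; NOT the repository's actual model or API.
# openpifpaf's Predictor (>= 0.12) is real; EyeContactMLP is a hypothetical
# stand-in for the trained model shipped in models/predictor.
import PIL.Image
import openpifpaf
import torch


class EyeContactMLP(torch.nn.Module):
    """Hypothetical binary head over flattened keypoints (x, y, confidence)."""

    def __init__(self, n_keypoints: int = 17):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_keypoints * 3, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))


# 1) Extract COCO-style keypoints on the fly with OpenPifPaf.
pose_predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k16')
image = PIL.Image.open('images/people-walking-on-pedestrian-lane-during-daytime-3.jpg')
predictions, _, _ = pose_predictor.pil_image(image)

# 2) Classify each detected person (here with untrained, illustrative weights).
model = EyeContactMLP().eval()
with torch.no_grad():
    for ann in predictions:
        keypoints = torch.from_numpy(ann.data.copy()).float()  # (17, 3)
        keypoints[:, 0] /= image.width   # a simple normalization assumption
        keypoints[:, 1] /= image.height
        score = model(keypoints.reshape(1, -1)).item()
        print('looking' if score > 0.5 else 'not looking', f'(score={score:.2f})')
```

In the actual pipeline, the trained weights stored under models/predictor replace the untrained head above.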

Create the datasets for training and evaluation

Please follow the instructions in the create_data folder.

Training your models on LOOK / JAAD / PIE

You have one config file to modify. Do not change the variable names. The meaning of each variable is explained on the training wiki.

After changing your configuration file, run:

python train.py --file [PATH_TO_CONFIG_FILE]

A sample config file can be found at config_example.ini
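
For instance, to train directly with the provided sample configuration:

python train.py --file config_example.ini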

Evaluate your trained models

The meaning of each variable is explained on the evaluation wiki.

After changing your configuration file, run:

python evaluate.py --file [PATH_TO_CONFIG_FILE]

A sample config file can be found at config_example.ini
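
Likewise, to evaluate with the sample configuration:

python evaluate.py --file config_example.ini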

Annotate new images

Check out the annotator folder to run our annotation tool and label new instances for the task.

Credits

Credits to OpenPifPaf for the pose detection part, and to the JRDB, nuScenes, and KITTI datasets for the images.

Cite our work

If you use our work for your research, please cite us :)

@misc{belkada2021pedestrians,
      title={Do Pedestrians Pay Attention? Eye Contact Detection in the Wild}, 
      author={Younes Belkada and Lorenzo Bertoni and Romain Caristan and Taylor Mordan and Alexandre Alahi},
      year={2021},
      eprint={2112.04212},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • Unable to install openpifpaf=0.13.0

    Hello!! I have been trying to install openpifpaf=0.13.0 in a conda environment, but I am getting the following error:

    Building wheel for openpifpaf (PEP 517) ... error
    ERROR: Failed building wheel for openpifpaf Failed to build openpifpaf
    ERROR: Could not build wheels for openpifpaf which use PEP 517 and cannot be installed directly

    Here are my current dependencies: Ubuntu 18.04, CUDA 10.2, conda 4.11.0, Python 3.8.12, torch 1.10.2, torchvision 0.11.3.

    Could anyone please guide me on how to fix this issue?

    opened by Matus-Tanonwong 4
  • About the license for this model

    Thank you for sharing your great code. :smiley_cat:

    What is the license for this model? I'd like to reference it in the repository I'm working on, if possible, but I want to post the license correctly. https://github.com/PINTO0309/PINTO_model_zoo

    Thank you.

    opened by PINTO0309 3
  • requirements dependency issue

    Hello there,

    I met the following issue:

    The conflict is caused by:
        torchvision 0.8.2 depends on torch==1.7.1
        openpifpaf 0.13.0 depends on torch>=1.9.0
    

    Please let me know if the requirements are correct.

    opened by sunggukcha 2
  • KITTI annotation issue

    Hello there,

    According to the KITTI Training data and Testing data description at https://looking-vita-epfl.github.io/dataset/, the train set and test set are to be named KITTI/train and KITTI/test respectively, while the annotation file requires the sets to be named Kitti/Train and Kitti/Test.

    It is minor, but it might help next time.

    opened by sunggukcha 1
  • Downloading Looking annotation link error

    Hello there, thanks for your great efforts and sharing great results.

    I am interested in your work and am trying to do some experiments with your repo. Following your guidelines, the download link for the LOOK annotation file has an error: the page is not found for me. Please check it.

    Regards, Sungguk Cha

    opened by sunggukcha 1
  • Doesn't work when eyes are closed

    Hi, first of all, very great work on this.

    But what I could see from some inferred images is that it fails when the eyes are closed.

    I have attached the image.

    Is there something I could do about it, if possible? I am open to suggestions.

    opened by prajotsl123 0
  • How does the normalization work?

    Hi,

    Can you explain to me the normalization function in utils.predict.py?

    When I use the sample image (python predict.py --images images/people-walking-on-pedestrian-lane-during-daytime-3.jpg) and print out the final keypoints (line 110 in predictor.py) used for prediction with the LOOK model, one coordinate is greater than 1. How can this be?

    Which coordinates exactly should the upper left corner and the lower right corner of the image have after normalization?

    Best regards, Moritz

    opened by MoritzLennartz 0
  • Wrong image size for JAAD

    Hey,

    I just wanted to point out that you are using the wrong image size for the JAAD images in the utils_train.py file (line 346). The image size according to the paper is 1920 x 1080, not 1980 x 1280.

    Best regards, Moritz

    opened by MoritzLennartz 0
Owner

VITA lab at EPFL (Visual Intelligence for Transportation)