N-ImageNet: Towards Robust, Fine-Grained Object Recognition with Event Cameras
Official PyTorch implementation of N-ImageNet: Towards Robust, Fine-Grained Object Recognition with Event Cameras (ICCV 2021) [Paper] [Video].
In this repository, we provide instructions for downloading N-ImageNet along with the implementation of the baseline models presented in the paper. If you have any questions regarding the dataset or the baseline implementations, please leave an issue or contact [email protected].
To download N-ImageNet, please fill out the following questionnaire, and we will send guidelines for downloading the data via email: [Link].
Training / Evaluating Baseline Models
The codebase is tested on an Ubuntu 18.04 machine with CUDA 10.1, but it may work with other configurations as well. First, create and activate a conda environment with the following commands.
```
conda env create -f environment.yml
conda activate e2t
```
In addition, you must install pytorch_scatter. Follow the instructions provided in the pytorch_scatter GitHub repository, and install the version built for torch 1.7.1 and CUDA 10.1.
Before you move on to the next step, please download N-ImageNet. Once downloaded, it will have the following directory structure.
```
N_Imagenet
├── train_list.txt
├── val_list.txt
├── extracted_train (train split)
│   ├── nXXXXXXXX (label)
│   │   ├── XXXXX.npz (event data)
│   │   ⋮
│   │   └── YYYYY.npz (event data)
│   ⋮
└── extracted_val (val split)
    └── nXXXXXXXX (label)
        ├── XXXXX.npz (event data)
        ⋮
        └── YYYYY.npz (event data)
```
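Each .npz file stores the event stream for a single recording. A quick way to see what a file contains is to open it with NumPy and list its arrays. The snippet below is a self-contained sketch: it builds a small stand-in .npz (the `event_data` key and the `(x, y, t, p)` fields are assumptions for illustration, not the guaranteed layout of N-ImageNet files), then inspects it the same way you would inspect a real file.

```python
import numpy as np

# Stand-in .npz so this sketch is runnable; in practice, point np.load at a
# real file such as extracted_train/n01440764/n01440764_10026.npz.
# The key name and field layout below are assumptions for illustration.
demo_events = np.zeros(4, dtype=[('x', 'u2'), ('y', 'u2'), ('t', 'u8'), ('p', 'b')])
np.savez('demo_event_file.npz', event_data=demo_events)

# List every array stored in the archive along with its dtype and shape.
with np.load('demo_event_file.npz') as archive:
    for key in archive.files:
        print(key, archive[key].dtype, archive[key].shape)
```

Running the same loop on an actual N-ImageNet file will show the true key names and event layout.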
The N-ImageNet variants (saved as N_Imagenet_cam once downloaded) have a similar file structure, except that they only contain validation files. The following instructions are based on N-ImageNet, but one can follow similar steps to test with the N-ImageNet variants.
Next, modify train_list.txt and val_list.txt such that they match the directory structure of the downloaded data. To illustrate, if you open train_list.txt, you will see the following:
```
/home/jhkim/Datasets/N_Imagenet/extracted_train/n01440764/n01440764_10026.npz
⋮
/home/jhkim/Datasets/N_Imagenet/extracted_train/n15075141/n15075141_999.npz
```
Modify each path within the .txt files so that it matches the directory in which N-ImageNet is downloaded. For example, if N-ImageNet is located under the directory shown below, modify train_list.txt as follows.
```
/home/karina/assets/Datasets/N_Imagenet/extracted_train/n01440764/n01440764_10026.npz
⋮
/home/karina/assets/Datasets/N_Imagenet/extracted_train/n15075141/n15075141_999.npz
```
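Because the list files contain thousands of lines, the prefix swap is easier to script than to edit by hand. A minimal sketch is below; the two prefixes are examples (substitute your own), and it operates on a stand-in file named demo_train_list.txt so it is safe to run anywhere, whereas in practice you would loop over the downloaded train_list.txt and val_list.txt.

```python
from pathlib import Path

OLD_PREFIX = '/home/jhkim/Datasets/'          # prefix shipped in the list files
NEW_PREFIX = '/home/karina/assets/Datasets/'  # where N-ImageNet actually lives

# Stand-in list file so the sketch is self-contained; in practice, replace
# this with the real train_list.txt and val_list.txt.
demo = Path('demo_train_list.txt')
demo.write_text(OLD_PREFIX + 'N_Imagenet/extracted_train/n01440764/n01440764_10026.npz\n')

# Rewrite every occurrence of the old prefix in place.
for list_file in [demo]:
    list_file.write_text(list_file.read_text().replace(OLD_PREFIX, NEW_PREFIX))

print(demo.read_text().strip())
```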
Once this is done, create a Datasets/ directory within real_cnn_model, and create symbolic links within Datasets. To illustrate, using the directory structure of the previous example, run the following commands.
```
cd PATH_TO_REPOSITORY/real_cnn_model
mkdir Datasets; cd Datasets
ln -sf /home/karina/assets/Datasets/N_Imagenet/ ./
ln -sf /home/karina/assets/Datasets/N_Imagenet_cam/ ./  # if you have also downloaded the variants
```
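After setting up the links and list files, it can help to confirm that every listed path actually resolves on disk before starting a long training run. The check below is a self-contained sketch: it creates a stand-in data file and list file (demo_data/, demo_list.txt are illustrative names), then reports any listed file that is missing; in practice, point it at your edited train_list.txt and val_list.txt.

```python
from pathlib import Path

# Stand-in setup: one real file plus a list that references it.
# In practice, skip this and read your actual train_list.txt / val_list.txt.
Path('demo_data').mkdir(exist_ok=True)
sample = Path('demo_data/sample.npz')
sample.touch()
Path('demo_list.txt').write_text(f'{sample.resolve()}\n')

# The actual check: collect any listed path that does not exist on disk.
missing = [line for line in Path('demo_list.txt').read_text().splitlines()
           if line and not Path(line).exists()]
print(f'{len(missing)} missing files')  # prints: 0 missing files
```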
Congratulations! Now you can start training/testing models on N-ImageNet.
Training a Model
You can train a model based on the binary event image representation with the following command.
```
export PYTHONPATH=PATH_TO_REPOSITORY:$PYTHONPATH
cd PATH_TO_REPOSITORY/real_cnn_model
python main.py --config configs/imagenet/cnn_adam_acc_two_channel_big_kernel_random_idx.ini
```
For the examples below, we assume the PYTHONPATH environment variable is set as above. You can also change minor details within the config before training by using the --override flag. For example, to change the batch size, use the following command.
```
python main.py --config configs/imagenet/cnn_adam_acc_two_channel_big_kernel_random_idx.ini --override 'batch_size=8'
```
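The --override string follows a key=value pattern applied on top of the .ini config. The sketch below shows one plausible way such overrides can be merged with Python's standard configparser; this is an illustration of the mechanism, not the repository's actual parser, and the use of the DEFAULT section is an assumption.

```python
import configparser

def apply_overrides(config: configparser.ConfigParser, override: str) -> None:
    # Merge a comma-separated 'key=value,key=value' string into the config.
    # NOTE: illustrative sketch only; the repository's parser may differ.
    for pair in override.split(','):
        key, value = pair.split('=', 1)
        config['DEFAULT'][key.strip()] = value.strip()

cfg = configparser.ConfigParser()
cfg.read_string('[DEFAULT]\nbatch_size = 32\n')  # stand-in for the .ini file
apply_overrides(cfg, 'batch_size=8')
print(cfg['DEFAULT']['batch_size'])  # prints: 8
```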
Evaluating a Model
Suppose you have a pretrained model saved in PATH_TO_REPOSITORY/real_cnn_model/experiments/best.tar. You can evaluate the performance of this model on the N-ImageNet validation split with the following command.
```
python main.py --config configs/imagenet/cnn_adam_acc_two_channel_big_kernel_random_idx.ini --override 'load_model=PATH_TO_REPOSITORY/real_cnn_model/experiments/best.tar'
```
Downloading Pretrained Models