This repo contains the official code of our work SAM-SLR, which won the CVPR 2021 Challenge on Large Scale Signer Independent Isolated Sign Language Recognition.

Overview

Skeleton Aware Multi-modal Sign Language Recognition

By Songyao Jiang, Bin Sun, Lichen Wang, Yue Bai, Kunpeng Li and Yun Fu.

Smile Lab @ Northeastern University



This repo contains the official code of Skeleton Aware Multi-modal Sign Language Recognition (SAM-SLR), which ranked 1st in the CVPR 2021 Challenge: Looking at People Large Scale Signer Independent Isolated Sign Language Recognition.

Our paper has been accepted to the CVPR 2021 Workshops. A preprint version is available on arXiv. Please cite our paper if you find this repo useful in your research.

News

[2021/04/10] Our workshop paper has been accepted. Citation info updated.

[2021/03/24] A preprint version of our paper is released here.

[2021/03/20] Our work has been verified and announced by the organizers as the 1st place winner of the challenge!

[2021/03/15] The code is released to the public on GitHub.

[2021/03/11] Our team (smilelab2021) ranked 1st in both tracks (RGB and RGB-D) on the challenge leaderboards.

Table of Contents

  • Data Preparation
  • Requirements and Docker Image
  • Pretrained Models
  • Usage
  • License
  • Citation
  • Reference

Data Preparation

Download AUTSL Dataset.

We processed the dataset into six modalities in total: skeleton, skeleton features, RGB frames, flow (color), HHA, and flow (depth).

  1. Put the original train, val, and test videos in the data folder as follows:
    data
    ├── train
    │   ├── signer0_sample1_color.mp4
    │   ├── signer0_sample1_depth.mp4
    │   ├── signer0_sample2_color.mp4
    │   ├── signer0_sample2_depth.mp4
    │   └── ...
    ├── val
    │   └── ...
    └── test
        └── ...
  2. Follow the instructions in data_processs/readme.md to process the data (a minimal frame-extraction sketch is shown after this list).

  3. Use TPose/data_process to extract whole-body pose features.
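
As a rough illustration of the frame-extraction step (this is not the repo's processing script; follow the readme referenced above for the exact frame naming, cropping, and resizing conventions the models expect), the color videos can be dumped to frame folders with OpenCV along these lines:

    # Minimal frame-extraction sketch (illustration only).
    # The output layout and 256x256 size are assumptions, not the repo's
    # conventions; see the data processing readme for those.
    import os
    import cv2

    def extract_frames(video_path, out_dir, size=(256, 256)):
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(os.path.join(out_dir, "%04d.jpg" % idx), cv2.resize(frame, size))
            idx += 1
        cap.release()
        return idx

    # e.g. extract_frames("data/train/signer0_sample1_color.mp4",
    #                     "data/train_frames/signer0_sample1")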

Requirements and Docker Image

The code is written in Python (Anaconda, Python >= 3.6) and PyTorch 1.7, with OpenCV.

Detailed environment requirements can be found in requirement.txt in each code folder.

For convenience, we provide an NVIDIA Docker image to run our code.

Download Docker Image
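
As an illustration only (the image name below is a placeholder for the downloaded image, and the mount paths are assumptions), the container is typically started with GPU access and the dataset directory mounted into it:

     docker run --gpus all -it -v /path/to/your/data:/workspace/data <downloaded_image>

Adjust both sides of the -v mapping so that your local dataset directory is mounted where the code inside the container expects it; depending on your setup you may need nvidia-docker instead of the --gpus flag.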

Pretrained Models

We provide pretrained models for all modalities to reproduce our submitted results. Please download them via the link below and put them into the corresponding folders.

Download Pretrained Models

Usage

Reproducing the Results Submitted to the CVPR 2021 Challenge

To test our pretrained models, put them under the corresponding code folders and run the test code as instructed below. To ensemble the test results and reproduce our final submission, copy all the result .pkl files to ensemble/ and follow the instructions there to produce our final outputs.

For a step-by-step instruction, please see reproduce.md.

Skeleton Keypoints

The skeleton modality can be trained, finetuned, and tested using the code in the SL-GCN/ folder. Please follow the SL-GCN/readme.md instructions to prepare the skeleton data into four streams (joint, bone, joint_motion, bone_motion).

Basic usage:

python main.py --config /path/to/config/file

To train, finetune, and test our models, change the config path to the corresponding config file. Detailed instructions can be found in SL-GCN/readme.md.
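
For background on the four input streams, the derivation below is a generic sketch of how such streams are commonly built, not the repo's exact code; the actual stream generation lives in SL-GCN/data_gen, where the joint pairs defining the skeleton graph are specified.

    # Illustrative sketch of deriving bone and motion streams from joint data.
    # Assumed data layout: (C, T, V, M) = (channels, frames, joints, persons).
    import numpy as np

    def bone_stream(joint, pairs):
        # bone vector = child joint position - parent joint position
        bone = np.zeros_like(joint)
        for child, parent in pairs:
            bone[:, :, child, :] = joint[:, :, child, :] - joint[:, :, parent, :]
        return bone

    def motion_stream(x):
        # frame-to-frame difference along the time axis
        motion = np.zeros_like(x)
        motion[:, :-1] = x[:, 1:] - x[:, :-1]
        return motion

    # joint_motion = motion_stream(joint)
    # bone_motion  = motion_stream(bone_stream(joint, pairs))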

Skeleton Feature

For the skeleton feature modality, we propose a Separable Spatial-Temporal Convolution Network (SSTCN) to capture spatio-temporal information from the extracted pose features.

Please follow the instructions in SSTCN/readme.txt to prepare the data and to train and test the model.
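
As a toy illustration of the separable spatial-temporal idea (this is not the SSTCN code; the actual architecture is described in the paper and implemented in the SSTCN/ folder), a full spatio-temporal convolution can be factorized into a spatial convolution over the feature dimension followed by a temporal convolution over frames:

    # Toy separable spatial-temporal block (illustration only, not SSTCN).
    # Assumed input layout: (N, C, T, V) = (batch, channels, frames, features).
    import torch
    import torch.nn as nn

    class SeparableSTBlock(nn.Module):
        def __init__(self, in_ch, out_ch, t_kernel=9):
            super().__init__()
            # spatial step: mixes information across the V dimension only
            self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1))
            # temporal step: mixes information across frames only
            self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(t_kernel, 1),
                                      padding=(t_kernel // 2, 0))
            self.bn = nn.BatchNorm2d(out_ch)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):  # x: (N, C, T, V)
            x = self.relu(self.spatial(x))
            return self.relu(self.bn(self.temporal(x)))

    # e.g. SeparableSTBlock(2, 64)(torch.randn(8, 2, 150, 33)).shape -> (8, 64, 150, 33)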

RGB Frames

The RGB frames modality can be trained, finetuned and tested using the following commands in Conv3D/ folder.

python Sign_Isolated_Conv3D_clip.py

python Sign_Isolated_Conv3D_clip_finetune.py

python Sign_Isolated_Conv3D_clip_test.py

Detailed instructions can be found in Conv3D/readme.md.

Optical Flow

The RGB optical flow modality can be trained, finetuned and tested using the following commands in Conv3D/ folder.

python Sign_Isolated_Conv3D_flow_clip.py

python Sign_Isolated_Conv3D_flow_clip_funtine.py

python Sign_Isolated_Conv3D_flow_clip_test.py

Detailed instructions can be found in Conv3D/readme.md.

Depth HHA

The Depth HHA modality can be trained, finetuned and tested using the following commands in Conv3D/ folder.

python Sign_Isolated_Conv3D_hha_clip_mask.py

python Sign_Isolated_Conv3D_hha_clip_mask_finetune.py

python Sign_Isolated_Conv3D_hha_clip_mask_test.py

Detailed instructions can be found in Conv3D/readme.md.

Depth Flow

The Depth Flow modality can be trained, finetuned and tested using the following commands in Conv3D/ folder.

python Sign_Isolated_Conv3D_depth_flow_clip.py

python Sign_Isolated_Conv3D_depth_flow_clip_finetune.py

python Sign_Isolated_Conv3D_depth_flow_clip_test.py

Detailed instructions can be found in Conv3D/readme.md.

Model Ensemble

For both the RGB and RGB-D tracks, the test results of all modalities need to be ensembled together to generate the final results.

  1. For the RGB track, we use the results from the skeleton, skeleton feature, RGB, and flow (color) modalities to ensemble the final results.

    a. Test the model using newly trained weights or provided pretrained weights.

    b. Copy all the test results to ensemble folder and rename them as their modality names.

    c. Ensemble the SL-GCN results from the joint, bone, joint_motion, and bone_motion streams in gcn/:

     python ensemble_wo_val.py; python ensemble_finetune.py
    

    d. Copy test_gcn_w_val_finetune.pkl to ensemble/. Copy the RGB, TPose, and optical flow results to ensemble/ as well, then ensemble the final prediction:

     python ensemble_multimodal_rgb.py
    

    The final predictions are saved in predictions.csv.

  2. For the RGB-D track, we use the results from the skeleton, skeleton feature, RGB, flow (color), HHA, and flow (depth) modalities to ensemble the final results.

    a. Copy the HHA and flow (depth) results to the ensemble/ folder, then run

     python ensemble_multimodal_rgb.py
    

To reproduce our results in the CVPR 2021 Challenge, we provide the .pkl files needed to ensemble and obtain our final submitted predictions. Detailed instructions can be found in ensemble/readme.md.
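
Conceptually, the late fusion used above is a weighted sum of per-class scores followed by an argmax. A minimal sketch, assuming each .pkl stores a mapping from sample name to a class-score vector (the actual file format, modality weights, and output format are defined by the scripts in ensemble/, which should be used to reproduce the submitted results):

    # Minimal late-fusion sketch (illustration only; use the ensemble/ scripts
    # for the real pipeline). Assumes each .pkl maps sample name -> score vector.
    import csv
    import pickle
    import numpy as np

    def ensemble(pkl_paths, weights, out_csv="predictions.csv"):
        scores = {}
        for path, w in zip(pkl_paths, weights):
            with open(path, "rb") as f:
                per_sample = pickle.load(f)
            for name, s in per_sample.items():
                scores[name] = scores.get(name, 0) + w * np.asarray(s)
        with open(out_csv, "w", newline="") as f:
            writer = csv.writer(f)
            for name in sorted(scores):
                writer.writerow([name, int(np.argmax(scores[name]))])

    # Hypothetical usage (file names and weights are placeholders):
    # ensemble(["gcn.pkl", "sstcn.pkl", "rgb.pkl", "flow_color.pkl"],
    #          [1.0, 1.0, 1.0, 0.9])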

License

Licensed under the Creative Commons Zero v1.0 Universal license with the following exceptions:

  • The code is released for academic research use only. Commercial use is prohibited.
  • Published versions (changed or unchanged) must include a reference to the origin of the code.

Citation

If you find this project useful in your research, please cite our paper

@inproceedings{jiang2021skeleton,
  title={Skeleton Aware Multi-modal Sign Language Recognition},
  author={Jiang, Songyao and Sun, Bin and Wang, Lichen and Bai, Yue and Li, Kunpeng and Fu, Yun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2021}
}

@article{jiang2021skeleton,
  title={Skeleton Aware Multi-modal Sign Language Recognition},
  author={Jiang, Songyao and Sun, Bin and Wang, Lichen and Bai, Yue and Li, Kunpeng and Fu, Yun},
  journal={arXiv preprint arXiv:2103.08833},
  year={2021}
}

Reference

https://github.com/Sun1992/SSTCN-for-SLR

https://github.com/jin-s13/COCO-WholeBody

https://github.com/open-mmlab/mmpose

https://github.com/0aqz0/SLR

https://github.com/kchengiva/DecoupleGCN-DropGraph

https://github.com/HRNet/HRNet-Human-Pose-Estimation

https://github.com/charlesCXK/Depth2HHA

Issues
  • FileNotFoundError when trying to test config files in Conv3D


    Greetings. Congratulations on your work. I was trying to reproduce your results in Google Colab using processed datasets and pretrained models. I stored 'test_frames' folder inside /Conv3D/data/ and tried to run "python Sign_Isolated_Conv3D_clip_test.py" inside /Conv3D/ folder. I ran into the following error, I'm pasting the error traceback.

    ---Traceback Starts----

     ######################Testing Started#######################
     Traceback (most recent call last):
       File "Sign_Isolated_Conv3D_clip_test.py", line 138, in <module>
         val_loss = val_epoch(model, criterion, val_loader, device, 0, logger, writer, phase=phase, exp_name=exp_name)
       File "/content/CVPR21Chal-SLR/Conv3D/validation_clip.py", line 13, in val_epoch
         for batch_idx, data in enumerate(dataloader):
       File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__
         data = self._next_data()
       File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
         return self._process_data(data)
       File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
         data.reraise()
       File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 429, in reraise
         raise self.exc_type(msg)
     FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
     Original Traceback (most recent call last):
       File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
         data = fetcher.fetch(index)
       File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
         data = [self.dataset[idx] for idx in possibly_batched_index]
       File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
         data = [self.dataset[idx] for idx in possibly_batched_index]
       File "/content/CVPR21Chal-SLR/Conv3D/dataset_sign_clip.py", line 111, in __getitem__
         images.append(self.read_images(selected_folder, i))
       File "/content/CVPR21Chal-SLR/Conv3D/dataset_sign_clip.py", line 75, in read_images
         index_list = self.frame_indices_tranform_test(len(os.listdir(folder_path)), self.frames, clip_no)
     FileNotFoundError: [Errno 2] No such file or directory: '../data/test_frames/signer34_sample1'

    ----Traceback Ends----

    However, there is a signer34_sample1 inside /Conv3D/data/test_frames/. Any insights will be appreciated. Thanks.

    opened by tauhidtamim 4
  • Wholebody yaml missing


    Hi, I get the following error when I try to run the demo.py to extract 2d keypoints

    FileNotFoundError: [Errno 2] No such file or directory: 'wholebody_w48_384x288.yaml'

    opened by iliasprc 4
  • Loss does not decrease, and the model is not trained


    Hi, honorable competition winner. I deployed your code locally, but after many rounds of training the model does not seem to have learned anything; the results are still random. The accuracy only reaches around 1/226.

    The only modifications I made to your code were setting the batch size to 2 due to resource constraints, and directly resizing the 512x512 data to 256x256 during preprocessing.

    No other changes were made.

    If possible, do you know what might be causing my problem?



    opened by LiangSiyv 3
  • Problem in sign_gendata.py


    https://github.com/jackyjsy/CVPR21Chal-SLR/blob/892de189f4f8afc69031780a44f308044eff2429/SL-GCN/data_gen/sign_gendata.py#L15

    The '27' and '27_2' entries of selected_joints are the same.

    Could you please let me know what the idea is?

    opened by SnorlaxSE 3
  • Running Docker Command


    Hello! I have been struggling with getting the docker command to work. I'm working on a Mac with BigSur OS and have downloaded the Docker Desktop. I have been able to get the image to upload to the Docker Desktop but when I run the command in command line, I get the following error: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.

    I'm unsure if my issue is nvidia based or if I've changed the code incorrectly on the path_to_your_data part, as I put in the path to where I downloaded the docker file from the main github page. Should this be the path to where the container is saved or the path to where the dataset is saved?

    Thanks!


    opened by celenaponce 3
  • about joint detector


    Why not use the OpenPose or AlphaPose joint detectors? Is the current joint detector, HRNet, slower than AlphaPose and OpenPose?

    opened by HW140701 2
  • Accuracy of V2 paper


    I'm sorry to ask the question below. The V1 result cited in the V2 paper is 97.51%. The result of V2 is 98.00%, which reaches 98.53% after adding extra data. However, the results reported in the V1 paper already reached 98.53%. So there are two possibilities: either the improvements mentioned in V2 have no effect at all, or the V1 results already used the techniques described in V2. Could you please help me find where the all_modalities_cc 98.53% appears in the V2 paper?

    opened by LiangSiyv 2
  • Conv3D for WLASL


    Hello thank you so much for your code.

    I am trying to train Conv3D on the WLASL dataset, but the accuracy is too low. Do you have any idea why this is happening?

    Thank you in advance.

    opened by EvgeniaChroni 6