Unified learning approach for egocentric hand gesture recognition and fingertip detection

Overview

Unified Gesture Recognition and Fingertip Detection

A unified convolutional neural network (CNN) algorithm for simultaneous hand gesture recognition and fingertip detection. The proposed algorithm uses a single network to predict both the finger class probabilities (classification) and the fingertip positions (regression) in one evaluation. The gesture is recognized from the finger class probabilities, and the two outputs are combined to localize the fingertips. Instead of directly regressing the fingertip positions from the fully connected (FC) layer of the CNN, we regress an ensemble of fingertip positions from a fully convolutional network (FCN) and then take the ensemble average to produce the final positional output.
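The ensemble step can be sketched as follows (a minimal illustration with assumed shapes; the actual grid size and tensor layout come from the FCN described in the paper):

    import numpy as np

    # Assumed shapes for illustration: the FCN emits one candidate (x, y)
    # per fingertip per spatial cell of its output grid.
    num_fingers, grid = 5, 10
    ensemble = np.random.rand(grid * grid, num_fingers, 2)  # candidate positions
    prob = np.array([0.9, 0.8, 0.1, 0.1, 0.2])              # finger class probabilities

    # Ensemble average over all candidates -> final positional output.
    positions = ensemble.mean(axis=0)                       # shape: (num_fingers, 2)

    # Only fingertips whose class probability is high are localized.
    visible = positions[prob > 0.5]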

Update

Robust real-time hand detection using YOLO has been added to the first stage of the detection system for smoother performance, and most of the code has been cleaned and restructured for ease of use. To get the previous versions, please visit the release section.


Requirements

  • TensorFlow-GPU==2.2.0
  • OpenCV==4.2.0
  • ImgAug==0.2.6
  • Weights: Download the pre-trained weight files of the unified gesture recognition and fingertip detection model and put the weights folder in the working directory.


The weights folder contains three weight files: fingertip.h5 for unified gesture recognition and fingertip detection, and yolo.h5 and solo.h5 for the YOLO and SOLO hand detection methods, respectively (both detectors are described in the Real-Time section below).
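Since several of the reported issues below stem from missing weight files, a quick sanity check like the following (a minimal sketch based on the file names above) can save a debugging session:

    from pathlib import Path

    # Expected weight files, per the description above.
    expected = ["fingertip.h5", "yolo.h5", "solo.h5"]
    missing = [f for f in expected if not (Path("weights") / f).exists()]
    if missing:
        raise FileNotFoundError(f"Missing files in weights/: {missing}")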

Paper


To get more information about the proposed method and experiments, please go through the paper. Cite the paper as:

@article{alam2021unified,
  title={Unified learning approach for egocentric hand gesture recognition and fingertip detection},
  author={Alam, Mohammad Mahmudul and Islam, Mohammad Tariqul and Rahman, SM Mahbubur},
  journal={Pattern Recognition},
  volume={121},
  pages={108200},
  year={2021},
  publisher={Elsevier}
}

Dataset

The proposed gesture recognition and fingertip detection model is trained on the SCUT-Ego-Gesture dataset, which comprises eleven different single-hand gesture datasets. Among the eleven gesture datasets, eight are used for experimentation. A detailed explanation of the dataset partition, along with the lists of images used in the training, validation, and test sets, is provided in the dataset/ folder.

Network Architecture

To implement the algorithm, the following network architecture is proposed, in which a single CNN is used for both hand gesture recognition and fingertip detection.
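A minimal tf.keras sketch of the two-headed idea is given below; the backbone depth, layer sizes, and input resolution are placeholders, not the exact published architecture:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    inputs = tf.keras.Input(shape=(128, 128, 3))  # assumed input size

    # Shared convolutional backbone (illustrative depth only).
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
    x = layers.MaxPooling2D()(x)

    # Classification head: per-finger class probabilities from FC layers.
    f = layers.Flatten()(x)
    f = layers.Dense(128, activation='relu')(f)
    probability = layers.Dense(5, activation='sigmoid', name='probability')(f)

    # Regression head: a fully convolutional branch emits an ensemble of
    # fingertip positions, averaged spatially to give the final output.
    p = layers.Conv2D(10, 1)(x)                             # (x, y) for 5 fingers
    position = layers.GlobalAveragePooling2D(name='position')(p)

    model = Model(inputs=inputs, outputs=[probability, position])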

Prediction

To get the prediction on a single image, run the predict.py file. It will run the prediction on the sample image stored in the data/ folder. Here is the output for the sample.jpg image.
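Roughly, the prediction flow looks like this (a hedged sketch: the preprocessing, input size, and 0.5 threshold are assumptions, and model stands in for the network loaded from fingertip.h5):

    import cv2
    import numpy as np

    image = cv2.imread('data/sample.jpg')
    batch = cv2.resize(image, (128, 128)).astype(np.float32) / 255.0
    batch = np.expand_dims(batch, axis=0)

    prob, pos = model.predict(batch)            # two outputs in one evaluation
    prob, pos = prob[0], pos[0].reshape(-1, 2)  # 5 probabilities, 5 (x, y) pairs

    for finger, (p, (x, y)) in enumerate(zip(prob, pos)):
        if p > 0.5:                             # fingertip of this finger is present
            print(f'finger {finger}: ({x:.2f}, {y:.2f})')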

Real-Time!

To run in real time, simply clone the repository, download the weight files, and then run the real-time.py file.

directory > python real-time.py

Real-time execution has two stages. In the first stage, the hand is detected using either the You Only Look Once (YOLO) or the Single Object Localization (SOLO) algorithm; YOLO is used by default. The detected hand portion is then cropped and fed to the second stage for gesture recognition and fingertip detection.
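The two-stage loop can be summarized as follows (a minimal sketch; hand_detector and fingertip_model are stand-ins for the YOLO/SOLO detector and the unified network, and their APIs are assumptions):

    import cv2

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Stage 1: detect and crop the hand (YOLO by default, or SOLO).
        box = hand_detector.detect(frame)              # assumed API
        if box is not None:
            x1, y1, x2, y2 = box
            hand = frame[y1:y2, x1:x2]

            # Stage 2: gesture recognition and fingertip detection on the crop.
            prob, pos = fingertip_model.predict(hand)  # assumed API

        cv2.imshow('Unified Gesture Recognition and Fingertip Detection', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()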

Output

Here is the output of the unified gesture recognition and fingertip detection model for all eight classes of the dataset, where not only is each fingertip detected but each finger is also classified.

Comments
  • Datasets

    Hello, I have a question about the dataset from your readme. I can't download the SCUT-Ego-Gesture dataset because the website is blocked in China. Can you share it with me another way, for example via Google or QQ email: [email protected]?

    opened by CVUsers 10
  • How to download the weights? The code does not contain them.

    The weights folder contains three weight files: comparison.h5 is for the first five classes, performance.h5 is for the first eight classes, and solo.h5 is for hand detection. But there is no download link.

    opened by mmxuan18 6
  • OSError: Unable to open file (unable to open file: name = 'yolo.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    I use macOS to run the real-time.py file and get the OSError. I also searched on Google and found others with the same problem. It is probably a Keras problem, but I do not know how to solve it.

    opened by Hanswanglin 4
  • OSError: Unable to open file (unable to open file: name = 'weights/performance.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper File "h5py/h5f.pyx", line 88, in h5py.h5f.open OSError: Unable to open file (unable to open file: name = 'weights/performance.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

    opened by Jasonmes 2
  • left hand?

    Hi, first it's really cool work!

    Is the left hand included in the training images? I have been playing around with some of my own images and it seems that it doesn't really recognize the left hand in a palm-down position...

    If I want to include the left hand, do you think it would be possible if I train the network with the image flipped?

    opened by myhjiang 1
  • Why are there two hand detection methods provided?

    A wonderful work! As mentioned above, the YOLO and SOLO detection models are provided. I wonder what the advantage of each model is compared to the other, and what dataset was used to train the detectors.

    opened by DanielMao2015 1
  • Difference between classes5.h5 and classes8.h5

    Hi, may I know the difference when training classes5 and classes8? Does the difference come from the dataset used for training, by excluding SingleSix, SingleSeven, and SingleEight, or are there other modifications such as changes to the model structure or parameters?

    Thanks

    opened by danieltanimanuel 1
  • Using old versions of TensorFlow: can't install the dependencies on my MacBook, and with newer versions it's constantly failing.

    When trying to install the required version of tensorflow:

    pip3 install tensorflow==1.15.0
    ERROR: Could not find a version that satisfies the requirement tensorflow==1.15.0 (from versions: 2.2.0rc3, 2.2.0rc4, 2.2.0, 2.2.1, 2.2.2, 2.3.0rc0, 2.3.0rc1, 2.3.0rc2, 2.3.0, 2.3.1, 2.3.2, 2.4.0rc0, 2.4.0rc1, 2.4.0rc2, 2.4.0rc3, 2.4.0rc4, 2.4.0, 2.4.1)
    ERROR: No matching distribution found for tensorflow==1.15.0
    

    I even tried downloading the .whl file from PyPI and installing it manually, but that didn't work either:

    pip3 install ~/Downloads/tensorflow-1.15.0-cp37-cp37m-macosx_10_11_x86_64.whl
    ERROR: tensorflow-1.15.0-cp37-cp37m-macosx_10_11_x86_64.whl is not a supported wheel on this platform.
    

    Tried with both Python 3.6 and Python 3.8.

    So it would be great to update the dependencies :)

    opened by KoStard 1
  • Custom Model keyword arguments Error

    Change model = Model(input=model.input, outputs=[probability, position]) to model = Model(inputs=model.input, outputs=[probability, position]) on line 22 of net/network.py

    opened by Rohit-Jain-2801 1
  • Problem of weights

    Hi, when loading solo.h5 (in solo.py line 14: "self.model.load_weights(weights)"), it reports an error: Process finished with exit code -1073741819 (0xC0000005). Environment: Keras 2.2.5 + TensorFlow 1.14.0 + CUDA 10.0.

    opened by MC-E 1