Hand Cricket

Table of Contents

  • Overview
  • Installation
  • Game rules
  • Project Details
  • Future Scope
  • License

Overview

This is a computer vision based implementation of the popular childhood game 'Hand Cricket/Odd or Even' in Python. Behind the game is a CNN model trained to identify the hand signs for the numbers 0, 1, 2, 3, 4, 5 & 6. For those who have never played this game, the rules are explained below.

The Game in action

hand-cricket.mov

Installation

  • You need Python (3.6) and git (to clone this repo)
  • git clone git@github.com:abhinavnayak11/Hand-Cricket.git : Clone this repo
  • cd path/to/Hand-Cricket : cd into the project folder
  • conda env create -f environment.yml : Create a virtual env with all the dependencies
  • conda activate comp-vision : activate the virtual env
  • python src/hand-cricket.py : Run the script

Game rules

Hand signs

  • You can play the numbers 0, 1, 2, 3, 4, 5 and 6. Their hand signs are shown here

Toss

  • You can choose either odd or even (say you choose odd).
  • Both players play a number (say the players play 3 & 6). Add those numbers (3 + 6 = 9).
  • Check whether the sum is odd or even (9 is odd).
  • If the result matches what you chose, you have won the toss; otherwise you have lost. (9 is odd and you chose odd, hence you win.)
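A minimal sketch of this toss logic, assuming both hand signs have already been recognized as integers (the function and variable names here are illustrative, not taken from the actual game code):

```python
def toss_winner(your_call, your_number, opponent_number):
    """Return True if you win the toss, based on the parity of the sum."""
    total = your_number + opponent_number
    parity = "odd" if total % 2 else "even"
    return parity == your_call

# Example from the rules above: you call odd, you play 3, the opponent plays 6.
print(toss_winner("odd", 3, 6))  # True, because 3 + 6 = 9 is odd
```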

The Game

  • The person who wins the toss is the batsman; the other player is the bowler. (In the next version of the game, the toss winner will be allowed to choose between batting and bowling.)
  • Scoring Runs (a short sketch of this logic follows the list):
    • Both players play a number.
    • The batsman's number is added to his score only when the two numbers are different.
    • 0 has a special power: if the batsman plays 0 and the bowler plays any number other than 0, the bowler's number is added to the batsman's score.
  • Getting out:
    • The batsman gets out when both players play the same number, even if both numbers are 0.
  • Winning/Losing:
    • After both players have finished their innings, the player scoring more runs wins the game.
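A minimal sketch of the scoring and dismissal rules above, assuming both numbers are already recognized (the names are illustrative, not taken from hand-cricket.py):

```python
def play_ball(batsman_number, bowler_number, score):
    """Apply one ball of hand cricket and return (new_score, is_out)."""
    if batsman_number == bowler_number:
        # Same number (including 0 vs 0) means the batsman is out.
        return score, True
    if batsman_number == 0:
        # Special power of 0: the bowler's number is credited to the batsman.
        return score + bowler_number, False
    return score + batsman_number, False

# Example innings: batsman plays 3, 0, 5 against the bowler's 6, 4, 5.
score, out = 0, False
for bat, bowl in [(3, 6), (0, 4), (5, 5)]:
    score, out = play_ball(bat, bowl, score)
    if out:
        break
print(score, out)  # 7 True -> 3 runs, then 4 runs from the 0 rule, then out
```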

Game code : hand-cricket.py


Project Details

  1. Data Collection :
    • After failing to find a suitable dataset, I created my own dataset using my phone camera.
    • The dataset contains a total of 1848 images. To ensure generality (i.e. to prevent overfitting to one type of hand in one type of environment), images were taken of 4 persons, in 6 different lighting conditions and 3 different backgrounds.
    • Samples of the images after augmentation are shown here : images
    • The data is uploaded at : github | kaggle. Data collection code : collect-data.py
  2. Data preprocessing :
    • A PyTorch Dataset was created to handle the preprocessing of the image dataset (code : dataset.py).
    • Images were augmented before training. The following augmentations were used : random rotation, random horizontal flip and normalization. All images were resized to 128x128. (A transform sketch follows this list.)
    • Images were divided into a training set and a validation set. The training set was used to train the model, whereas the validation set was used to check model performance.
  3. Model training :
    • Different pretrained models (resnet18, densenet121, etc., pre-trained on the ImageNet dataset) from the PyTorch (torchvision) library were trained on this dataset. All layers except the last 2 were frozen during training; this way the pre-trained model extracts useful features while the last 2 layers are fine-tuned to my dataset. (A fine-tuning sketch follows this list.)
    • The learning rate was chosen by trial and error and was different for each model.
    • Of all the models trained, densenet121 performed the best, with a validation accuracy of 0.994.
    • Training the model : train.py, engine.py, training-notebook
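A minimal sketch of the augmentation pipeline from step 2, using torchvision transforms. The rotation range and the normalization statistics (ImageNet values) are assumptions for illustration, not values taken from dataset.py:

```python
import torchvision.transforms as T

# Augmentations applied before training: random rotation, random horizontal flip,
# resize to 128x128 and normalization (ImageNet statistics assumed here).
train_transforms = T.Compose([
    T.Resize((128, 128)),
    T.RandomRotation(degrees=15),      # rotation range is an assumption
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```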
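And a sketch of the transfer-learning setup from step 3: densenet121 pre-trained on ImageNet, with the backbone frozen and a new 7-class head (one class per number 0 to 6). Here only the classifier head is left trainable, which approximates the "all but the last 2 layers frozen" approach described above; the optimizer and learning rate are illustrative, the actual values live in train.py:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load densenet121 pre-trained on ImageNet.
model = models.densenet121(pretrained=True)

# Freeze the backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier with a head for the 7 hand signs (0-6).
model.classifier = nn.Linear(model.classifier.in_features, 7)

# Only the unfrozen (classifier) parameters are passed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```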

Future Scope

  • Although this was built as a fun application, the dataset could also be used in applications like sign language recognition.


License: MIT

