LIBRAS-Image-Classifier
This project demonstrates the use of neural networks and computer vision to build a classifier for Brazilian Sign Language (LIBRAS). At the moment, the project interprets only the first six letters of the alphabet. A Convolutional Neural Network was trained and tested so that, through the webcam, the sign made by the user's hand can be identified.
Data collection
To train the neural network, grayscale image captures of each hand sign were taken with the webcam and stored in the /data folder. There are 1200 images in total, 200 per sign.
Technologies used
- Keras
- Tensorflow
- OpenCV
- Python 3.8
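With the stack above, the CNN mentioned earlier could look something like the following Keras sketch. The layer sizes, the 64x64 input resolution, and the training settings are assumptions for illustration only; the project's actual architecture may differ.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(64, 64, 1), num_classes=6):
    """Small CNN for 6-class grayscale sign classification (illustrative)."""
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        # two conv/pool stages extract hand-shape features
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        # dense head maps features to the six letter classes
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

At inference time, each webcam frame would be converted to grayscale, resized to the input shape, and passed through `model.predict` to get a probability over the six letters.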