# Emotion AI

This is a Deep Learning API for classifying emotions from human faces and human audio.
### Starting the server

To start the server, first you need to install all the packages used by running the following command:

```shell
pip install -r requirements.txt
# make sure your current directory is "server"
```
After that you can start the server by running the following commands:

- Change the directory from `server` to `api`:

```shell
cd api
```

- Run the `app.py`:

```shell
python app.py
```
The server will start at a default `PORT` of `3001`, which you can configure in `api/app.py` in the `AppConfig` class:

```python
class AppConfig:
    PORT = 3001
    DEBUG = False
```
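For context, here is a minimal sketch of how `app.py` might consume this configuration, assuming the server is a Flask application; the model loading and route definitions in the real file are omitted here:

```python
# api/app.py — illustrative sketch only; assumes a Flask application
from flask import Flask


class AppConfig:
    PORT = 3001
    DEBUG = False


app = Flask(__name__)

# ... model loading and the /api/classify/* routes would be registered here ...

if __name__ == "__main__":
    # Start the development server on the configured port
    app.run(debug=AppConfig.DEBUG, port=AppConfig.PORT)
```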
If everything went well you will be able to make API requests to the server.
### EmotionAI

Consists of two parallel models that are trained with different model architectures to solve different tasks. One is for audio classification and the other is for facial emotion classification. Each model is served on a different endpoint, but on the same server.
### Audio Classification

Sending an audio file to the server at `http://127.0.0.1:3001/api/classify/audio` using the `POST` method, we will get a `json` response from the server that looks as follows:

```json
{
  "predictions": {
    "emotion": { "class": "sad", "label": 3, "probability": 0.22 },
    "emotion_intensity": { "class": "normal", "label": 0, "probability": 0.85 },
    "gender": { "class": "male", "label": 0, "probability": 1.0 }
  },
  "success": true
}
```
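If you want to test this from Python, a small sketch using the third-party `requests` library should produce the same response; the `requests` dependency and the `path/to/audio.wav` placeholder are assumptions, while the `audio` form key matches the examples below:

```python
# Illustrative client sketch using the `requests` library (pip install requests)
import requests

URL = "http://127.0.0.1:3001/api/classify/audio"

# Open the audio file in binary mode and send it under the "audio" form key
with open("path/to/audio.wav", "rb") as f:
    res = requests.post(URL, files={"audio": f})

data = res.json()
print(data["success"])
print(data["predictions"]["emotion"])  # e.g. {"class": "sad", "label": 3, "probability": 0.22}
```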
### Classifying audios

- Using `cURL`

To classify the audio using `cURL`, make sure that you open the command prompt in the folder where the audio files are located. For example, in my case the audios are located in the `audios` folder, so I open the command prompt in the `audios` folder; otherwise I provide the absolute path to the file when making a `cURL` request. For example:

```shell
curl -X POST -F "audio=@path/to/audio.wav" http://127.0.0.1:3001/api/classify/audio
```
If everything went well we will get the following response from the server:

```json
{
  "predictions": {
    "emotion": { "class": "sad", "label": 3, "probability": 0.22 },
    "emotion_intensity": { "class": "normal", "label": 0, "probability": 0.85 },
    "gender": { "class": "male", "label": 0, "probability": 1.0 }
  },
  "success": true
}
```
- Using the Postman client

To make this request with Postman we do it as follows:

- Change the request method to `POST` at `http://127.0.0.1:3001/api/classify/audio`
- Click on `form-data`
- Select the type to be `File` on the `KEY` attribute
- For the `KEY`, type `audio` and select the audio you want to predict under `VALUE`
- Click `Send`
- If everything went well you will get the following response, depending on the audio you have selected:
```json
{
  "predictions": {
    "emotion": { "class": "sad", "label": 3, "probability": 0.22 },
    "emotion_intensity": { "class": "normal", "label": 0, "probability": 0.85 },
    "gender": { "class": "male", "label": 0, "probability": 1.0 }
  },
  "success": true
}
```
- Using the JavaScript `fetch` API:

  - First you need to get the input from the `html`
  - Create a `formData` object
  - Make a `POST` request

```js
// Read the selected audio file from a file input element
const input = document.getElementById("input").files[0];

// Build a multipart form body with the file under the "audio" key
let formData = new FormData();
formData.append("audio", input);

// Send the request and log the JSON response
fetch("http://127.0.0.1:3001/api/classify/audio", {
  method: "POST",
  body: formData,
})
  .then((res) => res.json())
  .then((data) => console.log(data));
```

If everything went well you will get the expected response:
```json
{
  "predictions": {
    "emotion": { "class": "sad", "label": 3, "probability": 0.22 },
    "emotion_intensity": { "class": "normal", "label": 0, "probability": 0.85 },
    "gender": { "class": "male", "label": 0, "probability": 1.0 }
  },
  "success": true
}
```
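Whichever client you use, the response has the same shape, so a small helper like the following sketch (the `summarize` name is my own) can pull out the predicted classes and probabilities:

```python
# Hypothetical helper that summarizes a response dictionary of the shape shown above
def summarize(response: dict) -> None:
    if not response.get("success"):
        print("classification failed")
        return
    for task, pred in response["predictions"].items():
        # Each prediction carries a class name, an integer label and a probability
        print(f"{task}: {pred['class']} (label={pred['label']}, p={pred['probability']:.2f})")


summarize({
    "predictions": {
        "emotion": {"class": "sad", "label": 3, "probability": 0.22},
        "emotion_intensity": {"class": "normal", "label": 0, "probability": 0.85},
        "gender": {"class": "male", "label": 0, "probability": 1.0},
    },
    "success": True,
})
```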
### Notebooks

If you want to see how the models were trained you can open the respective notebooks: