FaceAnalyzer

A Python library for face detection and feature extraction based on the mediapipe library

Introduction

FaceAnalyzer is a library built on top of the mediapipe library and provided under the MIT License. It provides an object-oriented tool to play around with faces. It can be used to:

  1. Extract faces from an image (useful for face learning applications)
  2. Measure the face position and orientation
  3. Measure eye openings
  4. Detect blinks
  5. Compute the face triangulation (builds triangular surfaces that can be used to build 3D models of the face)
  6. Copy a face from one image to another.

Requirements

This library requires:

  1. mediapipe (used for facial landmark extraction)
  2. opencv (used for drawing and image morphing)
  3. scipy (used for efficient Delaunay triangulation)
  4. numpy (as with anything that does math)

How to install

Just install it from PyPI.

pip install FaceAnalyzer

Make sure you upgrade the library from time to time, as I am adding new features frequently these days.

pip install FaceAnalyzer --upgrade

How to use

# Import the two main classes FaceAnalyzer and Face
from FaceAnalyzer import FaceAnalyzer, Face

fa = FaceAnalyzer()
# ... Recover an image in RGB format as a numpy array (you can use pillow or opencv,
# but if you use opencv make sure you convert the color space from BGR to RGB)
# Now process the image
fa.process(image)

# The detected faces are now in fa.faces, a list of Face instances
if fa.nb_faces > 0:
    print(f"{fa.nb_faces} faces found")
    # The landmarks are available as an Nx3 numpy array, where N is the number
    # of landmarks and the columns are the x, y, z coordinates
    print(fa.faces[0].npLandmarks)
    # Get head position and orientation relative to the reference pose
    # (here the first frame defines the 0,0,0 orientation)
    pos, ori = fa.faces[0].get_head_posture(orientation_style=1)
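
For instance, a minimal sketch of loading an image file with OpenCV and handling the BGR-to-RGB conversion mentioned in the comments above (the file name is an illustrative assumption):

import cv2
from FaceAnalyzer import FaceAnalyzer

fa = FaceAnalyzer()
bgr = cv2.imread("portrait.jpg")              # hypothetical file name
image = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # mediapipe expects RGB, OpenCV loads BGR
fa.process(image)
if fa.nb_faces > 0:
    print(fa.faces[0].npLandmarks.shape)      # (N, 3): x, y, z per landmark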

Make sure you look at the examples folder in the repository for more details.

Structure

The library is structured as follows:

  • Helpers : A module containing helper functions, such as geometric transformations between rotation formats or generation of the camera matrix.
  • FaceAnalyzer : A module to process images and extract faces.
  • Face : The main module that represents a face. It allows multiple operations, such as copying a face onto another one, or estimating eye opening and head position/orientation in space.

Examples

face_mesh :

A simple example of how to use the webcam to capture video and process each frame to extract faces and draw the face landmarks on them.
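
A minimal sketch of such a loop (assuming npLandmarks holds pixel coordinates; the repository example is the reference):

import cv2
from FaceAnalyzer import FaceAnalyzer

fa = FaceAnalyzer()
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # mediapipe expects RGB while OpenCV delivers BGR
    fa.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if fa.nb_faces > 0:
        for face in fa.faces:
            for x, y, z in face.npLandmarks:
                cv2.circle(frame, (int(x), int(y)), 1, (0, 255, 0), -1)
    cv2.imshow("face_mesh", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()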

from_image :

A simple example of how to extract faces from an image file.

eye_process :

An example of how to extract faces from a video stream (using the webcam), then process the eyes to return eye openings and detect blinks.
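
The exact eye-opening API is shown in the repository example; independently of it, a common way to turn a stream of opening values into blink events is a hysteresis threshold. This is a generic sketch, not the library's actual blink detector, and the threshold values are illustrative assumptions:

# Hypothetical illustration: turn a sequence of eye-opening values into blink events.
CLOSE_T = 0.2  # below this the eye counts as closed (assumed value)
OPEN_T = 0.35  # above this the eye counts as open again (assumed value)

def detect_blinks(openings):
    """Yield the index at which each blink completes (a close followed by a reopen)."""
    closed = False
    for i, opening in enumerate(openings):
        if not closed and opening < CLOSE_T:
            closed = True   # eye just closed
        elif closed and opening > OPEN_T:
            closed = False  # eye reopened: one full blink
            yield i

# Example: list(detect_blinks([0.4, 0.3, 0.1, 0.1, 0.4, 0.5])) returns [4]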

face_off :

An example of how to use the webcam to switch faces between two people.
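
Face copying builds on the triangulation feature mentioned in the introduction. Independently of the library's actual API, the standard OpenCV recipe warps each source triangle onto the matching destination triangle; a generic sketch, not the library's own code:

import cv2
import numpy as np

def warp_triangle(src, dst, tri_src, tri_dst):
    """Warp one triangular patch of src onto dst (triangles given as 3x2 float arrays)."""
    r1 = cv2.boundingRect(np.float32([tri_src]))
    r2 = cv2.boundingRect(np.float32([tri_dst]))
    # Express triangle coordinates relative to their bounding boxes
    t1 = np.float32(tri_src - r1[:2])
    t2 = np.float32(tri_dst - r2[:2])
    patch = src[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    m = cv2.getAffineTransform(t1, t2)
    warped = cv2.warpAffine(patch, m, (r2[2], r2[3]),
                            flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101)
    # Keep only the pixels inside the destination triangle
    mask = np.zeros((r2[3], r2[2], 3), dtype=np.float32)
    cv2.fillConvexPoly(mask, np.int32(t2), (1.0, 1.0, 1.0), cv2.LINE_AA)
    roi = dst[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    dst[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = roi * (1 - mask) + warped * mask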

face_mask :

An example of how to use the webcam to put a mask on a face.

Comments
  • Any example regarding how to "Get the 2D gaze position on a predefined 3D plane(s) allowing to understand what the user is looking at"?

    Thank you very much for sharing your valuable effort. As stated in the title of the issue, it would be really great if you could provide more information and an example regarding how to "Get the 2D gaze position on a predefined 3D plane(s) allowing to understand what the user is looking at".

    opened by ianni67 17
  • Cannot run examples

    Hello @ParisNeo, thank you for sharing your valuable software. I cloned the repo and ran:

    python setup.py build
    python setup.py install

    then tried to run the face_mask example:

    cd /examples/OpenCV/face_mask
    python face_mask.py

    but the program exited immediately:

    $ python face_mask.py
    /home/ianni/.virtualenvs/MP/lib/python3.8/site-packages/FaceAnalyzer-0.1.11-py3.8.egg/FaceAnalyzer/Face.py:1507: SyntaxWarning: assertion is always true, perhaps remove parentheses?
    INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
    $

    Edit: I also tried creating a virtualenv and installing FaceAnalyzer from scratch with pip, according to the instructions:

    pip install FaceAnalyzer

    and then ran, again:

    cd /examples/OpenCV/face_mask
    python face_mask.py
    

    and again, it did not work:

    $ python face_mask.py 
    INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
    $
    

    What can I do?

    opened by ianni67 6
  • How are the face_3d_reference_positions calculated in Face.py?

    Hi, I have been using your FaceAnalyzer library and it has been super useful for estimating the head pose and blinking parameters. Now I am working on tracking the head position with respect to the camera in cm and mapping the corresponding input to my screen object.

    I have noticed that you have used:

    face_3d_reference_positions = np.array([
        [0,0,0],       # Nose tip
        #[-80,50,-90], # Left
        #[0,-70,-30],  # Chin
        #[80,50,-90],  # Right
        [-70,50,-70],  # Left left eye
        [70,50,-70],   # Right right eye
        [0,80,-30]     # forehead center
    ])

    I have a couple of doubts:

    1. I am curious how you were able to get these values.
    2. I am trying to make sense of the position coordinates returned by get_head_posture(). Currently I am getting [[292.50133269], [-212.73998924], [580.38242832]] as x, y, z even when I am at the centre of the camera. I was expecting something like [0, 0, 580], or am I missing any steps? P.S. I did make a few changes to make this library compatible with a Logitech C922 webcam at 1920x1080 resolution, since the current version doesn't properly support other webcam resolutions. I'll make the changes and raise a pull request soon. :smile:
    opened by sambhavjain98 6
  • How is the eye opening calculated?

    In Face.py

    ud = left_eye_upper - left_eye_lower    # vector from the lower to the upper eyelid
    ex = left_eye_upper1 - left_eye_upper0  # vector along the eye axis
    ex /= np.linalg.norm(ex)                # normalized to unit length
    ey = np.cross(np.array([0,0,1]), ex)    # perpendicular to the eye axis in the image plane
    left_eye_opening = np.dot(ud, ey)       # projection of the eyelid vector onto that perpendicular
    

    May I know what is the meaning of the above code?

    opened by desti-nation 1