CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability

Overview

This is the official repository of the paper:

CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability


A private copy of the paper is available under CR-FIQA


CR-FIQA training

  1. In the paper, we employ MS1MV2 as the training dataset for CR-FIQA(L), which can be downloaded from InsightFace (MS1M-ArcFace in DataZoo)
    1. Download the MS1MV2 dataset from InsightFace and strictly follow its license terms
  2. We use CASIA-WebFace as the training dataset for CR-FIQA(S), which can be downloaded from InsightFace (CASIA in DataZoo)
    1. Download the CASIA-WebFace dataset from InsightFace and strictly follow its license terms
  3. Unzip the dataset and place it in the data folder
  4. Install the requirements from requirements.txt: pip install -r requirements.txt
  5. All code is trained and tested with PyTorch 1.7.1; installation details are available at [PyTorch](https://pytorch.org/get-started/locally/). A minimal setup sketch follows this list.
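
A minimal setup sketch (run from the repository root; the unzipped folder names depend on the InsightFace download and are assumptions here):

    # install the Python dependencies (PyTorch 1.7.1 is expected)
    pip install -r requirements.txt
    # place the unzipped training records under ./data, for example (assumed names):
    #   ./data/faces_emore/               MS1MV2, used for CR-FIQA(L)
    #   ./data/faces_webface_112x112/     CASIA-WebFace, used for CR-FIQA(S)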

CR-FIQA(L)

Set the following in config.py (a minimal excerpt follows the list):

  1. config.output to your output directory
  2. config.network = "iresnet100"
  3. config.dataset = "emoreIresNet"
  4. Run ./run.sh
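
A minimal config.py excerpt for CR-FIQA(L); only the fields named above are shown, and the output path is a hypothetical example:

    # config.py excerpt for CR-FIQA(L)
    config.output = "output/cr_fiqa_l"    # hypothetical output directory
    config.network = "iresnet100"         # ResNet-100 backbone
    config.dataset = "emoreIresNet"       # MS1MV2 training set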

CR-FIQA(S)

Set the following in config.py (a minimal excerpt follows the list):

  1. config.output to your output directory
  2. config.network = "iresnet50"
  3. config.dataset = "webface"
  4. Run ./run.sh
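
The corresponding config.py excerpt for CR-FIQA(S) (again, the output path is only an example):

    # config.py excerpt for CR-FIQA(S)
    config.output = "output/cr_fiqa_s"    # hypothetical output directory
    config.network = "iresnet50"          # ResNet-50 backbone
    config.dataset = "webface"            # CASIA-WebFace training set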

Pretrained model

CR-FIQA(L)

CR-FIQA(S)

Evaluation

Follow these steps to reproduce the results on XQLFW:

  1. Download XQLFW (please download xqlfw_aligned_112.zip)
  2. Unzip XQLFW (the folder structure should look like this: ./data/XQLFW/xqlfw_aligned_112/)
  3. Also download xqlfw_pairs.txt to ./data/XQLFW/xqlfw_pairs.txt
  4. Set (in feature_extraction/extract_xqlfw.py) path = "./data/XQLFW" to your XQLFW data folder and outpath = "./data/quality_data" to where you want to save the preprocessed data
  5. Run python extract_xqlfw.py (it creates the output folder, saves the images in BGR format, and creates image_path_list.txt and pair_list.txt)
  6. Run evaluation/getQualityScore.py to estimate the quality scores (a consolidated command sketch follows this list)
    1. CR-FIQA(L)
      1. Download the pretrained model
      2. Run: python3 evaluation/getQualityScorce.py --data_dir "./data/quality_data" --datasets "XQLFW" --model_path "path_to_pretrained_CR_FIQA(L)_model" --backbone "iresnet100" --model_id "181952" --score_file_name "CRFIQAL.txt"
    2. CR-FIQA(S)
      1. Download the pretrained model
      2. Run: python3 evaluation/getQualityScorce.py --data_dir "./data/quality_data" --datasets "XQLFW" --model_path "path_to_pretrained_CR_FIQA(S)_model" --backbone "iresnet50" --model_id "32572" --score_file_name "CRFIQAS.txt"
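
A consolidated sketch of the XQLFW scoring workflow with CR-FIQA(L) (the preprocessing script lives in feature_extraction/ as noted in step 4, the commands assume the repository root as working directory, and the model path is a placeholder for the downloaded checkpoint):

    # preprocess XQLFW into ./data/quality_data/XQLFW
    python feature_extraction/extract_xqlfw.py
    # estimate per-image quality scores with CR-FIQA(L)
    python3 evaluation/getQualityScorce.py --data_dir "./data/quality_data" --datasets "XQLFW" \
        --model_path "path_to_pretrained_CR_FIQA(L)_model" --backbone "iresnet100" \
        --model_id "181952" --score_file_name "CRFIQAL.txt"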

The quality scores for LFW, AgeDB-30, CFP-FP, CALFW, and CPLFW can be produced by following these steps:

  1. LFW, AgeDB-30, CFP-FP, CALFW, and CPLFW are included (as .bin files) in the training dataset folder downloaded from InsightFace
  2. Set (in extract_bin.py) path = "/data/faces_emore/lfw.bin" to your LFW bin file and outpath = "./data/quality_data" to where you want to save the preprocessed data (a subfolder will be created)
  3. Run python extract_bin.py (it creates the output folder, saves the images in BGR format, and creates image_path_list.txt and pair_list.txt)
  4. Run evaluation/getQualityScore.py to estimate the quality scores (a consolidated command sketch follows this list)
    1. CR-FIQA(L)
      1. Download the pretrained model
      2. Run: python3 evaluation/getQualityScorce.py --data_dir "./data/quality_data" --datasets "XQLFW" --model_path "path_to_pretrained_CR_FIQA(L)_model" --backbone "iresnet100" --model_id "181952" --score_file_name "CRFIQAL.txt"
    2. CR-FIQA(S)
      1. Download the pretrained model
      2. Run: python3 evaluation/getQualityScorce.py --data_dir "./data/quality_data" --datasets "XQLFW" --model_path "path_to_pretrained_CR_FIQA(S)_model" --backbone "iresnet50" --model_id "32572" --score_file_name "CRFIQAS.txt"
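
The same sketch for one of the bin-based benchmarks, here LFW (the --datasets value is assumed to match the subfolder that extract_bin.py creates under ./data/quality_data; adjust it, the --score_file_name, and the placeholder model path for the dataset and model you evaluate):

    # preprocess lfw.bin into ./data/quality_data/LFW (paths are set inside extract_bin.py)
    python extract_bin.py
    # estimate per-image quality scores with CR-FIQA(L)
    python3 evaluation/getQualityScorce.py --data_dir "./data/quality_data" --datasets "LFW" \
        --model_path "path_to_pretrained_CR_FIQA(L)_model" --backbone "iresnet100" \
        --model_id "181952" --score_file_name "CRFIQAL.txt"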

Plotting ERC curves

  1. Download a pretrained face recognition model, e.g. ElasticFace-Arc, MagFace, CurricularFace or ArcFace
  2. Run CUDA_VISIBLE_DEVICES=0 python feature_extraction/extract_emb.py --model_path ./pretrained/ElasticFace --model_id 295672 --dataset_path "./data/quality_data/XQLFW" --modelname "ElasticFaceModel"
    1. Note: change the path to the pretrained model and the other arguments according to the evaluated model
  3. Run python3 ERC/erc.py (details in ERC/README.md; a combined sketch follows this list)
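
A sketch of the full ERC pipeline for one recognition model (the ElasticFace arguments mirror step 2; other models need their own --model_path, --model_id and --modelname):

    # extract embeddings for the evaluated recognition model
    CUDA_VISIBLE_DEVICES=0 python feature_extraction/extract_emb.py --model_path ./pretrained/ElasticFace \
        --model_id 295672 --dataset_path "./data/quality_data/XQLFW" --modelname "ElasticFaceModel"
    # plot the ERC curves from the extracted embeddings and the quality score files
    python3 ERC/erc.py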

Citation

If you use any of the code provided in this repository or the models provided, please cite the following paper:

@misc{fboutros_CR_FIQA,
      title={CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability}, 
      author={Fadi Boutros and Meiling Fang and Marcel Klemt and Biying Fu and Naser Damer},
      year={2021},
      eprint={},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

License

This project is licensed under the terms of the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Copyright (c) 2021 Fraunhofer Institute for Computer Graphics Research IGD Darmstadt


Comments
  • Question about the FIQA process

    Hi, thanks for your amazing work! I have a question about the FIQA part. I am confused about the meaning of "pair" in XQLFW. Also, why should I save the images in BGR format (by running extract_xqlfw.py) before getting the quality scores?

    If I have some images in a folder, can I compute their scores directly, without saving them in BGR format?

    opened by abcsimple 2
  • The performance of face recognition

    In the provided code, loss_qs is added directly to the main CE loss and backpropagated. Since the label (ccs/nnccs) is not stable, I wonder whether this regression task affects the final performance of the backbone / face recognition?

    Thanks in advance!

    opened by Noobzm 2
  • Inference problem

    Nice work! I am wondering why dropout=0.4 is used in training but not in inference. In QualityModel._get_model: backbone = iresnet50(num_features=512, qs=1, use_se=False).to(f"cuda:{ctx}")

    opened by carry-xz 1
  • Range of values qs should take

    Hi,

    Thanks for sharing your work. I wonder if there is an inherent range of values that the quality score can take; I see values > 2, which looks outside the expected range of [0, 1] in the literature.

    From the CR equation, for CCS [-1, 1] and NNCCS [-0.9, 1], CR looks to have a minimum value just below 0 and its maximum value lies at around 10.

    Would you please elaborate on that?

    Thanks

    opened by MOHAMEDELDAKDOUKY 0