An Efficient Training Approach for Very Large Scale Face Recognition (F²C for short)

Overview

Fast Face Classification (F²C)

This is the code for our paper An Efficient Training Approach for Very Large Scale Face Recognition, or F²C for short.

Training on ultra-large-scale datasets is time-consuming and takes up a lot of hardware resources. We therefore design dual data loaders and a dynamic class pool to handle large-scale face classification.
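As an illustration only and not the authors' implementation, the dual-loader idea can be sketched as two PyTorch DataLoaders iterated side by side: an ordinary instance loader driving the main loop, and an identity loader whose batches refresh the dynamic class pool. Every dataset and variable name below is a hypothetical stand-in.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Tiny random tensors stand in for the LMDB-backed face crops used by this repo.
instance_ds = TensorDataset(torch.randn(512, 3, 8, 8), torch.randint(0, 100, (512,)))
identity_ds = TensorDataset(torch.randn(128, 3, 8, 8),
                            torch.randn(128, 3, 8, 8),
                            torch.arange(128))

instance_loader = DataLoader(instance_ds, batch_size=64, shuffle=True)
id_loader = DataLoader(identity_ds, batch_size=16, shuffle=True)

id_iter = iter(id_loader)
for images, labels in instance_loader:                # ordinary instance batches
    try:
        images1, images2, id_indexes = next(id_iter)  # identity batch for the class pool
    except StopIteration:
        id_iter = iter(id_loader)                     # the identity loader may be shorter; restart it
        images1, images2, id_indexes = next(id_iter)
    # A real training step would refresh the dynamic class pool from (images1, images2)
    # and classify `images` against the pooled prototypes.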

Pipeline

(Architecture figure of the F²C pipeline.)

Preparation

FFC contains an LRU module, so you can either use lru_python_impl.py or compile the code under the lru_c directory.

If you choose lru_python_impl.py, rename it to lru_utils.py. Since the LRU is not the bottleneck of the training procedure, feel free to use the Python implementation, even though the C++ implementation is 5~10 times faster.
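For context, a minimal LRU cache along the lines of what lru_utils is used for can be sketched with collections.OrderedDict; the class name and methods below are illustrative and are not the repo's actual interface.

from collections import OrderedDict

class SimpleLRU:
    """Illustrative LRU cache: evicts the least-recently-used key when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key, default=None):
        if key not in self.cache:
            return default
        self.cache.move_to_end(key)          # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used entry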

Compile LRU (optional)

Commands to build the LRU extension:

cd lru_c
mkdir build
cd build
cmake ..
make
cd ../../ && ln -s lru_c/build/lru_utils.so .

You can compare the two implementations using lru_c/python/compare_time.py.

Database

Training

In main.py, you should provide the paths to your training database at lines 152-153:

args.source_lmdb = ['/path to msceleb.lmdb']
args.source_file = ['/path to kv file']

We use LMDB as the format of our training database. Each element in source_file is the path to a text file, and each line of that file is an lmdb_key label pair. You may refer to LFS for more details.
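As an illustration of the expected layout (all key names below are made up), a source_file might map one LMDB key to one integer identity label per line; the snippet then shows a generic way to fetch a record with the lmdb Python package, which may differ from how this repo actually decodes its records.

# Hypothetical source_file contents (one "lmdb_key label" pair per line):
#   id0001/img_000.jpg 0
#   id0001/img_001.jpg 0
#   id0002/img_000.jpg 1

import lmdb

env = lmdb.open('/path to msceleb.lmdb', readonly=True, lock=False)  # point this at your LMDB
with env.begin() as txn:
    raw = txn.get(b'id0001/img_000.jpg')  # raw bytes stored under this key (assumed to be an encoded JPEG)
    # Decoding into an image depends on how the LMDB was built, e.g. cv2.imdecode on the bytes.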

Now you can modify train_ffc.sh. Before running the training, set the port number and queue_size. queue_size is a trade-off term between performance and speed: a larger queue_size gives higher recognition performance at the cost of training time and GPU resources. It can be any positive integer; a common setting is 1%, 0.1%, or 0.001% of the total number of identities.
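As a quick worked example (the identity count below is made up), those ratios translate into concrete queue sizes like this:

num_identities = 1_000_000                # hypothetical size of your training label set
for ratio in (0.01, 0.001, 0.00001):      # 1%, 0.1%, 0.001% as mentioned above
    print(f"{ratio:.3%} of identities -> queue_size = {round(num_identities * ratio)}")
# prints: 1.000% -> 10000, 0.100% -> 1000, 0.001% -> 10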

Notice

The difference between r50 and ir50 is that r50 takes 224 × 224 images as input, while ir50 takes 112 × 112 images, as in ArcFace. The ir50 network comes from ArcFace.
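A minimal preprocessing sketch for the two input sizes, assuming standard torchvision transforms; the normalization values are common face-recognition defaults and are not necessarily what this repo uses.

from torchvision import transforms

def build_transform(backbone='ir50'):
    size = 112 if backbone == 'ir50' else 224  # ir50: 112x112 (ArcFace-style); r50: 224x224
    return transforms.Compose([
        transforms.Resize((size, size)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])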

Evaluation

We provide the full set of test scripts under the evaluation_code directory. Each script requires the path to the image directory and the test-pair files.
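For reference, verification on pair files generally boils down to comparing cosine similarities of embeddings against a threshold; the sketch below is a generic illustration with made-up names, not the interface of the scripts in evaluation_code.

import numpy as np

def verification_accuracy(emb1, emb2, is_same, threshold=0.3):
    """emb1, emb2: (N, D) L2-normalized embeddings of the two images in each pair;
    is_same: (N,) boolean ground-truth labels; the threshold is normally tuned on held-out folds."""
    cos_sim = np.sum(emb1 * emb2, axis=1)  # cosine similarity of normalized vectors
    pred = cos_sim > threshold
    return float(np.mean(pred == is_same))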

Tips

The code in evaluation_code/test_megaface.py is much faster than the official version and also scales to extremely large test sets.


Comments
  • lmdb dataSets

    Hello, I built a small LMDB dataset following the recommended procedure in order to test the pipeline, but I ran into an error. Could you help me check (1) whether my dataset was generated correctly (could you share the details of a dataset you generated), and (2) how to fix the error below?

    ① Dataset details: each line of source_file.txt has the format img_name.jpg label. The LMDB was generated with --shuffle=true --backend=lmdb --resize_width=128 --resize_height=128 --encoded=true --encode_type=jpg.

    ② The error:
    File "main.py", line 51, in train_one_epoch
      images1, images2, id_indexes = next(id_iter)
    File "FFC_py36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
      data = self._next_data()
    File "FFC_py36/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 964, in _next_data
      raise StopIteration
    StopIteration

    opened by MSZLean 2
  • 'FFC' object has no attribute '_momentum_update_gallery'

    Hi, when I load the data and run the FFC project, I get the error below. Have you encountered anything similar?

    File "FFC-master/ffc_ddp.py", line 182, in forward_impl_rollback
      self._momentum_update_gallery()  # update the gallery net
    File "FFC-py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
      type(self).__name__, name))
    torch.nn.modules.module.ModuleAttributeError: 'FFC' object has no attribute '_momentum_update_gallery'

    opened by MSZLean 2