Codes for Self-paced Deep Regression Forests with Consideration on Ranking Fairness

Overview

This is the official code for the paper Self-paced Deep Regression Forests with Consideration on Ranking Fairness. In this paper, we propose a new self-paced paradigm for deep discriminative models, which distinguishes noisy and underrepresented examples according to the output likelihood and entropy associated with each example, and we tackle the fundamental ranking problem in self-paced learning (SPL) from a new perspective: fairness.
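To give intuition for how likelihood and entropy could drive pace-wise selection, here is a minimal, runnable sketch; the scoring rule, the name pick_for_pace, and the gamma weighting are illustrative assumptions, not the paper's exact formulation.

import torch

# Illustrative selection rule (an assumption, not the paper's exact
# criterion): rank samples by output log-likelihood (easiness) plus a
# weighted predictive entropy (uncertainty), so that underrepresented,
# high-entropy samples can be selected at earlier paces instead of
# always being ranked last.
def pick_for_pace(log_likelihood, entropy, gamma=1.0, keep_ratio=0.5):
    score = log_likelihood + gamma * entropy   # gamma trades easiness against uncertainty
    k = int(keep_ratio * score.numel())
    return torch.topk(score, k).indices        # indices of samples kept at this pace

# Toy usage with stand-in per-sample statistics:
ll = torch.randn(100)       # per-sample log-likelihoods
ent = torch.rand(100)       # per-sample predictive entropies
selected = pick_for_pace(ll, ent, gamma=2.0)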

Why should we consider the fairness of self-paced learning?

We find that SPL focuses on easy samples at early paces, while underrepresented ones are almost always ranked at the end of the whole sequence. This phenomenon indicates that SPL has a potential ranking fairness issue. SPUDRFs, in contrast, consider sample uncertainty when ranking samples, so that underrepresented samples are also selected at early paces.

Tasks and Performances

Age Estimation on MORPH II Dataset

The gradual learning process of SP-DRFs and SPUDRFs. Left: the typical worst cases at each iteration. Right: the MAEs of SP-DRFs and SPUDRFs descend gradually, pace by pace. Compared with SP-DRFs, SPUDRFs show the benefit of taking predictive uncertainty into consideration.

Gaze Estimation on MPII Dataset

Similar phenomena can be observed on the MPII dataset.

Head Pose Estimation on BIWI Dataset

For visualization, we plot the leaf node distributions of SP-DRFs and SPUDRFs during the gradual learning process. The leaf node means of SP-DRFs gather in a small range, incurring seriously biased solutions, while those of SPUDRFs distribute widely, leading to much better MAE performance.

Fairness Evaluation

We use FAIR, the fairness metric proposed in our paper, to evaluate ranking fairness; see the paper for its formal definition.

The following table shows the FAIR of different methods on five datasets. SPUDRFs achieve the best FAIR on all of them (tied with DRFs on FG-NET).

Method    MORPH   FG-NET   BIWI   BU-3DFE   MPII
DRFs      0.46    0.42     0.46   0.74      0.67
SP-DRFs   0.44    0.37     0.43   0.72      0.67
SPUDRFs   0.48    0.42     0.70   0.76      0.69

How to train your SPUDRFs

Pre-trained models and Dataset

We use pre-trained models for our training. You can download VGGFace from here and VGG IMDB-WIKI from here. The datasets used in our experiments are listed in the following table. We use MTCNN to detect and align faces (a minimal alignment sketch follows the table). For BIWI, we use depth images. For MPII, we use the normalized left-eye and right-eye patches as input; details about the normalization can be found here.

Task                  Dataset
Age Estimation        MORPH and FG-NET
Head Pose Estimation  BIWI and BU-3DFE
Gaze Estimation       MPII
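The repo relies on MTCNN for face detection and alignment but does not pin a particular implementation; the sketch below uses the facenet-pytorch package as one option, and the paths, image size, and margin are hypothetical.

from facenet_pytorch import MTCNN
from PIL import Image

# One possible MTCNN-based alignment step (facenet-pytorch); output
# size, margin, and file paths are assumptions for illustration.
mtcnn = MTCNN(image_size=224, margin=20)

img = Image.open('morph/sample_0001.jpg')                  # hypothetical input image
aligned = mtcnn(img, save_path='aligned/sample_0001.jpg')  # aligned face tensor, or None if no face is detected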

Environment setup

All code is based on PyTorch. Before you run this repo, please make sure you have a PyTorch environment, then install the dependencies with the following command.

pip install -r requirements.txt

Train SPUDRFs

Code description:

Here is a description of the main scripts.

step.py:         train SPUDRFs from scratch
train.py:        complete one pace of training on a given training set
predict.py:      run testing on a given test set
picksamples.py:  select samples for the next pace
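To make the roles of these scripts concrete, here is a toy, runnable sketch of the pace loop that step.py orchestrates; the linear model, the squared-error easiness measure, and the random data are placeholders, not the repo's deep regression forest.

import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(1000, 16), torch.randn(1000, 1)       # stand-in dataset
pace_percent = [0.5, 0.0556, 0.0556, 0.0556, 0.0556,
                0.0556, 0.0556, 0.0556, 0.0556, 0.0552]  # as in config.yml

model = nn.Linear(16, 1)   # placeholder for the deep regression forest
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
picked = torch.randperm(len(X))[: int(pace_percent[0] * len(X))]  # initial pace

for pace in range(len(pace_percent)):
    # "train.py": one pace of training on the currently picked samples
    for _ in range(100):
        loss = nn.functional.mse_loss(model(X[picked]), y[picked])
        opt.zero_grad(); loss.backward(); opt.step()
    # "predict.py": per-sample errors over the whole training set
    with torch.no_grad():
        per_sample = (model(X) - y).squeeze(1) ** 2
    # "picksamples.py": keep the easiest cumulative fraction for the next pace
    frac = min(sum(pace_percent[: pace + 2]), 1.0)
    picked = torch.argsort(per_sample)[: int(frac * len(X))]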

Train your SPUDRFs from scratch:

Download this repo, prepare your datasets and pre-trained models, then run the following commands to train your SPUDRFs from scratch.

  • Clone this repo:
git clone https://github.com/learninginvision/SPUDRFs.git  
cd SPUDRFs
  • Set config.yml (a sanity-check sketch follows these steps)
lr: 0.00002       # learning rate
max_step: 80000   # maximum training steps
batchsize: 32     # batch size

total_pace: 10    # number of self-paced training rounds
pace_percent: [0.5, 0.0556, 0.0556, 0.0556, 0.0556, 0.0556, 0.0556, 0.0556, 0.0556, 0.0552]  # fraction of data added at each pace (sums to 1)
alpha: 2          # weight of the uncertainty (entropy) term (assumed)
threshold: -3.0   # likelihood threshold for sample selection (assumed)
ent_pick_per: 0   # extra fraction of samples picked by entropy (assumed)
capped: False     # whether to use the capped self-paced regularizer (assumed)
  • Train from scratch
python step.py
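
Before launching, it can help to sanity-check the config. The sketch below assumes PyYAML is available; it is not part of the repo's actual loading code.

import yaml

# Minimal sanity check of config.yml before training (illustrative).
with open('config.yml') as f:
    cfg = yaml.safe_load(f)

assert len(cfg['pace_percent']) == cfg['total_pace']   # one fraction per pace
assert abs(sum(cfg['pace_percent']) - 1.0) < 1e-6      # paces cover the whole training set
print(cfg['lr'], cfg['max_step'], cfg['batchsize'])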

Acknowledgments

This code is inspired by caffe-DRFs.
