[WACV21] Code for our paper: Samuel, Atzmon and Chechik, "From Generalized zero-shot learning to long-tail with class descriptors"


DRAGON: From Generalized zero-shot learning to long-tail with class descriptors

Paper
Project Website
Video

Overview

DRAGON learns to correct the bias towards head classes on a sample-by-sample basis and to fuse information from class descriptions to improve tail-class accuracy, as described in our paper: Samuel, Atzmon and Chechik, "From Generalized zero-shot learning to long-tail with class descriptors".

Requirements

  • numpy 1.15.4
  • pandas 0.25.3
  • scipy 1.1.0
  • tensorflow 1.14.0
  • keras 2.2.5

Quick installation under Anaconda:

conda env create -f requirements.yml
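
After the environment is created, activate it before running any of the commands below. A minimal sketch, assuming the environment defined in requirements.yml is named dragon (check the name: field in the file for the actual name):

# Activate the environment and sanity-check the pinned framework versions
conda activate dragon
python -c "import tensorflow, keras; print(tensorflow.__version__, keras.__version__)"   # expect 1.14.0 and 2.2.5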

Data Preparation

Datasets: CUB, SUN and AWA.
Download data.tar from here, untar it and place it under the project root directory.

DRAGON
|--data
|   |--CUB
|   |--SUN
|   |--AWA1
|--attribute_expert
|--dataset_handler
|--fusion
...
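
For reference, a minimal extraction sketch (run from the project root; it assumes the archive untars into a data/ directory with the layout above, so verify the result):

# Unpack the dataset archive in the project root
tar -xf data.tar
ls data   # expect CUB, SUN and AWA1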

Train Experts and Fusion Module

To reproduce the results for DRAGON and its modules (Table 1 in our paper), follow the training protocol described in our paper (Section 5, Training):

  1. First, train each expert without the hold-out set (partial training set) by executing the following commands:

    • CUB:
      # Visual-Expert training
      PYTHONPATH="./" python visual_expert/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --transfer_task=DRAGON --train_dist=dragon --data_dir=data --batch_size=64 --max_epochs=100 --initial_learning_rate=0.0003 --l2=0.005
      # Attribute-Expert training 
      PYTHONPATH="./" python attribute_expert/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --transfer_task=DRAGON --data_dir=data --train_dist=dragon --batch_size=64 --max_epochs=100 --initial_learning_rate=0.001 --LG_beta=1e-7 --LG_lambda=0.0001 --SG_gain=3 --SG_psi=0.01 --SG_num_K=-1
      
    • SUN:
      # Visual-Expert training
      PYTHONPATH="./" python visual_expert/main.py --base_train_dir=./checkpoints/SUN --dataset_name=SUN --transfer_task=DRAGON --train_dist=dragon --data_dir=data --batch_size=64 --max_epochs=100 --initial_learning_rate=0.0001 --l2=0.01
      # Attribute-Expert training 
      PYTHONPATH="./" python attribute_expert/main.py --base_train_dir=./checkpoints/SUN --dataset_name=SUN --transfer_task=DRAGON --data_dir=data --train_dist=dragon --batch_size=64 --max_epochs=100 --initial_learning_rate=0.001 --LG_beta=1e-6 --LG_lambda=0.001 --SG_gain=10 --SG_psi=0.01 --SG_num_K=-1
      
    • AWA:
      # Visual-Expert training
      PYTHONPATH="./" python visual_expert/main.py --base_train_dir=./checkpoints/AWA1 --dataset_name=AWA1 --transfer_task=DRAGON --train_dist=dragon --data_dir=data --batch_size=64 --max_epochs=100 --initial_learning_rate=0.0003 --l2=0.1
      # Attribute-Expert training 
      PYTHONPATH="./" python attribute_expert/main.py --base_train_dir=./checkpoints/AWA1 --dataset_name=AWA1 --transfer_task=DRAGON --data_dir=data --train_dist=dragon --batch_size=64 --max_epochs=100 --initial_learning_rate=0.001 --LG_beta=0.001 --LG_lambda=0.001 --SG_gain=1 --SG_psi=0.01 --SG_num_K=-1
      
  2. Then, re-train each expert with the hold-out set included (full training set) by executing the above commands with the --test_mode flag appended (see the sketch after this list).

  3. Rename Visual-lr=0.0003_l2=0.005 to Visual and LAGO-lr=0.001_beta=1e-07_lambda=0.0001_gain=3.0_psi=0.01 to LAGO (this is essential since the FusionModule locates trained experts by these names, without suffixes; see the sketch after this list).

  4. Train the fusion-module on partially trained experts (models from step 1) by running the following commands:

    • CUB:
      PYTHONPATH="./" python fusion/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --data_dir=data --initial_learning_rate=0.005 --batch_size=64 --max_epochs=50 --sort_preds=1 --freeze_experts=1 --nparams=2
      
    • SUN:
      PYTHONPATH="./" python fusion/main.py --base_train_dir=./checkpoints/SUN --dataset_name=SUN --data_dir=data --initial_learning_rate=0.0005 --batch_size=64 --max_epochs=50 --sort_preds=1 --freeze_experts=1 --nparams=4
      
    • AWA:
      PYTHONPATH="./" python fusion/main.py --base_train_dir=./checkpoints/AWA1 --dataset_name=AWA1 --data_dir=data --initial_learning_rate=0.005 --batch_size=64 --max_epochs=50 --sort_preds=1 --freeze_experts=1 --nparams=4
      
  5. Finally, evaluate the fusion-module with the fully-trained experts (models from step 2) by executing the step 4 commands with the --test_mode flag appended.
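
For reference, a minimal sketch of steps 2, 3 and 5 on CUB, assuming --test_mode is simply appended to the step 1 and step 4 commands and that the experts are saved under ./checkpoints/CUB with the directory names given in step 3:

# Step 2 (CUB): re-train the Visual-Expert on the full training set by appending --test_mode
PYTHONPATH="./" python visual_expert/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --transfer_task=DRAGON --train_dist=dragon --data_dir=data --batch_size=64 --max_epochs=100 --initial_learning_rate=0.0003 --l2=0.005 --test_mode
# (re-run the Attribute-Expert command with --test_mode appended in the same way)

# Step 3 (CUB): rename the expert checkpoint directories so the FusionModule can find them
mv "checkpoints/CUB/Visual-lr=0.0003_l2=0.005" checkpoints/CUB/Visual
mv "checkpoints/CUB/LAGO-lr=0.001_beta=1e-07_lambda=0.0001_gain=3.0_psi=0.01" checkpoints/CUB/LAGO

# Step 5 (CUB): evaluate the fusion-module with the fully-trained experts by appending --test_mode
PYTHONPATH="./" python fusion/main.py --base_train_dir=./checkpoints/CUB --dataset_name=CUB --data_dir=data --initial_learning_rate=0.005 --batch_size=64 --max_epochs=50 --sort_preds=1 --freeze_experts=1 --nparams=2 --test_mode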

Pre-trained Models and Checkpoints

Download checkpoints.tar from here, untar it and place it under the project root directory.

checkpoints
  |--CUB
      |--Visual
      |--LAGO
      |--Dual2ParametricRescale-lr=0.005_freeze=1_sort=1_topk=-1_f=2_s=(2, 2)
  |--SUN
      |--Visual
      |--LAGO
      |--Dual4ParametricRescale-lr=0.0005_freeze=1_sort=1_topk=-1_f=2_s=(2, 2)
  |--AWA1
      |--Visual
      |--LAGO
      |--Dual4ParametricRescale-lr=0.005_freeze=1_sort=1_topk=-1_f=2_s=(2, 2)
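
As with the data archive, a minimal extraction sketch (run from the project root; it assumes checkpoints.tar untars into a checkpoints/ directory matching the layout above):

# Unpack the pre-trained checkpoints in the project root
tar -xf checkpoints.tar
ls checkpoints       # expect CUB, SUN and AWA1
ls checkpoints/CUB   # expect Visual, LAGO and the fusion-module checkpoint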

Cite Our Paper

If you find our paper and repo useful, please cite:

@InProceedings{samuel2020longtail,
  author    = {Samuel, Dvir and Atzmon, Yuval and Chechik, Gal},
  title     = {From Generalized Zero-Shot Learning to Long-Tail With Class Descriptors},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year      = {2021}}
Comments
  • Questions about open-sourcing ImageNet-LT-d

    Hi, I appreciate your work introducing class descriptors into the long-tailed problem. Would it be possible to share the ImageNet-LT-d dataset so that we can compare results with recent papers evaluated on ImageNet-LT?

    opened by zuglerQ 1
  • Question on Bal. in Table 2

    Hi, I see that you mention "Bal. refers to training DRAGON with a balanced loss." (last row in Table 2), which improves performance significantly. You also mention in Section 6.5: "We further trained DRAGON with a balanced cross-entropy loss (Bal.)".

    I am wondering where Bal. comes from. Is it from a paper? If so, could you please point me to it?

    opened by e96031413 1
  • Question on training steps that differ from the paper

    Hi, the training steps in the README seem to differ from those in the paper (Section D). Am I misunderstanding the paper? Thanks.

    Section D:

    First, we train each expert on the training data excluding the hold-out set. Second, we freeze the expert weights and train the fusion-module on all the training set. Finally, we re-train the experts on all the training data in order to use all data available.

    README:

    First, train each expert without the hold-out set (partial training set). Then, re-train each expert with the hold-out set. Train the fusion-module on trained experts. Finally, evaluate the fusion-module with fully-trained experts.

    opened by e96031413 1
  • How to get the corresponding images used in your CUB-LT

    Hi, thank you for sharing this. I see that your code directly uses feature vectors extracted with ResNet-101, but I would like to train my method on the original images. How can I get the corresponding images used in your CUB-LT?

    opened by ming1993li 0
Owner

Dvir Samuel