Code release for NeurIPS 2020 paper "Co-Tuning for Transfer Learning"

Overview

CoTuning

Official implementation for NeurIPS 2020 paper Co-Tuning for Transfer Learning.

[News] 2021/01/13 The COCO 70 dataset used in the paper is available for download!

COCO 70 dataset

COCO 70 is a large-scale classification dataset (1000 images per class) constructed from COCO. It is used to explore the effect of fine-tuning with a large amount of data. See our paper for details on how it was constructed, and please respect the original license of COCO when you use it.

To download COCO 70, follow these steps:

  1. download the separate files here (the archive is too large to upload as a single file, so it is split into chunks)

  2. merge the chunks into a single archive with cat COCO70_splita* > COCO70.tar

  3. extract the dataset with tar -xf COCO70.tar (a Python alternative is sketched below)
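
If cat and tar are not available (e.g. on Windows), a minimal Python sketch of steps 2 and 3 using only the standard library could look like this; it assumes the COCO70_splita* chunks sit in the current directory:

# Rough equivalent of the cat + tar steps above.
import glob
import tarfile

with open("COCO70.tar", "wb") as merged:
    for chunk in sorted(glob.glob("COCO70_splita*")):
        with open(chunk, "rb") as part:
            merged.write(part.read())   # concatenate chunks in name order

with tarfile.open("COCO70.tar") as archive:
    archive.extractall()                # produces the layout shown below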

The directory layout looks like the following:

├── classes.txt # one class name per line

├── dev

├── dev.txt # [filename, class_index] per line, 0 <= class_index <= 69

├── test

├── test.txt

├── train

└── train.txt

Each class has 800 training images (train.txt), 100 validation images (dev.txt), and 100 test images (test.txt).
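
As a reference for consuming this layout, a minimal PyTorch Dataset could look like the sketch below; the class name COCO70List and the assumption that each filename in a split file is given relative to the dataset root are illustrative, not the repository's actual data pipeline.

# Illustrative loader for the split files described above: each line of
# train.txt / dev.txt / test.txt is "filename class_index" with 0 <= class_index <= 69.
import os
from PIL import Image
from torch.utils.data import Dataset

class COCO70List(Dataset):  # hypothetical helper, not part of this repo
    def __init__(self, root, list_file, transform=None):
        self.root = root
        self.transform = transform
        self.samples = []
        with open(os.path.join(root, list_file)) as f:
            for line in f:
                if not line.strip():
                    continue
                name, label = line.split()
                self.samples.append((name, int(label)))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        name, label = self.samples[index]
        image = Image.open(os.path.join(self.root, name)).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, label

Such a dataset can then be wrapped in a regular DataLoader for training or evaluation.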

Dependencies

  • python3
  • torch == 1.1.0 (with a suitable CUDA and cuDNN version)
  • torchvision == 0.3.0
  • scikit-learn
  • numpy
  • argparse
  • tqdm
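
One possible way to install these with pip (a sketch; the exact torch and torchvision wheels depend on your CUDA/cuDNN setup, and argparse already ships with Python 3):

pip install torch==1.1.0 torchvision==0.3.0 scikit-learn numpy tqdm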

Datasets

CUB-200-2011: http://www.vision.caltech.edu/visipedia/CUB-200-2011.html
Stanford Cars: http://ai.stanford.edu/~jkrause/cars/car_dataset.html
FGVC Aircraft: http://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/

Quick Start

python train.py --gpu [gpu_num] --data_path /path/to/dataset --class_num [class_num] --trade_off 2.3
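
For example, to fine-tune on CUB-200-2011 with the data extracted under ./data/finetune/cub200/ (an illustrative path; point --data_path at wherever you placed the dataset):

python train.py --gpu 0 --data_path ./data/finetune/cub200/ --class_num 200 --trade_off 2.3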

Citation

If you use our code or the constructed COCO-70 dataset, please consider citing:

@article{you2020co,
  title={Co-Tuning for Transfer Learning},
  author={You, Kaichao and Kou, Zhi and Long, Mingsheng and Wang, Jianmin},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Contact

If you have any problem with our code, feel free to contact [email protected] or [email protected].

Comments
  • CoTuning execution result problem

    My environment :

    CUDA 10.1
    python 3.7.10
    pytorch == 1.7.1
    torchvision == 0.8.2
    torchaudio == 0.7.2
    scikit-learn == 0.24.1
    numpy == 1.20.1
    argparse == 1.4.0 
    tqdm == 4.59.0
    

    Dataset download link: https://drive.google.com/file/d/1hbzc_P1FuxMkcabkgn9ZKinBwW683j45/view

    Official CUB-200-2011 dataset (001. ~ 200. image folders)

    My CUB-200-2011 dataset directory structure:

    CoTuning
    ├── data/
    │   └── finetune/ 
    │       └── cub200/ 
    │           ├── test/
    │           │   └── test/
    │           │       └── ... (151. ~ 200. image folder)
    │           ├── train/
    │           │   └── train/
    │           │       └── ... (001. ~ 120. image folder)
    │           └──  val/
    │               └── val/
    │                   └── ... (121. ~ 150. image folder)
    ├── module/
    ├── train.py
    └── ...
    

    My Quick Start :

    $ python train.py --gpu 0 --data_path ./data/finetune/cub200/ --class_num 200 --batch_size 32
    

    Result: (see attached image)

    How can I solve this? Thanks!

    opened by sksksk1748 5
  •  How to determine the number of iter?

    Very nice job! I wonder how to determine the number of iterations to better reproduce the results. Is it determined according to the size of the dataset and a fixed number of epochs? Thank you very much!

    opened by shihaobai 2
  • Reproducibility issue

    Do you mind sharing the exact split for each dataset for reproducibility? It would be nice if you could tar the files and put them in a drive for download. While you have roughly specified the structure and the data splitting for the COCO dataset, the data splittings for the other datasets are not provided. Thanks in advance.

    opened by korawat-tanwisuth 1
Owner
THUML @ Tsinghua University
Machine Learning Group, School of Software, Tsinghua University