Multi-Task Meta-Learning Modification with Stochastic Approximation

This repository contains the code for the paper
"Multi-Task Meta-Learning Modification with Stochastic Approximation".

Method pipeline (figure)

Dependencies

This code has been tested on Ubuntu 16.04 with Python 3.8 and PyTorch 1.8.

To install the required dependencies:

pip install -r requirements.txt

Usage

To reproduce the results on the benchmarks described in our article, use the scripts below. To vary the type of experiment, change the script parameters that control the benchmark dataset, the number of shots, and the number of ways (e.g. miniImageNet 1-shot 5-way or CIFAR-FS 5-shot 2-way), as in the example that follows.
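
For instance, assuming the same flags as in the MAML commands below, a CIFAR-FS 5-shot 2-way baseline run would differ from the CIFAR-FS 1-shot 5-way command further down only in the shot, way and run-name arguments (the run name here is an arbitrary placeholder):

python maml/train.py ./datasets/ \
    --run-name reproduced-cifar-5shot-2way \
    --dataset cifarfs \
    --num-ways 2 \
    --num-shots 5 \
    --num-steps 5 \
    --num-epochs 600 \
    --use-cuda \
    --output-folder ./results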

MAML

Multi-task modification (MTM) for Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017).

Multi-task modifications for MAML are trained on top of a baseline MAML model, which has to be trained beforehand.
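
The SPSA-based weighting options used below (spsa-track, spsa-delta, spsa-per-coarse-class) rely on simultaneous perturbation stochastic approximation (SPSA), which estimates the gradient of the meta-objective with respect to the task weights from just two loss evaluations per step. The following is a minimal generic sketch of one SPSA update, not the repository's implementation; the meta_loss callable and the hyperparameter values are hypothetical:

import numpy as np

def spsa_step(weights, meta_loss, k, a=0.01, c=0.1, alpha=0.602, gamma=0.101):
    # One generic SPSA update of the task-weight vector at iteration k.
    a_k = a / (k + 1) ** alpha                                 # decaying step size
    c_k = c / (k + 1) ** gamma                                 # decaying perturbation size
    delta = np.random.choice([-1.0, 1.0], size=weights.shape)  # Rademacher perturbation
    loss_plus = meta_loss(weights + c_k * delta)               # hypothetical meta-loss evaluation
    loss_minus = meta_loss(weights - c_k * delta)
    grad_est = (loss_plus - loss_minus) / (2.0 * c_k * delta)  # SPSA gradient estimate
    return weights - a_k * grad_est                            # descent step on the task weights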

To train MAML (reproduced) on the miniImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-miniimagenet \
    --dataset miniimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA-Track on the miniImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name mini-imagenet-mtm-spsa-track \
    --load "./results/reproduced-miniimagenet/model.th" \
    --dataset miniimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting spsa-track \
    --normalize-spsa-weights-after 100 \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on the tieredImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-tieredimagenet \
    --dataset tieredimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA on the tieredImageNet 1-shot 2-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name tiered-imagenet-mtm-spsa \
    --load "./results/reproduced-tieredimagenet/model.th" \
    --dataset tieredimagenet \
    --num-ways 2 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting spsa-delta \
    --normalize-spsa-weights-after 100 \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on the FC100 5-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-fc100 \
    --dataset fc100 \
    --num-ways 5 \
    --num-shots 5 \
    --num-steps 5 \
    --num-epochs 300 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM SPSA-Coarse on the FC100 5-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name fc100-mtm-spsa-coarse \
    --load "./results/reproduced-fc100/model.th" \
    --dataset fc100 \
    --num-ways 5 \
    --num-shots 5 \
    --num-steps 5 \
    --task-weighting spsa-per-coarse-class \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML (reproduced) on the CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name reproduced-cifar \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --num-epochs 600 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM Inner First-Order on the CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name cifar-mtm-inner-first-order \
    --load "./results/reproduced-cifar/model.th" \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting gradient-novel-loss \
    --use-inner-optimizer \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To train MAML MTM Backprop on the CIFAR-FS 1-shot 5-way benchmark, run:

python maml/train.py ./datasets/ \
    --run-name cifar-mtm-backprop \
    --load "./results/reproduced-cifar-5shot-5way/model.th" \
    --dataset cifarfs \
    --num-ways 5 \
    --num-shots 1 \
    --num-steps 5 \
    --task-weighting gradient-novel-loss \
    --num-epochs 40 \
    --use-cuda \
    --output-folder ./results

To test any of the models trained above, run:

python maml/test.py ./results/path-to-config/config.json --num-steps 10 --use-cuda

For instance, to test MAML MTM SPSA-Track on the miniImageNet 1-shot 2-way benchmark, run:

python maml/test.py ./results/mini-imagenet-mtm-spsa-track/config.json --num-steps 10 --use-cuda

Prototypical Networks

Multi-task modification (MTM) for Prototypical Networks (ProtoNet) (Snell et al., 2017).
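
For background, Prototypical Networks classify a query example by the distance between its embedding and the class prototypes, i.e. the per-class means of the support embeddings. Below is a minimal PyTorch sketch of this classification step (illustrative only, not the code in the protonet folder; the function name is hypothetical):

import torch

def protonet_logits(support, support_labels, query, num_ways):
    # support: [n_support, d] embeddings, support_labels: [n_support] values
    # in {0, ..., num_ways - 1}, query: [n_query, d] embeddings.
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(num_ways)
    ])                                           # [num_ways, d] class prototypes
    return -torch.cdist(query, prototypes) ** 2  # logits: negative squared distances to prototypes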

To train ProtoNet MTM SPSA-Track with a ResNet-12 backbone on the miniImageNet 1-shot 5-way benchmark, run:

python protonet/train.py \
    --dataset miniImageNet \
    --network ResNet12 \
    --tracking \
    --train-shot 1 \
    --train-way 5 \
    --val-shot 1 \
    --val-way 5

To test ProtoNet MTM SPSA-Track with a ResNet-12 backbone on the miniImageNet 1-shot 5-way benchmark, run:

python protonet/test.py --dataset miniImageNet --network ResNet12 --shot 1 --way 5

To train ProtoNet MTM Backprop with a 64-64-64-64 backbone on the CIFAR-FS 1-shot 2-way benchmark, run:

python protonet/train.py \
    --dataset CIFAR_FS \
    --train-weights \
    --train-weights-layer \
    --train-shot 1 \
    --train-way 2 \
    --val-shot 1 \
    --val-way 2

To test ProtoNet MTM Backprop with a 64-64-64-64 backbone on the CIFAR-FS 1-shot 2-way benchmark, run:

python protonet/test.py --dataset CIFAR_FS --shot 1 --way 2

To train ProtoNet MTM Inner First-Order with a 64-64-64-64 backbone on the FC100 10-shot 5-way benchmark, run:

python protonet/train.py \
    --dataset FC100 \
    --train-weights \
    --train-weights-opt \
    --train-shot 10 \
    --train-way 5 \
    --val-shot 10 \
    --val-way 5

To test ProtoNet MTM Inner First-Order with a 64-64-64-64 backbone on the FC100 10-shot 5-way benchmark, run:

python protonet/test.py --dataset FC100 --shot 10 --way 5

To train ProtoNet MTM SPSA with a 64-64-64-64 backbone on the tieredImageNet 5-shot 2-way benchmark, run:

python protonet/train.py \
    --dataset tieredImageNet \
    --train-shot 5 \
    --train-way 2 \
    --val-shot 5 \
    --val-way 2

To test ProtoNet MTM SPSA with a 64-64-64-64 backbone on the tieredImageNet 5-shot 2-way benchmark, run:

python protonet/test.py --dataset tieredImageNet --shot 5 --way 2

Acknowledgments

Our code uses some dataloaders from Torchmeta.

The code in the maml folder is based on the extended implementation from Torchmeta and pytorch-maml. It has been updated so that the baseline scores follow those of the original MAML paper more closely.

The code in the protonet folder is based on the implementation from MetaOptNet. All .py files in this folder except for dataloaders.py and optimize.py were adapted from that implementation and subsequently modified. A copy of the Apache License, Version 2.0 is available in the protonet folder.
