Overview

Easy Few-Shot Learning


Ready-to-use code and tutorial notebooks to boost your way into few-shot image classification. This repository is made for you if:

  • you're new to few-shot learning and want to learn;
  • or you're looking for reliable, clear and easily usable code that you can use for your projects.

Don't get lost in large repositories with hundreds of methods and no explanation on how to use them. Here, we want each line of code to be covered by a tutorial.

What's in there?

Notebooks: learn and practice

You want to learn few-shot learning and don't know where to start? Start with our tutorial.

Code that you can use and understand

Models:

  • AbstractMetaLearner: an abstract class with methods that can be used for any meta-trainable algorithm
  • PrototypicalNetworks

Tools for data loading:

  • EasySet: a ready-to-use Dataset object to handle datasets of images with a class-wise directory split (see the specs file sketch after this list)
  • TaskSampler: samples batches in the shape of few-shot classification tasks
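
EasySet describes a dataset through a JSON specs file listing each class and the directory holding its images. Here is a minimal sketch of writing such a file for a hypothetical custom dataset (the "class_names"/"class_roots" keys mirror the CUB split files used in the QuickStart below; the dataset name and paths are placeholders):

import json

# Hypothetical two-class dataset: one sub-folder of images per class (illustrative paths)
specs = {
    "class_names": ["cats", "dogs"],
    "class_roots": ["data/my_dataset/images/cats", "data/my_dataset/images/dogs"],
}
with open("data/my_dataset/train.json", "w") as file:
    json.dump(specs, file, indent=4)

The resulting file can then be passed to EasySet(specs_file="data/my_dataset/train.json"), just like the CUB splits below.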

Datasets to test your model

  • CU-Birds: we provide download instructions and a meta-train/meta-val/meta-test split along classes (see the QuickStart below)

QuickStart

  1. Install the package with pip:

pip install git+https://github.com/sicara/easy-few-shot-learning.git

Note: alternatively, you can clone the repository so that you can modify the code as you wish.

  2. Download CU-Birds and the few-shot train/val/test split:
mkdir -p data/CUB && cd data/CUB
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1GDr1OkoXdhaXWGA8S3MAq3a522Tak-nx' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1GDr1OkoXdhaXWGA8S3MAq3a522Tak-nx" -O images.tgz
rm -rf /tmp/cookies.txt
tar --exclude='._*' -zxvf images.tgz
wget https://raw.githubusercontent.com/sicara/easy-few-shot-learning/master/data/CUB/train.json
wget https://raw.githubusercontent.com/sicara/easy-few-shot-learning/master/data/CUB/val.json
wget https://raw.githubusercontent.com/sicara/easy-few-shot-learning/master/data/CUB/test.json
cd ../..
  3. Check that you have a 680.9MB images folder in ./data/CUB, along with three JSON files.
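
If you prefer to sanity-check the download programmatically, here is a small sketch (assuming the split files expose a class_names list, as in the spec format above, and that the CUB images are JPEG files):

import json
from pathlib import Path

data_dir = Path("data/CUB")
for split in ("train", "val", "test"):
    # Each split file lists the classes assigned to that split
    specs = json.loads((data_dir / f"{split}.json").read_text())
    print(f"{split}: {len(specs['class_names'])} classes")
print(f"images: {sum(1 for _ in (data_dir / 'images').rglob('*.jpg'))} files")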

  4. From the training subset of CUB, create a dataloader that yields few-shot classification tasks:

from easyfsl.data_tools import EasySet, TaskSampler
from torch.utils.data import DataLoader

train_set = EasySet(specs_file="./data/CUB/train.json", training=True)
train_sampler = TaskSampler(
    train_set, n_way=5, n_shot=5, n_query=10, n_tasks=40000
)
train_loader = DataLoader(
    train_set,
    batch_sampler=train_sampler,
    num_workers=12,
    pin_memory=True,
    collate_fn=train_sampler.episodic_collate_fn,
)
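
Each batch delivered by this loader is one few-shot task. A quick way to inspect its structure (the five-element layout below reflects episodic_collate_fn's output; shapes are indicative):

# Peek at one episode yielded by the episodic collate function
(
    support_images,  # (n_way * n_shot, channels, height, width)
    support_labels,  # (n_way * n_shot,), labels in [0, n_way)
    query_images,    # (n_way * n_query, channels, height, width)
    query_labels,    # (n_way * n_query,), labels in [0, n_way)
    class_ids,       # original dataset ids of the n_way sampled classes
) = next(iter(train_loader))
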
  5. Create and train a model
from easyfsl.methods import PrototypicalNetworks
from torch import nn
from torch.optim import Adam
from torchvision.models import resnet18

convolutional_network = resnet18(pretrained=False)
convolutional_network.fc = nn.Flatten()
model = PrototypicalNetworks(convolutional_network).cuda()

optimizer = Adam(params=model.parameters())

model.fit(train_loader, optimizer)

Troubleshooting: a ResNet18 with a batch size of (5 * (5 + 10)) = 75 would use about 4.2GB on your GPU. If you don't have that much GPU memory, switch to CPU, choose a smaller backbone, or reduce the batch size (via the TaskSampler arguments above).
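
For example, a lighter configuration could look like this (illustrative values; rebuild the DataLoader with the new sampler afterwards):

# Smaller episodes: 5 * (1 + 5) = 30 images per task instead of 75
train_sampler = TaskSampler(
    train_set, n_way=5, n_shot=1, n_query=5, n_tasks=40000
)
model = PrototypicalNetworks(convolutional_network)  # omit .cuda() to stay on CPU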

  6. Evaluate your model on the test set
test_set = EasySet(specs_file="./data/CUB/test.json", training=False)
test_sampler = TaskSampler(
    test_set, n_way=5, n_shot=5, n_query=10, n_tasks=100
)
test_loader = DataLoader(
    test_set,
    batch_sampler=test_sampler,
    num_workers=12,
    pin_memory=True,
    collate_fn=test_sampler.episodic_collate_fn,
)

model.evaluate(test_loader)
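
If you want to keep the score around, you can capture the return value (a small sketch, assuming evaluate() returns the average accuracy over the sampled test tasks):

accuracy = model.evaluate(test_loader)
print(f"Average accuracy: {100 * accuracy:.2f}%")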

Roadmap

  • Implement unit tests
  • Add validation to AbstractMetaLearner.fit()
  • Integrate more methods:
    • Matching Networks
    • Relation Networks
    • MAML
    • Transductive Propagation Network
  • Integrate non-episodic training
  • Integrate more benchmarks:
    • miniImageNet
    • tieredImageNet
    • Meta-Dataset

Contribute

This project is very open to contributions! You can help in various ways:

  • raise issues
  • resolve issues already opened
  • tackle new features from the roadmap
  • fix typos, improve code quality
Comments
  • Training with custom dataset

    Training with custom dataset

    Hi, thanks for your code, it helps me a lot. But it also posed some problems for a newbie like me. Although I can make the code run successfully now, I also made a lot of compromises to work around some errors. I combined the code from classical_training.ipynb and my_first_few_shot_classifier.ipynb.

    I post all my code step by step and point out the problems I met. I am running Windows 10. The environment was created with Anaconda: CUDA 10.2, cuDNN 7.0, PyTorch 1.10.1.

    At last, great thanks for your code again. Let's discuss this together.

    question 
    opened by gushengzhao1996 19
  • custom Data

    custom Data

    I want to train this model on custom data, but I did not understand the split for CUB and I could not even find documentation on EasySet. Do you know where it is? I just have 2 classes in my data, by the way.

    question 
    opened by Kunika05 6
  • classical training method evaluation concept

    classical training method evaluation concept

    Hello, I'm new to few-shot learning and want to make sure about classical training. When the backbone, after training, is evaluated with a chosen method on a new set of data, does the method get adjusted or learn from the new data?

    question 
    opened by joshuasir 4
  • How to build my own train_set use own data

    How to build my own train_set use own data

    Problem: Thanks for sharing your work on FSL; there is one problem. When I finished the tutorial 'Discovering Prototypical Networks', I wanted to use my own photo data to build a test_set. How can I do that, and how should I structure my data?

    enhancement question 
    opened by cy2333ytu 4
  • Adding a utility predictor to an image

    Adding a utility predictor to an image

    The intention of adding this predictor is to help those who need to use a trained network on a single image, obtaining in return the inferred class and the tensor of mean Euclidean distances. Tests were performed with PrototypicalNetworks and MatchingNetworks on a 5-way 6-shot dataset.

    enhancement 
    opened by diego91964 4
  • Finetune:

    Finetune: "does not require grad and does not have a grad_fn"

    Problem: I am trying to train a backbone using classical training, and then use Finetune from methods to fine-tune the model following episodic_training.ipynb. How should I implement it? I see that the episodic_training.ipynb you wrote has frozen the backbone's parameters, but when I import the pre-trained model for finetuning, it does not work properly. Another question: how much is n_validation_tasks generally set to? Is there a standard? Because the setting of this hyperparameter will affect the result. I look forward to your answer.

    convolutional_network = resnet50(num_classes=2).to(DEVICE)
    convolutional_network.load_state_dict(torch.load('save_model/resnet50.pt'))
    few_shot_classifier = Finetune(convolutional_network).to(DEVICE)

    question 
    opened by Jackieam 4
  • N_QUERY

    N_QUERY

    1. Does the number of query images for each class have to be equal? Can I use a random number of images per class?
    2. Can training epoch by epoch still be used in few-shot learning?
    question 
    opened by earthlovebpt 3
  • PicklingError: Can't pickle <function <lambda> at 0x000001AEE1AC88B0>: attribute lookup <lambda> on __main__ failed

    PicklingError: Can't pickle <function <lambda> at 0x000001AEE1AC88B0>: attribute lookup <lambda> on __main__ failed

    Problem (I am a greenhand in this field, so I may ask a simple and silly question; sorry for that): when I run the first part of "my_first_few_shot_classifier.ipynb", the PicklingError above appears.

    How can we help: How do I solve this problem? I haven't found a practical solution.

    question 
    opened by Meoooww 3
  • how to view the results after training and getting accuracy ?

    how to view the results after training and getting accuracy ?

    So I have trained your episodic training notebook on custom data, but I have a question about how we would view the output. I got the accuracy, but how would we view the classification results?

    question 
    opened by Kunika05 3
  • Question on meta-training in the tutorial notebook

    Question on meta-training in the tutorial notebook

    Hi, thanks for making such a simple and beautiful library for Few-Shot Learning. I have a query: when we run a particular cell from your notebook for training the meta-learning model, does it also train the ResNet18 backbone on the given dataset to produce a better feature representation (as in transfer learning, when we train a classifier on a custom dataset starting from ImageNet pre-trained parameters), or does it only train the Prototypical Network?

    Please, clarify this doubt. Thanks again.

    question 
    opened by karndeepsingh 3
  • How to train on custom data

    How to train on custom data

    Hi Thank you for your great work and sharing it with everyone.

    I want to implement few-shot learning for a task that I have, where I have collected a few samples (10) each for both the positive and the negative class. How do I train the model on these novel classes using my custom dataset?

    Thank you for your help

    question 
    opened by chetanmr 3
  • Can I use different backbone for classical or episodic learning ?

    Can I use different backbone for classical or episodic learning ?

    Hi,

    I am using your classical and episodic training notebooks; they are very helpful for my project. However, I want to try different backbones like EfficientNet. I am new to this, so do you have any idea whether I can use a different backbone than ResNet, and if so, what changes I will have to consider in the code?

    Thanks in advance

    question 
    opened by shraddha291996 0
  • How to get a prediction for custom dataset ?

    How to get a prediction for custom dataset ?

    Hello, thank you very much for your amazing work, it's very helpful.

    I have one question about getting predictions on a custom dataset. Basically, I am using EasySet for my custom dataset with the classical training notebook, and I also want to see the prediction/classification, for example which class my test image belongs to. I hope my question is clear to you. Thanks in advance

    question 
    opened by shraddha291996 1
  • Probabilities of a novel image belonging to a Class

    Probabilities of a novel image belonging to a Class

        I created an example that might help you.
    
    
    import torchvision.transforms as tt
    import torch
    from torchvision.datasets import ImageFolder
    from easyfsl.methods import FewShotClassifier
    from torch.utils.data import DataLoader
    
    class FewShotPredictor:
        """
    
            This class aims to implement a predictor for a Few-shot classifier.
    
            The few shot classifiers need a support set that will be used for calculating the distance between the support set and the query image.
    
            To load the support we have used an ImageFolder Dataset, which needs to have the following structure:
    
            folder:
              |_ class_name_folder_1:
                     |_ image_1
                     |_  …
                     |_ image_n
              |_ class_name_folder_2:
                     |_ image_1
                     |_  …
                     |_ image_n
    
            The folder must contain the same number of images per class, being the total images (n_way * n_shot).
    
            There must be n_way folders with n_shot images per folder.
    
        """
    
        def __init__(self,
                     classifier: FewShotClassifier,
                     device,
                     path_to_support_images,
                     n_way,
                     n_shot,
                     input_size=224):
    
            """
                :param classifier: created and loaded model
                :param device: device to be executed
                :param path_to_support_images: path to creating a support set
                :param n_way: number of classes
                :param n_shot: number of images on each class
                :param input_size: size of image
    
            """
            self.classifier = classifier
            self.device = device
    
            self.predict_transformation = tt.Compose([
                tt.Resize((input_size, input_size)),
                tt.ToTensor()
            ])
    
            self.test_ds = ImageFolder(path_to_support_images, self.predict_transformation)
    
            self.val_loader = DataLoader(
                self.test_ds,
                batch_size= (n_way*n_shot),
                num_workers=1,
                pin_memory=True
            )
    
            self.support_images, self.support_labels = next(iter(self.val_loader))
    
    
    
        def predict(self, tensor_normalized_image):
            """
    
            :param tensor_normalized_image:
            Example of normalized image:
    
                pil_img = PIL.Image.open(img_dir)
    
                torch_img = transforms.Compose([
                    transforms.Resize((224, 224)),
                    transforms.ToTensor()
                ])(pil_img)
    
                tensor_normalized_image = tt.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(torch_img)[None]
    
    
            :return:
    
            Return
    
            predict = tensor with prediction (mean distance of query image and support set)
            torch_max [1] = predicted class index
    
            """
    
            with torch.no_grad():
               self.classifier.eval()
               self.classifier.to(self.device)
               self.classifier.process_support_set(self.support_images.to(self.device), self.support_labels.to(self.device))
               pre_predict = self.classifier(tensor_normalized_image.to(self.device))
               predict = pre_predict.detach().data
               torch_max = torch.max(predict,1)
               class_name = self.test_ds.classes[torch_max[1].item()]
               return predict, torch_max[1], class_name
    
    

    #49

    Originally posted by @diego91964 in https://github.com/sicara/easy-few-shot-learning/issues/17#issuecomment-1157091822

    Good morning guys, and many thanks for the awesome and very helpful code and the effort put into it. I have a question regarding novel image class prediction: is there a way to calculate, as in 'classical' classification, the percentage/probability of a novel image belonging to each class? Do you believe a softmax applied to the tensor returned at 'return predict, torch_max[1], class_name' would be meaningful?

    Thanks in advance
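
    For illustration only, here is a minimal sketch of that softmax idea (assuming predict is the raw score tensor returned by FewShotPredictor.predict() above; for PrototypicalNetworks these scores are negative distances, so the softmax yields a probability-like ranking rather than calibrated probabilities):

    import torch

    # Hypothetical: turn raw classifier scores into probability-like values
    probabilities = torch.softmax(predict, dim=1)
    predicted_class_index = torch.argmax(probabilities, dim=1)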

    opened by iou84 3
  • ValueError : Sample Larger than population or is negative for 5 shot 2 way problem

    ValueError : Sample Larger than population or is negative for 5 shot 2 way problem

    Problem: I am new to FSL and have a simple problem in my scientific domain that I thought I would try as a learning example. I am trying to perform classical training for a 5-shot 2-way problem. When I run the code from the tutorial notebook as-is, after using EasySet to create a custom data object, I get the following error when I reach the validation epoch during training:

    ValueError : Sample Larger than population or is negative

    Considered solutions: I've tried changing the batch size and n_workers so far, and neither has worked.

    How can we help: I can't figure out what is going wrong here. I am very new to machine learning and would love to have your help in any way possible!

    enhancement question 
    opened by haricash 5
  • For custom datasets, how to divide the class?

    For custom datasets, how to divide the class?

    Hi. Thank you for your great work and sharing it with everyone.

    I have a question: for custom datasets, how should I divide the classes into train, val, and test? Randomly select some classes as the training set, or something else? Do you have any tricks?

    question 
    opened by ssx12042 15
  • Adding more backbones

    Adding more backbones

    Hi @ebennequin, thanks for this elegant code base. Some questions (could be a feature request):

    1. Can we add new backbones like ViT, DenseNet, ConvNeXt, etc.?
    2. Could we build functionalities for model deployment?
    enhancement 
    opened by anish9 2
Releases(v1.1.0)
  • v1.1.0(Sep 5, 2022)

  • v1.0.1(Jun 7, 2022)

    There were some things to fix after the v1 release, so we fixed them:

    • EasySet's format check is now case-insensitive (thanks @mgmalana :smile: )
    • TaskSampler used to yield torch.Tensor objects, which caused errors. It now yields lists of integers, as is standard in PyTorch's interface.
    • When EasySet's initialization didn't find any images in the specified folders, it just built an empty dataset with no warning, which caused silent errors. Now EasySet.__init__() raises the following warning if no image is found: "No images found in the specified directories. The dataset will be empty"
  • v1.0.0(Mar 21, 2022)

    🎂 Exactly 1 year after the first release of Easy FSL, we have one more year of experience in Few-Shot Learning research. We capitalize on this experience to make Easy FSL easier, cleaner, smarter.

    No more episodic training logic inside Few-Shot Learning methods: you can train them however you want. And more content! 4 additional methods; several ResNet architectures, as they're often used in FSL research; and 4 ready-to-use datasets.

    🗞️ What's New

    • Few-Shot Learning methods
    • Pre-designed ResNet architectures for Few-Shot Learning
    • Most common few-shot classification datasets
      • tieredImageNet
      • miniImageNet
      • CU-Birds
      • Danish Fungi (not common but new, and really great)
      • And also an abstract class FewShotDataset to ease your development of novel or modified datasets
    • Example notebooks to perform both episodic training and classical training for your Few-Shot Learning methods
    • Support Python 3.9

    🔩 What's Changed

    • AbstractMetaLearner is renamed FewShotClassifier. All the episodic training logic has been removed from this class and moved to the example notebook episodic_training.ipynb
    • FewShotClassifier now supports non-cuda devices
    • FewShotClassifier can now be initialized with a backbone on GPU
    • Relation module in RelationNetworks can now be parameterized
    • Same for embedding modules in Matching Networks
    • Same for image preprocessing in pre-designed datasets like EasySet
    • EasySet now only collects image files

    Full Changelog: https://github.com/sicara/easy-few-shot-learning/compare/v0.2.2...v1.0.0

  • v0.2.2(Nov 9, 2021)

    Small fixes in EasySet and AbstractMetaLearner

    • Sort data instances for each class in EasySet

    • Add EasySet.number_of_classes()

    • Fix best validation accuracy update

    • Move switch to train mode inside fit_on_task()

    • Make AbstractMetaLearner.fit() return average loss

  • v0.2.1(Jun 22, 2021)

  • v0.2.0(Jun 1, 2021)

    :newspaper_roll: What's new

    • :tennis: Matching Networks
    • :dna: Relation Networks
    • :mount_fuji: tieredImageNet
    • :blossom: In AbstractMetaLearner and all child classes, forward() now takes only query_images as an argument. Support images and labels are now processed by process_support_set().
    • :chart_with_upwards_trend: AbstractMetaLearner.fit() now allows validation on a validation set.
    • :rainbow: EasySet.__getitem__() now forces loaded images conversion to RGB.
    • :heavy_check_mark: The code is tested
  • v0.1.0(Mar 22, 2021)

    The initial release contains :

    • AbstractMetaLearner: an abstract class with methods that can be used for any meta-trainable algorithm
    • Prototypical Networks
    • EasySet: a ready-to-use Dataset object to handle datasets of images with a class-wise directory split
    • TaskSampler: samples batches in the shape of few-shot classification tasks
    • CU-Birds: we provide a script to download and extract the dataset, along with a meta-train/meta-val/meta-test split along classes. The dataset is ready-to-use with EasySet.
Owner: Sicara