Transfer Learning Shootout for PyTorch's model zoo (torchvision)

Overview

pytorch-retraining


  • Load any pretrained model from PyTorch's model zoo with a custom final layer (num_classes) in one line:
model_pretrained, diff = load_model_merged('inception_v3', num_classes)
  • Retrain a minimal set of layers (as inferred on load) or a custom number of layers on multiple GPUs, optionally with a cyclical learning rate (Smith 2017):
final_param_names = [d[0] for d in diff]
stats = train_eval(model_pretrained, trainloader, testloader, final_param_names)
  • Chart training_time, evaluation_time (fps), and top-1 accuracy for varying levels of retraining depth (shallow, deep, and from scratch); a full end-to-end sketch follows below.
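
A minimal end-to-end sketch, assuming the helpers load_model_merged and train_eval from this repo's retrain.py as used above; the transforms, batch size, and the data/train and data/val paths are illustrative assumptions, not part of the repo:

import torch
from torchvision import datasets, transforms
from retrain import load_model_merged, train_eval  # this repo's helpers

num_classes = 2  # e.g. bees vs. ants
preprocess = transforms.Compose([
    transforms.Resize(299),      # inception_v3 expects 299x299 inputs
    transforms.CenterCrop(299),
    transforms.ToTensor(),
])
trainloader = torch.utils.data.DataLoader(
    datasets.ImageFolder('data/train', preprocess), batch_size=32, shuffle=True)
testloader = torch.utils.data.DataLoader(
    datasets.ImageFolder('data/val', preprocess), batch_size=32)

# diff lists the parameters that were freshly initialized on load,
# i.e. the minimal set to retrain.
model_pretrained, diff = load_model_merged('inception_v3', num_classes)
final_param_names = [d[0] for d in diff]
stats = train_eval(model_pretrained, trainloader, testloader, final_param_names)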
[chart: training time, evaluation fps, and top-1 accuracy per retraining depth]
Transfer learning on the example dataset Bees vs. Ants with 2x V100 GPUs

Results on a more elaborate dataset

num_classes = 23; slightly unbalanced classes; high variance in rotation and motion-blur artifacts; trained on 1x GTX 1080 Ti

[chart_17: constant LR with momentum]
[chart_17_clr: cyclical learning rate]
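
For the cyclical learning rate (Smith 2017), here is a sketch of the triangular schedule using PyTorch's built-in torch.optim.lr_scheduler.CyclicLR (available since PyTorch 1.1; the repo's own CLR implementation predates it and may differ). The LR bounds and epoch count are illustrative; model_pretrained and trainloader continue the sketch above:

import torch

model_pretrained = model_pretrained.cuda().train()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model_pretrained.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-2,  # illustrative bounds
    step_size_up=2 * len(trainloader),     # half-cycle of two epochs
    mode='triangular')

for epoch in range(8):  # illustrative epoch count
    for images, labels in trainloader:
        optimizer.zero_grad()
        outputs = model_pretrained(images.cuda())
        if isinstance(outputs, tuple):  # inception_v3 returns (logits, aux_logits) in train mode
            outputs = outputs[0]
        loss = criterion(outputs, labels.cuda())
        loss.backward()
        optimizer.step()
        scheduler.step()  # CyclicLR steps once per batch, not per epoch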
You might also like...
Flower classification model that classifies flowers into 10 classes, made using transfer learning (~85% accuracy).

flower-classification-inceptionV3 Flower classification model that classifies flowers in 10 classes. Training and validation are done using a pre-anot

PyKale is a PyTorch library for multimodal learning and transfer learning as well as deep learning and dimensionality reduction on graphs, images, texts, and videos

PyKale is a PyTorch library for multimodal learning and transfer learning as well as deep learning and dimensionality reduction on graphs, images, texts, and videos. By adopting a unified pipeline-based API design, PyKale enforces standardization and minimalism, via reusing existing resources, reducing repetitions and redundancy, and recycling learning models across areas.

PyTorch implementation of MuseMorphose, a Transformer-based model for music style transfer.

MuseMorphose This repository contains the official implementation of the following paper: Shih-Lun Wu, Yi-Hsuan Yang MuseMorphose: Full-Song and Fine-

Tutorial on active learning with the Nvidia Transfer Learning Toolkit (TLT).

Active Learning with the Nvidia TLT Tutorial on active learning with the Nvidia Transfer Learning Toolkit (TLT). In this tutorial, we will show you ho

In this project we investigate the performance of the SetCon model on realistic video footage. To that end, we implemented the model in PyTorch and tested it on two example videos.

Contrastive Learning of Object Representations Supervisor: Prof. Dr. Gemma Roig Institutions: Goethe University CVAI - Computational Vision & Artifici

Step by step guide on how to create a vision recognition model using LOBE.ai, export the model, and run it in an Azure Function

Transfer Learning library for Deep Neural Networks.

Transfer and meta-learning in Python Each folder in this repository corresponds to a method or tool for transfer/meta-learning. xfer-ml is a standalon

This repo will contain code to reproduce and build upon understanding transfer learning

What is being transferred in transfer learning? This repo contains the code for the following paper: Behnam Neyshabur*, Hanie Sedghi*, Chiyuan Zhang*.

Official PyTorch Implementation of Embedding Transfer with Label Relaxation for Improved Metric Learning, CVPR 2021

Embedding Transfer with Label Relaxation for Improved Metric Learning Official PyTorch implementation of CVPR 2021 paper Embedding Transfer with Label

Comments
  • Densenet not iterable

    Densenets are giving errors:

    Targeting densenet201 with 2 classes

    Traceback (most recent call last):
      File "retrain.py", line 340, in <module>
        model_pretrained, diff = load_model_merged(name, num_classes)
    TypeError: 'DenseNet' object is not iterable
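
    A workaround sketch (an assumption, not from the original thread): plain nn.Module subclasses such as DenseNet are not iterable (only containers like nn.Sequential support iterating as for layer in model), so enumerate submodules explicitly instead:

    from torchvision import models

    model = models.densenet201(pretrained=True)
    for name, module in model.named_children():  # works for any nn.Module
        print(name, type(module).__name__)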

    opened by silakanveli 3
  • CUDA running out of memory

    Hi

    Thanks for this wonderful script. It is really helpful for testing various models! I have an issue with running out of GPU memory. I know this is not exactly a bug; it is a CUDA memory issue.

    Is there any way to reduce GPU memory usage? I only have 2 GB on my GeForce GTX 1050.

    It only happens when training from scratch or at the deep retraining level.

    This is the error:

    [29, 30] loss: nan [0.0044375000000000005]
    [30, 30] loss: nan [0.0043333333333333392]
    [31, 30] loss: nan [0.0011041666666666609]
    [32, 30] loss: nan [0.0041250000000000002]
    Finished Training
    Evaluating...
    THCudaCheck FAIL file=/pytorch/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
    Traceback (most recent call last):
      File "retrain.py", line 380, in <module>
        CLR=use_clr)
      File "retrain.py", line 322, in train_eval
        stats_eval = evaluate_stats(net, testloader)
      File "retrain.py", line 304, in evaluate_stats
        outputs = net(Variable(images))
      File "/usr/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/lib64/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 58, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/usr/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/lib/python3.6/site-packages/torchvision/models/inception.py", line 81, in forward
        x = self.Conv2d_2b_3x3(x)
      File "/usr/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/lib/python3.6/site-packages/torchvision/models/inception.py", line 325, in forward
        x = self.bn(x)
      File "/usr/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/lib64/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 37, in forward
        self.training, self.momentum, self.eps)
      File "/usr/lib64/python3.6/site-packages/torch/nn/functional.py", line 639, in batch_norm
        return f(input, weight, bias)
    RuntimeError: cuda runtime error (2) : out of memory at /pytorch/torch/lib/THC/generic/THCStorage.cu:66

    nvidia-smi

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 387.22                 Driver Version: 387.22                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 1050    Off  | 00000000:01:00.0  On |                  N/A |
    | 54%   58C    P0    N/A /  75W |   1942MiB /  1998MiB |     84%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |    0      1405     G  /usr/libexec/Xorg                              18MiB  |
    |    0      1444     G  /usr/bin/gnome-shell                           42MiB  |
    |    0      1776     G  /usr/libexec/Xorg                             114MiB  |
    |    0      1870     G  /usr/bin/gnome-shell                           87MiB  |
    |    0      6652     G  gnome-control-center                            1MiB  |
    |    0      7139     C  python3                                      1665MiB  |
    +-----------------------------------------------------------------------------+

    CUDA version:

    nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2016 NVIDIA Corporation
    Built on Tue_Jan_10_13:22:03_CST_2017
    Cuda compilation tools, release 8.0, V8.0.61
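
    A mitigation sketch (an assumption, not from the original thread): the crash occurs in evaluate_stats, where gradients are not needed, so disabling autograd during evaluation (and/or shrinking the eval batch size) frees most of the activation memory. In modern PyTorch:

    import torch

    model_pretrained.eval()
    correct = total = 0
    with torch.no_grad():  # the PyTorch 0.x-era equivalent was Variable(images, volatile=True)
        for images, labels in testloader:
            outputs = model_pretrained(images.cuda())
            _, predicted = outputs.max(1)
            total += labels.size(0)
            correct += (predicted == labels.cuda()).sum().item()
    print('top-1 accuracy: %.3f' % (correct / total))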

    opened by silakanveli 3
  • For own data.

    I have a dataset consisting of 20 classes. How can I use this code for my dataset? Thanks in advance. The data is laid out as:

    Train/
        class1/
            1.jpg
        class2/
            1.jpg

    Test/
        class1/
            1.jpg
        class2/
            1.jpg
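
    The layout above matches torchvision's ImageFolder convention, so here is a sketch of loading it and feeding the README's train_eval (the transform and batch size are illustrative assumptions):

    import torch
    from torchvision import datasets, transforms

    preprocess = transforms.Compose([
        transforms.Resize((299, 299)),  # matches inception_v3; adjust per model
        transforms.ToTensor(),
    ])
    trainloader = torch.utils.data.DataLoader(
        datasets.ImageFolder('Train', preprocess), batch_size=32, shuffle=True)
    testloader = torch.utils.data.DataLoader(
        datasets.ImageFolder('Test', preprocess), batch_size=32)
    num_classes = len(trainloader.dataset.classes)  # inferred from folder names, here 20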
    opened by redhat12345 1
Owner
Alexander Hirner
<3 smart people, smart devices>>> CTO moonvision.io (Computer Vision + Data Markets)^2
Model Zoo for AI Model Efficiency Toolkit

We provide a collection of popular neural network models and compare their floating point and quantized performance.

Qualcomm Innovation Center 137 Jan 3, 2023
A PaddlePaddle version image model zoo.

Paddle-Image-Models English | 简体中文 A PaddlePaddle version image model zoo. Install Package Install by pip: $ pip install ppim Install by wheel package

AgentMaker 131 Dec 7, 2022
The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment

Hailo Model Zoo The Hailo Model Zoo provides pre-trained models for high-performance deep learning applications. Using the Hailo Model Zoo you can mea

Hailo 50 Dec 7, 2022
Model Zoo of BDD100K Dataset

Model Zoo of BDD100K Dataset

ETH VIS Group 200 Dec 27, 2022
TensorFlow2 Classification Model Zoo, playing with TensorFlow2 on the CIFAR-10 dataset.

Training CIFAR-10 with TensorFlow2(TF2) TensorFlow2 Classification Model Zoo. I'm playing with TensorFlow2 on the CIFAR-10 dataset. Architectures LeNe

Chia-Hung Yuan 16 Sep 27, 2022
The DL Streamer Pipeline Zoo is a catalog of optimized media and media analytics pipelines.

The DL Streamer Pipeline Zoo is a catalog of optimized media and media analytics pipelines. It includes tools for downloading pipelines and their dependencies and tools for measuring their performance.

null 8 Dec 4, 2022
Classification models 1D Zoo - Keras and TF.Keras

Classification models 1D Zoo - Keras and TF.Keras This repository contains 1D variants of popular CNN models for classification like ResNets, DenseNet

Roman Solovyev 12 Jan 6, 2023
Transfer-Learn is an open-source and well-documented library for Transfer Learning.

Transfer-Learn is an open-source and well-documented library for Transfer Learning. It is based on pure PyTorch with high performance and friendly API. Our code is pythonic, and the design is consistent with torchvision. You can easily develop new algorithms, or readily apply existing algorithms.

THUML @ Tsinghua University 2.2k Jan 3, 2023
code for our paper "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer"

SHOT++ Code for our TPAMI submission "Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer" that is ext

null 75 Dec 16, 2022
Transfer Style API - An API to use with the Transfer Style App, where you can use two images and transfer the style

Transfer Style API It's an API to use with the Transfer Style App, where you can use

Brian Alejandro 1 Feb 13, 2022