In this project we use both ResNet and a self-attention layer for cat, dog and flower classification.

Overview

cdf_att_classification

classes = {0: 'cat', 1: 'dog', 2: 'flower'}

In this project we use both ResNet and a self-attention layer for cdf-classification. Specifically, we use a ResNet-style Convolutional Neural Network (CNN), trained on the Dogcatflower_2 dataset (details below), to extract low-level features.
We take inspiration from the self-attention mechanism, a prominent method in the computer-vision domain. We also use the Grad-CAM algorithm to visualize the gradients of back-propagation through the pretrained model, to help understand the network. The code is released for academic research use only. For commercial use, please contact [[email protected]].
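To make the overview concrete, here is a minimal sketch of one common way to put a self-attention layer on top of CNN feature maps (a SAGAN-style layer; the exact layer used in this repo may differ, so treat this as an illustration rather than the repo's implementation):

import torch
from torch import nn

class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a CNN feature map."""
    def __init__(self, in_channels):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw)
        v = self.value(x).flatten(2)                  # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x  # starts as identity, learns to attend

Starting gamma at zero makes the layer an identity mapping at initialization, so it can be inserted into a pretrained ResNet without disturbing its features.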

Installation

Clone this repo.

git clone https://github.com/Alan-lab/cdf_classification
cd cdf_classification/

This code requires Python 3.7, PyTorch, OpenCV (cv2), and d2l. Please install them.
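For example, with pip (these are the standard PyPI package names for the listed requirements; pin versions to match your Python/CUDA setup):

pip install torch opencv-python d2l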

Dataset Preparation

For cdf_classification, the datasets must be downloaded beforehand. Please download them from the respective webpages, and cite them if you use the data.

Preparing the Cat and Dog dataset. The dataset can be downloaded here.

Preparing the Flower dataset. The dataset can be downloaded here.

You can also download the Dogcatflower_2 dataset (built from the datasets above) using the following link:

Link:https://pan.baidu.com/s/1ZcP_isbbRQBq9BHU6p_VtQ

key:oz7z
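If you assemble the data yourself, the expected layout is one folder per class under train/ and test/ splits. The tree below is inferred from the class names and the splits used later; the linked repository (https://github.com/Alan-lab/data/Dogcatflower_2) is authoritative.

Dogcatflower_2/
  train/
    cat/
    dog/
    flower/
  test/
    cat/
    dog/
    flower/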

Training New Models

  1. Prepare your own dataset like this (https://github.com/Alan-lab/data/Dogcatflower_2; see the layout sketch above).

  2. Training:

python main.py

model.pth will be saved in the folder ./cdf_classification.

If av_test_acc < 0.75, model.pth will not be saved (see d2l.train_ch6). A sketch of this gating is shown below.
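The actual gating lives in main.py (built around d2l.train_ch6); this is an assumed reconstruction of the flow, using the 0.75 threshold from above:

import torch
from d2l import torch as d2l

ACC_THRESHOLD = 0.75  # models below this average test accuracy are discarded

def train_and_checkpoint(net, train_iter, test_iter, num_epochs, lr, device):
    # Train with the d2l helper, then gate the checkpoint on test accuracy.
    d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, device)
    av_test_acc = d2l.evaluate_accuracy_gpu(net, test_iter, device)
    if av_test_acc >= ACC_THRESHOLD:
        torch.save(net.state_dict(), './cdf_classification/model.pth')
    else:
        print(f'av_test_acc={av_test_acc:.3f} < {ACC_THRESHOLD}, not saving')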

  3. Predict

Prepare your validation dataset like this (https://github.com/Alan-lab/data/catsdogsflowers/valid1).

python Predict/predict.py
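A minimal sketch of the inference step (assumed flow; the authoritative logic is in Predict/predict.py, and the 224x224 input size is an assumption):

import cv2
import torch

classes = {0: 'cat', 1: 'dog', 2: 'flower'}

def predict_image(model_path, image_path, device='cpu'):
    # Assumes model.pth stores the whole network; if it stores a state_dict,
    # rebuild the net from main.py and call load_state_dict instead.
    net = torch.load(model_path, map_location=device)
    net.eval()
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(cv2.resize(img, (224, 224))).float()
    x = x.permute(2, 0, 1).unsqueeze(0) / 255.0  # (1, 3, 224, 224)
    with torch.no_grad():
        logits = net(x.to(device))
    return classes[int(logits.argmax(dim=1))]

print(predict_image('./cdf_classification/model.pth', 'path/to/image.jpg'))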

  4. Class Activation Map: the response strength of the feature map is mapped back onto the original image, letting readers see more intuitively which regions drive the model's prediction. Prepare your picture like this (https://github.com/Alan-lab/data/Dogcatflower/test/flower/flower.1501.jpg).

python Viewer/Grad_CAM.py
More details can be found in the corresponding folders.
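For reference, a minimal Grad-CAM sketch (the authoritative implementation is Viewer/Grad_CAM.py; the target layer passed in, e.g. net.layer4 for a torchvision-style ResNet, is only an example):

import torch

def grad_cam(net, x, target_layer, class_idx=None):
    # Hook the target conv block to capture its activations and gradients.
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gin, gout: grads.append(gout[0]))
    logits = net(x)                      # x: (1, 3, H, W)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    net.zero_grad()
    logits[0, class_idx].backward()      # gradients of the target class score
    h1.remove(); h2.remove()
    # Weight each feature map by its average gradient, then ReLU + normalize.
    weights = grads[0].mean(dim=(2, 3), keepdim=True)
    cam = torch.relu((weights * feats[0]).sum(dim=1))[0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.detach().cpu().numpy()

To overlay with cv2, resize the returned map to the image size, apply a colormap (e.g. cv2.COLORMAP_JET), and blend it with the original picture via cv2.addWeighted.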

Experimental Results

  1. Performance (accuracy, %)

dataset                   Cat acc   Dog acc   Flower acc
Dogcatflower_2_train      96.2      88.7      93.6
Dogcatflower_2_test       72.7      69.2      89.7
catsdogsflowers_valid1    75.1      76.9      91.4
catsdogsflowers_valid2    75.5      73.5      92.9

  2. Visualization

Positive samples: fig1, fig2, fig3

Negative sample: fig4

Multi-attention

show_attention

Acknowledgments

This work is mainly supported by the Dive into Deep Learning course (https://courses.d2l.ai/zh-v2/) and CSDN.

Contributions

If you have any questions, comments, or bug reports, feel free to open a GitHub issue, submit a pull request, or e-mail the author Lailanqing ([email protected]).
