Tensorflow implementation of "Learning Deep Features for Discriminative Localization"

Overview

Weakly_detector

B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning Deep Features for Discriminative Localization. Computer Vision and Pattern Recognition (CVPR), 2016. [PDF] [Project Page]

Results of Caltech256 Dataset

[Localization results on the Caltech256 dataset]

Results of Action40 Dataset

[Localization results on the Action40 dataset]

Object localization using only image-level annotation, without bounding box annotation.

  • If you want to train the model on a custom dataset, you need the pretrained VGG network weights [VGG], which are used in [code].
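
For orientation, the class activation map (CAM) this repo visualizes is just a weighted sum of the last convolutional feature maps, where the weights are the softmax weights of the class being localized. A minimal NumPy sketch of that idea (function and variable names are illustrative, not the repo's exact API):

    import numpy as np

    def class_activation_map(conv_features, class_weights, class_idx):
        # conv_features: [H, W, C] output of the last conv layer for one image
        # class_weights: [C, n_classes] weights of the dense layer that follows GAP
        # class_idx:     index of the class to localize
        w = class_weights[:, class_idx]                        # [C]
        cam = np.tensordot(conv_features, w, axes=([2], [0]))  # [H, W]
        cam -= cam.min()                                       # rescale to [0, 1] for display
        if cam.max() > 0:
            cam /= cam.max()
        return cam

The resulting H x W map is then upsampled to the input resolution and overlaid on the image, which is what the result figures above show.
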
Comments
  • IOError: [Errno 2] No such file or directory: '../data/caffe_layers_value.pickle'

    File "train.caltech.py", line 67, in <module>
        detector = Detector(weight_path, n_labels)
    File "/home/spark/Weakly_detector/src/detector.py", line 10, in __init__
        with open(weight_file_path) as f:
    IOError: [Errno 2] No such file or directory: '../data/caffe_layers_value.pickle'

    Can you tell me how to solve this? Thanks. (See the note after this item.)

    opened by rambow330 2
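    The missing pickle holds the converted pretrained VGG weights referenced in the README bullet above; it is not shipped with the repository and has to be downloaded separately into ../data/. A small, hypothetical guard that fails with a clearer message (the path follows the error above; the check itself is not part of the repo):

        import os

        weight_path = '../data/caffe_layers_value.pickle'  # path expected by train.caltech.py

        # The pickle contains the converted pretrained VGG weights linked in the README.
        if not os.path.exists(weight_path):
            raise IOError("Missing %s -- download the pretrained VGG weights referenced "
                          "in the README and place them at this path." % weight_path)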
  • Wrong activation maps.

    I am using your technique to obtain activation maps for only one class in a binary classification problem. The network I have trained is Inception v3. I cut off the last 3 inception modules and then added a new convolutional layer, a GAP layer and a softmax. The input of the new convolutional layer has shape batch_size x 17 x 17 x 768 and its output has shape batch_size x 17 x 17 x 1024, using a 3 x 3 kernel. The precision of the network is high (89%). Do you have any idea what might be wrong? I have not changed your technique.

    opened by chrisrn 0
  • Cannot obtain right activation map.

    I have trained an Inception v3 network and achieved high precision. I cut off the last 3 inception modules, so the output of the last remaining module is batch_size x 17 x 17 x 768. I then use your technique to get an activation map for each image, but the results are bad. There are only 2 output classes and I want correct activation maps only for one of them. Can you tell me what I am missing? Here is the code:

    def conv2d(input_image, name):
        filter_shape = [3, 3, 768, 1024]
        with tf.name_scope('MyConv'):
            w = tf.Variable(tf.random_uniform(filter_shape, minval=0.0, maxval=0.01), name='weights')
            b = tf.Variable(tf.zeros(filter_shape[-1]), name='biases')

            conv = tf.nn.conv2d(input_image, w, [1, 1, 1, 1], padding='SAME')
            outputs = tf.nn.bias_add(conv, b)

        return outputs


    def add_final_training_ops(net, ground_truth_input, class_count, final_tensor_name):
        # Convolutional layer
        convoluted = conv2d(net, 'NewConvLayer')

        # GAP layer
        gap = tf.reduce_mean(convoluted, [1, 2])

        # Organizing the following ops as `final_training_ops` so they're easier
        # to see in TensorBoard
        layer_name = 'final_training_ops'
        with tf.name_scope(layer_name):
            with tf.name_scope('weights'):
                layer_weights = tf.Variable(tf.random_uniform([1024, class_count], minval=0.0, maxval=0.01),
                                            name='final_weights')

            with tf.name_scope('Wx_plus_b'):
                logits = tf.matmul(gap, layer_weights)

        final_tensor = tf.nn.softmax(logits, name=final_tensor_name)

        return final_tensor, layer_weights, convoluted


    def heat_map(image, softmax_weights, labels):
        img_resize = tf.image.resize_bilinear(image, [299, 299])
        with tf.variable_scope("GAP", reuse=True):
            label_w = tf.gather(tf.transpose(softmax_weights), labels)
            label_w = tf.reshape(label_w, [-1, 1024, 1])              # [batch_size, 1024, 1]

            img_resize = tf.reshape(img_resize, [-1, 299*299, 1024])  # [batch_size, 299*299, 1024]
            activation = tf.batch_matmul(img_resize, label_w)
            activation = tf.reshape(activation, [-1, 299, 299])
        return activation

    final_tensor, softmax_weights, convoluted = add_final_training_ops(inception_7, labels,
                                                                       num_classes, 'final_result')
    prediction = tf.argmax(final_tensor, 1)
    activation_map = heat_map(convoluted, softmax_weights, labels)
    
    opened by chrisrn 0
  • Solved: ValueError: invalid literal for int() with base 10: ''

    What I did so far:

    1. I downloaded the caltech256 dataset from kaggle (link: https://www.kaggle.com/jessicali9530/caltech256)
    2. put the zip into the folder (~/WeaklyDetector/data/)
    3. extracted the zip and
    4. adjusted the variable 'dataset_path' in train.caltech.py (line 19) to (~/WeaklyDetector/data/256_ObjectCategories)

    When I ran the train.caltech.py file, the following error occurred: ValueError: invalid literal for int() with base 10: ''

    What the algorithm does:

    The script reads all folder names in the path set in step 4 and splits each name at the delimiter '.'. The numbers are stored in the variable 'labels' and the names of the corresponding classes in 'label_names' (line 32). So the script expects this folder to contain only subfolders matching the scheme '[0-9]+.[A-z_-]+' (in other words, a number, followed by a dot and a name).

    What raised the error and how to solve:

    In the folder from step 4 there is a hidden file called '.DS_Store', which macOS creates automatically, and its name does not match the expected scheme. With a few print statements I verified that the script picks it up: splitting the name as described above yields the empty label '' and the label_name 'DS_Store', and converting the empty label to an int is of course impossible.

    Just remove that file and it should work, or filter it out in the program (see the sketch after this item).

    opened by marcelTim 1
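    Following up on the '.DS_Store' report above: a small, hypothetical filter that keeps only directories whose names match the 'number.name' scheme train.caltech.py expects (the helper name and regex are illustrative, not the repo's code):

        import os
        import re

        def list_class_dirs(dataset_path):
            # Keep only Caltech256-style class folders such as '001.ak47' and skip
            # artifacts like '.DS_Store' that macOS drops into the directory.
            pattern = re.compile(r'^\d+\.[\w-]+$')
            return sorted(name for name in os.listdir(dataset_path)
                          if os.path.isdir(os.path.join(dataset_path, name))
                          and pattern.match(name))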
  • Training Loss: nan

    I tried training a model with this code, but I keep getting 'Training Loss: nan'. I trained for 17 epochs before stopping because of this. Is the model not learning, or could it be a different issue? (A sketch of common mitigations follows this item.)

    opened by uridah 0
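    A NaN training loss usually points to exploding gradients or a log(0) inside the loss rather than to the CAM technique itself. A hedged sketch of the usual first-line mitigations, assuming a plain softmax cross-entropy setup with hypothetical logits and labels tensors (this is not the repo's exact training code):

        import tensorflow as tf

        def build_train_op(logits, labels, learning_rate=1e-4, clip_norm=5.0):
            # sparse_softmax_cross_entropy_with_logits computes the log-softmax
            # internally, avoiding the log(0) a hand-rolled cross-entropy can hit.
            loss = tf.reduce_mean(
                tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels))

            # A lower learning rate and gradient clipping are common guards against NaNs.
            optimizer = tf.train.AdamOptimizer(learning_rate)
            grads_and_vars = optimizer.compute_gradients(loss)
            clipped = [(tf.clip_by_norm(g, clip_norm), v)
                       for g, v in grads_and_vars if g is not None]
            return loss, optimizer.apply_gradients(clipped)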
  • TypeError: 'map' object is not subscriptable

    @jazzsaxmafia Hello,

    :\Tesnsorflow\Weaklydetector\src>python train.caltech.py
    Traceback (most recent call last):
      File "train.caltech.py", line 44, in <module>
        image_paths_train = np.hstack(map(lambda one_class: one_class[:-10], image_paths_per_label))
      File "C:\Program Files\Anaconda3\lib\site-packages\numpy\core\shape_base.py", line 275, in hstack
        arrs = [atleast_1d(_m) for _m in tup]
      File "C:\Program Files\Anaconda3\lib\site-packages\numpy\core\shape_base.py", line 275, in <listcomp>
        arrs = [atleast_1d(_m) for _m in tup]
      File "train.caltech.py", line 44, in <lambda>
        image_paths_train = np.hstack(map(lambda one_class: one_class[:-10], image_paths_per_label))
    TypeError: 'map' object is not subscriptable


    :\Tesnsorflow\Weaklydetector\src>python train.caltech.py
    Traceback (most recent call last):
      File "train.caltech.py", line 40, in <module>
        image_paths_train = np.hstack(list(map(lambda one_class: one_class[:-10], image_paths_per_label)))
      File "train.caltech.py", line 40, in <lambda>
        image_paths_train = np.hstack(list(map(lambda one_class: one_class[:-10], image_paths_per_label)))
    TypeError: 'map' object is not subscriptable

    Any idea what causes this? Thank you very much. (A sketch of the Python 3 fix follows this item.)

    opened by YangBain 9
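    The error comes from Python 3, where map() returns a lazy iterator: slicing it with one_class[:-10] fails, and wrapping only the outer map in list() does not help if the per-label entries were themselves built with map(). A self-contained sketch of the fix (the file names and the 30-images-per-class split are made up for illustration):

        import numpy as np

        # Stand-in for image_paths_per_label; in the real script these entries may be
        # map objects, which is what triggers the TypeError when they are sliced.
        image_paths_per_label = [
            ["%03d_%04d.jpg" % (label, i) for i in range(30)] for label in range(3)
        ]

        # Fix: materialize each entry as a list before slicing off the last 10 paths.
        image_paths_train = np.hstack(
            [list(one_class)[:-10] for one_class in image_paths_per_label]
        )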
  • All-in-one Jupyter notebook for Caltech dataset

    A Jupyter notebook bundling the main files needed to train the VGG model on the Caltech256 dataset. The notebook includes code updates (TF v1.2) and documentation, among other changes.

    opened by MasoodK 6
Owner
Taeksoo Kim

Similar projects

TensorFlow Ranking is a library for Learning-to-Rank (LTR) techniques on the TensorFlow platform

null 2.6k Jan 4, 2023
Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!

Peter Lin 6.5k Jan 4, 2023
Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!

Robust Video Matting (RVM) English | 中文 Official repository for the paper Robust High-Resolution Video Matting with Temporal Guidance. RVM is specific

flow-dev 2 Aug 21, 2022
Implementation of Restricted Boltzmann Machine (RBM) and its variants in Tensorflow

xRBM Library Implementation of Restricted Boltzmann Machine (RBM) and its variants in Tensorflow Installation Using pip: pip install xrbm Examples Tut

Omid Alemi 55 Dec 29, 2022
Functional TensorFlow Implementation of Singular Value Decomposition for paper Fast Graph Learning

tf-fsvd TensorFlow Implementation of Functional Singular Value Decomposition for paper Fast Graph Learning with Unique Optimal Solutions Cite If you f

Sami Abu-El-Haija 14 Nov 25, 2021
StyleGAN2 - Official TensorFlow Implementation

NVIDIA Research Projects 10.1k Dec 28, 2022
An efficient and effective learning to rank algorithm by mining information across ranking candidates. This repository contains the tensorflow implementation of SERank model. The code is developed based on TF-Ranking.

SERank An efficient and effective learning to rank algorithm by mining information across ranking candidates. This repository contains the tensorflow

Zhihu 44 Oct 20, 2022
Implementation of Perceiver, General Perception with Iterative Attention in TensorFlow

Perceiver This Python package implements Perceiver: General Perception with Iterative Attention by Andrew Jaegle in TensorFlow. This model builds on t

Rishit Dagli 84 Oct 15, 2022
Minimal implementation of Denoised Smoothing: A Provable Defense for Pretrained Classifiers in TensorFlow.

Denoised-Smoothing-TF Minimal implementation of Denoised Smoothing: A Provable Defense for Pretrained Classifiers in TensorFlow. Denoised Smoothing is

Sayak Paul 19 Dec 11, 2022
Unofficial Implementation of MLP-Mixer in TensorFlow

mlp-mixer-tf Unofficial Implementation of MLP-Mixer [abs, pdf] in TensorFlow. Note: This project may have some bugs in it. I'm still learning how to i

Rishabh Anand 24 Mar 23, 2022
Tensorflow implementation for Self-supervised Graph Learning for Recommendation

If the compilation is successful, the evaluator of cpp implementation will be called automatically. Otherwise, the evaluator of python implementation will be called.

null 152 Jan 7, 2023
Minimal implementation of PAWS (https://arxiv.org/abs/2104.13963) in TensorFlow.

PAWS-TF Implementation of Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples (PAWS)

Sayak Paul 43 Jan 8, 2023
Unofficial TensorFlow implementation of the Keyword Spotting Transformer model

Keyword Spotting Transformer This is the unofficial TensorFlow implementation of the Keyword Spotting Transformer model. This model is used to train o

Intelligent Machines Limited 8 May 11, 2022
A tensorflow implementation of GCN-LPA

GCN-LPA This repository is the implementation of GCN-LPA (arXiv): Unifying Graph Convolutional Neural Networks and Label Propagation Hongwei Wang, Jur

Hongwei Wang 83 Nov 28, 2022
Official Tensorflow implementation of "M-LSD: Towards Light-weight and Real-time Line Segment Detection"

M-LSD: Towards Light-weight and Real-time Line Segment Detection Official Tensorflow implementation of "M-LSD: Towards Light-weight and Real-time Line

NAVER/LINE Vision 357 Jan 4, 2023
Unofficial TensorFlow implementation of Protein Interface Prediction using Graph Convolutional Networks.

[TensorFlow] Protein Interface Prediction using Graph Convolutional Networks Unofficial TensorFlow implementation of Protein Interface Prediction usin

YeongHyeon Park 9 Oct 25, 2022
Tensorflow implementation of MIRNet for Low-light image enhancement

MIRNet Tensorflow implementation of the MIRNet architecture as proposed by Learning Enriched Features for Real Image Restoration and Enhancement. Lanu

Soumik Rakshit 91 Jan 6, 2023
Tensorflow python implementation of "Learning High Fidelity Depths of Dressed Humans by Watching Social Media Dance Videos"

Learning High Fidelity Depths of Dressed Humans by Watching Social Media Dance Videos This repository is the official tensorflow python implementation

Yasamin Jafarian 287 Jan 6, 2023
Tensorflow implementation of Swin Transformer model.

Swin Transformer (Tensorflow) Tensorflow reimplementation of Swin Transformer model. Based on Official Pytorch implementation. Requirements tensorflow

null 167 Jan 8, 2023