A PyTorch implementation of Compact Bilinear Pooling.

Overview

CompactBilinearPooling-Pytorch

A PyTorch implementation of Compact Bilinear Pooling, adapted from tensorflow_compact_bilinear_pooling.

Prerequisites

Install pytorch_fft with pip:

pip install pytorch_fft

Usage

import torch
from torch import nn
from torch.autograd import Variable
from CompactBilinearPooling import CompactBilinearPooling

bottom1 = Variable(torch.randn(128, 512, 14, 14)).cuda()
bottom2 = Variable(torch.randn(128, 512, 14, 14)).cuda()

layer = CompactBilinearPooling(512, 512, 8000)
layer.cuda()
layer.train()

out = layer(bottom1, bottom2)
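
Assuming the layer sum-pools over spatial locations like the TensorFlow original (an assumption about this port), out should be a 2-D tensor:

print(out.size())  # expected: torch.Size([128, 8000]) under that assumption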

References

Yang Gao, et al. "Compact Bilinear Pooling." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016).
Akira Fukui, et al. "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding." arXiv preprint arXiv:1606.01847 (2016).
Comments
  • Complex product should be used in eltwise product

    I was investigating differences between the results of this package and the original Caffe version. Findings so far:

    1. Rfft can be used directly; it is provided by pytorch_fft
    2. A complex product should be used here: https://github.com/DeepInsight-PCALab/CompactBilinearPooling-Pytorch/blob/master/CompactBilinearPooling.py#L91; tf.multiply performs a complex product, and so does the original Caffe version
    3. The output should be multiplied by output_dim to achieve full equivalence with the original Caffe version (see the sketch below)
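
    A minimal sketch of findings 2 and 3 applied to the element-wise step, reusing the variable names from the repo's forward() (sketch_1, sketch_2, and output_dim are assumed to be in scope):

        import torch
        from torch.autograd import Variable
        import pytorch_fft.fft.autograd as afft

        zeros = Variable(torch.zeros(sketch_1.size())).cuda()
        fft1_real, fft1_imag = afft.Fft()(sketch_1, zeros)
        fft2_real, fft2_imag = afft.Fft()(sketch_2, zeros)

        # proper complex product: (a+ib)(x+iy) = (ax - by) + i(ay + bx)
        fft_product_real = fft1_real * fft2_real - fft1_imag * fft2_imag
        fft_product_imag = fft1_real * fft2_imag + fft1_imag * fft2_real

        # finding 3: scale by output_dim for equivalence with the Caffe version
        cbp_flat = afft.Ifft()(fft_product_real, fft_product_imag)[0] * output_dim
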
    opened by vadimkantorov 6
  • No module named _th_fft

    Hi, thanks for your work! It looks like there is no module named _th_fft when following the commands above. Do you have any idea about it? Thanks a lot~

        dl@dl:~/wxptest$ sudo python wxp.py
        Traceback (most recent call last):
          File "wxp.py", line 3, in <module>
            from CompactBilinearPooling import CompactBilinearPooling
          File "/home/dl/wxptest/CompactBilinearPooling.py", line 6, in <module>
            import pytorch_fft.fft.autograd as afft
          File "/home/dl/wxptest/pytorch_fft/__init__.py", line 1, in <module>
            from . import fft
          File "/home/dl/wxptest/pytorch_fft/fft/__init__.py", line 1, in <module>
            from .fft import *
          File "/home/dl/wxptest/pytorch_fft/fft/fft.py", line 3, in <module>
            from .._ext import th_fft
          File "/home/dl/wxptest/pytorch_fft/_ext/th_fft/__init__.py", line 3, in <module>
            from ._th_fft import lib as _lib, ffi as _ffi
        ImportError: No module named _th_fft

    opened by kopingwu 4
  • pytorch_fft install

    When I install pytorch_fft, the problem is:

        error in pytorch_fft setup command: 'C:\Users\14375\AppData\Local\Temp\pip-install-6vh12s8w\pytorch-fft\build.py:ffi'
        must be of the form 'path/build.py:ffi_variable'
        Including CUDA code.
        ----------------------------------------
        ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

    opened by 1437539743 3
  • Possible error in computation

    According to the existing code beginning here https://github.com/DeepInsight-PCALab/CompactBilinearPooling-Pytorch/blob/56e23d01e687dad067dfeb9a07bb4d012430e1df/CompactBilinearPooling.py#L92, the product of two complex numbers (a+ib) and (x+iy) is computed as (ax-by) + i(ax+by):

        temp_rr, temp_ii = fft1_real.mul(fft2_real), fft1_imag.mul(fft2_imag)
        fft_product_real = temp_rr - temp_ii
        fft_product_imag = temp_rr + temp_ii

        cbp_flat = afft.Ifft()(fft_product_real, fft_product_imag)[0]
    

    However, the correct product is (ax-by) + i(ay+bx), so the imaginary part computed above is wrong.

    Correct me if I am wrong.
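
    A quick numerical check with Python's built-in complex type confirms the point:

        a, b, x, y = 1.0, 2.0, 3.0, 4.0
        z = complex(a, b) * complex(x, y)
        assert z == complex(a*x - b*y, a*y + b*x)  # correct: (ax-by) + i(ay+bx)
        assert z != complex(a*x - b*y, a*x + b*y)  # what the snippet above computes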

    opened by dormantrepo 1
  • NotImplementedError in fft

    I am getting this error while running the sample test given in the README.

      File "CompactBilinearPooling/test.py", line 14, in <module>
        out = layer(bottom1, bottom2) 
      File "anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "CompactBilinearPooling/CompactBilinearPooling.py", line 91, in forward
        fft1_real, fft1_imag = afft.Fft()(sketch_1, Variable(torch.zeros(sketch_1.size())).cuda())
      File "anaconda3/lib/python3.6/site-packages/pytorch_fft/fft/autograd.py", line 16, in forward
        return fft(X_re, X_im)
      File "anaconda3/lib/python3.6/site-packages/pytorch_fft/fft/fft.py", line 25, in fft
        raise NotImplementedError
    NotImplementedError
    

    This is due to the following check in fft.py:

        if 'Float' in type(X_re).__name__ :
            f = th_fft.th_Float_fft1
        elif 'Double' in type(X_re).__name__: 
            f = th_fft.th_Double_fft1
        else: 
            raise NotImplementedError
        return _fft(X_re, X_im, f, 1)
    

    Because the inputs to fft() are plain Tensor objects in recent PyTorch, type(sketch_1).__name__ contains neither 'Float' nor 'Double', so the check falls through to NotImplementedError. Any help is appreciated.
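
    One possible workaround (a sketch, not part of this repo): on PyTorch 0.4-1.7 the FFT step can use the native torch.rfft/torch.irfft, removing the pytorch_fft dependency entirely:

        import torch

        def cbp_fft_product(sketch_1, sketch_2, output_dim):
            # circular convolution of the two count sketches via the real FFT;
            # torch.rfft exists in PyTorch 0.4-1.7 (it was removed in 1.8)
            f1 = torch.rfft(sketch_1, 1)  # (..., output_dim // 2 + 1, 2)
            f2 = torch.rfft(sketch_2, 1)
            # element-wise complex product: (a+ib)(x+iy) = (ax - by) + i(ay + bx)
            re = f1[..., 0] * f2[..., 0] - f1[..., 1] * f2[..., 1]
            im = f1[..., 0] * f2[..., 1] + f1[..., 1] * f2[..., 0]
            prod = torch.stack([re, im], dim=-1)
            return torch.irfft(prod, 1, signal_sizes=(output_dim,))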

    opened by gullalc 1
  • No parameters for optimizer

    Thanks for your work. When I apply your code, model.parameters() is empty and creating an optimizer fails. The model is CompactBilinearPooling(1536, 1536, 400). Am I doing something wrong? Thanks for answering.
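
    This is expected: the layer is parameter-free by design, since the count-sketch vectors h and s are fixed at construction and never trained. A sketch (the conv backbone here is hypothetical) of building an optimizer when the layer sits inside a larger model:

        import torch
        from torch import nn
        from CompactBilinearPooling import CompactBilinearPooling

        # hypothetical backbone producing the 1536-channel features
        backbone = nn.Conv2d(3, 1536, kernel_size=3, padding=1).cuda()
        cbp = CompactBilinearPooling(1536, 1536, 400).cuda()

        # cbp contributes no parameters; optimize the backbone (and any head) instead
        optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3)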

    opened by bupt-zsp 1
  • How's the output

    Thank you for your work! I have a question: if x = (4, 512, 64, 64) and y = (4, 512, 64, 64), and a is the result of applying CompactBilinearPooling to x and y, what is the shape of a? And how can I make a's shape (batch_size, channels, height, width), the same as x and y?
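
    Assuming the layer sum-pools over spatial positions like the TensorFlow original, a has shape (4, 8000) for output_dim = 8000. If this port also exposes the original's sum_pool flag (an assumption about its API), the per-location output could be kept and permuted back to NCHW:

        # assumes a sum_pool flag mirroring the TensorFlow original (may not exist here)
        layer = CompactBilinearPooling(512, 512, 8000, sum_pool=False)
        a = layer(x, y)            # (4, 64, 64, 8000): one sketch per spatial location
        a = a.permute(0, 3, 1, 2)  # -> (4, 8000, 64, 64), i.e. (N, C, H, W)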

    opened by Paranoidv 1
  • The code seems to differ from the setting in the paper

    rand_h_1 = np.random.randint(output_dim, size=self.input_dim1) and rand_s_1 = 2 * np.random.randint(2, size=self.input_dim1) - 1 in the code seem to differ from the setting in the paper. In the paper, h_k is drawn from {1, 2, ..., k} and s_k from {+1, -1}.
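
    For what it's worth, the two settings appear equivalent up to 0-based indexing: randint(output_dim) draws uniformly from {0, ..., output_dim - 1}, the 0-based version of the paper's 1-based range, and 2 * randint(2) - 1 does yield {+1, -1}:

        import numpy as np

        output_dim, input_dim1 = 8000, 512
        rand_h_1 = np.random.randint(output_dim, size=input_dim1)  # {0, ..., output_dim - 1}
        rand_s_1 = 2 * np.random.randint(2, size=input_dim1) - 1   # {-1, +1}
        assert rand_h_1.min() >= 0 and rand_h_1.max() < output_dim
        assert set(np.unique(rand_s_1)) <= {-1, 1}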

    opened by roseif 0
  • Multi-GPU error: RuntimeError: arguments are located on different GPUs

    With multiple GPUs, it outputs:

    RuntimeError: arguments are located on different GPUs at /pytorch/torch/lib/THC/generic/THCTensorMathBlas.cu:236
    

    How to fix it?
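
    A common cause of this error (an assumption, not confirmed for this repo) is that the random sketch matrices live on a fixed GPU instead of being registered as module buffers, so nn.DataParallel never replicates them to the other devices. A minimal sketch of the pattern, with hypothetical names:

        import torch
        from torch import nn

        class SketchHolder(nn.Module):
            """Illustrative only: the buffer-registration pattern for DataParallel."""
            def __init__(self, sketch_matrix1):
                super(SketchHolder, self).__init__()
                # buffers are replicated to every GPU by nn.DataParallel
                self.register_buffer('sparse_sketch_matrix1', sketch_matrix1)

            def forward(self, bottom1):
                # defensively match the device of the incoming batch
                m = self.sparse_sketch_matrix1.to(bottom1.device)
                return bottom1.matmul(m)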

    opened by JingyunLiang 3