Implementation of "Learning Multi-Granular Hypergraphs for Video-Based Person Re-Identification"

Overview

hypergraph_reid

Implementation of "Learning Multi-Granular Hypergraphs for Video-Based Person Re-Identification" If you find this help your research, please cite

@inproceedings{DBLP:conf/cvpr/YanQC0ZT020,
  author    = {Yichao Yan and
               Jie Qin and
               Jiaxin Chen and
               Li Liu and
               Fan Zhu and
               Ying Tai and
               Ling Shao},
  title     = {Learning Multi-Granular Hypergraphs for Video-Based Person Re-Identification},
  booktitle = {2020 {IEEE/CVF} Conference on Computer Vision and Pattern Recognition,
               {CVPR} 2020, Seattle, WA, USA, June 13-19, 2020},
  pages     = {2896--2905},
  publisher = {{IEEE}},
  year      = {2020}
}

Installation

We use Python 3.7 and PyTorch 0.4.
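
A quick environment check (illustrative, not part of the repo):

    # Sanity-check the interpreter and torch versions; the code targets
    # Python 3.7 and the legacy PyTorch 0.4 line.
    import sys
    import torch

    print(sys.version_info[:2])  # expect (3, 7)
    print(torch.__version__)     # expect 0.4.x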

Data preparation

All experiments are done on MARS, as it is the largest dataset available to date for video-based person reID. Please follow deep-person-reid to prepare the data. The instructions are copied here:

  1. Create a directory named mars/ under data/.
  2. Download dataset to data/mars/ from http://www.liangzheng.com.cn/Project/project_mars.html.
  3. Extract bbox_train.zip and bbox_test.zip.
  4. Download the split information from https://github.com/liangzheng06/MARS-evaluation/tree/master/info and put info/ in data/mars (we follow the standard split in [8]). The data structure should look like:
mars/
    bbox_test/
    bbox_train/
    info/
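
Before training, a small check like the following (illustrative, not part of the repo; it assumes data/ sits next to the training scripts) can confirm the layout is in place:

    import os

    # The three directories listed above must exist under data/mars/.
    for sub in ("bbox_train", "bbox_test", "info"):
        path = os.path.join("data", "mars", sub)
        if not os.path.isdir(path):
            raise FileNotFoundError("missing " + path + "; see the preparation steps above")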

Usage

To train the model, please run

sh run_hypergraphsage_part.sh

Performance

Trained model [Google]

The shared trained model achieves 85.6% mAP and 89.5% rank-1 accuracy. According to my training log, the best model achieves 86.2% mAP and 90.0% rank-1 accuracy; reproducing that number may require some hyperparameter tuning.
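
For reference, the sketch below shows how mAP and rank-1 (CMC) are conventionally computed for reID from a query-gallery distance matrix. It is a minimal illustration under assumed numpy inputs (distmat, q_pids, g_pids, q_camids, g_camids), not the repo's evaluation code:

    import numpy as np

    def evaluate(distmat, q_pids, g_pids, q_camids, g_camids):
        indices = np.argsort(distmat, axis=1)           # gallery sorted by distance
        matches = (g_pids[indices] == q_pids[:, None])  # identity hits per rank
        aps, rank1_hits = [], 0
        for i in range(distmat.shape[0]):
            # Standard MARS protocol: drop gallery entries that share both
            # the query's identity and its camera.
            remove = (g_pids[indices[i]] == q_pids[i]) & (g_camids[indices[i]] == q_camids[i])
            match = matches[i][~remove]
            if not match.any():
                continue  # no valid gallery sample for this query
            rank1_hits += int(match[0])
            # Average precision over the ranked list for this query.
            cum_hits = match.cumsum()
            precision = cum_hits / (np.arange(len(match)) + 1.0)
            aps.append((precision * match).sum() / match.sum())
        return float(np.mean(aps)), rank1_hits / len(aps)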

Acknowledgements

Our code builds on Video-Person-ReID (https://github.com/jiyanggao/Video-Person-ReID).

Comments
  • Loss converges to 4.4, mAP only reaches 60.0%, rank-1 only reaches 73.3%; learning rate is 0.000075 and step size is 100

    I use a GeForce RTX 2080 with 7 GB of memory. The loss converges to 4.4, mAP only reaches 60.0%, and rank-1 only reaches 73.3%. The batch size is 8, the learning rate is 0.000075, and the step size is 100. Because there was not enough memory during testing, only part of the test set was loaded. Do you think the issue is the parameter settings or loading too little test data? I adjusted the parameters several times, but the results were not satisfactory. Thank you!

    opened by Seven-gcc 14
  • ValueError: not enough values to unpack (expected 6, got 5)

    Excuse me. When I was testing, my machine ran out of memory, so I changed the loader mode of the test dataset to random mode, and the program raised: ValueError: not enough values to unpack (expected 6, got 5). Looking forward to your answer. Thank you very much!

    opened by Seven-gcc 7
  • Code for evaluating the importance of each hyperedge

    I am very interested in this paper. When defining layers1, you only use the BatchedGraphSAGEDynamicRangeMean1 class, while BatchedGraphSAGEDynamicMean1, BatchedGraphSAGEMean1, BatchedGraphSAGEMean1Temporal, and BatchedGAT_cat1 are commented out. Are those classes used anywhere? I also have a question about the following code (a cleaned-up, self-contained sketch of its neighbor-selection pattern appears after this comment list):

    for i in range(int(N/p)):
        idx_start = max(0, i-t)
        idx_end = min(i+t+1, int(N/p))
        tmp_x = x[:, idx_start*p:idx_end*p, :]
        # cosine similarity between features
        dis = NearestConvolution.cos_dis(tmp_x)
        if i == 0:
            tk = min(dis.shape[2], self.kn)
        # indices of the top-k nearest neighbors
        _, idx = torch.topk(dis, tk, dim=2)
        k_nearest = torch.stack([torch.stack([tmp_x[j, idx[j, i]]
                                              for i in range(p*(idx_end-idx_start))], dim=0)
                                 for j in range(b)], dim=0)  # (b, x*p, kn, d)
        k_nearest_list.append(k_nearest[:, p*(i-idx_start):p*(i-idx_start+1), :])
    k_nearest = torch.cat(k_nearest_list, dim=1)  # (b, N, kn, d)
    # average the features of all nodes in the hyperedge except node v_i,
    # and take the result as the hyperedge feature
    x_neib = k_nearest[:, :, 1:, :].contiguous()
    x_neib = x_neib.mean(dim=2)
    h_k = torch.cat((self.W_x(x), self.W_neib(x_neib)), 2)
    h_k = F.normalize(h_k, dim=2, p=2)
    h_k = F.relu(h_k)
    if self.use_bn:
        h_k = self.bn(h_k.permute(0, 2, 1).contiguous())
        h_k = h_k.permute(0, 2, 1)
    return h_k

    Where is the code that evaluates the importance of each hyperedge? Thank you for your reply.

    opened by 13121283123 4
  • About the mutual information

    Hi! As shown in Table 1, the mutual-information module clearly improves performance, but I can't find its implementation in your code. Since general mutual-information estimators only give a lower bound on the true mutual information, I don't see how you minimize the mutual information between different parts. Do you use a method like CLUB to model an upper bound on the mutual information? Looking forward to your reply, thanks!

    opened by peterzpy 2
  • How to evaluate on other datasets and save results?

    Hello, I want to check the performance of this method on my own dataset, but the repo only shows how to train. How can I evaluate with the trained model, and how can I save the results? Thank you!

    opened by Huhaowen0130 1
  • How to create "mars_query.txt" and "mars_gallery.txt"?

    Hello, I want to save the ReID results with the "save_results" function, but the MARS folder doesn't seem to include the two txt files it needs. Could you tell me how to create them? Thank you!

    opened by Huhaowen0130 0
  • RuntimeError: cublas runtime error

    Hello, an error occurred when I ran "sh run_hypergraphsage_part.sh". It seems to be caused by the cudatoolkit version. I'm using Python 3.6.13, torch 0.4.0, and CUDA 9.0. Could you share your environment? Thank you!

    opened by Huhaowen0130 0
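
For readers puzzling over the neighbor-selection code quoted in the "Code for evaluating the importance of each hyperedge" comment above, here is a minimal, self-contained sketch of the pattern it implements: cosine similarity between node features followed by top-k selection to gather each node's hyperedge members. It is an illustration under assumed shapes, not the repo's NearestConvolution class:

    import torch
    import torch.nn.functional as F

    def knn_hyperedges(x, k):
        """x: (b, N, d) node features. Returns the features of each node's
        k most cosine-similar nodes, shape (b, N, k, d); slot 0 is the node
        itself, since self-similarity is maximal."""
        x_norm = F.normalize(x, dim=2, p=2)              # unit-length features
        sim = torch.bmm(x_norm, x_norm.transpose(1, 2))  # (b, N, N) cosine similarity
        _, idx = torch.topk(sim, k, dim=2)               # (b, N, k) neighbor indices
        b, N, d = x.shape
        idx_exp = idx.unsqueeze(-1).expand(b, N, k, d)   # broadcast over features
        x_exp = x.unsqueeze(1).expand(b, N, N, d)        # candidate rows per node
        return torch.gather(x_exp, 2, idx_exp)           # (b, N, k, d)

The hyperedge feature excluding the node itself, matching x_neib in the quoted code, would then be knn_hyperedges(x, k)[:, :, 1:, :].mean(dim=2).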