Code for the paper Interact, Embed, and EnlargE (IEEE): Boosting Modality-specific Representations for Multi-Modal Person Re-identification.

Overview

Interact, Embed, and EnlargE (IEEE): Boosting Modality-specific Representations for Multi-Modal Person Re-identification

We provide the code for reproducing the results of our paper Interact, Embed, and EnlargE (IEEE): Boosting Modality-specific Representations for Multi-Modal Person Re-identification.

Installation

  1. Basic environment: Python 3.6, PyTorch 1.8.0, CUDA 11.1.

  2. Our code structure is based on Torchreid (more details at https://github.com/KaiyangZhou/deep-person-reid ; install the packages listed in the Torchreid requirements).

# create environment
cd AAAI2022_IEEE/
conda create --name ieeeReid python=3.6
conda activate ieeeReid

# install dependencies
# make sure `which python` and `which pip` point to the correct path
pip install -r requirements.txt

# install torch and torchvision (select the proper cuda version to suit your machine)
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

# install torchreid (no need to rebuild it if you modify the source code)
python setup.py develop
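
After installation, an optional sanity check like the one below confirms that the expected PyTorch build and the torchreid package are visible from the active environment:

# optional: verify the environment (run inside the ieeeReid environment)
import torch
import torchreid

print("torch:", torch.__version__)              # expect 1.8.0
print("cuda available:", torch.cuda.is_available())
print("torchreid:", torchreid.__version__)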

Getting started

  1. Use the settings in im_r50_softmax_256x128_amsgrad_RGBNT_ieee_part_margin.yaml to reproduce the results of the full IEEE model:

    python ./scripts/mainMultiModal.py --config-file ./configs/im_r50_softmax_256x128_amsgrad_RGBNT_ieee_part_margin.yaml --seed 40
  2. You can run the other methods with the following configuration files (a small driver sketch for running them in sequence follows this list):

    # MLFN
    ./configs/im_r50_softmax_256x128_amsgrad_RGBNT_mlfn.yaml
    
    # HACNN
    ./configs/im_r50_softmax_256x128_amsgrad_RGBNT_hacnn.yaml
    
    # OSNet
    ./configs/im_r50_softmax_256x128_amsgrad_RGBNT_osnet.yaml
    
    # HAMNet
    ./configs/im_r50_softmax_256x128_amsgrad_RGBNT_hamnet.yaml
    
    # PFNet
    ./configs/im_r50_softmax_256x128_amsgrad_RGBNT_hamnet.yaml
    
    # full IEEE
    ./configs/im_r50_softmax_256x128_amsgrad_RGBNT_ieee_part_margin.yaml
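
If you want to run several of these back to back, a minimal driver sketch like the following works; it assumes the script and config paths listed above and reuses --seed 40 from the example command:

# illustrative driver, not part of the repo: run a set of configs in sequence
import subprocess
import sys

configs = [
    "./configs/im_r50_softmax_256x128_amsgrad_RGBNT_mlfn.yaml",
    "./configs/im_r50_softmax_256x128_amsgrad_RGBNT_hacnn.yaml",
    "./configs/im_r50_softmax_256x128_amsgrad_RGBNT_osnet.yaml",
    # the list above shows this same yaml for both HAMNet and PFNet;
    # adjust here if your checkout has a dedicated PFNet config
    "./configs/im_r50_softmax_256x128_amsgrad_RGBNT_hamnet.yaml",
    "./configs/im_r50_softmax_256x128_amsgrad_RGBNT_ieee_part_margin.yaml",
]

for cfg in configs:
    subprocess.run(
        [sys.executable, "./scripts/mainMultiModal.py",
         "--config-file", cfg, "--seed", "40"],
        check=True,
    )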

Details

  1. The details of our Cross-modal Interacting Module (CIM) and Relation-based Embedding Module (REM) can be found in ./torchreid/models/ieee3modalPart.py. The design of the Multi-modal Margin Loss (3M loss) can be found in ./torchreid/losses/multi_modal_margin_loss_new.py.

  2. Ablation study settings.

    You can toggle these two modules and the loss by changing the corresponding code, as shown below.

    1. Cross-modal Interacting Module (CIM) and Relation-based Embedding Module (REM)
    # change the code in ./torchreid/models/ieee3modalPart.py
    
    class IEEE3modalPart(nn.Module):
        def __init__(···
        ):
            modal_number = 3
            fc_dims = [128]
            pooling_dims = 768
            super(IEEE3modalPart, self).__init__()
            self.loss = loss
            self.parts = 6
            
            self.backbone = nn.ModuleList(···
            )
            
            # use the Cross-modal Interacting Module (CIM)
            self.interaction = True
            # use channel attention inside CIM
            self.attention = True
            
            # use the Relation-based Embedding Module (REM)
            self.using_REM = True
            
            ···
    2. Multi-modal Margin Loss (3M loss)
    # change the code in ./configs/your_config_file.yaml
    
    # To use the Multi-modal Margin Loss (3M loss), change the margin via the "ieee_margin" parameter.
    ···
    loss:
      name: 'margin'
      softmax:
        label_smooth: True
      ieee_margin: 1
      weight_m: 1.0
      weight_x: 1.0
    ···
    
    # using only CE loss
    ···
    loss:
      name: 'softmax'
      softmax:
        label_smooth: True
      weight_x: 1.0
    ···
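
For intuition about the loss term, here is a minimal sketch of a margin loss over three modality-specific feature sets (e.g. RGB / NIR / TIR). This is an illustrative assumption about the general shape of a 3M-style objective (keeping cross-modality features of the same sample at least a margin apart so the modality-specific branches stay complementary), not the implementation in ./torchreid/losses/multi_modal_margin_loss_new.py; the weighting comment at the end mirrors weight_m and weight_x in the YAML above:

# illustrative sketch only -- NOT the repository's 3M loss
import torch
import torch.nn.functional as F

def margin_loss_3m_sketch(feats, margin=1.0):
    # feats: three modality-specific feature tensors, each [B, D];
    # penalize cross-modality pairs that are closer than `margin`,
    # pushing the modality-specific features apart (assumed behaviour)
    loss = feats[0].new_zeros(())
    pairs = 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            dist = F.pairwise_distance(feats[i], feats[j])  # [B]
            loss = loss + F.relu(margin - dist).mean()
            pairs += 1
    return loss / pairs

feats = [torch.randn(8, 128) for _ in range(3)]          # fake RGB/NIR/TIR features
margin_term = margin_loss_3m_sketch(feats, margin=1.0)   # ieee_margin: 1
# total = weight_m * margin_term + weight_x * cross_entropy_term  (per the YAML)
print(margin_term)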