PSTR: End-to-End One-Step Person Search With Transformers (CVPR2022)

Overview

  • This code is the official implementation of "PSTR: End-to-End One-Step Person Search With Transformers (CVPR2022)".
  • End-to-end one-step person search with Transformers, which does not require NMS post-processing.
  • Pre-trained models with ResNet50, ResNet50-DCN, and PVTv2-B2 backbones.
  • Curves of different methods on CUHK-SYSU under different gallery sizes (generated with plot_cuhk.py). If you want to add new results, please feel free to contact us.

Installation

  • We install this project with CUDA 11.1 and PyTorch 1.8.0 (or PyTorch 1.9.0) as follows.
# Download this project
git clone https://github.com/JialeCao001/PSTR.git

# Create a new conda environment for PSTR
conda create -n pstr python=3.7 -y
conda activate pstr
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
#conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

# Compile mmcv, which is included in this project
cd PSTR/mmcv
MMCV_WITH_OPS=1 pip install -e .

# Compile this project
cd PSTR
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
pip install scikit-learn  # the bare "sklearn" package name is a deprecated alias
  • If you hit the error local variable 'beta1' referenced before assignment with PyTorch 1.8, add one tab of indentation at L110 of optim/adamw.py, as sketched below.
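
A minimal sketch of the scoping pattern behind this error (illustrative Python only, not the actual optim/adamw.py source):

# Hypothetical structure: the variable is assigned inside the loop body but
# read one indentation level further out, so it can be unbound on some paths.
def step(group):
    for p in group['params']:
        if p is None:                  # skipped parameter: assignment never runs
            continue
        beta1, beta2 = group['betas']
    return beta1                       # UnboundLocalError: local variable 'beta1'
                                       # referenced before assignment; indenting
                                       # this line into the loop avoids the crash

step({'params': [None], 'betas': (0.9, 0.999)})  # hypothetical data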

Train and Inference

Datasets and Annotations
Train with a single GPU
python tools/train.py ${CONFIG_FILE} --no-validate
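
For example, to train the ResNet50 model on CUHK-SYSU (the config path below is an assumption; use any config shipped under configs/ in this repo):

python tools/train.py configs/pstr/pstr_r50_cuhk.py --no-validate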
Test with a single GPU
PRW: sh run_test_prw.sh 
CUHK: sh run_test_cuhk.sh  
  • If you want to output the results of different models, please change CONFIGPATH, MODELPATH, and OUTPATH for the different models, as sketched below.
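
Illustrative values for these variables (the variable names come from the test scripts, but the paths are assumptions; point them at your own config, checkpoint, and output file):

# Hypothetical paths -- adjust to your setup
CONFIGPATH=./configs/pstr/pstr_r50_cuhk.py   # model config
MODELPATH=./work_dirs/pstr_r50_cuhk.pth      # downloaded or trained checkpoint
OUTPATH=./work_dirs/pstr_results.pkl         # where the raw results are written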

Results

We provide models with different backbones and their results on the PRW and CUHK-SYSU datasets. The numbers differ slightly from the CVPR version due to training jitter.

| name | dataset   | backbone     | mAP   | top-1 | mAP+  | top-1+ | download |
|------|-----------|--------------|-------|-------|-------|--------|----------|
| PSTR | PRW       | PVTv2-B2     | 57.46 | 90.57 | 58.07 | 92.03  | model    |
| PSTR | PRW       | ResNet50     | 50.03 | 88.04 | 50.64 | 89.94  | model    |
| PSTR | PRW       | ResNet50-DCN | 51.09 | 88.33 | 51.62 | 90.13  | model    |
| PSTR | CUHK-SYSU | PVTv2-B2     | 95.31 | 96.28 | 95.78 | 96.83  | model    |
| PSTR | CUHK-SYSU | ResNet50     | 93.55 | 94.93 | 94.16 | 95.48  | model    |
| PSTR | CUHK-SYSU | ResNet50-DCN | 94.22 | 95.28 | 94.90 | 95.97  | model    |
  • All the models use multi-scale training, and all results are reported with single-scale inference.

  • + indicates adding a re-scoring module during evaluation, where we modify the final matching score to be a weighted combination of the CBGM score and the original matching score (see the sketch below).
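
A minimal sketch of this re-scoring, assuming a simple convex combination (the weight w below is a placeholder, not necessarily the value used in the evaluation scripts):

# Hypothetical sketch of the "+" re-scoring step
def rescore(matching_score, cbgm_score, w=0.5):
    # final score = weighted combination of the CBGM score and the original score
    return w * cbgm_score + (1.0 - w) * matching_score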

Citation

If the project helps your research, please cite this paper.

@inproceedings{Cao_PSTR_CVPR_2022,
  author    = {Jiale Cao and Yanwei Pang and Rao Muhammad Anwer and Hisham Cholakkal and Jin Xie and Mubarak Shah and Fahad Shahbaz Khan},
  title     = {PSTR: End-to-End One-Step Person Search With Transformers},
  booktitle = {Proc. IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2022}
}

Acknowledgement

Many thanks to the open source codes: mmdetection, AlignPS, and SeqNet.

Comments
  • Visualisation of model, data, scalars

    Hello,

    Thanks for your contribution, paper and code.

    My goal

    I am trying to visualize the model with Tensorboard and I cannot make it work. So my question is: how did you visualize the model during PSTR development, please :grin:?

    What did I try

    Basically, I load the config file for the CUHK dataset, then I build the dataset and dataloader with your builder functions. Then the model is built and wrapped with MMDataParallel. Finally, I initialize a writer = SummaryWriter(...) and I try to add the graph of the model to my logs by writing writer.add_graph(model, data) (where data is the first batch from my Dataloader object). Here is a minimal example (greatly inspired by your test script, with PATH variables omitted):

    from mmcv import Config
    from mmcv.parallel import MMDataParallel
    from mmcv.runner import load_checkpoint
    from mmdet.datasets import build_dataloader, build_dataset
    from mmdet.models import build_detector
    from torch.utils.tensorboard import SummaryWriter
    
    # Load the test config and build the dataset/dataloader from it
    cfg = Config.fromfile(CONFIGPATH)
    
    dataset = build_dataset(cfg.data.test)
    
    data_loader = build_dataloader(
        dataset=dataset,
        samples_per_gpu=1,
        workers_per_gpu=cfg.data.workers_per_gpu,
        dist=False,
        shuffle=False,
    )
    
    # Build the detector, load the checkpoint, and wrap the model for one GPU
    model = build_detector(cfg.model, test_cfg=cfg.get("test_cfg"))
    checkpoint = load_checkpoint(
        model,
        str(MODELPATH),
        map_location="cpu",
    )
    model.CLASSES = dataset.CLASSES
    model = MMDataParallel(model, device_ids=[0])
    

    What happened

    Tensorboard does not like that there is additional metadata inside the input. But if I only write writer.add_graph(model, data['img']), then the model cannot infer, obviously. Here is the output:

    loading annotations into memory...
    Done (t=0.19s)
    creating index...
    index created!
    3
    /PSTR/mmcv/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. 
      warnings.warn(
    /PSTR/mmcv/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. 
      warnings.warn(
    /PSTR/mmcv/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. 
      warnings.warn(
    /PSTR/mmcv/mmcv/cnn/bricks/transformer.py:92: UserWarning: The arguments `dropout` in MultiheadAttention has been deprecated, now you can separately set `attn_drop`(float), proj_drop(float), and `dropout_layer`(dict) 
      warnings.warn('The arguments `dropout` in MultiheadAttention '
    load checkpoint from local path: pstr_r50_cuhk-2fd8c1d2.pth
    2022-05-25 12:55:34,661 - root - INFO - DeformConv2dPack neck.convs.0 is upgraded to version 2.
    2022-05-25 12:55:34,661 - root - INFO - DeformConv2dPack neck.convs.1 is upgraded to version 2.
    2022-05-25 12:55:34,662 - root - INFO - DeformConv2dPack neck.convs.2 is upgraded to version 2.
    Tracer cannot infer type of ({'img_metas': [DataContainer([[{'filename': './data/cuhk/Image/SSM/s15535.jpg', 'ori_filename': 's15535.jpg', 'ori_shape': (450, 800, 3), 'img_shape': (844, 1500, 3), 'pad_shape': (844, 1500, 3), 'scale_factor': array([1.875    , 1.8755555, 1.875    , 1.8755555], dtype=float32), 'flip': False, 'flip_direction': None, 'img_norm_cfg': {'mean': array([123.675, 116.28 , 103.53 ], dtype=float32), 'std': array([58.395, 57.12 , 57.375], dtype=float32), 'to_rgb': True}}]])], 'img': [tensor([[[[-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              ...,
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179]],
    
             [[-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              ...,
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357]],
    
             [[-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              ...,
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044]]]])]},)
    :Could not infer type of list element: Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type DataContainer.
    Error occurs, No graph saved
    Traceback (most recent call last):
      File "/code/PSTR/visualise.py", line 41, in <module>
        writer.add_graph(model, data)
      File "/opt/conda/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 723, in add_graph
        self._get_file_writer().add_graph(graph(model, input_to_model, verbose))
      File "/opt/conda/lib/python3.8/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 292, in graph
        raise e
      File "/opt/conda/lib/python3.8/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 286, in graph
        trace = torch.jit.trace(model, args)
      File "/opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py", line 733, in trace
        return trace_module(
      File "/opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py", line 934, in trace_module
        module._c._create_method_from_trace(
    RuntimeError: Tracer cannot infer type of ({'img_metas': [DataContainer([[{'filename': './data/cuhk/Image/SSM/s15535.jpg', 'ori_filename': 's15535.jpg', 'ori_shape': (450, 800, 3), 'img_shape': (844, 1500, 3), 'pad_shape': (844, 1500, 3), 'scale_factor': array([1.875    , 1.8755555, 1.875    , 1.8755555], dtype=float32), 'flip': False, 'flip_direction': None, 'img_norm_cfg': {'mean': array([123.675, 116.28 , 103.53 ], dtype=float32), 'std': array([58.395, 57.12 , 57.375], dtype=float32), 'to_rgb': True}}]])], 'img': [tensor([[[[-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              ...,
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179],
              [-2.1179, -2.1179, -2.1179,  ..., -2.1179, -2.1179, -2.1179]],
    
             [[-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              ...,
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357],
              [-2.0357, -2.0357, -2.0357,  ..., -2.0357, -2.0357, -2.0357]],
    
             [[-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              ...,
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044],
              [-1.8044, -1.8044, -1.8044,  ..., -1.8044, -1.8044, -1.8044]]]])]},)
    :Could not infer type of list element: Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type DataContainer.
    

    What I found on the web

    In an MMDet repo issue, someone had the same problem and tried to upgrade the torch version. I did the same and tried 1.10, but then your repo could not compile because some code became deprecated (I cannot compile with torch 1.10 or 1.11 under CUDA 11.1).

    My environment is based on the official torch docker images. From this image I just ran the install instructions from the README.md, except the conda env and torch install.

    opened by AwePhD 2
  • How to run run_test_cuhk.sh

    Hello,

    First, thanks for sharing the code of your work and congratulations for your CVPR acceptance!

    I am sorry if my question looks dumb, but I cannot figure out how to run run_test_cuhk.sh. I added the log at the bottom of my issue. It seems that I should fill the working directory (I took the same as yours, $ROOT/work_dirs) with pstr_results.pkl. From what I understood, the point of PSTR is to compute detections and ReID features "in one shot", kind of, so I cannot understand why I should provide detections.

    Also, it seems that PartAttention is not in the attention registry.

    My packages versions are:

    • mmcv-full: 1.3.17
    • mmdet: 2.18.1
    • torch: 1.8.0
    • torchvision: 0.9.0

    And CUDA is 11.1 (the minimal version for my GPU).

    If you need any more information, let me know :smiley:.

    Best regards, Mathias.


    loading annotations into memory...
    Done (t=0.13s)
    creating index...
    index created!
    3
    /opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. 
      warnings.warn(
    /opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. 
      warnings.warn(
    /opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. 
      warnings.warn(
    /opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:92: UserWarning: The arguments `dropout` in MultiheadAttention has been deprecated, now you can separately set `attn_drop`(float), proj_drop(float), and `dropout_layer`(dict) 
      warnings.warn('The arguments `dropout` in MultiheadAttention '
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
        return obj_cls(**args)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/utils/transformer.py", line 439, in __init__
        super(DetrTransformerDecoderLayer, self).__init__(
      File "/opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py", line 382, in __init__
        attention = build_attention(attn_cfgs[index])
      File "/opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py", line 40, in build_attention
        return build_from_cfg(cfg, ATTENTION, default_args)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 44, in build_from_cfg
        raise KeyError(
    KeyError: 'PartAttention is not in the attention registry'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
        return obj_cls(**args)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/utils/transformer.py", line 639, in __init__
        super(DeformableDetrTransformerDecoder, self).__init__(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py", line 545, in __init__
        self.layers.append(build_transformer_layer(transformerlayers[i]))
      File "/opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py", line 50, in build_transformer_layer
        return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    KeyError: "DetrTransformerDecoderLayer: 'PartAttention is not in the attention registry'"
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
        return obj_cls(**args)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/utils/transformer.py", line 1076, in __init__
        super(PstrTransformer, self).__init__(**kwargs)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/utils/transformer.py", line 566, in __init__
        self.decoder1 = build_transformer_layer_sequence(decoder1)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py", line 55, in build_transformer_layer_sequence
        return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    KeyError: 'DeformableDetrTransformerDecoder: "DetrTransformerDecoderLayer: \'PartAttention is not in the attention registry\'"'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
        return obj_cls(**args)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/dense_heads/pstr_head.py", line 56, in __init__
        super(PSTRHead, self).__init__(
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/dense_heads/detr_reid_head.py", line 143, in __init__
        self.transformer = build_transformer(transformer)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/utils/builder.py", line 11, in build_transformer
        return build_from_cfg(cfg, TRANSFORMER, default_args)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    KeyError: 'PstrTransformer: \'DeformableDetrTransformerDecoder: "DetrTransformerDecoderLayer: \\\'PartAttention is not in the attention registry\\\'"\''
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
        return obj_cls(**args)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/detectors/pstr.py", line 10, in __init__
        super(DETRReID, self).__init__(*args, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/detectors/single_stage_reid.py", line 37, in __init__
        self.bbox_head = build_head(bbox_head)
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/builder.py", line 40, in build_head
        return HEADS.build(cfg)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 212, in build
        return self.build_func(*args, **kwargs, registry=self)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
        return build_from_cfg(cfg, registry, default_args)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    KeyError: 'PSTRHead: \'PstrTransformer: \\\'DeformableDetrTransformerDecoder: "DetrTransformerDecoderLayer: \\\\\\\'PartAttention is not in the attention registry\\\\\\\'"\\\'\''
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "./tools/test.py", line 234, in <module>
        main()
      File "./tools/test.py", line 183, in main
        model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
      File "/opt/conda/lib/python3.8/site-packages/mmdet/models/builder.py", line 58, in build_detector
        return DETECTORS.build(
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 212, in build
        return self.build_func(*args, **kwargs, registry=self)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
        return build_from_cfg(cfg, registry, default_args)
      File "/opt/conda/lib/python3.8/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    KeyError: 'PSTR: \'PSTRHead: \\\'PstrTransformer: \\\\\\\'DeformableDetrTransformerDecoder: "DetrTransformerDecoderLayer: \\\\\\\\\\\\\\\'PartAttention is not in the attention registry\\\\\\\\\\\\\\\'"\\\\\\\'\\\'\''
    ------------------------
    Traceback (most recent call last):
      File "./tools/test_results_cuhk.py", line 75, in <module>
        with open(results_path, 'rb') as fid:
    FileNotFoundError: [Errno 2] No such file or directory: 'work_dirs/pstr_results.pkl'
    Traceback (most recent call last):
      File "./tools/test_results_cuhk_cbgm.py", line 76, in <module>
        with open(results_path, 'rb') as fid:
    FileNotFoundError: [Errno 2] No such file or directory: 'work_dirs/pstr_results.pkl'
    
    opened by mufasachan 2
  • Error when running 'tools/test.py' with the '--show' argument

    Hi, thank you for your work!

    I got the error 'AssertionError: bboxes.shape[1] should be 4 or 5, but it's 773.' when I set the '--show' argument on the command line for test.py. I got the same error when running 'demo/video_demo.py' with the model you provide.

    I guess it is because pstr and mmdet are incompatible. Could you please tell me how to visualize my custom image/video when inferring results with the PSTR model? Or just explain what result.shape [100, 773] means.

    Looking forward to your reply!

    opened by ChristianLean 0
  • Part Attention Block

    Hi there, I wanted to understand more about the part attention block and how it actually attends to the parts of the object in the image. Moreover, I could not spot the specific code block that implements the part attention block/layer. Could you please help me locate it in the repo and understand a bit more about part attention?

    opened by Suvashsharma 1