Overview

Gitter: fast-reid/community

FastReID is a research platform that implements state-of-the-art re-identification algorithms. It is a ground-up rewrite of the previous version, reid strong baseline.

What's New

  • [June 2021] Contiguous parameters are supported; training can now be accelerated by ~20%.
  • [May 2021] Vision Transformer backbone supported, see configs/Market1501/bagtricks_vit.yml.
  • [Apr 2021] Partial FC supported in FastFace!
  • [Jan 2021] TRT network definition APIs in FastRT have been released! Thanks to Darren for the contribution.
  • [Jan 2021] NAIC20 (ReID track) 1st-place solution based on fastreid has been released!
  • [Jan 2021] FastReID V1.0 has been released! 🎉 It supports many tasks beyond ReID, such as image retrieval and face recognition. See the release notes.
  • [Oct 2020] Added hyper-parameter optimization based on fastreid. See projects/FastTune.
  • [Sep 2020] Added person attribute recognition based on fastreid. See projects/FastAttr.
  • [Sep 2020] Automatic Mixed Precision training is supported with apex. Set cfg.SOLVER.FP16_ENABLED=True to switch it on (see the sketch after this list).
  • [Aug 2020] Model distillation is supported, thanks to Guan'an Wang for the contribution.
  • [Aug 2020] ONNX/TensorRT converters are supported.
  • [Jul 2020] Distributed training with multiple GPUs is supported; it trains much faster.
  • Includes more features such as circle loss, abundant visualization methods and evaluation metrics, SoTA results on conventional, cross-domain, partial and vehicle re-id, testing on multiple datasets simultaneously, etc.
  • Can be used as a library to support different projects built on top of it. We'll open-source more research projects in this way.
  • Removed the ignite (a high-level library) dependency; powered by pure PyTorch.
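
For example, AMP can be switched on from a training script before building the trainer. A minimal sketch (the config path is illustrative; get_cfg and DefaultTrainer are fastreid's standard entry points):

    from fastreid.config import get_cfg
    from fastreid.engine import DefaultTrainer

    cfg = get_cfg()
    cfg.merge_from_file("configs/Market1501/bagtricks_R50.yml")  # illustrative config
    cfg.SOLVER.FP16_ENABLED = True  # switch AMP on

    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()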

We have written a fastreid intro and a fastreid v1.0 post about this toolbox.

Changelog

Please refer to changelog.md for details and release history.

Installation

See INSTALL.md.

Quick Start

The architecture follows the PyTorch-Project-Template guide; you can check each folder's purpose yourself.

See GETTING_STARTED.md.

Learn more at our documentation, and see projects/ for some projects that are built on top of fastreid.

Model Zoo and Baselines

We provide a large set of baseline results and trained models available for download in the Fastreid Model Zoo.

Deployment

We provide some examples and scripts to convert fastreid models to Caffe, ONNX and TensorRT formats in Fastreid deploy.

License

Fastreid is released under the Apache 2.0 license.

Citing FastReID

If you use FastReID in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry:

@article{he2020fastreid,
  title={FastReID: A Pytorch Toolbox for General Instance Re-identification},
  author={He, Lingxiao and Liao, Xingyu and Liu, Wu and Liu, Xinchen and Cheng, Peng and Mei, Tao},
  journal={arXiv preprint arXiv:2006.02631},
  year={2020}
}
Comments
  • After training, my own demo gives wrong results: the feature distance between two images is very small both for the same person and for different people

    After training, I tested with a demo I wrote myself and the results look wrong: after computing the distance between two images, the feature distance is very small both for the same person and for different people. Below are the test results from the script.

    Query image: /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0001_c2s1_000301_00.jpg

    0  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0001_c2s1_000301_00.jpg  distance: 5.9604645e-08
    1  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0001_c2s1_000351_00.jpg  distance: 0.00012886524
    2  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0001_c2s1_001976_00.jpg  distance: 0.00013148785
    3  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0001_c2s1_082596_00.jpg  distance: 0.00013911724
    4  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0001_c2s1_109696_00.jpg  distance: 0.00010842085
    5  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0001_c2s3_026007_00.jpg  distance: 0.00014215708
    6  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0002_c2s1_000301_00.jpg  distance: 0.00017541647
    7  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0002_c2s1_000351_00.jpg  distance: 0.00015747547
    8  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0002_c2s1_000801_00.jpg  distance: 0.0002257824
    9  /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0002_c2s1_000976_00.jpg  distance: 0.00016981363
    10 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0002_c2s1_068496_00.jpg  distance: 0.00023156404
    11 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0002_c2s1_123041_00.jpg  distance: 0.00023680925
    12 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0105_c2s1_017426_01.jpg  distance: 0.0001938343
    13 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0105_c2s1_017451_04.jpg  distance: 0.00017023087
    14 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0105_c2s1_017476_02.jpg  distance: 0.0001847744
    15 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0105_c2s1_017601_01.jpg  distance: 0.00019276142
    16 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0105_c2s1_025851_02.jpg  distance: 0.00020557642
    17 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0105_c2s1_036726_01.jpg  distance: 0.00016593933
    18 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0464_c2s1_119341_04.jpg  distance: 0.000207901
    19 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0464_c2s1_119466_04.jpg  distance: 0.00021898746
    20 /home/yan/Documents/bg-file/fast-reid-master/tools/deploy/Image/TestImage/C2/0464_c2s1_119466_06.jpg  distance: 0.00017267466

    Any advice would be appreciated, thanks.
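
    Distances this small for every pair usually mean the extracted features are nearly identical for all inputs, so a first check is whether the demo's features really differ between images. A minimal PyTorch sketch for such a check (feat_a and feat_b stand for 1-D features extracted by the deployed model):

        import torch
        import torch.nn.functional as F

        def cosine_distance(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
            # L2-normalize each feature so the dot product equals cosine similarity.
            a = F.normalize(feat_a, dim=0)
            b = F.normalize(feat_b, dim=0)
            return 1.0 - torch.dot(a, b)

        # Hypothetical usage with random stand-ins for extracted features:
        feat_a, feat_b = torch.randn(2048), torch.randn(2048)
        print(cosine_distance(feat_a, feat_b).item())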

    opened by baigang666 28
  • DukeMTMC

    @JinkaiZheng Your project code is quite complete; I would like to ask you a few questions.

    1. What does FREEZE_ITERS: 2000 do?
    2. For max_iter: is 1 epoch = dataset size / batch size (e.g. a dataset takes 10 iters per epoch), so max_iter = number of epochs * iterations per epoch?

    3. DATASETS: NAMES: ("Market1501",) TESTS: ("Market1501",)  # should this be changed to DukeMTMC? I trained on my own data converted to the Market1501 format; for evaluation I only changed the data loading path to Duke's, but the test results are wrong although training converges. Did I change it in the wrong way?

    discussion 
    opened by sky186 26
  • How to train Custom Dataset

    This guide explains how to train your own custom dataset with fastreid's data loaders.

    Before You Start

    Follow Getting Started to set up the environment and install the requirements.txt dependencies.

    Train on Custom Dataset

    1. Register your dataset (i.e., tell fastreid how to obtain your dataset).

      To let fastreid know how to obtain a dataset named "my_dataset", users need to implement a class that inherits fastreid.data.datasets.bases.ImageDataset:

      	from fastreid.data.datasets import DATASET_REGISTRY
      	from fastreid.data.datasets.bases import ImageDataset


      	@DATASET_REGISTRY.register()
      	class MyOwnDataset(ImageDataset):
      	    def __init__(self, root='datasets', **kwargs):
      	        ...  # build the train, query and gallery lists here
      	        super().__init__(train, query, gallery)
      

      Here, the snippet associates a dataset named "MyOwnDataset" with a class that builds the train, query and gallery sets and passes them to the base class. The @DATASET_REGISTRY.register() decorator registers the class so fastreid can find it by name.

      The class can do arbitrary things internally; it should generate a train list of (str, str, str) tuples, and query and gallery lists of (str, int, int) tuples, i.e. (img_path, pid, camid), as below.

      	train_list = [
      	    (train_path1, pid1, camid1), (train_path2, pid2, camid2), ...]

      	query_list = [
      	    (query_path1, pid1, camid1), (query_path2, pid2, camid2), ...]

      	gallery_list = [
      	    (gallery_path1, pid1, camid1), (gallery_path2, pid2, camid2), ...]
      

      You can also pass an empty train_list to build a test-only dataset with super().__init__([], query, gallery).

      Notice: query and gallery sets can share camera views, but for each individual query identity, their gallery samples from the same camera are excluded. So if your dataset has no camera annotations, you can set the camera id of all query images to 0 and of all gallery images to 1; then you can get the testing results.

    2. Import your dataset.

      After registering your own dataset, you need to import it in train_net.py to make it take effect; a complete end-to-end sketch follows below.

      	from dataset_file import MyOwnDataset
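
      Putting both steps together, a minimal end-to-end sketch (the directory layout, the <pid>_<camid>_*.jpg file naming and the dataset_name prefixing mirror common fastreid conventions but are assumptions here, not requirements):

      	import glob
      	import os.path as osp

      	from fastreid.data.datasets import DATASET_REGISTRY
      	from fastreid.data.datasets.bases import ImageDataset


      	@DATASET_REGISTRY.register()
      	class MyOwnDataset(ImageDataset):
      	    """Hypothetical layout: root/my_dataset/{train,query,gallery},
      	    with files named <pid>_<camid>_<anything>.jpg."""
      	    dataset_name = "my_dataset"

      	    def __init__(self, root='datasets', **kwargs):
      	        dataset_dir = osp.join(root, self.dataset_name)
      	        train = self._scan(osp.join(dataset_dir, 'train'), is_train=True)
      	        query = self._scan(osp.join(dataset_dir, 'query'))
      	        gallery = self._scan(osp.join(dataset_dir, 'gallery'))
      	        super().__init__(train, query, gallery, **kwargs)

      	    def _scan(self, dir_path, is_train=False):
      	        data = []
      	        for img_path in sorted(glob.glob(osp.join(dir_path, '*.jpg'))):
      	            pid, camid = osp.basename(img_path).split('_')[:2]
      	            pid, camid = int(pid), int(camid)
      	            if is_train:
      	                # Train ids are strings prefixed with the dataset name so
      	                # several training sets can be merged without id clashes.
      	                pid = f'{self.dataset_name}_{pid}'
      	                camid = f'{self.dataset_name}_{camid}'
      	            data.append((img_path, pid, camid))
      	        return data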
      
    documentation stale 
    opened by L1aoXingyu 21
  • The model loads but training never starts; GPU memory usage stays at 1.5 GB

    Config file:

    _BASE_: "Base-bagtricks.yml"
    
    MODEL:
      BACKBONE:
        NAME: "build_resnest_backbone"
        DEPTH: "50x"
        WITH_IBN: True
        PRETRAIN: True
        PRETRAIN_PATH: "/home/yantianyi/reid-1.0/pretrained/resnest50.pth"
      HEADS:
        NAME: "EmbeddingHead"
        NORM: "BN"
        WITH_BNNECK: True
        NECK_FEAT: "before"
        POOL_LAYER: "avgpool"
        CLS_LAYER: "linear"
        EMBEDDING_DIM: 512
      LOSSES:
        NAME: ("Cosface",)
        COSFACE:
          MARGIN: 0.25
          GAMMA: 128
          SCALE: 1.0
    
    DATASETS:
      NAMES: ("Alidata",)
      TESTS: ("VeRi",)
    
    INPUT:
      SIZE_TRAIN: [224, 224]
      SIZE_TEST: [224, 224]
      DO_AUTOAUG: False
    
      CJ:
        ENABLED: True
        PROB: 0.8
        BRIGHTNESS: 0.35
        CONTRAST: 0.35
        SATURATION: 0.35
        HUE: 0.2
    
    DATALOADER:
      PK_SAMPLER: True
      NAIVE_WAY: True
      NUM_INSTANCE: 4
      NUM_WORKERS: 8
    
    SOLVER:
      OPT: "Adam"
      MAX_EPOCH: 60
      BASE_LR: 0.00035
      BIAS_LR_FACTOR: 2.
      WEIGHT_DECAY: 0.0005
      WEIGHT_DECAY_BIAS: 0.0005
      IMS_PER_BATCH: 196
      FP16_ENABLED: True
    
      SCHED: "WarmupMultiStepLR"
      STEPS: [25, 40]
      GAMMA: 0.1
    
      WARMUP_FACTOR: 0.01
      WARMUP_ITERS: 10
    
      CHECKPOINT_PERIOD: 10
    
    OUTPUT_DIR: "/home/yantianyi/logs/resnest50_ali_512_cos"
    
    Partial log:
    [01/25 17:57:48 fastreid]: Full config saved to /home/yantianyi/logs/resnest50_ali_512_cos/config.yaml
    [01/25 17:57:48 fastreid.utils.env]: Using a generated random seed 48316424
    [01/25 17:57:48 fastreid.engine.defaults]: Prepare training set
    [01/25 17:57:53 fastreid.data.datasets.bases]: => Loaded Alidata in csv format: 
    | subset   | # ids   | # images   | # cameras   |
    |:---------|:--------|:-----------|:------------|
    | train    | 127817  | 1669888    | 1           |
    [01/25 17:57:55 fastreid.engine.defaults]: Auto-scaling the num_classes=127817
    [01/25 17:57:56 fastreid.modeling.backbones.resnest]: Loading pretrained model from /home/yantianyi/reid-1.0/pretrained/resnest50.pth
    [01/25 17:57:56 fastreid.modeling.backbones.resnest]: The checkpoint state_dict contains keys that are not used by the model:
      fc.{weight, bias}
    [01/25 17:57:57 fastreid.engine.defaults]: Model:
    Baseline(
      (backbone): ResNeSt(
        (conv1): Sequential(
          (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): BatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
          (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (4): BatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU(inplace=True)
          (6): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bn1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
        (layer1): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
            (downsample): Sequential(
              (0): AvgPool2d(kernel_size=1, stride=1, padding=0)
              (1): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
          (2): Bottleneck(
            (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
        )
        (layer2): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (avd_layer): AvgPool2d(kernel_size=3, stride=2, padding=1)
            (conv2): SplAtConv2d(
              (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
            (downsample): Sequential(
              (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
              (1): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
          (2): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
          (3): Bottleneck(
            (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
        )
        (layer3): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (avd_layer): AvgPool2d(kernel_size=3, stride=2, padding=1)
            (conv2): SplAtConv2d(
              (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
            (downsample): Sequential(
              (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
              (1): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (2): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
          (2): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
          (3): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
          (4): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
          (5): Bottleneck(
            (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
        )
        (layer4): Sequential(
          (0): Bottleneck(
            (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (avd_layer): AvgPool2d(kernel_size=3, stride=1, padding=1)
            (conv2): SplAtConv2d(
              (conv): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=2, bias=False)
              (bn0): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
            (downsample): Sequential(
              (0): AvgPool2d(kernel_size=1, stride=1, padding=0)
              (1): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (2): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            )
          )
          (1): Bottleneck(
            (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), groups=2, bias=False)
              (bn0): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
          (2): Bottleneck(
            (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (conv2): SplAtConv2d(
              (conv): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), groups=2, bias=False)
              (bn0): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (relu): ReLU(inplace=True)
              (fc1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
              (bn1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (fc2): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
              (rsoftmax): rSoftMax()
            )
            (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (relu): ReLU(inplace=True)
          )
        )
      )
      (heads): EmbeddingHead(
        (pool_layer): AdaptiveAvgPool2d(output_size=1)
        (bottleneck): Sequential(
          (0): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (classifier): Linear(in_features=512, out_features=127817, bias=False)
      )
    )
    

    Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.

    Defaults for this optimization level are:
    enabled               : True
    opt_level             : O1
    cast_model_type       : None
    patch_torch_functions : True
    keep_batchnorm_fp32   : None
    master_weights        : None
    loss_scale            : dynamic
    Processing user overrides (additional kwargs that are not None)...
    After processing overrides, optimization options are:
    enabled               : True
    opt_level             : O1
    cast_model_type       : None
    patch_torch_functions : True
    keep_batchnorm_fp32   : None
    master_weights        : None
    loss_scale            : dynamic
    Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",)
    Warning: apex was installed without --cpp_ext. Falling back to Python flatten and unflatten.
    ./fastreid/evaluation/rank.py:15: UserWarning: Cython rank evaluation (very fast so highly recommended) is unavailable, now use python evaluation.
      'Cython rank evaluation (very fast so highly recommended) is '
    ./fastreid/evaluation/roc.py:19: UserWarning: Cython roc evaluation (very fast so highly recommended) is unavailable, now use python evaluation.
      'Cython roc evaluation (very fast so highly recommended) is '
    Warning: apex was installed without --cpp_ext. Falling back to Python flatten and unflatten.
    ./fastreid/evaluation/rank.py:15: UserWarning: Cython rank evaluation (very fast so highly recommended) is unavailable, now use python evaluation.
      'Cython rank evaluation (very fast so highly recommended) is '
    ./fastreid/evaluation/roc.py:19: UserWarning: Cython roc evaluation (very fast so highly recommended) is unavailable, now use python evaluation.
      'Cython roc evaluation (very fast so highly recommended) is '

    stale 
    opened by yanty123 20
  • Roadmap of FastReID

    We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is available here.

    You can either:

    1. Suggest a new feature by leaving a comment.
     2. Vote for a feature request with 👍 or against it with 👎. (Remember that developers are busy and cannot respond to all feature requests, so vote for the one you want most!)
    3. Tell us that you would like to help implement one of the features in the list or review the PRs. (This is the greatest thing to hear about!)

    V1.4

    • [ ] fused LAMB optimizer
    • [ ] ZeRO optimizer
    • [ ] Gradient accumulation
    • [ ] Swin Transformer backbone

    V1.3 (May)

    • [x] Clip gradient (07b8251ccb9f296e4be5d49fc98efebaa12d679f)
    • [x] Vision Transformer backbone (2cabc3428adf5cb3fc0be0f273cc025c4b0a8781)
    • [x] prefetch_generator (#456)
    • [x] Reduce evaluation memory cost

    V1.2 (March)

    • [x] Multiple machine training (#425)
    • [x] Torch2trt pipeline (#428)
    • [x] RepVGG backbone (#429)
     • [x] Partial FC (https://github.com/JDAI-CV/fast-reid/commit/44cee30dfc929df2051d1ca56e36dc152866e96b)
    • [ ] Visualize activation maps

    V1.1 (February)

    • [ ] Documents
     • [x] ~~RepVGG backbone (#429)~~ (delayed to V1.2)
    • [x] ~~Torch2trt pipeline. (#428)~~(delayed to V1.2)
    • [x] NAIC20 winner solution
    • [x] Multi-teacher KD
     • [x] ~~Partial FC (https://github.com/JDAI-CV/fast-reid/commit/44cee30dfc929df2051d1ca56e36dc152866e96b)~~ (delayed to V1.2)
    • [x] reid model with tensorrt network definition APIs
    stale 
    opened by L1aoXingyu 19
  • Official wechat group (PS: the author here.)

    Welcome to join the official fast-reid communication group. If you have any questions, you can talk to the author or to other people who are using fast-reid. The group already has more than 200 members; please add this WeChat account to be invited into the group.

    PS: the author here.

    opened by gmt710 19
  • torch to caffe convert error

    59,60c59
    <     pad: 1
    <     ceil_mode: false
    ---
    >     pad: 0
    2435a2435,2444
    >   name: "avgpool1"
    >   type: "Pooling"
    >   bottom: "relu_blob49"
    >   top: "avgpool_blob1"
    >   pooling_param {
    >     pool: AVE
    >     global_pooling: true
    >   }
    > }
    > layer {
    2438c2447
    <   bottom: "relu_blob49"
    ---
    >   bottom: "avgpool_blob1"
    2449c2458
    <   top: "batch_norm_blob54"
    ---
    >   top: "output"
    
    

    I converted the model following https://github.com/JDAI-CV/fast-reid/tree/master/tools/deploy; the diff above shows the differences in the converted prototxt.

    Then I ran ./run_inference.sh image. What should I do?

    opened by Hwijune 19
  • How can I prepare my own data about gallery and query?

    When I test my own data, I find that this problem often occurs.

    python ./demo/visualize_result.py --config-file './configs/Market1501/AGW_R50.yml' --vis-label --dataset-name 'Market1501' --output 'logs/market1501/agw_R50/agw_market1501_vis' --opts MODEL.WEIGHTS "./logs/market1501/agw_R50/model_final.pth"
    
    assert num_valid_q > 0, 'Error: all query identities do not appear in gallery'
    AssertionError: Error: all query identities do not appear in gallery
    

    I don't understand why this happens. If you have time, could you tell me how to build my own dataset?
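
    This assertion comes from the evaluation protocol: gallery samples that share both pid and camid with a query are excluded as same-camera matches, so every query needs at least one cross-camera gallery match. Following the note in the custom-dataset guide above, a minimal workaround when no camera annotations exist (query_list and gallery_list as in that guide):

        # Give all queries camid 0 and all gallery images camid 1 so that
        # no gallery sample is filtered out as a same-camera match.
        query_list = [(path, pid, 0) for path, pid, _ in query_list]
        gallery_list = [(path, pid, 1) for path, pid, _ in gallery_list]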

    opened by gmt710 19
  • Training on a custom dataset

    Hello, I renamed my dataset following the Market1501 naming convention, but after running python3 tools/train_net.py --config-file ./configs/Market1501/bagtricks_R50.yml MODEL.DEVICE "cuda:0", the program seems to hang and training never starts. What could be the problem? Is it because I have too few images, or too few identities, or because I only have one camid and one sequence? Thanks. [screenshot: error]

    enhancement stale 
    opened by liujs1016 16
  • 2021.3 fastreid duke rank1=0.82 map=68.37

    @L1aoXingyu Hi, I am using the code updated in March, training and testing only on Duke with configs/DukeMTMC/mgn_R50-ibn.yml. The accuracy is not high: rank1 = 82.72, mAP = 68.37, which differs a lot from your published results.

    The difference: for pretraining I directly set pretrain_path: home/fastreid/resnet50_ibn_a.pth.tar, whereas I later noticed that the downloaded model is v1.0/resnet50_ibn_a-d9d9bb7b.pth.

    That is the only difference.

    stale 
    opened by sky186 16
  • Memory keeps growing during training and eventually uses up all of the server's RAM

    sys.platform            linux
    Python                  3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]
    numpy                   1.22.4
    fastreid                1.3 @/ssd8/exec/jiaruoran/python/fast-reid-master/./fastreid
    FASTREID_ENV_MODULE     <not set>
    PyTorch                 1.7.1+cu101 @/ssd7/exec/jiaruoran/anaconda3/lib/python3.9/site-packages/torch
    PyTorch debug build     False
    GPU available           True
    GPU 0,1,2,3             Tesla K80
    CUDA_HOME               /ssd1/shared/local/cuda-10.1
    Pillow                  8.4.0
    torchvision             0.8.2+cu101 @/ssd7/exec/jiaruoran/anaconda3/lib/python3.9/site-packages/torchvision
    torchvision arch flags  sm_35, sm_50, sm_60, sm_70, sm_75
    cv2                     4.5.5
    ----------------------  -----------------------------------------------------------------------------------
    PyTorch built with:
      - GCC 7.3
      - C++ Version: 201402
      - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - NNPACK is enabled
      - CPU capability usage: AVX2
      - CUDA Runtime 10.1
      - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75
      - CuDNN 7.6.3
      - Magma 2.5.2
      - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 
    
    

    When the data_loader num_worker is set to more than one, memory grows especially fast; it also keeps growing with num_worker=0, which rules out a PyTorch DataLoader problem.

    stale 
    opened by rrjia 15
  • optimize jaccard distance computation and the ranking

    • I ran a line_profiler on the ranking function and discovered that the line I modified takes 91.4% of the run time of the rank function. By modifying it we can save a lot of ranking time (see the sketch after this list).
    • For the Jaccard distance, the code performs a lot of unneeded computation that increases the re-ranking time.
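
    For reference, a minimal line_profiler workflow of the kind described above (the rank body is a hypothetical stand-in for the real function):

        import numpy as np

        @profile  # injected by kernprof at run time; no import needed
        def rank(dist_mat: np.ndarray) -> np.ndarray:
            # Hypothetical hot spot: sorting the full distance matrix.
            return np.argsort(dist_mat, axis=1)

        if __name__ == "__main__":
            rank(np.random.rand(1000, 5000))

    Running the script with kernprof -l -v script.py prints per-line timings and the percentage of time spent on each line.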
    opened by MarahGamdou 1
  • Add the code for GPU-Reranking

    Hi, Xingyu! I integrated the re-ranking into fast-reid. As suggested, the modified code lets users enable GPU re-ranking by changing the config. I have tested the code on Market1501 with market_sbs_R101-ibn.pth and the results are all right, as shown below.

    [07/31 15:10:57 fastreid.evaluation.reid_evaluation]: Test with gpu real-time rerank setting
    [07/31 15:11:05 fastreid.engine.defaults]: Evaluation results for Market1501 in csv format:
    [07/31 15:11:05 fastreid.evaluation.testing]: Evaluation results in csv format: 
    | Dataset    | Rank-1   | Rank-5   | Rank-10   | mAP   | mINP   | metric   |
    |:-----------|:---------|:---------|:----------|:------|:-------|:---------|
    | Market1501 | 96.35    | 98.43    | 98.84     | 95.23 | 90.53  | 95.79    |
    
    

    The method is as follows.

    1. Compile the GPU re-ranking code: cd fastreid/evaluation/extension; sh make.sh
    2. When evaluating, enable GPU re-ranking by changing the config:
    python3 tools/train_net.py --config-file ./configs/Market1501/bagtricks_R50.yml --eval-only \
    TEST.GPU_RERANK.ENABLED True MODEL.WEIGHTS /path/to/checkpoint_file MODEL.DEVICE "cuda:0"
    
    opened by Xuanmeng-Zhang 1
  • Fixed: set default type for parser

    The value will be treated as a str if no type is given, which then leads to this error:

    Traceback (most recent call last):
      File "demo/visualize_result.py", line 144, in <module>
        query_indices = visualizer.vis_rank_list(args.output, args.vis_label, args.num_vis, args.rank_sort, args.label_sort, args.max_rank)
      File "./fastreid/utils/visualizer.py", line 158, in vis_rank_list
        query_indices = query_indices[:num_vis]
    TypeError: slice indices must be integers or None or have an __index__ method
    

    when we add a custom parameter such as --num-vis 5 (see the sketch below).
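
    A minimal sketch of the fix (the default value here is illustrative, not fastreid's actual one):

        import argparse

        parser = argparse.ArgumentParser()
        # Without type=int, "--num-vis 5" arrives as the string "5" and breaks
        # slicing such as query_indices[:num_vis]; declaring the type fixes it.
        parser.add_argument("--num-vis", type=int, default=100,
                            help="number of query images to visualize")

        args = parser.parse_args(["--num-vis", "5"])
        assert isinstance(args.num_vis, int)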

    opened by TelBotDev 0
  • add RGPR data augmentation for person reid

    Add the RGPR data augmentation for person re-id ("An Effective Data Augmentation for Person Re-identification", https://arxiv.org/abs/2101.08533).

    mgn_repvgg without RGPR (epoch 60): rank1: 90.44, mAP: 81.56
    mgn_repvgg with RGPR (epoch 60): rank1: 90.66, mAP: 82.16
    mgn_repvgg with RGPR (epoch 120): rank1: 91.07, mAP: 82.18

    opened by AlphaPlusTT 2