A Simple and Versatile Framework for Object Detection and Instance Recognition

Overview

SimpleDet - A Simple and Versatile Framework for Object Detection and Instance Recognition

Major Features

  • FP16 training for memory saving and up to 2.5X acceleration
  • Highly scalable distributed training available out of the box
  • Full coverage of state-of-the-art models including FasterRCNN, MaskRCNN, CascadeRCNN, RetinaNet, DCNv1/v2, TridentNet, NASFPN, EfficientNet, and Knowledge Distillation
  • Extensive feature set including large batch BN, loss synchronization, automatic BN fusion, soft NMS, multi-scale train/test
  • Modular design for coding-free exploration of new experiment settings
  • Extensive documentation, including an annotated config and a finetuning guide

Recent Updates

  • Add RPN test (2019.05.28)
  • Add NASFPN (2019.06.04)
  • Add new ResNetV1b baselines from GluonCV (2019.06.07)
  • Add Cascade R-CNN with FPN backbone (2019.06.11)
  • Speed up FPN up to 70% (2019.06.16)
  • Update NASFPN to include larger models (2019.07.01)
  • Automatic BN fusion for fixed BN training, saving up to 50% GPU memory (2019.07.04)
  • Speed up MaskRCNN by 80% (2019.07.23)
  • Update MaskRCNN baselines (2019.07.25)
  • Add EfficientNet and DCN (2019.08.06)
  • Add python wheel for easy local installation (2019.08.20)
  • Add FitNet-based knowledge distillation (2019.08.27)
  • Add SE and train from scratch (2019.08.30)
  • Add a lot of docs (2019.09.03)
  • Add support for INT8 training (contributed by Xiaotao Chen & Jingqiu Zhou) (2019.10.24)
  • Add support for FCOS (contributed by Zhen Wei) (2019.11)
  • Add support for Mask Scoring RCNN (contributed by Zehui Chen) (2019.12)
  • Add support for RepPoints (contributed by Bo Ke) (2020.02)
  • Add support for FreeAnchor (2020.03)
  • Add support for Feature Pyramid Grids & PAFPN (2020.06)
  • Add support for CrowdHuman Dataset (2020.06)
  • Add support for Double Pred (2020.06)
  • Add support for SEPC (contributed by Qiaofei Li) (2020.07)

Setup

All-in-one Script

We provide a setup script that installs SimpleDet and prepares the COCO dataset. If you use this script, you can skip ahead to the Quick Start.

Install

We provide conda installation instructions here for Debian/Ubuntu systems. To use a pre-built Docker or Singularity image, please refer to INSTALL.md for more information.

# install dependency
sudo apt update && sudo apt install -y git wget make python3-dev libglib2.0-0 libsm6 libxext6 libxrender-dev unzip

# create conda env
conda create -n simpledet python=3.7
conda activate simpledet

# fetch CUDA environment
conda install cudatoolkit=10.1

# install python dependency
pip install 'matplotlib<3.1' opencv-python pytz

# download and install pre-built wheel for CUDA 10.1
pip install https://1dv.aflat.top/mxnet_cu101-1.6.0b20191214-py2.py3-none-manylinux1_x86_64.whl

# install pycocotools
pip install 'git+https://github.com/RogerChern/cocoapi.git#subdirectory=PythonAPI'

# install mxnext, a wrapper around MXNet symbolic API
pip install 'git+https://github.com/RogerChern/mxnext#egg=mxnext'

# get simpledet
git clone https://github.com/tusimple/simpledet
cd simpledet
make

# test simpledet installation
mkdir -p experiments/faster_r50v1_fpn_1x
python detection_infer_speed.py --config config/faster_r50v1_fpn_1x.py --shape 800 1333

If the last command executes successfully, the average inference speed of Faster R-CNN R-50 FPN will be reported, and you have successfully set up SimpleDet. You can now move on to the next section to prepare your dataset.
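
As an extra sanity check, you can also verify from Python that the pre-built MXNet wheel and mxnext import correctly and that your GPUs are visible. This is only a minimal sketch assuming the packages installed above; it does not exercise SimpleDet itself.

# optional environment check (not part of the official setup)
import mxnet as mx   # pre-built wheel installed above
import mxnext        # wrapper around the MXNet symbolic API

print("MXNet version:", mx.__version__)
print("Visible GPUs:", mx.context.num_gpus())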

Preparing Data

We provide a step-by-step preparation of the COCO dataset below.

cd simpledet

# make data dir
mkdir -p data/coco/images data/src

# skip this if you have the zip files
wget -c http://images.cocodataset.org/zips/train2017.zip -O data/src/train2017.zip
wget -c http://images.cocodataset.org/zips/val2017.zip -O data/src/val2017.zip
wget -c http://images.cocodataset.org/zips/test2017.zip -O data/src/test2017.zip
wget -c http://images.cocodataset.org/annotations/annotations_trainval2017.zip -O data/src/annotations_trainval2017.zip
wget -c http://images.cocodataset.org/annotations/image_info_test2017.zip -O data/src/image_info_test2017.zip

unzip data/src/train2017.zip -d data/coco/images
unzip data/src/val2017.zip -d data/coco/images
unzip data/src/test2017.zip -d data/coco/images
unzip data/src/annotations_trainval2017.zip -d data/coco
unzip data/src/image_info_test2017.zip -d data/coco

python utils/create_coco_roidb.py --dataset coco --dataset-split train2017
python utils/create_coco_roidb.py --dataset coco --dataset-split val2017
python utils/create_coco_roidb.py --dataset coco --dataset-split test-dev2017

For other datasets or your own data, please check DATASET.md for more details.
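
For orientation, SimpleDet consumes datasets as pickled roidb files in which each image is described by a small record dict. The sketch below shows the rough shape of such a record; the field names (gt_class, gt_bbox, image_url, im_id, ...) are an approximation from memory, so treat DATASET.md as the authoritative schema.

import numpy as np

# rough sketch of one roidb record (field names are approximate; see DATASET.md)
record = {
    "gt_class":  np.array([1, 3], dtype=np.float32),                # category indices
    "gt_bbox":   np.array([[10, 20, 110, 220],
                           [50, 60, 150, 260]], dtype=np.float32),  # x1, y1, x2, y2
    "flipped":   False,                                             # flip flag used by the loader
    "h":         480,                                               # image height in pixels
    "w":         640,                                               # image width in pixels
    "image_url": "data/coco/images/train2017/000000000009.jpg",
    "im_id":     9,
}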

Quick Start

# train
python detection_train.py --config config/faster_r50v1_fpn_1x.py

# test
python detection_test.py --config config/faster_r50v1_fpn_1x.py

Finetune

Please check FINTUNE.md

Model Zoo

Please refer to MODEL_ZOO.md for available models

Distributed Training

Please refer to DISTRIBUTED.md

Project Organization

Code Structure

detection_train.py
detection_test.py
config/
    detection_config.py
core/
    detection_input.py
    detection_metric.py
    detection_module.py
models/
    FPN/
    tridentnet/
    maskrcnn/
    cascade_rcnn/
    retinanet/
mxnext/
symbol/
    builder.py

Config

Everything is configurable from the config file; all changes should be made outside of the source code.
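
A config file is an ordinary Python module made of small parameter classes that the training and testing scripts read. The fragment below is a minimal sketch patterned after the shipped configs (e.g. faster_r50v1_fpn_1x.py); the field values are illustrative, so consult a real config under config/ for the full set of parameters.

# minimal sketch of the parameter-class style used by SimpleDet configs
class General:
    log_frequency = 10
    name = "faster_r50v1_fpn_1x"   # experiment name, also the directory under experiments/
    batch_image = 2                # images per GPU
    fp16 = False                   # enable FP16 training

class KvstoreParam:
    kvstore     = "local"
    batch_image = General.batch_image
    gpus        = [0, 1, 2, 3, 4, 5, 6, 7]
    fp16        = General.fp16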

Experiments

One experiment is a directory in the experiments folder with the same name as its config file.

E.g. for a config file named r50_fixbn_1x.py, the layout is:

config/
    r50_fixbn_1x.py
experiments/
    r50_fixbn_1x/
        checkpoint.params
        log.txt
        coco_minival2014_result.json

Models

The models directory contains state-of-the-art models implemented in SimpleDet.

How is Faster R-CNN built

Faster R-CNN

SimpleDet supports many popular detection methods; here we take Faster R-CNN as a typical example to show how a detector is built.

  • Preprocessing. The preprocessing methods of the detector are implemented through DetectionAugmentation.
    • Image/bbox-related preprocessing, such as Norm2DImage and Resize2DImageBbox.
    • Anchor generator AnchorTarget2D, which generates anchors and corresponding anchor targets for training RPN.
  • Network Structure. The training and testing symbols of the Faster R-CNN detector are defined in FasterRcnn. The key components are listed as follows (a composition sketch follows this list):
    • Backbone. Backbone provides interfaces to build backbone networks, e.g. ResNet and ResNext.
    • Neck. Neck provides interfaces to build complementary feature extraction layers for backbone networks, e.g. FPNNeck builds the top-down pathway for the Feature Pyramid Network.
    • RPN head. RpnHead builds the classification and regression layers that generate proposal outputs for RPN. It also provides the interface to generate sampled proposals for the subsequent R-CNN.
    • RoI Extractor. RoiExtractor extracts features for each RoI (proposal) from the feature maps generated by Backbone and Neck.
    • Bounding Box Head. BboxHead builds the R-CNN layers for proposal refinement.
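
To make the composition concrete, a config wires these components together roughly as shown below. This is a hedged sketch: the placeholder builder names (Backbone, Neck, RpnHead, RoiExtractor, BboxHead) and the get_train_symbol/get_test_symbol signatures approximate symbol/builder.py and the shipped FPN configs rather than quoting the exact API, so refer to config/faster_r50v1_fpn_1x.py for the real names.

# hedged sketch of how a config assembles a Faster R-CNN detector
# (builder names and signatures are approximate; see symbol/builder.py)
from symbol.builder import FasterRcnn as Detector   # assumed to be exposed by symbol/builder.py

# Backbone / Neck / RpnHead / RoiExtractor / BboxHead stand for the concrete
# builders chosen in a real config (e.g. a ResNet-50 backbone and FPNNeck).
backbone      = Backbone(BackboneParam)        # feature extractor
neck          = Neck(NeckParam)                # e.g. FPNNeck, builds the top-down pathway
rpn_head      = RpnHead(RpnParam)              # proposal classification / regression
roi_extractor = RoiExtractor(RoiParam)         # per-RoI feature pooling (RoIAlign)
bbox_head     = BboxHead(BboxParam)            # R-CNN refinement head

detector  = Detector()
train_sym = detector.get_train_symbol(backbone, neck, rpn_head, roi_extractor, bbox_head)
test_sym  = detector.get_test_symbol(backbone, neck, rpn_head, roi_extractor, bbox_head)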

How to build a custom detector

The flexibility of the SimpleDet framework makes it easy to build different detectors. We take TridentNet as an example to demonstrate how to build a custom detector on top of the Faster R-CNN framework.

  • Preprocessing. Additional preprocessing methods can be provided by inheriting from DetectionAugmentation (see the sketch after this list).
    • In TridentNet, a new TridentAnchorTarget2D is implemented to generate anchors for multiple branches and filter anchors for scale-aware training scheme.
  • Network Structure. The new network structure for a custom detector can be constructed by modifying the required components as needed:
    • For TridentNet, we build trident blocks in the Backbone according to the descriptions in the paper. We also provide a TridentRpnHead to generate filtered proposals in the RPN to implement the scale-aware scheme. The other components are shared with the original Faster R-CNN.
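
To illustrate the preprocessing side, a custom transform can be added by subclassing DetectionAugmentation and implementing its apply method over the per-image input record. The sketch below is an assumption: the apply(input_record) interface and the "image"/"gt_bbox" record keys mirror core/detection_input.py as far as we can tell, so verify against existing transforms such as Norm2DImage and Resize2DImageBbox before relying on it.

# hedged sketch of a custom preprocessing transform
# (the apply(input_record) interface and record keys are assumed from core/detection_input.py)
import numpy as np
from core.detection_input import DetectionAugmentation

class ClipBbox2DImage(DetectionAugmentation):
    """Clip ground-truth boxes to the image boundary (illustrative only)."""
    def apply(self, input_record):
        h, w = input_record["image"].shape[:2]
        gt_bbox = input_record["gt_bbox"]
        gt_bbox[:, 0] = np.clip(gt_bbox[:, 0], 0, w - 1)  # x1
        gt_bbox[:, 1] = np.clip(gt_bbox[:, 1], 0, h - 1)  # y1
        gt_bbox[:, 2] = np.clip(gt_bbox[:, 2], 0, w - 1)  # x2
        gt_bbox[:, 3] = np.clip(gt_bbox[:, 3], 0, h - 1)  # y2
        input_record["gt_bbox"] = gt_bbox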

Contributors

Yuntao Chen, Chenxia Han, Yanghao Li, Zehao Huang, Naiyan Wang, Xiaotao Chen, Jingqiu Zhou, Zhen Wei, Zehui Chen, Zhaoxiang Zhang, Bo Ke

License and Citation

This project is released under the Apache 2.0 license for non-commercial usage. For commercial usage, please contact us for another license.

If you find our project helpful, please consider citing our tech report.

@article{JMLR:v20:19-205,
  author  = {Yuntao Chen and Chenxia Han and Yanghao Li and Zehao Huang and Yi Jiang and Naiyan Wang and Zhaoxiang Zhang},
  title   = {SimpleDet: A Simple and Versatile Distributed Framework for Object Detection and Instance Recognition},
  journal = {Journal of Machine Learning Research},
  year    = {2019},
  volume  = {20},
  number  = {156},
  pages   = {1-8},
  url     = {http://jmlr.org/papers/v20/19-205.html}
}
Comments
  • Traning stop error in Nvidia-socker

    Hi, when I train my own dataset with tridentnet_r50v2c4_c5_1x.py, it stops training in Epoch[0] and does not train any more, and no params are saved. I tried several times but failed. How can I solve this problem? Thank you! #19

    opened by dl19940602 33
  • Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading CUDA: invalid device ordinal

    When I set gpus to a list not starting from 0 in config:

    class KvstoreParam:
            kvstore     = "local"
            batch_image = General.batch_image
            gpus        = [0, 1, 2, 3, 4, 5, 6, 7]
            fp16        = General.fp16
    

    For example, if I set it to [1], [2, 3] or others, it raises this error while training:

    Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading CUDA: invalid device ordinal

    Could you please help to fix it? Thank you.

    02-03 11:03:56 lr 0.01125, lr_iters [320000, 426666] 02-03 11:03:56 warmup lr 0.0, warmup step 5333 Traceback (most recent call last): File "/home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/symbol/symbol.py", line 1522, in simple_bind ctypes.byref(exe_handle))) File "/home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/base.py", line 251, in check_call raise MXNetError(py_str(_LIB.MXGetLastError())) mxnet.base.MXNetError: [18:04:07] src/storage/storage.cc:65: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading CUDA: invalid device ordinal

    Stack trace returned 10 entries: [bt] (0) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(dmlc::StackTraceabi:cxx11+0x5b) [0x7fdc758f9adb] [bt] (1) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x28) [0x7fdc758fa648] [bt] (2) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::StorageImpl::ActivateDevice(mxnet::Context)+0x5f) [0x7fdc77e7639f] [bt] (3) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::StorageImpl::Alloc(mxnet::Storage::Handle*)+0x50) [0x7fdc77e718f0] [bt] (4) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::common::InitZeros(mxnet::NDArrayStorageType, nnvm::TShape const&, mxnet::Context const&, int)+0x73f) [0x7fdc77f0020f] [bt] (5) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::common::ReshapeOrCreate(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, nnvm::TShape const&, int, mxnet::NDArrayStorageType, mxnet::Context const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, mxnet::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, mxnet::NDArray> > >, bool)+0xa26) [0x7fdc77f093a6] [bt] (6) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::InitArguments(nnvm::IndexedGraph const&, std::vector<nnvm::TShape, std::allocatornnvm::TShape > const&, std::vector<int, std::allocator > const&, std::vector<int, std::allocator > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::vector<mxnet::OpReqType, std::allocatormxnet::OpReqType > const&, std::unordered_set<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::__cxx11::basic_string<char, std::char_traits, std::allocator > > > const&, mxnet::Executor const, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, mxnet::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, mxnet::NDArray> > >, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >)+0xd47) [0x7fdc77ef1ff7] [bt] (7) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::exec::GraphExecutor::Init(nnvm::Symbol, mxnet::Context const&, std::map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, mxnet::Context, 
std::less<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, nnvm::TShape, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, nnvm::TShape> > > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, int, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, int> > > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, int, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, int> > > const&, std::vector<mxnet::OpReqType, std::allocatormxnet::OpReqType > const&, std::unordered_set<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::__cxx11::basic_string<char, std::char_traits, std::allocator > > > const&, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, mxnet::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, mxnet::NDArray> > >, mxnet::Executor*, std::unordered_map<nnvm::NodeEntry, mxnet::NDArray, nnvm::NodeEntryHash, nnvm::NodeEntryEqual, std::allocator<std::pair<nnvm::NodeEntry const, mxnet::NDArray> > > const&)+0xa6b) [0x7fdc77efc94b] [bt] (8) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mxnet::Executor::SimpleBind(nnvm::Symbol, mxnet::Context const&, std::map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, mxnet::Context, std::less<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, mxnet::Context> > > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::vector<mxnet::Context, std::allocatormxnet::Context > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, nnvm::TShape, 
std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, nnvm::TShape> > > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, int, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, int> > > const&, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, int, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, int> > > const&, std::vector<mxnet::OpReqType, std::allocatormxnet::OpReqType > const&, std::unordered_set<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::__cxx11::basic_string<char, std::char_traits, std::allocator > > > const&, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >, std::vector<mxnet::NDArray, std::allocatormxnet::NDArray >, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, mxnet::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits, std::allocator > const, mxnet::NDArray> > >, mxnet::Executor*)+0x169) [0x7fdc77efd099] [bt] (9) /home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(MXExecutorSimpleBind+0x2c99) [0x7fdc77e85559]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "detection_train.py", line 226, in train_net(parse_args()) File "detection_train.py", line 209, in train_net num_epoch=end_epoch File "/data/simple/simpledet/core/detection_module.py", line 959, in fit for_training=True, force_rebind=force_rebind) File "/data/simple/simpledet/core/detection_module.py", line 440, in bind state_names=self._state_names) File "/home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/module/executor_group.py", line 279, in init self.bind_exec(data_shapes, label_shapes, shared_group) File "/home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/module/executor_group.py", line 375, in bind_exec shared_group)) File "/home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/module/executor_group.py", line 662, in _bind_ith_exec shared_buffer=shared_data_arrays, **input_shapes) File "/home/simple/anaconda2/envs/py3/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/symbol/symbol.py", line 1528, in simple_bind raise RuntimeError(error_msg) RuntimeError: simple_bind error. Arguments: data: (3, 3, 800, 1200) im_info: (3, 3) gt_bbox: (3, 300, 5) valid_ranges: (3, 3, 2) rpn_cls_label: (3, 3, 56250) rpn_reg_target: (3, 3, 60, 50, 75) rpn_reg_weight: (3, 3, 60, 50, 75) [11:04:07] src/storage/storage.cc:65: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading CUDA: invalid device ordinal

    opened by Kongsea 16
  • shape do not match total elements

    I want to train on the COCO dataset with TridentNet, and I use the detection config "tridentnet_r50v1c4_c5_1x.py", but I got an error like this:

    File "/usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/base.py", line 253, in check_call raise MXNetError(py_str(LIB.MXGetLastError())) mxnet.base.MXNetError: [13:25:53] include/mxnet/././tensor_blob.h:290: Check failed: this->shape.Size() == static_cast<size_t>(shape.Size()) (6 vs. 12) : TBlob.get_with_shape: new and old shape do not match total elements Stack trace: [bt] (0) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7f9ce3bb7a72] [bt] (1) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(mshadow::Tensor<mshadow::gpu, 2, float> mxnet::TBlob::get_with_shape<mshadow::gpu, 2, float>(mshadow::Shape<2> const&, mshadow::Streammshadow::gpu) const+0x21c) [0x7f9ce57d141c] [bt] (2) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(mxnet::op::ProposalTargetOp_v2<mshadow::gpu, float>::Forward(mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocatormxnet::TBlob > const&, std::vector<mxnet::OpReqType, std::allocatormxnet::OpReqType > const&, std::vector<mxnet::TBlob, std::allocatormxnet::TBlob > const&, std::vector<mxnet::TBlob, std::allocatormxnet::TBlob > const&)+0x684) [0x7f9ce59b5f44] [bt] (3) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(mxnet::op::OperatorState::Forward(mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocatormxnet::TBlob > const&, std::vector<mxnet::OpReqType, std::allocatormxnet::OpReqType > const&, std::vector<mxnet::TBlob, std::allocatormxnet::TBlob > const&)+0xa81) [0x7f9ce54dad51] [bt] (4) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(mxnet::exec::StatefulComputeExecutor::Run(mxnet::RunContext, bool)+0x76) [0x7f9ce5c62ee6] [bt] (5) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(+0x30d2a67) [0x7f9ce5c2ca67] [bt] (6) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(mxnet::engine::ThreadedEngine::ExecuteOprBlock(mxnet::RunContext, mxnet::engine::OprBlock)+0x995) [0x7f9ce5b6c485] [bt] (7) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(void mxnet::engine::ThreadedEnginePerDevice::GPUWorker<(dmlc::ConcurrentQueueType)0>(mxnet::Context, bool, mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)0>, std::shared_ptrdmlc::ManualEvent const&)+0x11d) [0x7f9ce5b84e8d] [bt] (8) /usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/libmxnet.so(std::_Function_handler<void (std::shared_ptrdmlc::ManualEvent), mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock, bool)::{lambda()#4}::operator()() const::{lambda(std::shared_ptrdmlc::ManualEvent)#1}>::_M_invoke(std::_Any_data const&, std::shared_ptrdmlc::ManualEvent&&)+0x4e) [0x7f9ce5b8513e]

    opened by w11m 15
  • Implementation of PAFPN & FPG

    The results have been reported in config/xxx/README.md. We also provide an advanced version of PAFPN, which stacks the top-down and bottom-up pathways multiple times and which we found useful on the Waymo open dataset.

    doc 
    opened by zehuichen123 12
  • tridentnet: no module named 'operator_py.cython.bbox'

    Hi, I want to test tridentnet and I run: python3 detection_test.py --config config/tridentnet_r101v2c4_c5_multiscale_addminival_3x_fp16.py

    I get this error: Traceback (most recent call last): File "detection_test.py", line 4, in from core.detection_input import Loader File "/media/jack/code/simpledet/core/detection_input.py", line 10, in from operator_py.cython.bbox import bbox_overlaps_cython ImportError: No module named 'operator_py.cython.bbox'

    How should I use tridentnet? Can you help me?

    opened by globalmaster 12
  • There is no method "mx.sym.contrib.BroadcastScale" in mxnet

    Hi, I bought an Nvidia 3070 to run SimpleDet, but MXNet cannot run with CUDA 10.1 on the 3070. To compile MXNet, I tried to find the source corresponding to SimpleDet, but I cannot find gen_contrib.py in the history of MXNet 1.6. Is anyone working with the new graphics card? Thanks.

    opened by rwecho 10
  • python detection_test.py --config config/tridentnet_r101v1c4_c5_1x.py

    I tested it many times, but the results were very bad: the AP is around 0.001 and I don't know why. My annotation file is right, but TridentNet can't read all my pictures during testing. I test about 3000 images, but the net only tests about two dozen images. Is there something wrong with that part? [images]

    opened by chenmw269 10
  • [install MXNET]  wrong: src/operator/contrib/roi_align_v2.cc

    src/operator/contrib/roi_align_v2.cc:210:2: error: no matching function for call to ‘nnvm::Op::set_attr(const char [12], mxnet::op::<lambda(const nnvm::NodeAttrs&, std::vectormxnet::TShape, std::vectormxnet::TShape)>)’ }) ^ In file included from include/mxnet/base.h:35:0, from src/operator/contrib/./../mshadow_op.h:29, from src/operator/contrib/./roi_align_v2-inl.h:12, from src/operator/contrib/roi_align_v2.cc:7: /home/fw/Softwares/simpledet/mxnet/3rdparty/tvm/nnvm/include/nnvm/op.h:432:12: note: candidate: template nnvm::Op& nnvm::Op::set_attr(const string&, const ValueType&, int) inline Op& Op::set_attr( // NOLINT() ^ /home/fw/Softwares/simpledet/mxnet/3rdparty/tvm/nnvm/include/nnvm/op.h:432:12: note: template argument deduction/substitution failed: src/operator/contrib/roi_align_v2.cc:210:2: note: cannot convert ‘mxnet::op::<lambda(const nnvm::NodeAttrs&, std::vectormxnet::TShape, std::vectormxnet::TShape)>{}’ (type ‘mxnet::op::<lambda(const nnvm::NodeAttrs&, std::vectormxnet::TShape, std::vectormxnet::TShape)>’) to type ‘const std::function<bool(const nnvm::NodeAttrs&, std::vector<nnvm::TShape, std::allocatornnvm::TShape >, std::vector<nnvm::TShape, std::allocatornnvm::TShape >)>&’ }) ^ src/operator/contrib/roi_align_v2.cc:211:27: error: expected primary-expression before ‘>’ token .set_attrnnvm::FInferType("FInferType", [](const nnvm::NodeAttrs& attrs, ^ src/operator/contrib/roi_align_v2.cc:223:1: warning: left operand of comma operator has no effect [-Wunused-value] }) ^ src/operator/contrib/roi_align_v2.cc:224:2: error: ‘struct mxnet::op::<lambda(const struct nnvm::NodeAttrs&, class std::vector<int, std::allocator >, class std::vector<int, std::allocator >)>’ has no member named ‘set_attr’ .set_attr("FCompute", ROIAlignForward_v2) ^ src/operator/contrib/roi_align_v2.cc:224:19: error: expected primary-expression before ‘>’ token .set_attr("FCompute", ROIAlignForward_v2) ^ src/operator/contrib/roi_align_v2.cc:224:38: warning: left operand of comma operator has no effect [-Wunused-value] .set_attr("FCompute", ROIAlignForward_v2) ^ src/operator/contrib/roi_align_v2.cc:224:38: error: no context to resolve type of ‘ROIAlignForward_v2mxnet::cpu’ src/operator/contrib/roi_align_v2.cc:225:26: error: expected primary-expression before ‘>’ token .set_attrnnvm::FGradient("FGradient", ROIAlignGrad_v2{"_backward_ROIAlign_v2"}) ^ src/operator/contrib/roi_align_v2.cc:225:80: warning: left operand of comma operator has no effect [-Wunused-value] .set_attrnnvm::FGradient("FGradient", ROIAlignGrad_v2{"_backward_ROIAlign_v2"}) ^ src/operator/contrib/roi_align_v2.cc:226:2: error: ‘struct mxnet::op::ROIAlignGrad_v2’ has no member named ‘add_argument’ .add_argument("data", "NDArray-or-Symbol", "Input data to the pooling operator, a 4D Feature maps") ^ g++ -std=c++11 -c -DMSHADOW_FORCE_STREAM -Wall -Wsign-compare -O3 -DNDEBUG=1 -I/home/fw/Softwares/simpledet/mxnet/3rdparty/mshadow/ -I/home/fw/Softwares/simpledet/mxnet/3rdparty/dmlc-core/include -fPIC -I/home/fw/Softwares/simpledet/mxnet/3rdparty/tvm/nnvm/include -I/home/fw/Softwares/simpledet/mxnet/3rdparty/dlpack/include -I/home/fw/Softwares/simpledet/mxnet/3rdparty/tvm/include -Iinclude -funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas -Wno-unused-local-typedefs -msse3 -mf16c -I/usr/local/cuda/include -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 -I/home/fw/Softwares/simpledet/mxnet/3rdparty/mkldnn/build/install/include -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMSHADOW_USE_PASCAL=0 -DMXNET_USE_MKLDNN=1 -DUSE_MKL=1 
-I/home/fw/Softwares/simpledet/mxnet/src/operator/nn/mkldnn/ -I/home/fw/Softwares/simpledet/mxnet/3rdparty/mkldnn/build/install/include -DMXNET_USE_OPENCV=0 -DMSHADOW_USE_CUDNN=1 -DMXNET_USE_DIST_KVSTORE -I/home/fw/Softwares/simpledet/mxnet/3rdparty/ps-lite/include -I/home/fw/Softwares/simpledet/mxnet/deps/include -I/home/fw/Softwares/simpledet/mxnet/3rdparty/nvidia_cub -I/include -DMXNET_USE_NCCL=1 -DMXNET_USE_LIBJPEG_TURBO=0 -MMD -c src/operator/contrib/sync_batch_norm.cc -o build/src/operator/contrib/sync_batch_norm.o Makefile:508: recipe for target 'build/src/operator/contrib/roi_align_v2.o' failed make: *** [build/src/operator/contrib/roi_align_v2.o] Error 1 make: *** Waiting for unfinished jobs.... In file included from src/operator/contrib/sync_batch_norm.cc:26:0: src/operator/contrib/sync_batch_norm-inl.h: In member function ‘virtual bool mxnet::op::SyncBatchNormProp::InferType(std::vector<int, std::allocator >, std::vector<int, std::allocator >, std::vector<int, std::allocator >) const’: src/operator/contrib/sync_batch_norm-inl.h:587:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (index_t i = 1; i < in_type->size(); ++i) { ^ src/operator/contrib/sync_batch_norm-inl.h:594:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (index_t i = 0; i < aux_type->size(); ++i) { ^

    bug 
    opened by heiyuxiaokai 10
  • mxnet install

    The code in 'incubator-mxnet-master' has been recompiled successfully following 'Setup from Scratch' in the INSTALL.md file with "make -j4", and libmxnet.so and libmxnet.a have been created. But after "cd python; python3 setup.py install" and running the SimpleDet code, it shows "AttributeError: module 'mxnet.symbol.contrib' has no attribute 'ROIAlign_v2'".
    I have run the command "grep -r contrib.ROIAlign_v2" and it is found in libmxnet.so and libmxnet.a. Is there any modification in "incubator-mxnet-master/python"?

    opened by hxy1181501030 10
  • Install error when download and install pre-built wheel for CUDA10.1

    I am trying to install SimpleDet following setup.sh. When I run pip install https://1dv.alarge.space/mxnet_cu101-1.6.0b20190820-py2.py3-none-manylinux1_x86_64.whl, the following error happens:

    Downloading https://1dv.alarge.space/mxnet_cu101-1.6.0b20191214-py2.py3-none-manylinux1_x86_64.whl (556.0MB) |███████████████████████████████▌| 546.7MB 97kB/s eta 0:01:36ERROR: Exception: Traceback (most recent call last): File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py", line 425, in _error_catcher yield File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py", line 507, in read data = self._fp.read(amt) if not fp_closed else b"" File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 62, in read data = self.__fp.read(amt) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/http/client.py", line 457, in read n = self.readinto(b) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/http/client.py", line 501, in readinto n = self.fp.readinto(b) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/socket.py", line 589, in readinto return self._sock.recv_into(b) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/ssl.py", line 1071, in recv_into return self.read(nbytes, buffer) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/ssl.py", line 929, in read return self._sslobj.read(len, buffer) ConnectionResetError: [Errno 104] Connection reset by peer

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/cli/base_command.py", line 153, in _main status = self.run(options, args) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/commands/install.py", line 382, in run resolver.resolve(requirement_set) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/legacy_resolve.py", line 201, in resolve self._resolve_one(requirement_set, req) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/legacy_resolve.py", line 365, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/legacy_resolve.py", line 313, in _get_abstract_dist_for req, self.session, self.finder, self.require_hashes File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/operations/prepare.py", line 194, in prepare_linked_requirement progress_bar=self.progress_bar File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/download.py", line 465, in unpack_url progress_bar=progress_bar File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/download.py", line 316, in unpack_http_url progress_bar) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/download.py", line 551, in _download_http_url _download_url(resp, link, content_file, hashes, progress_bar) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/download.py", line 255, in _download_url consume(downloaded_chunks) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/utils/misc.py", line 641, in consume deque(iterator, maxlen=0) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/download.py", line 223, in written_chunks for chunk in chunks: File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/utils/ui.py", line 160, in iter for x in it: File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_internal/download.py", line 212, in resp_read decode_content=False): File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py", line 564, in stream data = self.read(amt=amt, decode_content=decode_content) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py", line 529, in read raise IncompleteRead(self._fp_bytes_read, self.length_remaining) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/contextlib.py", line 130, in exit self.gen.throw(type, value, traceback) File "/home/gone/anaconda3/envs/simpledet/lib/python3.7/site-packages/pip/_vendor/urllib3/response.py", line 443, in _error_catcher raise ProtocolError("Connection broken: %r" % e, e) pip._vendor.urllib3.exceptions.ProtocolError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))

    I guess it's caused by the server hosting the whl. Is that correct? Are there any solutions for this situation? My system is Ubuntu 16.04.

    opened by liu-zg15 9
  • The training speed is too slowly

    This is my training command:
    python detection_train.py --config config/tridentnet_r101v2c4_c5_multiscale_addminival_3x_fp16.py

    [03:27:36] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable) 09-07 11:34:06 Epoch[0] Batch [20] Speed: 1.29 samples/sec Train-RpnAcc=0.554313, RpnL1=1.112343, RcnnAcc=0.540504, RcnnL1=2.836474, 09-07 11:39:05 Epoch[0] Batch [40] Speed: 1.60 samples/sec Train-RpnAcc=0.673116, RpnL1=0.962950, RcnnAcc=0.734650, RcnnL1=2.822075, 09-07 11:44:00 Epoch[0] Batch [60] Speed: 1.63 samples/sec Train-RpnAcc=0.753687, RpnL1=0.881134, RcnnAcc=0.796921, RcnnL1=2.820415, 09-07 11:50:30 Epoch[0] Batch [80] Speed: 1.23 samples/sec Train-RpnAcc=0.796221, RpnL1=0.812162, RcnnAcc=0.828212, RcnnL1=2.809139, 09-07 11:59:41 Epoch[0] Batch [100] Speed: 0.87 samples/sec Train-RpnAcc=0.820873, RpnL1=0.768616, RcnnAcc=0.846827, RcnnL1=2.811466, 09-07 12:05:51 Epoch[0] Batch [120] Speed: 1.30 samples/sec Train-RpnAcc=0.837998, RpnL1=0.729496, RcnnAcc=0.860605, RcnnL1=2.807163, 09-07 12:13:06 Epoch[0] Batch [140] Speed: 1.10 samples/sec Train-RpnAcc=0.850185, RpnL1=0.700992, RcnnAcc=0.869596, RcnnL1=2.804221, 09-07 12:20:44 Epoch[0] Batch [160] Speed: 1.05 samples/sec Train-RpnAcc=0.859989, RpnL1=0.677671, RcnnAcc=0.875914, RcnnL1=2.799287, 09-07 12:28:19 Epoch[0] Batch [180] Speed: 1.05 samples/sec Train-RpnAcc=0.867247, RpnL1=0.662773, RcnnAcc=0.880708, RcnnL1=2.793266, 09-07 12:36:31 Epoch[0] Batch [200] Speed: 0.97 samples/sec Train-RpnAcc=0.873369, RpnL1=0.647463, RcnnAcc=0.884404, RcnnL1=2.789041, 09-07 12:44:33 Epoch[0] Batch [220] Speed: 1.00 samples/sec Train-RpnAcc=0.878552, RpnL1=0.635126, RcnnAcc=0.887782, RcnnL1=2.782216, 09-07 12:51:30 Epoch[0] Batch [240] Speed: 1.15 samples/sec Train-RpnAcc=0.882332, RpnL1=0.627403, RcnnAcc=0.890268, RcnnL1=2.776619, 09-07 12:59:11 Epoch[0] Batch [260] Speed: 1.04 samples/sec Train-RpnAcc=0.885757, RpnL1=0.616748, RcnnAcc=0.892246, RcnnL1=2.770003,

    I use 8 GPUs (1080Ti) and 16 CPUs. Why is it so slow? I have no idea...

    opened by louielu1027 9
  • list index out of range

    Hello, I had an error using the network (https://github.com/TuSimple/simpledet/blob/master/config/tridentnet_r50v1c4_c5_1x.py).

    I use Pascal VOC (https://github.com/TuSimple/simpledet/blob/master/utils/create_voc_roidb.py)

    and then train the model without a problem: python3 detection_train.py --config config/tridentnet_r50v1c4_c5_1x.py

    But when I do the testing: (simpledet) doodles@cgu:/data/doodles/simpledet$ python3 detection_test.py --config config/tridentnet_r50v1c4_c5_1x.py

    Traceback (most recent call last): File "/home/doodles/miniconda3/envs/simpledet/lib/python3.7/multiprocessing/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/home/doodles/miniconda3/envs/simpledet/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "detection_test.py", line 258, in do_nms dataset_cid = coco.getCatIds()[cid] IndexError: list index out of range """

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last): File "detection_test.py", line 268, in output_dict = pool.map(do_nms, output_dict.keys()) File "/home/doodles/miniconda3/envs/simpledet/lib/python3.7/multiprocessing/pool.py", line 268, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "/home/doodles/miniconda3/envs/simpledet/lib/python3.7/multiprocessing/pool.py", line 657, in get raise self._value IndexError: list index out of range

    Segmentation fault: 11

    Segmentation fault (core dumped)

    May I know how to solve this issue?

    Thanks!

    opened by adeindra94 0
  • Segmentation fault: 11

    Describe the bug A clear and concise description of what the bug is.

    On executing the below command I am facing segmentation fault and I know that there is the other issue similar to this but there is no conclusion or solution.

    Using GPU device: with tf.device('/device:GPU:0'): !source activate simpledet && python detection_infer_speed.py --config config/faster_r50v1_fpn_1x.py --shape 800 1333

    Without GPU device: !source activate simpledet && python detection_infer_speed.py --config config/faster_r50v1_fpn_1x.py --shape 800 1333

    Error message: src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable) 102.16482162475586
    Segmentation fault: 11

    Which config are you using

    I have followed the installation steps and have also tried installing multiple versions of MXNet, but still no result; only the one specified in the installation guide works, I guess, and it gives the above error.

    Hardware info I am using Google colab (Pro version) with 32GB ram and GPU enabled device CPU, GPU, Storage(Disk or NFS)

    Software info: Nvidia P-100 series GPU and CUDA version 11.2 (but I have installed 10.1; still, when I check using the nvcc command it shows 11.1; anyway, that's not an issue, I guess, as of now)

    OS: Ubuntu 18.04

    How did you set up your MXNet for SimpleDet

    I downloaded mxnet using below command: !source activate simpledet && pip install https://1dv.aflat.top/mxnet_cu101-1.6.0b20191214-py2.py3-none-manylinux1_x86_64.whl

    Additional context: I have been trying to resolve this issue for a long time; any help would be appreciated.

    opened by gopikrishnabs 1
  • create_coco_roidb error

    Hello, I want to train a faster rcnn model on my custom data. I followed the getting started tutorial but when I ran the create_coco_roidb command I got this error. PS: I am using colab. `%%shell cd ./simpledet mkdir -p data/src mkdir -p data/newspaper/annotations ln -s content/drive/MyDrive/training_dataset_1/training_dataset.json data/newspaper/annotations/instances_train2007.json ln -s content/drive/MyDrive/validation (1)/validation_dataset.json data/newspaper/annotations/instances_val2007.json

    mkdir -p simpledet/data/coco/images ln -s 'content/drive/MyDrive/training_dataset_1' data/newspaper/images/train2007 ln -s 'content/drive/MyDrive/validation (1)' data/newspaper/images/val2007
    python ./simpledet/utils/create_coco_roidb.py --dataset newspaper --dataset-split train2007 python ./simpledet/utils/create_coco_roidb.py --dataset newspaper --dataset-split val2007 ` The error is in the level of create_coco_roidb image

    opened by FirasKedidi 0
  • Don't import mxnet in https://oss.aflat.top/simpledet.img

    It doesn't import mxnet in https://oss.aflat.top/simpledet.img. I find the python3 version is 3.5 in the img, while mxnet is installed for python3.6. I want to know how to solve this!

    I installed python3.6.8 in simpledet.img, default path /usr/local/lib/python3.6. I did ln -s /root/.pyenv/versions/3.6.8/lib/python3.6/ /usr/local/lib/python3.6/, but it still cannot import mxnet.

    opened by zerojuzi 0