FAIR's research platform for object detection, implementing popular algorithms like Mask R-CNN and RetinaNet.

Overview

Detectron is deprecated. Please see detectron2, a ground-up rewrite of Detectron in PyTorch.

Detectron

Detectron is Facebook AI Research's software system that implements state-of-the-art object detection algorithms, including Mask R-CNN. It is written in Python and powered by the Caffe2 deep learning framework.

At FAIR, Detectron has enabled numerous research projects, including: Feature Pyramid Networks for Object Detection, Mask R-CNN, Detecting and Recognizing Human-Object Interactions, Focal Loss for Dense Object Detection, Non-local Neural Networks, Learning to Segment Every Thing, Data Distillation: Towards Omni-Supervised Learning, DensePose: Dense Human Pose Estimation In The Wild, and Group Normalization.

Example Mask R-CNN output.

Introduction

The goal of Detectron is to provide a high-quality, high-performance codebase for object detection research. It is designed to be flexible in order to support rapid implementation and evaluation of novel research. Detectron includes implementations of the following object detection algorithms:

  • Mask R-CNN
  • RetinaNet
  • Faster R-CNN
  • RPN
  • Fast R-CNN
  • R-FCN

using the following backbone network architectures:

  • ResNeXt{50,101,152}
  • ResNet{50,101,152}
  • Feature Pyramid Networks (with ResNet/ResNeXt)
  • VGG16

Additional backbone architectures may be easily implemented. For more details about these models, please see References below.

Update

License

Detectron is released under the Apache 2.0 license. See the NOTICE file for additional details.

Citing Detectron

If you use Detectron in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry.

@misc{Detectron2018,
  author =       {Ross Girshick and Ilija Radosavovic and Georgia Gkioxari and
                  Piotr Doll\'{a}r and Kaiming He},
  title =        {Detectron},
  howpublished = {\url{https://github.com/facebookresearch/detectron}},
  year =         {2018}
}

Model Zoo and Baselines

We provide a large set of baseline results and trained models available for download in the Detectron Model Zoo.

Installation

Please find installation instructions for Caffe2 and Detectron in INSTALL.md.

Quick Start: Using Detectron

After installation, please see GETTING_STARTED.md for brief tutorials covering inference and training with Detectron.
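
As a quick orientation, here is a minimal sketch of the programmatic path that tools/infer_simple.py follows for single-image inference. It is a sketch under assumptions, not an official API example: the config file and test image are taken from the demo command quoted in the comments below, the weights file name is a placeholder, and the import paths assume the packaged detectron layout (older checkouts expose the same modules under lib/).

import cv2
from detectron.core.config import assert_and_infer_cfg, cfg, merge_cfg_from_file
from detectron.core.test_engine import im_detect_all, initialize_model_from_cfg
import detectron.utils.c2 as c2_utils

c2_utils.import_detectron_ops()                       # load the custom Detectron Caffe2 ops
merge_cfg_from_file('configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml')
cfg.NUM_GPUS = 1
assert_and_infer_cfg(cache_urls=False)

model = initialize_model_from_cfg('model_final.pkl')  # placeholder weights file
im = cv2.imread('demo/16004479832_a748d55f21_k.jpg')  # any test image
with c2_utils.NamedCudaScope(0):
    cls_boxes, cls_segms, cls_keyps = im_detect_all(model, im, None)

GETTING_STARTED.md remains the authoritative reference for the command-line tools and for training.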

Getting Help

To start, please check the troubleshooting section of our installation instructions as well as our FAQ. If you couldn't find help there, try searching our GitHub issues. We intend the issues page to be a forum in which the community collectively troubleshoots problems.

If bugs are found, we appreciate pull requests (including adding Q&A's to FAQ.md and improving our installation instructions and troubleshooting documents). Please see CONTRIBUTING.md for more information about contributing to Detectron.

References

Comments
  • multi-GPU training throws an illegal memory access

    When I use one GPU to train, there is no problem, but when I use two or four GPUs the problem comes up. The log output:

    terminate called after throwing an instance of 'caffe2::EnforceNotMet' what(): [enforce fail at context_gpu.h:170] . Encountered CUDA error: an illegal memory access was encountered Error from operator: input: "gpu_0/rpn_cls_logits_fpn2_w_grad" input: "gpu_1/rpn_cls_logits_fpn2_w_grad" output: "gpu_0/rpn_cls_logits_fpn2_w_grad" name: "" type: "Add" device_option { device_type: 1 cuda_gpu_id: 0 } *** Aborted at 1516866180 (unix time) try "date -d @1516866180" if you are using GNU date *** terminate called recursively terminate called recursively terminate called recursively PC: @ 0x7ff67559f428 gsignal terminate called recursively terminate called recursively E0125 07:43:00.745853 55683 pybind_state.h:422] Exception encountered running PythonOp function: RuntimeError: [enforce fail at context_gpu.h:307] error == cudaSuccess. 77 vs 0. Error at: /mnt/hzhida/project/caffe2/caffe2/core/context_gpu.h:307: an illegal memory access was encountered

    At: /mnt/hzhida/facebook/detectron/lib/ops/generate_proposals.py(101): forward *** SIGABRT (@0x3e80000d84f) received by PID 55375 (TID 0x7ff453fff700) from PID 55375; stack trace: *** terminate called recursively @ 0x7ff675945390 (unknown) @ 0x7ff67559f428 gsignal @ 0x7ff6755a102a abort @ 0x7ff66f37e84d __gnu_cxx::__verbose_terminate_handler() @ 0x7ff66f37c6b6 (unknown) @ 0x7ff66f37c701 std::terminate() @ 0x7ff66f3a7d38 (unknown) @ 0x7ff67593b6ba start_thread @ 0x7ff67567141d clone @ 0x0 (unknown) Aborted (core dumped)

    upstream bug 
    opened by zdwong 64
  • Not able to run GPU for Caffe2/Detectron

    • Operating system: Ubuntu 16.04
    • GPU models (for all devices if they are not all the same): GTX 1080 8GB
    • python --version: 2.7
    • gcc version: 5.4.0

    I have installed CUDA, cuDNN, and NCCL in a conda environment and followed the steps in the installation file. I used conda (as mentioned) to install Caffe2 and the other libraries.

    conda install -c caffe2 caffe2-cuda9.0-cudnn7

    Then, to see if the GPU is working, I get the following:

    WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
    WARNING:root:Debug message: libnccl.so.2: cannot open shared object file: No such file or directory
    Segmentation fault (core dumped)

    I don't know what I am doing wrong or what I am missing. Please let me know.
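
    For what it's worth, a quick hedged diagnostic for this kind of failure (not from the original report): check from Python whether the Caffe2 build sees any GPU at all and whether the loader can find libnccl, which the warning above says is missing.

    import ctypes.util
    from caffe2.python import workspace

    # 0 means this Caffe2 build/installation has no usable GPU support
    print('CUDA devices visible to Caffe2:', workspace.NumCudaDevices())
    # None means the dynamic loader cannot find libnccl, matching the warning above
    print('libnccl resolved to:', ctypes.util.find_library('nccl'))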

    opened by Flock1 44
  • Community effort to bring CPU and pure Caffe2 / C++ inference support

    It looks like many people are asking for CPU inference, and it seems it will take a fair amount of work to make it happen. What I propose is that we use this issue to publicly state what work is needed, so that people eager to have this feature can easily help implement it.

    @daquexian, @orionr, @rbgirshick do you have time to share a list of features / ops needed to convert all the models with convert_pkl_to_pb.py ?

    | Feature/Operator | Where do we need it? | State | Difficulty |
    | ---------------- | -------------------- | ----- | ---------- |
    | CollectAndDistributeFpnRpnProposals | FPN | 🕔 PR #372 submitted & review needed | ? |
    | ... | ... | ... | ... |

    I would like to contribute to this effort but I do not know where to begin. If you are willing to implement a feature, do not hesitate to say so in this issue.

    P.S. To avoid any confusion: I am only a random user of Detectron, and my initiative was not solicited by the maintainers.

    opened by gadcam 37
  • Trouble training custom dataset

    Training Detectron on custom dataset

    I'm trying to train Mask R-CNN on my custom dataset to perform a segmentation task on new classes that COCO or ImageNet have never seen.

    • I first converted my dataset to COCO format so that it can be loaded by pycocotools.
    • I added my dataset path to dataset_catalog.py and created the correct links to the image directory and annotation file (a hedged sketch of such a catalog entry is shown just below). The config file I used is based on configs/getting_started/tutorial_1gpu_e2e_faster_rcnn_R-50-FPN.yaml. My dataset contains only 4 classes without background, so I set NUM_CLASSES to 5 (4 does not work either). I then try to train using the command below:

    python2 tools/train_net.py --cfg configs/encov/copy_maskrcnn_R-101-FPN.yaml OUTPUT_DIR /tmp/detectron-output/
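
    As a minimal sketch (the dataset name and paths are illustrative, and the key names assume the _DATASETS layout of Detectron's datasets/dataset_catalog.py), such a catalog entry looks roughly like this:

    # Illustrative entry for a custom COCO-format dataset; paths are placeholders.
    _DATA_DIR = '/path/to/detectron/datasets/data'
    _IM_DIR = 'image_directory'
    _ANN_FN = 'annotation_file'

    _DATASETS = {
        'my_custom_train': {
            _IM_DIR: _DATA_DIR + '/my_custom/images',
            _ANN_FN: _DATA_DIR + '/my_custom/annotations/instances_train.json',
        },
    }

    The training YAML then lists the new name in TRAIN.DATASETS and sets MODEL.NUM_CLASSES to the number of foreground classes plus one for the background class.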

    ERROR 1:

    I get the following error (the complete log file is here: output.txt):

    At:
      /home/encov/Softwares/Detectron/lib/roi_data/fast_rcnn.py(269): _expand_bbox_targets
      /home/encov/Softwares/Detectron/lib/roi_data/fast_rcnn.py(181): _sample_rois
      /home/encov/Softwares/Detectron/lib/roi_data/fast_rcnn.py(112): add_fast_rcnn_blobs
      /home/encov/Softwares/Detectron/lib/ops/collect_and_distribute_fpn_rpn_proposals.py(62): forward
    terminate called after throwing an instance of 'caffe2::EnforceNotMet'
      what(): [enforce fail at pybind_state.h:423] . Exception encountered running PythonOp function: ValueError: could not broadcast input array from shape (4) into shape (0)

    This error comes from the expand-bbox-targets procedure that expands the bounding-box target array to 4 entries per class (see roi_data/fast_rcnn.py). It takes the first element of each row, which is the class label, checks that it is not 0 (the background), and copies the four target values at offset class × 4. The error happens because the class label is too large for the NUM_CLASSES parameter that was used to size the output array.
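
    To make the failure concrete, here is a simplified, self-contained sketch of that expansion step (the real code is _expand_bbox_targets in roi_data/fast_rcnn.py, per the traceback above). A class label of num_classes or higher makes the destination slice empty, which is exactly what produces the "could not broadcast input array from shape (4) into shape (0)" error:

    import numpy as np

    def expand_bbox_targets(bbox_target_data, num_classes):
        # bbox_target_data rows are [class_label, dx, dy, dw, dh]
        clss = bbox_target_data[:, 0]
        bbox_targets = np.zeros((clss.size, 4 * num_classes), dtype=np.float32)
        bbox_inside_weights = np.zeros_like(bbox_targets)
        for ind in np.where(clss > 0)[0]:          # skip background rows (class 0)
            cls = int(clss[ind])
            start, end = 4 * cls, 4 * cls + 4      # empty slice if cls >= num_classes
            bbox_targets[ind, start:end] = bbox_target_data[ind, 1:]
            bbox_inside_weights[ind, start:end] = 1.0
        return bbox_targets, bbox_inside_weights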


    ERROR 2

    I try the same training except that I set NUM_CLASSES to 81, which is the number of classes used for COCO training (which works on my setup, by the way). The error described above does not appear, but very early in the iterations the bounding-box areas are zero, which causes some divisions by zero. output2.txt

    Has someone experienced the same issue when training Fast R-CNN or Mask R-CNN on a custom dataset? I really suspect an error in my COCO-like JSON file, because training on the COCO dataset works correctly. Thank you for your help.

    System information

    • Operating system: Ubuntu 16.04
    • Compiler version: GCC 5.4.0
    • CUDA version: 8.0
    • cuDNN version: 7.0
    • NVIDIA driver version: 384
    • GPU model: GeForce GTX 1080 (x1)
    • python --version output: Python 2.7.12
    community help wanted 
    opened by francoto 30
  • Support exporting fpn

    Based on @orionr's work

    • [x] Solve the problem about GenerateProposals
    • [x] Use the existing ResizeNearest layer instead of UpsampleNearest. ResizeNearest has a CPU implementation and NEON optimizations.
    • [x] Make it work (with https://github.com/pytorch/pytorch/pull/7091)

    With this PR, FPN is supported in cooperation with https://github.com/pytorch/pytorch/pull/7091. I have verified that it works on e2e_faster_rcnn_R-50-FPN_1x.yaml

    CLA Signed 
    opened by daquexian 27
  • Detectron ops lib not found

    After installing Caffe2 from source on Ubuntu 16.04 and trying to run the test python2 detectron/tests/test_spatial_narrow_as_op.py, I get the following:

    No handlers could be found for logger "caffe2.python.net_drawer"
    net_drawer will not run correctly. Please install the correct dependencies.
    E0207 16:36:41.320443  4125 init_intrinsics_check.cc:59] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
    Traceback (most recent call last):
      File "detectron/tests/test_spatial_narrow_as_op.py", line 88, in <module>
        utils.c2.import_detectron_ops()
      File "/home/gene/detectron/lib/utils/c2.py", line 41, in import_detectron_ops
        detectron_ops_lib = envu.get_detectron_ops_lib()
      File "/home/gene/detectron/lib/utils/env.py", line 73, in get_detectron_ops_lib
        'version includes Detectron module').format(detectron_ops_lib)
    AssertionError: Detectron ops lib not found at '/usr/local/lib/python2.7/dist-packages/lib/libcaffe2_detectron_ops_gpu.so'; make sure that your Caffe2 version includes Detectron module
    

    But the detectron module is present in the modules folder. Do I need to modify CMakeLists somehow before installing caffe2 to make sure it gets included correctly?

    System information

    • Operating system: Ubuntu 16.04
    • Compiler version: gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.5)
    • CUDA version: 8.0
    • cuDNN version: 6.0.21
    • NVIDIA driver version:
    • GPU models (for all devices if they are not all the same): 4x Tesla k80
    • PYTHONPATH environment variable: /usr/local:/home/ubuntu/caffe2/build
    • python --version output: 2.7.12
    opened by genekogan 27
  • Support exporting for CPU Mask & Keypoint nets

    Prerequisite: ~~#372~~. Purpose: enable exporting all the models for CPU by exporting two separate nets: one for the bboxes and one for the rest of the inference.

    Two main modifications

    • Refactor main(): it calls a function convert_to_pb for each sub-network.
    • run_model_pb: always run the bbox inference first, then call the mask or keypoint part if needed. The exact same approach is adopted.

    The helper functions are then only lightly modified to fit the new objective of exporting two .pb files.

    CLA Signed 
    opened by gadcam 26
  • How to visualize the network structure

    Hi, is there an easy way to visualize the network, like Netscope for Caffe? I can use the net_drawer in Caffe2, but I find the resulting graph very hard to read.
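
    For what it's worth, a minimal sketch of dumping a built net to an image with Caffe2's net_drawer (here `model` stands for an already-constructed Detectron model, and pydot plus graphviz must be installed):

    from caffe2.python import net_drawer

    # Draw only the minimal dependency graph of the model's operators to a PNG.
    graph = net_drawer.GetPydotGraphMinimal(
        model.net.Proto().op, name='detectron_net', rankdir='TB')
    graph.write_png('detectron_net.png')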

    opened by JiYuanFeng 19
  • No output .pdf files even though the demo ran successfully?

    root@e9019bc3c0c3:/detectron# python2 tools/infer_simple.py \

    --cfg configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml \
    --output-dir demo/output \
    --image-ext jpg \
    --wts https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl \
    demo
    

    E0202 10:06:27.787480 38 init_intrinsics_check.cc:54] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU. E0202 10:06:27.787498 38 init_intrinsics_check.cc:54] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU. E0202 10:06:27.787502 38 init_intrinsics_check.cc:54] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU. WARNING cnn.py: 40: [====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information. INFO net.py: 57: Loading weights from: /tmp/detectron-download-cache/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl I0202 10:06:32.272874 38 net_dag_utils.cc:118] Operator graph pruning prior to chain compute took: 0.000144569 secs I0202 10:06:32.273200 38 net_dag.cc:61] Number of parallel execution chains 63 Number of operators = 402 I0202 10:06:32.290997 38 net_dag_utils.cc:118] Operator graph pruning prior to chain compute took: 0.000129367 secs I0202 10:06:32.291281 38 net_dag.cc:61] Number of parallel execution chains 30 Number of operators = 358 I0202 10:06:32.292923 38 net_dag_utils.cc:118] Operator graph pruning prior to chain compute took: 1.0203e-05 secs I0202 10:06:32.292951 38 net_dag.cc:61] Number of parallel execution chains 5 Number of operators = 18 INFO infer_simple.py: 111: Processing demo/18124840932_e42b3e377c_k.jpg -> demo/output/18124840932_e42b3e377c_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 0.872s INFO infer_simple.py: 121: | im_detect_bbox: 0.824s INFO infer_simple.py: 121: | misc_mask: 0.023s INFO infer_simple.py: 121: | im_detect_mask: 0.023s INFO infer_simple.py: 121: | misc_bbox: 0.002s INFO infer_simple.py: 124: \ Note: inference on the first image will be slower than the rest (caches and auto-tuning need to warm up) INFO infer_simple.py: 111: Processing demo/17790319373_bd19b24cfc_k.jpg -> demo/output/17790319373_bd19b24cfc_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 0.380s INFO infer_simple.py: 121: | im_detect_bbox: 0.307s INFO infer_simple.py: 121: | misc_mask: 0.040s INFO infer_simple.py: 121: | im_detect_mask: 0.031s INFO infer_simple.py: 121: | misc_bbox: 0.002s INFO infer_simple.py: 111: Processing demo/19064748793_bb942deea1_k.jpg -> demo/output/19064748793_bb942deea1_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 0.320s INFO infer_simple.py: 121: | im_detect_bbox: 0.210s INFO infer_simple.py: 121: | misc_mask: 0.058s INFO infer_simple.py: 121: | im_detect_mask: 0.050s INFO infer_simple.py: 121: | misc_bbox: 0.002s INFO infer_simple.py: 111: Processing demo/34501842524_3c858b3080_k.jpg -> demo/output/34501842524_3c858b3080_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 0.239s INFO infer_simple.py: 121: | im_detect_bbox: 0.215s INFO infer_simple.py: 121: | misc_mask: 0.012s INFO infer_simple.py: 121: | im_detect_mask: 0.010s INFO infer_simple.py: 121: | misc_bbox: 0.002s INFO infer_simple.py: 111: Processing demo/15673749081_767a7fa63a_k.jpg -> demo/output/15673749081_767a7fa63a_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 
0.324s INFO infer_simple.py: 121: | im_detect_bbox: 0.220s INFO infer_simple.py: 121: | misc_mask: 0.056s INFO infer_simple.py: 121: | im_detect_mask: 0.045s INFO infer_simple.py: 121: | misc_bbox: 0.002s INFO infer_simple.py: 111: Processing demo/16004479832_a748d55f21_k.jpg -> demo/output/16004479832_a748d55f21_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 0.238s INFO infer_simple.py: 121: | im_detect_bbox: 0.211s INFO infer_simple.py: 121: | misc_mask: 0.015s INFO infer_simple.py: 121: | im_detect_mask: 0.009s INFO infer_simple.py: 121: | misc_bbox: 0.002s INFO infer_simple.py: 111: Processing demo/24274813513_0cfd2ce6d0_k.jpg -> demo/output/24274813513_0cfd2ce6d0_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 0.250s INFO infer_simple.py: 121: | im_detect_bbox: 0.215s INFO infer_simple.py: 121: | misc_mask: 0.017s INFO infer_simple.py: 121: | im_detect_mask: 0.016s INFO infer_simple.py: 121: | misc_bbox: 0.002s INFO infer_simple.py: 111: Processing demo/33823288584_1d21cf0a26_k.jpg -> demo/output/33823288584_1d21cf0a26_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 0.417s INFO infer_simple.py: 121: | im_detect_bbox: 0.313s INFO infer_simple.py: 121: | misc_mask: 0.058s INFO infer_simple.py: 121: | im_detect_mask: 0.044s INFO infer_simple.py: 121: | misc_bbox: 0.002s INFO infer_simple.py: 111: Processing demo/33887522274_eebd074106_k.jpg -> demo/output/33887522274_eebd074106_k.jpg.pdf INFO infer_simple.py: 119: Inference time: 0.224s INFO infer_simple.py: 121: | im_detect_bbox: 0.202s INFO infer_simple.py: 121: | misc_mask: 0.011s INFO infer_simple.py: 121: | im_detect_mask: 0.009s INFO infer_simple.py: 121: | misc_bbox: 0.002s

    The demo/output folder contains no .pdf files.

    community help wanted 
    opened by ccs1605 19
  • ImportError: cannot import name task_evaluation

    Traceback (most recent call last):
      File "tools/infer_simple.py", line 42, in <module>
        import core.test_engine as infer_engine
      File "/home/user523/zjs/detectron/lib/core/test_engine.py", line 36, in <module>
        from datasets import task_evaluation
    ImportError: cannot import name task_evaluation

    I have added detectron/lib to PYTHONPATH, but it still does not work.
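
    A hedged sanity check (not from this issue): run the same interpreter you use for the tools, confirm that the lib/ directory containing datasets/task_evaluation.py is really on sys.path, and try the same import that test_engine.py performs.

    import sys

    print([p for p in sys.path if 'detectron' in p.lower()])
    from datasets import task_evaluation   # the import that fails in test_engine.py
    print(task_evaluation.__file__)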

    cannot repro 
    opened by zhangjinsong3 19
  • OSError: /usr/local/lib/libcaffe2_detectron_ops_gpu.so: undefined symbol:

    I tried some solutions from other issues, like exporting the Python path, but they didn't work for me.

    For example, I added /home/chopin/anaconda3/envs/caffe2/caffe2/build/caffe2/python to ~/.bashrc and then ran pip install utils.

    I saw a discussion somewhere about the OpenCV version. I have 3.3.1 and the required version is 3.4.1, but Caffe2 installed through conda (caffe2-cuda9.0-cudnn7) doesn't work with OpenCV 3.4.1.

    Expected results

    I was running this line:

    python launch.py --cfg configs/video/2d_best/01_R101_best_hungarian-4GPU.yaml --mode test TEST.WEIGHTS pretrained_models/configs/video/2d_best/01_R101_best_hungarian.yaml/model_final.pkl

    Actual results

    And then I ran into this error:

    Traceback (most recent call last):
      File "tools/test_net.py", line 33, in <module>
        utils.c2.import_detectron_ops()
      File "/home/jingweim/DetectAndTrack/lib/utils/c2.py", line 50, in import_detectron_ops
        dyndep.InitOpsLibrary(detectron_ops_lib)
      File "/home/jingweim/anaconda2/envs/detect_and_track/lib/python2.7/site-packages/caffe2/python/dyndep.py", line 35, in InitOpsLibrary
        _init_impl(name)
      File "/home/jingweim/anaconda2/envs/detect_and_track/lib/python2.7/site-packages/caffe2/python/dyndep.py", line 48, in _init_impl
        ctypes.CDLL(path)
      File "/home/jingweim/anaconda2/envs/detect_and_track/lib/python2.7/ctypes/__init__.py", line 366, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /usr/local/lib/libcaffe2_detectron_ops_gpu.so: undefined symbol: _ZN6caffe28TypeMeta2IdINS_6TensorINS_11CUDAContextEEEEENS_11CaffeTypeIdEv

    System information

    • Operating system: Ubuntu 16.04
    • Compiler version: gcc 5.4
    • CUDA version: 9.0
    • cuDNN version: 7.0
    • NVIDIA driver version: 396
    • GPU models (for all devices if they are not all the same): TITAN XP
    • PYTHONPATH environment variable: /home/jingweim/anaconda2/envs/detect_and_track/include/caffe2/python:/usr/bin/python
    • python --version output: Python 2.7.15 :: Anaconda custom (64-bit)
    • Anything else that seems relevant: opencv 3.3.1
    opened by jma100 17
  • App etiquette

    PLEASE FOLLOW THESE INSTRUCTIONS BEFORE POSTING

    1. Please thoroughly read README.md, INSTALL.md, GETTING_STARTED.md, and FAQ.md
    2. Please search existing open and closed issues in case your issue has already been reported
    3. Please try to debug the issue in case you can solve it on your own before posting

    After following steps 1-3 above and agreeing to provide the detailed information requested below, you may continue with posting your issue

    (Delete this line and the text above it.)

    Expected results

    What did you expect to see?

    Actual results

    What did you observe instead?

    Detailed steps to reproduce

    E.g.:

    The command that you ran
    

    System information

    • Operating system: ?
    • Compiler version: ?
    • CUDA version: ?
    • cuDNN version: ?
    • NVIDIA driver version: ?
    • GPU models (for all devices if they are not all the same): ?
    • PYTHONPATH environment variable: ?
    • python --version output: ?
    • Anything else that seems relevant: ?
    opened by 16CentAstrology 0
  • Convert Cityscapes to COCO format: How to convert to other classes (ex: traffic light)

    Has somebody successfully converted the Cityscapes dataset while filtering for classes such as traffic lights, poles, etc.? I changed the desired classes in category_instancesonly and also commented out the invalid-contour warning. However, even after doing that, I still get .json output files with 0 annotations after loading all the images.
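
    For reference, a hedged sketch of the kind of edit described above in tools/convert_cityscapes_to_coco.py (the added class name is illustrative). Note that if the added class is not one of the Cityscapes classes that carry instance-level labels, the converter may legitimately find zero annotations for it.

    # Categories kept by the converter; extra classes are appended here.
    category_instancesonly = [
        'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', 'bicycle',
        'traffic light',  # added class (only useful if instance labels exist for it)
    ]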

    opened by MarceloContreras 0
  • Caffe2 merged with PyTorch: new installation instructions?

    PLEASE FOLLOW THESE INSTRUCTIONS BEFORE POSTING

    1. Please thoroughly read README.md, INSTALL.md, GETTING_STARTED.md, and FAQ.md
    2. Please search existing open and closed issues in case your issue has already been reported
    3. Please try to debug the issue in case you can solve it on your own before posting

    After following steps 1-3 above and agreeing to provide the detailed information requested below, you may continue with posting your issue

    (Delete this line and the text above it.)

    Expected results

    What did you expect to see?

    Actual results

    What did you observe instead?

    Detailed steps to reproduce

    E.g.:

    The command that you ran
    

    System information

    • Operating system: ?
    • Compiler version: ?
    • CUDA version: ?
    • cuDNN version: ?
    • NVIDIA driver version: ?
    • GPU models (for all devices if they are not all the same): ?
    • PYTHONPATH environment variable: ?
    • python --version output: ?
    • Anything else that seems relevant: ?
    opened by atulya-deep 0
  • Project dependencies may have API risk issues

    Hi. In Detectron, inappropriate dependency version constraints can introduce risks.

    Below are the dependencies and version constraints that the project is using:

    numpy>=1.13
    pyyaml==3.12
    matplotlib*
    opencv-python>=3.2
    setuptools*
    Cython*
    mock*
    scipy*
    six*
    future*
    protobuf*
    

    The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict. Constraints with no upper bound, and *, introduce a risk of missing-API errors because the latest versions of the dependencies may remove some APIs.

    After further analysis of this project, the version constraint of the dependency future can be changed to ==0.3.0, or it can be relaxed to >=0.12.0,<=0.18.2.

    The above modification suggestions can reduce dependency conflicts as much as possible while allowing the latest versions to be used without triggering API errors in the project.

    The invocation of the current project includes all of the following methods.

    Methods called from future:
    urllib.request.urlopen

    All other methods called by the project:
    inputs.items
    caffenet_weights.ParseFromString
    core_config.merge_cfg_from_cfg
    A.W.H._labels.reshape.transpose
    re.match
    get_min_max_levels
    model.BilinearInterpolation
    metrics.keys
    valid_segms.append
    type
    round
    self.net.NextName
    add_residual_block
    model.mask_net.Proto
    core.InjectDeviceCopiesAmongNets
    np.logical_and
    W.H._bbox_targets.reshape.transpose
    FieldOfAnchors
    _coco_segms_results_one_category
    reversed
    new_net.Proto.external_input.extend
    response.info
    threading.Event
    FPN.add_fpn_rpn_losses
    _bbox_inside_weights.reshape
    w.start
    model.AddGradientOperators
    losses.append
    AttributeError
    np.transpose
    convert_coco_blobs_to_cityscape_blobs
    coco_blob.reshape
    anchor_by_gt_overlap.argmax
    log_json_stats
    get_nvidia_smi_output
    self.COCO.getImgIds
    logging.basicConfig
    core.DeviceOption
    _use_voc_evaluator
    model.TrainableParams
    remove_momentum
    self._SetNewLr
    CudaDevice
    rois_idx_order.np.argsort.astype
    gauss_fill
    mask_rcnn_fcn_head_v1upXconvs
    blob_utils.im_list_to_blob
    core.Net
    time.sleep
    sys.stdout.flush
    np.where
    plt.figure
    box_to_gt_ind_map.astype
    model.net.SelectSmoothL1Loss
    all_anchors.reshape
    e.copy
    ops_ref.extend
    self._add_proposals_from_file
    model.GenerateProposals
    _DATASETS.keys
    self.SpatialGN
    ex_gt_overlaps.argmax
    normalize_resnet_name
    input_blobs.items
    enumerate
    core.NameScope
    training_stats.UpdateIterStats
    np.append
    vis_utils.convert_from_cls_format
    hash_obj.hexdigest
    self.values
    add_single_scale_rpn_outputs
    np.spacing
    im_info.astype
    keypoint_utils.keypoints_to_heatmap_labels
    blobs.append
    lvl.str.k.blobs.append
    roi_data_minibatch.get_minibatch_blob_names
    os.makedirs
    unscoped_param_name.find
    head_loss_gradients.values
    _create_tensor
    ord
    cache_cfg_urls
    npr.choice
    workspace.GlobalInit
    mask_rcnn_heads.add_mask_rcnn_losses
    segm_utils.polys_to_boxes
    _t.toc
    self.Conv
    kwargs.OpFilter.check
    h5py.File
    k.roi_map_probs.max
    response.read
    keypoint_rcnn_heads.add_keypoint_outputs
    mask_rcnn_roi_data.add_mask_rcnn_blobs
    _write_coco_keypoint_results_file
    caffe_pb2.NetParameter
    _scale_enum
    self.GetStats
    c.reshape
    np.amax
    _remove_layers
    A.W.H._bbox_inside_weights.reshape.transpose
    op.device_option.CopyFrom
    model.ConvGN
    envu.get_py_bin_ext
    flipped_roidb.append
    convert_func.cs.getattr
    x
    _export_to_logfiledb
    cfg_to_load.replace
    json_dataset_evaluator.evaluate_boxes
    coordinated_get
    mask.sum
    remove_spatial_bn_layers
    AttrDict
    argparse.ArgumentParser
    _empty_box_results
    model.AddLosses
    task_evaluation.evaluate_all
    model.RoIFeatureTransform
    _key_is_renamed
    get_minibatch
    timers.items
    model.net.Name
    fuse_first_affine
    labels.fill
    is_valid
    self.ConvTranspose
    envu.yaml_load
    workspace.ResetWorkspace
    caffemodel_file_name.open.read
    stats.items
    model.WeightedSum
    model.param_init_net.ConstantFill
    blobs.update
    workspace.RunNet
    k.blobs.append
    init_net.Proto
    plt.close
    top_segms_out.append
    spatial_scales.insert
    _check_and_coerce_cfg_value_type
    proposal_to_gt_overlaps.argmax
    np.column_stack
    self.train.CollectAndDistributeFpnRpnProposalsOp.forward.self.net.Python
    join
    smtplib.SMTP
    threading.local
    np.isnan
    k.endswith
    get_bounds
    combined_roidb_for_training
    np.ones
    build_generic_detection_model
    np.meshgrid
    configure_bbox_reg_weights
    self.coordinator.wait_for_stop
    check_blobs.keys
    c2_utils.GpuNameScope
    _coco_eval_to_box_results
    i.rpn_blobs.items
    c2_utils.CudaScope
    open
    im_list_to_blob
    log_subprocess_output
    self.net.GenerateProposals
    _get_rpn_blobs
    mutils.compare_model
    subprocess.check_output
    generate_anchors
    _get_voc_results_file_template
    TrainingStats
    merge_cfg_from_cfg
    heatmaps.copy
    lr_policy.get_lr_at_iter
    segm_utils.polys_to_mask_wrt_box
    c2_utils.import_detectron_ops
    model_dict.keys
    score.format.lstrip
    convert_gen_proposals
    compute_bbox_regression_targets
    range
    output_idx.outputs.reshape
    compute_oks
    ious.max
    model.net.SigmoidCrossEntropyLoss
    rois_idx_restore.astype
    np.sqrt
    self.net.Python
    sio.loadmat
    model.net.MomentumSGDUpdate
    _key_is_deprecated
    _sort_proposals
    copyfile
    pprint.pformat
    cmap
    fig.savefig
    cs.instances2dict_with_polygons
    img_name.replace.strip
    GenerateProposalLabelsOp
    model_builder.create
    bbox_deltas.transpose.reshape
    op.name.startswith
    logger.warn
    self._event.is_set
    net.op.extend
    fuse_func
    dataset_catalog.get_ann_fn
    line.rstrip.decode
    next
    self._add_gt_annotations
    K.shifts.reshape.transpose
    kwargs.get
    shutil.copy
    box_utils.filter_small_boxes
    deque
    JsonDataset
    evaluate_proposal_file
    print
    download_url
    overlaps.max
    json_dataset_evaluator.evaluate_box_proposals
    np.asarray
    cv2.putText
    _save_models
    model.net.SoftmaxFocalLoss
    main
    _roi_bbox_targets.astype
    field_of_anchors.astype
    rfcn_heads.add_rfcn_outputs
    self._get_gt_keypoints
    workspace.FeedBlob
    convert_from_cls_format
    np.floor
    blob.transpose
    _coco_bbox_results_one_category
    cache_url
    box_utils.xywh_to_xyxy
    pickle_weights
    np.hstack
    np.log
    np.random.choice
    rpn_roi_data.add_rpn_blobs
    fast_rcnn_heads.add_fast_rcnn_losses
    c2_utils.BlobReferenceList
    subprocess.call
    caffe2_pb2.NetDef
    np.random.randn
    reset_names
    boxes.copy
    keypoints.index
    c2_utils.CudaDevice
    rpn_blobs.items
    i.classes.rjust
    parse_args
    self.create_blobs_queues
    keypoint_rcnn_roi_data.finalize_keypoint_minibatch
    add_tensor
    get_anchors
    v.immutable
    gt_boxes.astype
    convert_coco_blob_to_cityscapes_blob
    f.read
    net_drawer.GetPydotGraphMinimal
    blob_out.init
    cfg.RESNETS.STEM_FUNC.globals
    convert_op_in_proto
    np.ascontiguousarray
    bbox_overlaps
    self.should_stop
    fig.set_size_inches
    self.coordinator.request_stop
    c2_utils.UnscopeName
    nu.print_net
    plt.get_cmap
    json_dataset.name.find
    blobs.copy
    caffenet.layer.pop
    p.stdout.close
    os.path.join
    inputs.append
    cs_blob.reshape
    model.net.Reshape
    valid.astype
    extend_with_flipped_entries
    d.BB.astype
    conv_body_func
    recalls.mean.mean
    response.info.getheader.strip
    bboxs_util.xyxy_to_xywh
    ResNet.add_stage
    caffe_pb2.BlobProto
    op.input.isdigit
    Timer
    pickle.loads
    wraps
    bl_out_list.append
    model.net.SampleAs
    np.average
    scores.argsort
    json_dataset.COCO.getImgIds
    defaultdict
    add_conv_body_func
    box_utils.bbox_transform_inv
    info.decode
    str
    np.mean
    net.BlobIsDefined
    get_output_dir
    cell_anchors.reshape
    parser.add_argument
    box_utils.nms
    v.astype
    _filter_boxes
    mime.as_string
    rois.rois_idx_restore.rois_stacked.all
    logger.warning
    get_minibatch_blob_names
    get_retinanet_bias_init
    model_engine.initialize_model_from_cfg
    self.Dropout
    CudaScope
    MIMEText
    weights.reshape
    keypoint_utils.flip_keypoints
    _flip_poly
    np.arange
    hasattr
    Extension
    self.create_enqueue_blobs
    check_blobs.values
    bg_inds.sampled_boxes.reshape
    workspace.CreateBlob
    training_stats.ResetIterTimer
    sys.exit
    model.net.GroupSpatialSoftmax
    func_name.split
    filter_for_training
    detectron.utils.train.train_model
    vis_keypoints
    any
    COCOeval
    evaluate_boxes
    _empty_box_proposal_results
    utils.Caffe2TensorToNumpyArray
    processed_ims.append
    roi_data_loader.get_next_minibatch
    nu.broadcast_parameters
    blob_utils.serialize
    _whctrs
    _write_coco_segms_results_file
    coco_eval.evaluate
    keypoint_flip_map.items
    np.testing.assert_array_almost_equal
    proposal_to_gt_overlaps.max
    inds.sum
    COCO
    segms_util.polys_to_boxes
    convert_net
    pretrained_weights.protos.extend
    gt_inds.rois.astype
    _sort_results
    prototxt_file_name.open.read
    logger.info
    self.deque.append
    ret.fill
    CollectAndDistributeFpnRpnProposalsOp
    GenerateProposalLabelsOp.forward.self.net.Python
    gt_overlaps.max
    self._shuffle_roidb_inds
    os.path.split
    rpn_roi_data.get_rpn_blob_names
    model1_func
    box_utils.unique_boxes
    BlobReferenceList
    workspace.FetchBlobs
    voc_dataset_evaluator.evaluate_boxes
    self.COCO.loadCats
    url.startswith
    polygons_norm.append
    new_net.Proto.external_output.extend
    _add_multilevel_rois
    sampled_labels.astype
    sys.stdout.write
    single_gpu_build_func
    run_inference
    c2_utils.import_contrib_ops
    fid_txt.write
    do_reval
    self.net.Conv
    t.float_data.extend
    test_timer.tic
    vis
    _merge_a_into_b
    subprocess_stdout.close
    cityscapes_eval.main
    response.info.get.strip
    npr.randint
    ret_net.Proto
    self.train.spatial_scale.anchors.GenerateProposalsOp.forward.self.net.Python
    colormap
    _do_broadcast
    const_fill
    method.self.net.__getattr__
    model.roi_data_loader.has_stopped
    add_stage
    np.uint8.obj.pickle.dumps.np.fromstring.astype
    self._CorrectMomentum
    voc_info
    workspace.Blobs
    scores_to_probs
    logging.getLogger.setLevel
    model2_func
    format
    img_name.replace
    dataset.name.find
    all
    load_and_convert_caffe_model
    os.getcwd
    removed_tensors.append
    keep.append
    self.series.append
    add_roi_mask_head_func
    _coco_kp_results_one_category
    fpn.add_multilevel_roi_blobs
    fast_rcnn_heads.add_fast_rcnn_outputs
    retinanet_roi_data.get_retinanet_blob_names
    get_group_gn
    retinanet_roi_data.add_retinanet_blobs
    loader_loop
    cfg.immutable
    uuid4
    blobs.clear
    self.smoothed_mb_qsize.GetMedianValue
    results.keys
    task_evaluation.log_box_proposal_results
    detpath.format
    initialize_gpu_from_weights_file
    OrderedDict
    pickle.dumps
    np.zeros_like
    envu.import_nccl_ops
    box_utils.clip_boxes_to_image
    Coordinator
    get_keypoints
    isinstance
    layer.blobs.extend
    a.items
    test_timer.toc
    R.x.x.np.array.astype
    self.get_next_minibatch
    zip
    os.remove
    self.net.Concat
    osp.join
    _do_segmentation_eval
    arr.astype
    FPN.add_fpn_rpn_outputs
    box_utils.clip_tiled_boxes
    np.cumsum
    t.gt_overlaps.sum
    add_bbox_regression_targets
    np.array
    _save_image_graphs
    box_utils.bbox_transform
    model.FC
    create_input_blobs_for_net
    self.__dict__.values
    task_evaluation.log_copy_paste_friendly_results
    boxes.boxes.all
    self.smoothed_mb_qsize.AddValue
    logging.getLogger
    set
    flipped_segms.append
    iteritems
    caffenet_weights.layer.pop
    data_set.split
    coco_blob.mean
    NotImplementedError
    uuid.uuid4
    net.external_input.extend
    model.GetLossScale
    model.ConvTranspose
    queue.put
    c2_utils.CpuScope
    json_dataset.add_proposals
    chr
    add_bbox_ops
    flip_map.items
    model.net.UpsampleNearest
    _mkanchors
    bias.astype
    _t.tic
    os.rename
    p.str.find
    weights.gt_rois.ex_rois.box_utils.bbox_transform_inv.astype
    threading.Thread
    convert_coco_stuff_mat
    ann.segms_util.polys_to_boxes.bboxs_util.xyxy_to_xywh.tolist
    reset_blob_names
    _ornone
    caffe2_pb2.TensorProto
    os.listdir
    plt.autoscale
    x.strip
    test_engine.initialize_model_from_cfg
    _expand_to_class_specific_mask_targets
    name_compat.get_new_name
    vis_utils.vis_one_image
    cv2.addWeighted
    kp.keypoints.astype
    c2_py_utils.GetGPUMemoryUsageStats
    super
    generalized_rcnn
    _add_class_assignments
    self._init_keypoints
    broadcast_parameters
    self.iter_timer.reset
    blob_utils.get_loss_gradients
    score_inputs.blob.blob.data.np.concatenate.squeeze
    scoped_name.startswith
    func
    np.random.permutation
    envu.get_runtime_dir
    results.extend
    add_single_scale_rpn_losses
    np.nonzero
    im.astype
    dataset_catalog.contains
    self.COCO.loadImgs
    os.walk
    blobs.astype
    rpn_engine.im_proposals
    np.concatenate
    logger.setLevel
    np.array_split
    os.path.abspath
    evaluate_masks
    self.gn_params.append
    coordinated_put
    outputs.append
    _use_json_dataset_evaluator
    importlib.import_module
    min
    run_model_pb
    src_blobs.keys
    envu.get_detectron_ops_lib
    add_fpn
    entry.extend
    parser.parse_args
    all_dets.astype
    self._minibatch_queue.full
    full_key.split
    self.AffineChannel
    os.path.exists
    _get_lr_change_ratio
    predictor_exporter.save_to_db
    self._perm.rotate
    blobs.items
    dataset.get_roidb
    i.retinanet_blobs.items
    _write_voc_results_files
    k.roi_map.argmax
    annopath.format
    shlex_quote
    workspace.RunOperatorOnce
    os.path.splitext
    fid.write
    model.param_init_net.Proto
    net_drawer.GetPydotGraph
    graph.write_png
    f.fileno
    inds.max
    inds.scores.squeeze
    data_utils.get_field_of_anchors
    k.find
    _sample_rois
    np.sum
    FpnLevelInfo
    logger.error
    multi_gpu_generate_rpn_on_dataset
    cython_nms.soft_nms
    queue.get
    blob_utils.zeros
    hash_obj.update
    workspace.RunNetOnce
    coco_blob.std
    line.rstrip
    response.info.getheader
    key.blobs.append
    training_stats.LogIterStats
    src_name.src_blobs.astype
    _labels.reshape
    box_utils.bbox_overlaps
    A.W.H._bbox_outside_weights.reshape.transpose
    keypoint_rcnn_heads.add_keypoint_losses
    categories.append
    KeyError
    op_filter
    send_email
    _add_roi_keypoint_head
    hashlib.md5
    vis_mask
    np.histogram
    valid_objs.append
    os.path.dirname
    w.join
    foas.append
    get_func
    recalls.mean
    k.ljust
    json.dumps
    outputs.reshape
    cuda_visible_devices.split
    mask.copy
    model.net.DequeueBlobs
    model.roi_data_loader.register_sigint_handler
    scores.transpose
    sorted
    possibly_scoped_name.rfind
    input_name.isdigit
    model.net.Clone
    self.DetectionModelHelper.super.__init__
    self.close_blobs_queues
    putils.MakeArgument
    url_md5sum.urllib.request.urlopen.read
    np.logical_not
    cv2.findContours
    add_roi_box_head_func
    P_temp.mean
    model_dict.items
    add_shortcut
    w.setDaemon
    convert_collect_and_distribute
    _coco_eval_to_keypoint_results
    voc_ap
    core_config.load_cfg
    w.h._labels.astype
    fpn_level_info_func
    ds.get_roidb
    _distribute_rois_over_fpn_levels
    gradients.append
    entry.toarray
    v.GetMedianValue
    f.write
    cv2.line
    net.Proto.op.extend
    mask_util.frPyObjects
    scores.squeeze
    ax.imshow
    model_engine.im_detect_all
    self.create_threads
    _expand_bbox_targets
    f.readlines
    scores.transpose.reshape
    boxes_from_polys.astype
    flipped_poly.tolist
    shift_x.ravel
    np.log2
    bbox_feat_list.append
    mask_rcnn_heads.add_mask_rcnn_outputs
    core.DeviceScope
    x.strip.split
    new_external_outputs.extend
    workspace.GetCuDNNVersion
    temp.max
    model.net.Sum
    all_net.Proto
    np.expand_dims
    cfg.RESNETS.TRANS_FUNC.globals
    self.TrainableParams
    cv2.drawContours
    pickle.load
    DetectionModelHelper
    core.ScopedBlobReference
    roi_data_loader.shutdown
    self.reset
    new_net.Proto
    os.path.basename
    model_type_func.get_func
    all_loss_gradients.update
    merge_cfg_from_file
    np.maximum
    ax.add_patch
    model.Relu
    _get_reference_md5sum
    get_rpn_box_proposals
    tee
    max
    osp.exists
    dataset_catalog.get_im_dir
    name.find
    dataset.results.items
    max_overlaps.argmax
    _write_coco_bbox_results_file
    _do_python_eval
    model.net.SmoothL1Loss
    muji.OnGPU
    op_func_chain
    np.dtype
    GenerateProposalsOp
    cython_nms.nms
    _raise_key_rename_error
    _bbox_targets.reshape
    mutils.get_ws_blobs
    fast_rcnn_roi_data.get_fast_rcnn_blob_names
    matplotlib.use
    pickle.dump
    np.get_include
    time.time
    ious.transpose
    training_stats.IterTic
    distribute
    bias_blob.data.extend
    infer_engine.im_detect_all
    np.minimum
    initializers.Initializer
    boxes.append
    annotations.append
    np.sort
    keypoint_coords.copy
    ValueError
    np.argsort
    shifts.reshape
    shift_y.ravel.shift_x.ravel.shift_y.ravel.shift_x.ravel.np.vstack.transpose
    add_missing_biases
    add_fpn_onto_conv_body
    create_model
    subprocess.Popen
    _add_fast_rcnn_head
    np.reshape
    self.coordinator.should_stop
    assert_and_infer_cfg
    np.random.seed
    mutils.get_op_arg_valf
    keypoint_rcnn_roi_data.add_keypoint_rcnn_blobs
    rois_names.len.blobs.np.concatenate.squeeze
    model.Conv
    np.absolute
    memonger.share_grad_blobs
    np.finfo
    retinanet_heads.add_fpn_retinanet_outputs
    prep_im_for_blob
    setup
    rjust
    nu.save_model_to_weights_file
    self._minibatch_queue.qsize
    np.zeros
    self.create_param
    anchors.astype
    i.boxes.astype
    blob_utils.deserialize
    abs
    json_dataset.get_roidb
    src_name.endswith
    plt.Axes
    envu.get_custom_ops_lib
    add_topdown_lateral_module
    _build_forward_graph
    _get_image_blob
    bbox_deltas.transpose
    get_devkit_dir
    net.Proto.external_input.extend
    mutils.create_input_blobs_for_net
    box_utils.boxes_area
    net.DequeueBlobs
    url_md5sum.urllib.request.urlopen.read.strip
    self.get_output_names
    self._get_next_minibatch_inds
    self.debug_timer.toc
    locals
    R.astype
    len
    imgIds.sort
    np.unique
    self.shutdown
    os.path.isfile
    _coco_eval_to_mask_results
    re.findall
    dyndep.InitOpsLibrary
    all_net.Proto.SerializeToString
    logger.debug
    self.smoothed_losses_and_metrics.items
    coco_eval.accumulate
    sum
    collect
    _remove_proposals_not_in_roidb
    model.net.ConstantFill
    os.fsync
    muji.Allreduce
    model.keypoint_net.Proto
    self._prep_roidb_entry
    infer_engine.initialize_model_from_cfg
    img.astype
    model.roi_data_loader.shutdown
    load_model
    heats.reshape
    ids.append
    float
    mutils.filter_op
    self.iter_timer.tic
    _get_thr_ind
    cfg_to_load.readlines
    scale.boxes.np.round.dot
    metrics.values
    new_net.BlobIsDefined
    data_utils.unmap
    c2_utils.get_nvidia_info
    vis_class
    workspace.HasBlob
    cfg.is_immutable
    mutils.update_mobile_engines
    c2_utils.SuffixNet
    gt_overlaps.argmax
    self.net.Proto
    workspace.FetchBlob
    self.AttrDict.super.__init__
    cv2.imread
    get_class_string
    model.Softmax
    _use_cityscapes_evaluator
    np.all
    np.copy
    net.layer.pop
    file_in.sio.loadmat.ravel
    ims.im.im.shape.np.array.max
    data.get
    logger.critical
    loss_gradients.update
    filenames.append
    t.dims.extend
    dataset.name.all_results.update
    cs_json_dataset_evaluator.evaluate_masks
    merge_cfg_from_list
    json_dataset_evaluator.evaluate_keypoints
    threading.Lock
    load_timer.toc
    ws.sum
    cmd.format
    build_generic_retinanet_model
    _get_file_md5sum
    plt.plot
    dataset_catalog.get_im_prefix
    _do_detection_eval
    np.vstack
    coco_eval.summarize
    box_utils.xyxy_to_xywh
    id_or_index
    self.losses_and_metrics.keys
    collections.namedtuple
    assert_cache_file_is_ok
    heats.astype
    get_step_index
    p.wait
    blobs_fpn.insert
    fast_rcnn_roi_data.add_fast_rcnn_blobs
    _compute_and_log_stats
    tree.findall
    segm_utils.is_poly
    model.param_to_grad.values
    training_stats.IterToc
    json_dataset.COCO.loadRes
    im_scales.append
    tmp.append
    model.net.Sigmoid
    build_generic_rfcn_model
    cv2.imwrite
    load_cfg
    glob.iglob
    imageio.imsave
    load_object
    info.strip
    _get_retinanet_blobs
    progress_hook
    _get_result_blobs
    output_name.find
    np.median
    generate_proposals_on_roidb
    segm_utils.flip_segms
    self._event.set
    _filter_crowd_proposals
    model.AveragePool
    net.Clone
    test_model
    _ratio_enum
    f
    k.decoded_top_masks.sum
    np.uint8.arr.astype.tobytes
    optimize_memory
    model.ConvShared
    cv2.rectangle
    workspace.CreateNet
    mask_rcnn_fcn_head_v1upXconvs_gn
    self._event.wait
    ax.axis
    plt.Rectangle
    COCOmask.iou
    self.model.roi_data_loader._minibatch_queue.qsize
    np.empty
    evaluate_keypoints
    s.sendmail
    model.LRN
    remove_layers_without_parameters
    overlaps.argmax
    net.Proto
    bn_tensors.extend
    cfg.RPN.ASPECT_RATIOS.anchor_sizes.spatial_scale.generate_anchors.generate_anchors.astype
    roi_data_loader.get_output_names
    roi_map.copy
    model_builder.add_inference_inputs
    _generate_anchors
    xy.append
    self.__dict__.update
    urllib.request.urlopen
    ws.mean
    GpuNameScope
    np.cos
    cythonize
    new_ops.extend
    processes.append
    rpn_heads.add_generic_rpn_outputs
    subprocess_utils.process_in_parallel
    boxes.astype
    A.W.H._bbox_targets.reshape.transpose
    nu.average_multi_gpu_blob
    fused_conv.input.append
    np.float32
    self.smoothed_total_loss.AddValue
    datetime.timedelta
    _narrow_to_fpn_roi_levels
    self.COCO.loadAnns
    self.debug_timer.tic
    np.min
    namedtuple
    envu.yaml_dump
    u.append
    dataset.results.keys
    new_net.Proto.op.extend
    field_of_anchors.reshape
    i_boxes.astype
    all_blobs.append
    all_init_net.Proto
    os.environ.get
    all_init_net.Proto.SerializeToString
    salt.json_dataset._get_voc_results_file_template.format
    fig.add_axes
    _within_box
    model.AffineChannel
    cv2.getTextSize
    mutils.get_device_option_cuda
    keypoint_utils.get_keypoints
    self.json_category_id_to_contiguous_id.items
    _log_detection_eval_metrics
    cv2.resize
    model.net.SpatialNarrowAs
    _empty_keypoint_results
    np.prod
    max_overlaps.max
    get_op_arg
    self.net.BatchPermutation
    model.net.SoftmaxWithLoss
    model.net.SigmoidFocalLoss
    dataset_keypoints.index
    generate_anchors.generate_anchors
    np.round
    self.COCO.getCatIds
    self.COCO.getAnnIds
    RoIDataLoader
    hash_obj.hexdigest.encode
    int
    new_net.GetBlobRef
    envu.set_up_matplotlib
    self.net.AffineChannel
    np.clip
    _add_roi_mask_head
    os.mkdir
    _merge_proposal_boxes_into_roidb
    box_utils.clip_xyxy_to_image
    convert_op_in_ops
    OpFilter
    ET.parse
    SmoothedValue
    _flip_rle
    TypeError
    model.GenerateProposalLabels
    roi_data_loader.start
    core_config._merge_a_into_b
    verify_model
    np.argmax
    voc_eval
    Exception
    np.linspace
    self.net.__getattr__
    roi_data_loader.register_sigint_handler
    box_list.append
    model.StopGradient
    scipy.sparse.csr_matrix
    coordinator.should_stop
    net.external_input.remove
    obj.find
    unscope_name
    blob_out.reshape
    model.Accuracy
    _do_matlab_eval
    self.smoothed_total_loss.GetMedianValue
    signal.signal
    thresh.inds.gt_overlaps.sum
    filename.endswith
    literal_eval
    model.ConvAffine
    tuple
    get_lr_func
    mask_util.encode
    blob_utils.prep_im_for_blob
    add_ResNet_roi_conv5_head_for_masks
    dataset.name.startswith
    cProfile.runctx
    np.random.randint
    model.net._net.op.extend
    model.net.Proto
    cv2.circle
    top_dets.copy
    self.do_not_update_params.append
    add_single_gpu_param_update_ops
    json_dataset_evaluator.evaluate_masks
    self.request_stop
    _roi_fg_bbox_locs.astype
    shift_y.ravel
    os.environ.copy
    task_evaluation.evaluate_box_proposals
    self.coordinator.stop_on_exception
    im_proposals
    add_model_training_inputs
    blob_utils.py_op_copy_blob
    nu.initialize_gpu_from_weights_file
    upsample_filt
    model.MaxPool
    json.load
    _do_keypoint_eval
    setup_model_for_training
    add_ResNet_convX_body
    mean.std.cs_shape.np.random.randn.astype
    getattr
    response.info.get
    np.uint8
    net.Proto.external_output.extend
    ret_init_net.Proto
    model.roi_data_loader.start
    model_builder.add_training_inputs
    predictor_exporter.PredictorExportMeta
    np.isclose
    plt.setp
    blob_uses
    model.CollectAndDistributeFpnRpnProposals
    image_ids.sort
    _voc_eval_to_box_results
    fuse_net
    c2_utils.NamedCudaScope
    convert_model_gpu
    dict
    mutils.save_graph
    model.net.Mul
    retinanet_heads.add_fpn_retinanet_losses
    ex_inds.rois.astype
    np.exp
    is_poly
    map
    A.reshape
    convert_cityscapes_instance_only
    f.flush
    entry.items
    dump_proto_files
    self.enqueue_blobs
    model.Scale
    traceback.print_exc
    get_raw_dir
    roi_data_loader._minibatch_queue.qsize
    ax.text
    model.AddMetrics
    os.path.isdir
    check_args
    blobs.keys
    converted_ops.extend
    add_roi_keypoint_head_func
    self.iter_timer.toc
    roidb_utils.add_bbox_regression_targets
    globals
    mask_util.iou
    vis_bbox
    parse_rec
    roidb.extend
    np.argpartition
    gen_init_net
    core.ScopedName
    targets_dh.targets_dw.targets_dy.targets_dx.np.vstack.transpose
    frcn_blobs.items
    _empty_mask_results
    Queue.Queue
    json.dump
    np.max
    mutils.fuse_net_affine
    scores.append
    model.net.PSRoIPool
    _decode_cfg_value
    color_list.reshape
    test_engine.im_detect_all
    url.replace
    caffe2_pb2.DeviceOption
    run_model_cfg
    core.CreateOperator
    unscoped_param_names.keys
    dets.astype
    cv2.ocl.setUseOpenCL
    entry.copy
    output_name.startswith
    copy.deepcopy
    normalize_shape
    iter
    rle.decode
    weights.keys
    model.net.NCCLAllreduce
    generate_rpn_on_range
    caffe_translator.TranslateModel
    objects.append
    log.debug
    kp_connections
    np.array.astype
    _bbox_outside_weights.reshape
    _cs_eval_to_mask_results
    model.UpdateWorkspaceLr
    blobs_out.append
    v.AddValue
    areas.items
    dummy_datasets.get_coco_dataset
    text_format.Merge
    shift_y.shift_x.shift_y.shift_x.np.vstack.transpose
    setup_logging
    workspace.GetCUDAVersion
    bbox.find
    parser.print_help
    rois_fg.astype
    self.proposals_for_one_image
    handle_critical_error
    load_and_convert_coco_model
    load_timer.tic
    save_object
    model.net.Alias
    _add_allreduce_graph
    data_utils.compute_targets
    np.fromstring
    Polygon
    nu.sum_multi_gpu_blob
    blob_utils.get_image_blob
    mutils.get_device_option_cpu
    np.ceil
    filter_op
    blob_utils.ones
    pairwise
    inds.min
    outfile.write
    blobs.values
    list
    mask_util.decode
    images.append
    optim.build_data_parallel_model
    _prepare_blobs
    get_roidb
    mutils.gen_init_net_from_blobs
    fpn.map_rois_to_fpn_levels
    sum_multi_gpu_blob
    

    @zpao Could you please help me check this issue? May I submit a pull request to fix it? Thank you very much.

    opened by PyDeps 0
  • Is there any script for batch inference?

    detectron/tools/infer_simple.py is a clean and tiny inference demo for the case where the batch size equals 1.

    For efficient inference, I want to detect several images at the same time.

    Could anyone give me some advice on implementing a "batch size > 1" inference demo?
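
    Not an official answer, but a minimal sketch of the usual workaround: build the model once and loop im_detect_all over the images, as tools/infer_simple.py does per image. This amortizes setup cost, though it is still not true batched inference; import paths assume the packaged detectron layout.

    import cv2
    import detectron.utils.c2 as c2_utils
    from detectron.core.test_engine import im_detect_all

    def detect_many(model, image_paths):
        # `model` is the object returned by test_engine.initialize_model_from_cfg(...)
        results = []
        for path in image_paths:
            im = cv2.imread(path)
            with c2_utils.NamedCudaScope(0):
                cls_boxes, cls_segms, cls_keyps = im_detect_all(model, im, None)
            results.append((path, cls_boxes, cls_segms, cls_keyps))
        return results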

    opened by rogercmq 0