Official PyTorch implementation of Active Learning for Deep Object Detection via Probabilistic Modeling (ICCV 2021)

Overview

Active Learning for Deep Object Detection via Probabilistic Modeling

This repository is the official PyTorch implementation of Active Learning for Deep Object Detection via Probabilistic Modeling, ICCV 2021.

The proposed method is implemented on top of the SSD pytorch codebase.

Our approach relies on mixture density networks to estimate, in a single forward pass of a single model, both localization and classification uncertainties, and leverages them in the scoring function for active learning.

Our method performs on par with multi-model approaches (e.g., ensembles and MC-Dropout) while requiring only a single model and a single forward pass, and therefore provides the best trade-off between accuracy and computational cost.
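For intuition, both uncertainties can be read off the predicted mixture parameters. Below is a minimal sketch (tensor names are illustrative, not this repo's API) of the standard decomposition for a K-component GMM that the active learning scoring builds on:

import torch

# Minimal sketch: aleatoric/epistemic decomposition of a K-component GMM.
# pi: (N, K) mixture weights (rows sum to 1); mu, var: (N, K) per-component
# means and variances for N predictions.
def gmm_uncertainties(pi, mu, var):
    mean = (pi * mu).sum(dim=1, keepdim=True)       # mixture mean
    aleatoric = (pi * var).sum(dim=1)               # expected within-component variance
    epistemic = (pi * (mu - mean) ** 2).sum(dim=1)  # spread of the component means
    return aleatoric, epistemic

An image is then scored for active learning by aggregating (e.g., taking the maximum of) these per-prediction uncertainties.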

License

To view the NVIDIA Source Code License for this work, visit https://github.com/NVlabs/AL-MDN/blob/main/LICENSE

Requirements

For setup and data preparation, please refer to the README in SSD pytorch.

The code was tested in a virtual environment with Python 3+ and PyTorch 1.1.

Training

  • Create the weights directory and enter it: mkdir weights && cd weights.

  • Download the FC-reduced VGG-16 backbone weights into the weights directory, then return to the repository root with cd ..

  • If necessary, change the VOC_ROOT in data/voc0712.py or COCO_ROOT in data/coco.py.

  • Please refer to data/config.py for configuration.

  • Run the training code:

# Supervised learning
CUDA_VISIBLE_DEVICES=<GPU_ID> python train_ssd_gmm_supervised_learning.py

# Active learning
CUDA_VISIBLE_DEVICES=<GPU_ID> python train_ssd_gmm_active_learining.py

Evaluation

  • To evaluate on MS-COCO, change the COCO_ROOT_EVAL in data/coco_eval.py.

  • Run the evaluation code:

# Evaluation on PASCAL VOC
python eval_voc.py --trained_model <trained weight path>

# Evaluation on MS-COCO
python eval_coco.py --trained_model <trained weight path>

Visualization

  • Run the visualization code:
python demo.py --trained_model <trained weight path>

Citation

@InProceedings{Choi_2021_ICCV,
    author    = {Choi, Jiwoong and Elezi, Ismail and Lee, Hyuk-Jae and Farabet, Clement and Alvarez, Jose M.},
    title     = {Active Learning for Deep Object Detection via Probabilistic Modeling},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {10264-10273}
}
Comments
  • How to output aleatoric and epistemic uncertainties associated with the class

    Greetings,

    I was trying to print out the uncertainties the way they are shown in Figure 3 of the paper. Where should I tweak the code so that I can output those four uncertainty values for each image?

    In /layers/functions/detection_gmm.py there is one output variable which should contain the uncertainties, but I wasn't able to understand the output. Is there anything I'm missing/misunderstanding?

    opened by BrenoAV 7
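    The four values in Figure 3 are the aleatoric and epistemic terms of the localization and classification GMMs. A hedged sketch (hypothetical tensor names; the decomposition is the one sketched in the Overview above), applied once per head:

    # loc_* / conf_*: per-detection mixture weights, means, and variances
    # carried through layers/functions/detection_gmm.py (names hypothetical).
    loc_al, loc_ep = gmm_uncertainties(loc_pi, loc_mu, loc_var)
    conf_al, conf_ep = gmm_uncertainties(conf_pi, conf_mu, conf_var)
    # -> the four per-detection values; aggregate over detections per image.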
  • The question of the mixture weight π of GMM

    We have trained a model with this method on our dataset. At test time, we find that the π value of one of the four classification components of the GMM is close to 1, while the remaining π values are very small (close to 0). Is this correct? Have you observed similar behavior? If not, what are the π values of the four classification components at test time? Looking forward to your reply, thanks!

    opened by hantaotao 3
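    For reference, π is typically produced by a softmax over the K mixture components, so a near-one-hot π simply means one component dominates and is not by itself a sign of a bug. A quick illustration (the logit values are made up):

    import torch
    # π from a softmax over K = 4 components; one dominant logit yields a
    # near-one-hot mixture.
    pi = torch.softmax(torch.tensor([4.0, -1.0, -2.0, -1.5]), dim=0)
    print(pi)  # approximately tensor([0.9869, 0.0066, 0.0024, 0.0040])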
  • RuntimeError: Error(s) in loading state_dict for DataParallel:

    $ python eval_coco.py --dataset_root /coco --trained_model weights/vgg16_reducedfc.pth
    l2norm.py:20: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
      init.constant(self.weight, self.gamma)
    True
    Loading weight: weights/vgg16_reducedfc.pth
    odict_keys(['0.weight', '0.bias', '2.weight', '2.bias', ..., '33.weight', '33.bias'])
    Traceback (most recent call last):
      File "eval_coco.py", line 188, in <module>
        net.load_state_dict(ckp['weight'] if 'weight' in ckp.keys() else ckp)
      File "/lib/python3.7/site-packages/torch/nn/modules/module.py", line 830, in load_state_dict
        self.__class__.__name__, "\n\t".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for DataParallel:
      Missing key(s) in state_dict: "module.vgg.0.weight", "module.vgg.0.bias", ..., "module.conf_pi_4.5.weight", "module.conf_pi_4.5.bias".
      Unexpected key(s) in state_dict: "0.weight", "0.bias", "2.weight", "2.bias", ..., "33.weight", "33.bias".

    opened by ayennam 2
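    The command above passes the VGG-16 backbone file (vgg16_reducedfc.pth) to --trained_model, which expects a full AL-MDN checkpoint; the backbone file only initializes the vgg sub-module before training, hence the missing and unexpected keys. A hedged sketch of the distinction (assuming net is the constructed SSD-GMM model; the path is a placeholder):

    import torch

    # Backbone-only weights: load into the vgg sub-module before training.
    net.vgg.load_state_dict(torch.load('weights/vgg16_reducedfc.pth'))

    # Full AL-MDN checkpoint: load into the whole model, mirroring eval_coco.py:
    ckpt = torch.load('<trained weight path>')
    net.load_state_dict(ckpt['weight'] if 'weight' in ckpt.keys() else ckpt)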
  • Question of AL selecting the best weight

    Hi, I have some questions about the code at https://github.com/NVlabs/AL-MDN/blob/main/train_ssd_gmm_active_learining.py#L292-L318. In the active learning stage, why is the best weight selected only for the PASCAL VOC dataset and not for COCO? Thanks!

    opened by yisyuanliou 2
  • How can I apply this algorithm to a detector with focal loss

    Hi, I am very interested in your work and would like to apply this algorithm to my own. With the commonly used focal loss, the classification output differs from that of the cross-entropy loss: the number of classification output channels equals the number of classes, not the number of classes plus one. How should the loss function in this paper be changed to work with a focal-loss-style output? Thanks a lot.

    opened by lichengwei-code 2
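    A possible starting point (purely a hypothetical adaptation, not something from the paper): keep the GMM sampling of class logits but replace the cross-entropy term with a sigmoid focal loss over C class channels, which needs no background slot:

    import torch
    import torch.nn.functional as F

    # Hypothetical adaptation: sigmoid focal loss on (sampled) class logits.
    # logits, targets_onehot: (N, C); no background channel is needed.
    def sigmoid_focal_loss(logits, targets_onehot, alpha=0.25, gamma=2.0):
        ce = F.binary_cross_entropy_with_logits(logits, targets_onehot, reduction='none')
        p = torch.sigmoid(logits)
        p_t = p * targets_onehot + (1 - p) * (1 - targets_onehot)
        alpha_t = alpha * targets_onehot + (1 - alpha) * (1 - targets_onehot)
        return (alpha_t * (1 - p_t) ** gamma * ce).mean()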
  • Issue in running

      init.constant(self.weight, self.gamma)
    Finished loading model!
    Traceback (most recent call last):
      File "C:\Users\fi42\Rony\AL-MDN\eval_voc.py", line 439, in <module>
        mean_ap = test_net(args.save_folder, net, args.cuda, dataset,
      File "C:\Users\fi42\Rony\AL-MDN\eval_voc.py", line 385, in test_net
        detections = net(x).data
      File "C:\Users\fi42\anaconda3\envs\ACT\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\Users\fi42\anaconda3\envs\ACT\lib\site-packages\torch\nn\parallel\data_parallel.py", line 166, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "C:\Users\fi42\anaconda3\envs\ACT\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "C:\Users\fi42\Rony\AL-MDN\ssd_gmm.py", line 272, in forward
        output = self.detect(
      File "C:\Users\fi42\anaconda3\envs\ACT\lib\site-packages\torch\autograd\function.py", line 150, in __call__
        raise RuntimeError(
    RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)

    Can you let me know the exact versions of PyTorch, Python, and CUDA you used? I tried several but keep getting this error.

    opened by nabi-rony 1
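    This RuntimeError (the same one appears in two issues below) is raised by PyTorch 1.5 and later, which turned the legacy non-static-forward autograd Function into a hard error; the README states the code was tested with PyTorch 1.1. Besides downgrading, a commonly suggested workaround is porting Detect in layers/functions/detection_gmm.py to the new-style API. A sketch, not an official patch (the real forward takes the repo's full argument list):

    import torch

    class Detect(torch.autograd.Function):
        @staticmethod
        def forward(ctx, loc_data, conf_data, prior_data):
            # the existing post-processing body from detection_gmm.py goes here;
            # returning loc_data unchanged only keeps this sketch runnable
            return loc_data

    # the call site in ssd_gmm.py then changes from `output = self.detect(...)`
    # to `output = Detect.apply(loc_data, conf_data, prior_data)`.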
  • Why use the reparameterization trick for the classification loss computation

    Hi, thanks for your great work. After reading the paper, I have a question regarding the computation of the classification loss.

    It is clear that, for the localization loss, you regress the mean of the GMM w.r.t. the offset of the anchor to the GT boxes, use the variance term to predict the uncertainty of the offset prediction, and optimize this via log-likelihood maximization.

    However, for classification, as far as I can tell, you optimize in a different way. You treat the input data as a random variable, use the reparameterization trick to draw sampled class-specific random variables from the learned GMM, and finally compute the BCE loss between the GT and the reparameterized random variable.

    My question is: why do it this way? Could the classification loss be computed analogously to the localization loss, i.e., by maximizing the likelihood of positive and negative samples given the predicted mean and variance of the GMM, like N(GT_pos | mu_p, Sigma_p) and N(GT_neg | mu_p, Sigma_p), where mu_p and Sigma_p are computed by the network?

    I hope my puzzle can be considered.

    Best regards

    opened by TianpengBu 1
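    For readers following this thread, the step being discussed fits in a few lines. A sketch with hypothetical tensor names (sampling per-component logits, mixing with π, then applying the classification loss):

    import torch
    import torch.nn.functional as F

    # Sketch of the reparameterization step: sample per-component class logits
    # from N(mu, var), mix with pi, then apply the classification loss.
    # mu, var: (N, K, C); pi: (N, K, C) or broadcastable (N, K, 1);
    # target: (N,) class indices.
    def sampled_classification_loss(mu, var, pi, target):
        eps = torch.randn_like(mu)          # reparameterization trick
        sampled = mu + var.sqrt() * eps     # differentiable samples
        logits = (pi * sampled).sum(dim=1)  # mixture-weighted logits (N, C)
        return F.cross_entropy(logits, target)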
  • How to use GMM in Faster RCNN?

    Thanks for your great work! As mentioned in your paper, the GMM works well with Faster R-CNN. Could you share your Faster R-CNN code? I want to run some experiments on a two-stage detector.

    opened by ChuQiaosong 1
  • Error in VOC Evaluation script

    Traceback (most recent call last):
      File "eval_voc.py", line 440, in <module>
        thresh=args.confidence_threshold)
      File "eval_voc.py", line 384, in test_net
        detections = net(x).data
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/MDN/ssd_gmm.py", line 296, in forward
        conf_pi_4.view(conf_var_4.size(0), -1, 1)
      File "/opt/conda/lib/python3.7/site-packages/torch/autograd/function.py", line 151, in __call__
        "Legacy autograd function with non-static forward method is deprecated. "
    RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)

    PyTorch version: 1.9.0

    opened by sainivedh 1
  • Training stops at the last iteration (iter 119999)

    I get this error at the end of the training phase: RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. It is raised in /AL-MDN-main/utils/test_voc.py, line 348, in test_net, at detections = net(x).data.

    My PyTorch version is 1.10.1+cu111.

    opened by wagaabderrahim 0
  • Where is the released code of AL-SSL?

    Your other work, "Not All Labels Are Equal: Rationalizing the Labeling Costs for Training Object Detection", is wonderful to follow. But where is the released code? The paper says code is available at https://github.com/NVlabs/AL-SSL, but that URL returns a 404.

    opened by shuangshuangguo 1
  • How to train on a custom dataset?

    Could someone please enumerate the steps needed to train these models on a custom dataset? I can get my data into the PASCAL VOC format.

    Thank you very much!

    opened by abwgmo 1
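    Roughly, and as a sketch based on the SSD pytorch data layout this repo follows (verify the names against data/voc0712.py and data/config.py): point VOC_ROOT at your VOC-formatted devkit, replace the class list, and adjust the class count.

    # data/voc0712.py (illustrative values)
    VOC_CLASSES = ('widget', 'gadget')    # your custom class names
    VOC_ROOT = '/path/to/CustomDevkit/'   # root of the VOC-style folder tree

    # data/config.py: set the model's num_classes to len(VOC_CLASSES) + 1
    # (the extra slot is the background class), then train as in the README.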
  • Same learning rate schedules for COCO active learning and supervised learning

    Hi,

    It seems to me that in the config, both coco300_active and coco have the same learning rate schedule and number of iterations.

    What is the reason for this? Can transfer learning be used here for faster active learning?

    opened by KamalM9 0
  • Uncertainties for different classes / class imbalance problem

    Hello, thank you for the paper and the repo. I was wondering how I can deal with class imbalance during the active learning loop. Do you think the model will choose more samples from a class with fewer images, or will it be the other way around? Which part of the code should I tweak if I want to prioritize some classes during the active learning cycle? I really appreciate any help you can provide.

    opened by ahmetdemirkayaee 1
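    One possible lever (a hypothetical modification, not part of this repo): re-weight each detection's uncertainty by the inverse frequency of its predicted class in the labeled pool before aggregating to the image-level acquisition score.

    import numpy as np

    # Hypothetical class-balanced scoring: rare classes get larger weights,
    # so images containing them are more likely to be selected for labeling.
    def weighted_image_score(uncertainties, pred_classes, class_counts):
        counts = np.maximum(np.asarray(class_counts)[pred_classes], 1)
        return float(np.max(np.asarray(uncertainties) / counts))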
  • Training stops after the first iteration

    $ CUDA_VISIBLE_DEVICES='0,1' python train_ssd_gmm_supervised_learning.py
    C:\Users\fi42\Active_learning\AL-MDN\layers\modules\l2norm.py:20: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
      init.constant(self.weight, self.gamma)
    Loading base network...
    Initializing weights...
    train_ssd_gmm_supervised_learning.py:225: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
      init.xavier_uniform(param)
    Training SSD on: VOC0712
    Using the specified args: Namespace(basenet='vgg16_reducedfc.pth', batch_size=32, cuda=True, dataset='VOC300', dataset_root='C:\Users\fi42\data/VOCdevkit/', gamma=0.1, id=1, lr=0.001, momentum=0.9, num_workers=8, resume=None, save_folder='weights/', start_iter=0, visdom=False, weight_decay=0.0005)
    C:\Users\fi42\Active_learning\AL-MDN\utils\augmentations.py:240: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
      mode = random.choice(self.sample_options)  (repeated 8 times)
    C:\Users\fi42\Anaconda3\envs\py36\lib\site-packages\torch\cuda\nccl.py:24: UserWarning: PyTorch is not compiled with NCCL support
      warnings.warn('PyTorch is not compiled with NCCL support')
    timer: 2174.4121 sec.
    iter 0 || Loss: 29.8597 || loss: 29.8597 , loss_c: 20.1772 , loss_l: 9.6825 , lr : 0.0000


    I am using the VOC2007 dataset to train, but training stopped without throwing any error after iteration 0. I didn't change anything in the code, and it took a long time to start. What might be the issue?

    opened by nabi-rony 0