Official implementation for the paper: "Multi-label Classification with Partial Annotations using Class-aware Selective Loss"

Overview


Multi-label Classification with Partial Annotations using Class-aware Selective Loss


Paper | Pretrained models

Official PyTorch Implementation

Emanuel Ben-Baruch, Tal Ridnik, Itamar Friedman, Avi Ben-Cohen, Nadav Zamir, Asaf Noy, Lihi Zelnik-Manor
DAMO Academy, Alibaba Group

Abstract

Large-scale multi-label classification datasets are commonly, and perhaps inevitably, partially annotated. That is, only a small subset of labels are annotated per sample. Different methods for handling the missing labels induce different properties on the model and impact its accuracy. In this work, we analyze the partial labeling problem, then propose a solution based on two key ideas. First, un-annotated labels should be treated selectively according to two probability quantities: the class distribution in the overall dataset and the specific label likelihood for a given data sample. We propose to estimate the class distribution using a dedicated temporary model, and we show its improved efficiency over a naive estimation computed using the dataset's partial annotations. Second, during the training of the target model, we emphasize the contribution of annotated labels over originally un-annotated labels by using a dedicated asymmetric loss. Experiments conducted on three partially labeled datasets, OpenImages, LVIS, and simulated-COCO, demonstrate the effectiveness of our approach. Specifically, with our novel selective approach, we achieve state-of-the-art results on OpenImages dataset. Code will be made available.

Class-aware Selective Approach

An overview of our approach is summarized in the following figure:

Loss Implementation

Our loss consists of a selective approach that adjusts the training mode of each class individually, combined with a partial asymmetric loss; a simplified sketch of the latter is given below.

An implementation of the Class-aware Selective Loss (CSL) can be found here.

  • class PartialSelectiveLoss(nn.Module)
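
For orientation, below is a minimal, simplified sketch of the partial asymmetric loss idea, not the repository's PartialSelectiveLoss itself; the target convention (1 = positive, 0 = negative, -1 = un-annotated) and the parameter names are assumptions made for illustration only.

import torch
import torch.nn as nn

class SimplifiedPartialASL(nn.Module):
    """Illustrative partial asymmetric loss: annotated positives, annotated
    negatives, and un-annotated labels each get their own focusing exponent.
    Assumed target convention: 1 = positive, 0 = negative, -1 = un-annotated."""

    def __init__(self, gamma_pos=0.0, gamma_neg=4.0, gamma_unann=4.0, eps=1e-8):
        super().__init__()
        self.gamma_pos = gamma_pos
        self.gamma_neg = gamma_neg
        self.gamma_unann = gamma_unann
        self.eps = eps

    def forward(self, logits, targets):
        p = torch.sigmoid(logits)
        pos = (targets == 1).float()
        neg = (targets == 0).float()
        unann = (targets == -1).float()

        # Binary cross-entropy terms, each modulated by its own focusing factor.
        loss_pos = pos * torch.log(p.clamp(min=self.eps)) * (1 - p) ** self.gamma_pos
        loss_neg = neg * torch.log((1 - p).clamp(min=self.eps)) * p ** self.gamma_neg
        # Un-annotated entries are treated like negatives, but a larger focusing
        # exponent down-weights their contribution relative to annotated negatives.
        loss_unann = unann * torch.log((1 - p).clamp(min=self.eps)) * p ** self.gamma_unann

        return -(loss_pos + loss_neg + loss_unann).sum()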

Pretrained Models

We provide models pretrained on the OpenImages dataset with different modes and architectures:

| Model | Architecture | Link | mAP |
| --- | --- | --- | --- |
| Ignore | TResNet-M | link | 85.38 |
| Negative | TResNet-M | link | 85.85 |
| Selective (CSL) | TResNet-M | link | 86.72 |
| Selective (CSL) | TResNet-L | link | 87.34 |

Inference Code (Demo)

We provide inference code that demonstrates how to load the model, pre-process an image, and run inference. Example run on the OpenImages model (after downloading the relevant model):

python infer.py \
--dataset_type=OpenImages \
--model_name=tresnet_m \
--model_path=./models_local/mtresnet_opim_86.72.pth \
--pic_path=./pics/10162266293_c7634cbda9_o.jpg \
--input_size=448
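
If you prefer to script these steps yourself, the snippet below is a rough sketch of the same flow (assumptions: the checkpoint stores the weights under a 'model' key, and you have already built the matching TResNet model, e.g. via the repository's model factory; names and keys may differ from infer.py).

import torch
from PIL import Image
from torchvision import transforms

def run_inference(model, checkpoint_path, pic_path, input_size=448, threshold=0.5):
    # Load the pretrained weights (assumed to live under the 'model' key).
    state = torch.load(checkpoint_path, map_location='cpu')
    model.load_state_dict(state['model'], strict=True)
    model.eval()

    # Basic pre-processing: resize to the inference resolution and convert to a tensor.
    preprocess = transforms.Compose([
        transforms.Resize((input_size, input_size)),
        transforms.ToTensor(),
    ])
    img = preprocess(Image.open(pic_path).convert('RGB')).unsqueeze(0)

    # Multi-label prediction: independent sigmoid per class, thresholded.
    with torch.no_grad():
        probs = torch.sigmoid(model(img)).squeeze(0)
    detected = (probs > threshold).nonzero(as_tuple=True)[0].tolist()
    return detected, probs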

Result Examples

Training Code

Training code is provided in train.py. Code for simulating partial annotation on the MS-COCO dataset is also available (here). In particular, two "partial" simulation schemes are implemented: fix-per-class (FPC) and random-per-sample (RPS); a standalone sketch of both schemes is shown after the list below.

  • FPC: For each class, we randomly sample a fixed number of positive annotations and the same number of negative annotations; the remaining annotations are dropped.
  • RPS: Each annotation is omitted independently with probability p.
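
As a rough, standalone illustration of the two schemes (this is not the repository's COCO simulation code), the sketch below operates on a dense 0/1 label matrix of shape (num_samples, num_classes) and marks dropped annotations with -1:

import torch

def simulate_fpc(labels, num_per_class=1000):
    """Fix-per-class (FPC): per class, keep a fixed number of positive and the same
    number of negative annotations; everything else becomes un-annotated (-1)."""
    out = torch.full_like(labels, -1)
    for c in range(labels.shape[1]):
        pos = (labels[:, c] == 1).nonzero(as_tuple=True)[0]
        neg = (labels[:, c] == 0).nonzero(as_tuple=True)[0]
        keep_pos = pos[torch.randperm(len(pos))[:num_per_class]]
        keep_neg = neg[torch.randperm(len(neg))[:num_per_class]]
        out[keep_pos, c] = 1
        out[keep_neg, c] = 0
    return out

def simulate_rps(labels, p=0.5):
    """Random-per-sample (RPS): drop each annotation independently with probability p."""
    drop = torch.rand(labels.shape) < p
    out = labels.clone()
    out[drop] = -1
    return out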

Pretrained weights using the ImageNet-21k dataset can be found here: link
Pretrained weights using the ImageNet-1k dataset can be found here: link

Example of training with RPS simulation:

python train.py \
--data=/mnt/datasets/COCO/COCO_2014 \
--model-path=models/pretrain/mtresnet_21k \
--gamma_pos=0 \
--gamma_neg=4 \
--gamma_unann=4 \
--simulate_partial_type=rps \
--simulate_partial_param=0.5 \
--partial_loss_mode=selective \
--likelihood_topk=5 \
--prior_threshold=0.5 \
--prior_path=./outputs/priors/prior_fpc_1000.csv

Example of training with FPC simulation:

python train.py \
--data=/mnt/datasets/COCO/COCO_2014 \
--model-path=models/pretrain/mtresnet_21k \
--gamma_pos=0 \
--gamma_neg=4 \
--gamma_unann=4 \
--simulate_partial_type=fpc \
--simulate_partial_param=1000 \
--partial_loss_mode=selective \
--likelihood_topk=5 \
--prior_threshold=0.5 \
--prior_path=./outputs/priors/prior_fpc_1000.csv

Typical Training Results

FPC (1,000) simulation scheme:

| Model | mAP |
| --- | --- |
| Ignore, CE | 76.46 |
| Negative, CE | 81.24 |
| Negative, ASL (4,1) | 81.64 |
| CSL - Selective, P-ASL(4,3,1) | 83.44 |

RPS (0.5) simulation scheme:

| Model | mAP |
| --- | --- |
| Ignore, CE | 84.90 |
| Negative, CE | 81.21 |
| Negative, ASL (4,1) | 81.91 |
| CSL - Selective, P-ASL(4,1,1) | 85.21 |

Estimating the Class Distribution

The training code also contains the procedure for estimating the class distribution from the data. Our approach enables ranking the classes by training a temporary model using the Ignore mode. link
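
Conceptually, the estimated class distribution is obtained by averaging the temporary (Ignore-mode) model's predicted probabilities over the training images and ranking classes by that average. A minimal sketch, assuming a dataloader that yields (images, targets) batches and a class_names list aligned with the model outputs:

import torch

@torch.no_grad()
def estimate_class_priors(model, dataloader, class_names, device='cuda'):
    """Average per-class probabilities over the dataset and rank the classes."""
    model.eval().to(device)
    prob_sum, num_images = None, 0
    for images, _ in dataloader:
        probs = torch.sigmoid(model(images.to(device)))
        prob_sum = probs.sum(0) if prob_sum is None else prob_sum + probs.sum(0)
        num_images += images.shape[0]
    priors = (prob_sum / num_images).cpu()
    ranked = [class_names[i] for i in priors.argsort(descending=True).tolist()]
    return priors, ranked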

Top 10 classes:

| Method | Top 10 ranked classes |
| --- | --- |
| Original | 'person', 'chair', 'car', 'dining table', 'cup', 'bottle', 'bowl', 'handbag', 'truck', 'backpack' |
| Estimate (Ignore mode) | 'person', 'chair', 'handbag', 'cup', 'bench', 'bottle', 'backpack', 'car', 'cell phone', 'potted plant' |
| Estimate (Negative mode) | 'kite', 'truck', 'carrot', 'baseball glove', 'tennis racket', 'remote', 'cat', 'tie', 'horse', 'boat' |

Citation

@misc{benbaruch2021multilabel,
      title={Multi-label Classification with Partial Annotations using Class-aware Selective Loss}, 
      author={Emanuel Ben-Baruch and Tal Ridnik and Itamar Friedman and Avi Ben-Cohen and Nadav Zamir and Asaf Noy and Lihi Zelnik-Manor},
      year={2021},
      eprint={2110.10955},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

Several images from the OpenImages dataset are used in this project.
Some components of this code implementation are adapted from the repository https://github.com/Alibaba-MIIL/ASL.

Comments
  • OID-V6 dataset preprocessing


    Amazing paper, especially so soon after the last one, great work! I had one question regarding the oidv6 dataset: did you do any preprocessing to filter out bad classes etc.? How many of the 9 million images did you end up using? Also, are there any large differences between train.py (COCO) and what you used to train on the oidv6 dataset?

    Thanks in advance.

    opened by Leterax 8
  • Soft labels


    The correctness (or concordance with the equations in your paper) of one_side_w (and of asymmetric_w below) relies on the fact that your labels (or targets y) are hard labels (0 or 1). However, when one uses soft labels (e.g. via label smoothing), this concordance fails.

    I think the following code works for both hard labels (producing identical results) and soft labels (efficiency aside):

        def forward(self, x, y):
            """
            Parameters
            ----------
            x: input logits
            y: targets (multi-label binarized vector)
            """
    
            # Calculating Probabilities
            x_sigmoid = torch.sigmoid(x)
            xs_pos = x_sigmoid
            xs_neg = 1 - x_sigmoid
    
            # Asymmetric Clipping
            if self.clip is not None and self.clip > 0:
                xs_neg = (xs_neg + self.clip).clamp(max=1)
    
            # Basic CE calculation
            los_pos = y*torch.log(xs_pos.clamp(min=self.eps))
            los_neg = (1-y)*torch.log(xs_neg.clamp(min=self.eps))
            # loss = los_pos + los_neg
    
            # Asymmetric Focusing
            if self.gamma_neg > 0 or self.gamma_pos > 0:
                if self.disable_torch_grad_focal_loss:
                    prev = torch.is_grad_enabled()
                    torch.set_grad_enabled(False)
                los_pos *= torch.pow(1-xs_pos, self.gamma_pos)
                los_neg *= torch.pow(xs_pos, self.gamma_neg)
                if self.disable_torch_grad_focal_loss:
                    torch.set_grad_enabled(prev)
            loss = los_pos + los_neg
    
            return -loss.sum()
    
    opened by wenh06 2
  • Issue while loading TResNet-M model


    Hi,

    I'm trying to load mtresnet_opim_86.72.pth and an error occurs on model.load_state_dict(state['model'], strict=True):

    Exception has occurred: RuntimeError
    Error(s) in loading state_dict for TResNet:
    Missing key(s) in state_dict: "head.fc.weight", "head.fc.bias".
    Unexpected key(s) in state_dict: "head.fc.embedding_generator.0.weight", "head.fc.embedding_generator.0.bias", "head.fc.FC.weight", "head.fc.FC.bias".

    If strict loading is disabled, then model is loaded without error, but classes are not found in the provided example image.

    Does the model depend on exact PyTorch and CUDA versions? I'm running it on Windows with PyTorch 1.10.2 and CUDA 11.3.

    Or am I missing something else?

    opened by DMatHome 0
  • inference not running


    I get the following error when running infer.py

    Inference demo with CSL model
    Creating and loading the model...
    Traceback (most recent call last):
      File "infer.py", line 112, in <module>
        main()
      File "infer.py", line 68, in main
        model = create_model(args).cuda()
      File "/content/PartialLabelingCSL/src/models/utils/factory.py", line 11, in create_model
        model_params = {'args': args, 'num_classes': args.num_classes}
    AttributeError: 'Namespace' object has no attribute 'num_classes'

    Command to reproduce:

    python infer.py \
    --dataset_type=OpenImages \
    --model_name=tresnet_m \
    --model_path=/content/drive/MyDrive/mtresnet_opim_86.72.pth \
    --pic_path=/content/wildlife-painting-art-500x500.jpeg \
    --input_size=224

    opened by jmayank23 2
  • Error while loading "Selective (CSL) TResNet-L" model


    A RuntimeError occurs while loading the state_dict for the provided TResNet-L model. I think the provided pretrained model and the TResNetL class differ. Selective (CSL) TResNet-M works without any problems.

    Console Output

    Traceback (most recent call last): File "infer.py", line 104, in main() File "infer.py", line 70, in main model.load_state_dict(state['model'], strict=True) File "/home/user/anaconda3/envs/partial_labeling_csl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for TResNet: Missing key(s) in state_dict: "body.layer1.3.conv1.0.weight", "body.layer1.3.conv1.1.weight", "body.layer1.3.conv1.1.bias", "body.layer1.3.conv1.1.running_mean", "body.layer1.3.conv1.1.running_var", "body.layer1.3.conv2.0.weight", "body.layer1.3.conv2.1.weight", "body.layer1.3.conv2.1.bias", "body.layer1.3.conv2.1.running_mean", "body.layer1.3.conv2.1.running_var", "body.layer1.3.se.fc1.weight", "body.layer1.3.se.fc1.bias", "body.layer1.3.se.fc2.weight", "body.layer1.3.se.fc2.bias", "body.layer2.0.conv1.0.0.weight", "body.layer2.0.conv1.0.1.weight", "body.layer2.0.conv1.0.1.bias", "body.layer2.0.conv1.0.1.running_mean", "body.layer2.0.conv1.0.1.running_var", "body.layer2.0.conv2.0.weight", "body.layer2.0.conv2.1.weight", "body.layer2.0.conv2.1.bias", "body.layer2.0.conv2.1.running_mean", "body.layer2.0.conv2.1.running_var", "body.layer2.4.conv1.0.weight", "body.layer2.4.conv1.1.weight", "body.layer2.4.conv1.1.bias", "body.layer2.4.conv1.1.running_mean", "body.layer2.4.conv1.1.running_var", "body.layer2.4.conv2.0.weight", "body.layer2.4.conv2.1.weight", "body.layer2.4.conv2.1.bias", "body.layer2.4.conv2.1.running_mean", "body.layer2.4.conv2.1.running_var", "body.layer2.4.se.fc1.weight", "body.layer2.4.se.fc1.bias", "body.layer2.4.se.fc2.weight", "body.layer2.4.se.fc2.bias". Unexpected key(s) in state_dict: "body.layer1.0.conv3.0.weight", "body.layer1.0.conv3.1.weight", "body.layer1.0.conv3.1.bias", "body.layer1.0.conv3.1.running_mean", "body.layer1.0.conv3.1.running_var", "body.layer1.0.conv3.1.num_batches_tracked", "body.layer1.0.downsample.0.0.weight", "body.layer1.0.downsample.0.1.weight", "body.layer1.0.downsample.0.1.bias", "body.layer1.0.downsample.0.1.running_mean", "body.layer1.0.downsample.0.1.running_var", "body.layer1.0.downsample.0.1.num_batches_tracked", "body.layer1.1.conv3.0.weight", "body.layer1.1.conv3.1.weight", "body.layer1.1.conv3.1.bias", "body.layer1.1.conv3.1.running_mean", "body.layer1.1.conv3.1.running_var", "body.layer1.1.conv3.1.num_batches_tracked", "body.layer1.2.conv3.0.weight", "body.layer1.2.conv3.1.weight", "body.layer1.2.conv3.1.bias", "body.layer1.2.conv3.1.running_mean", "body.layer1.2.conv3.1.running_var", "body.layer1.2.conv3.1.num_batches_tracked", "body.layer2.0.conv3.0.weight", "body.layer2.0.conv3.1.weight", "body.layer2.0.conv3.1.bias", "body.layer2.0.conv3.1.running_mean", "body.layer2.0.conv3.1.running_var", "body.layer2.0.conv3.1.num_batches_tracked", "body.layer2.0.conv1.0.weight", "body.layer2.0.conv1.1.weight", "body.layer2.0.conv1.1.bias", "body.layer2.0.conv1.1.running_mean", "body.layer2.0.conv1.1.running_var", "body.layer2.0.conv1.1.num_batches_tracked", "body.layer2.0.conv2.0.0.weight", "body.layer2.0.conv2.0.1.weight", "body.layer2.0.conv2.0.1.bias", "body.layer2.0.conv2.0.1.running_mean", "body.layer2.0.conv2.0.1.running_var", "body.layer2.0.conv2.0.1.num_batches_tracked", "body.layer2.1.conv3.0.weight", "body.layer2.1.conv3.1.weight", "body.layer2.1.conv3.1.bias", "body.layer2.1.conv3.1.running_mean", "body.layer2.1.conv3.1.running_var", "body.layer2.1.conv3.1.num_batches_tracked", 
"body.layer2.2.conv3.0.weight", "body.layer2.2.conv3.1.weight", "body.layer2.2.conv3.1.bias", "body.layer2.2.conv3.1.running_mean", "body.layer2.2.conv3.1.running_var", "body.layer2.2.conv3.1.num_batches_tracked", "body.layer2.3.conv3.0.weight", "body.layer2.3.conv3.1.weight", "body.layer2.3.conv3.1.bias", "body.layer2.3.conv3.1.running_mean", "body.layer2.3.conv3.1.running_var", "body.layer2.3.conv3.1.num_batches_tracked", "body.layer3.18.conv1.0.weight", "body.layer3.18.conv1.1.weight", "body.layer3.18.conv1.1.bias", "body.layer3.18.conv1.1.running_mean", "body.layer3.18.conv1.1.running_var", "body.layer3.18.conv1.1.num_batches_tracked", "body.layer3.18.conv2.0.weight", "body.layer3.18.conv2.1.weight", "body.layer3.18.conv2.1.bias", "body.layer3.18.conv2.1.running_mean", "body.layer3.18.conv2.1.running_var", "body.layer3.18.conv2.1.num_batches_tracked", "body.layer3.18.conv3.0.weight", "body.layer3.18.conv3.1.weight", "body.layer3.18.conv3.1.bias", "body.layer3.18.conv3.1.running_mean", "body.layer3.18.conv3.1.running_var", "body.layer3.18.conv3.1.num_batches_tracked", "body.layer3.18.se.fc1.weight", "body.layer3.18.se.fc1.bias", "body.layer3.18.se.fc2.weight", "body.layer3.18.se.fc2.bias", "body.layer3.19.conv1.0.weight", "body.layer3.19.conv1.1.weight", "body.layer3.19.conv1.1.bias", "body.layer3.19.conv1.1.running_mean", "body.layer3.19.conv1.1.running_var", "body.layer3.19.conv1.1.num_batches_tracked", "body.layer3.19.conv2.0.weight", "body.layer3.19.conv2.1.weight", "body.layer3.19.conv2.1.bias", "body.layer3.19.conv2.1.running_mean", "body.layer3.19.conv2.1.running_var", "body.layer3.19.conv2.1.num_batches_tracked", "body.layer3.19.conv3.0.weight", "body.layer3.19.conv3.1.weight", "body.layer3.19.conv3.1.bias", "body.layer3.19.conv3.1.running_mean", "body.layer3.19.conv3.1.running_var", "body.layer3.19.conv3.1.num_batches_tracked", "body.layer3.19.se.fc1.weight", "body.layer3.19.se.fc1.bias", "body.layer3.19.se.fc2.weight", "body.layer3.19.se.fc2.bias", "body.layer3.20.conv1.0.weight", "body.layer3.20.conv1.1.weight", "body.layer3.20.conv1.1.bias", "body.layer3.20.conv1.1.running_mean", "body.layer3.20.conv1.1.running_var", "body.layer3.20.conv1.1.num_batches_tracked", "body.layer3.20.conv2.0.weight", "body.layer3.20.conv2.1.weight", "body.layer3.20.conv2.1.bias", "body.layer3.20.conv2.1.running_mean", "body.layer3.20.conv2.1.running_var", "body.layer3.20.conv2.1.num_batches_tracked", "body.layer3.20.conv3.0.weight", "body.layer3.20.conv3.1.weight", "body.layer3.20.conv3.1.bias", "body.layer3.20.conv3.1.running_mean", "body.layer3.20.conv3.1.running_var", "body.layer3.20.conv3.1.num_batches_tracked", "body.layer3.20.se.fc1.weight", "body.layer3.20.se.fc1.bias", "body.layer3.20.se.fc2.weight", "body.layer3.20.se.fc2.bias", "body.layer3.21.conv1.0.weight", "body.layer3.21.conv1.1.weight", "body.layer3.21.conv1.1.bias", "body.layer3.21.conv1.1.running_mean", "body.layer3.21.conv1.1.running_var", "body.layer3.21.conv1.1.num_batches_tracked", "body.layer3.21.conv2.0.weight", "body.layer3.21.conv2.1.weight", "body.layer3.21.conv2.1.bias", "body.layer3.21.conv2.1.running_mean", "body.layer3.21.conv2.1.running_var", "body.layer3.21.conv2.1.num_batches_tracked", "body.layer3.21.conv3.0.weight", "body.layer3.21.conv3.1.weight", "body.layer3.21.conv3.1.bias", "body.layer3.21.conv3.1.running_mean", "body.layer3.21.conv3.1.running_var", "body.layer3.21.conv3.1.num_batches_tracked", "body.layer3.21.se.fc1.weight", "body.layer3.21.se.fc1.bias", "body.layer3.21.se.fc2.weight", 
"body.layer3.21.se.fc2.bias", "body.layer3.22.conv1.0.weight", "body.layer3.22.conv1.1.weight", "body.layer3.22.conv1.1.bias", "body.layer3.22.conv1.1.running_mean", "body.layer3.22.conv1.1.running_var", "body.layer3.22.conv1.1.num_batches_tracked", "body.layer3.22.conv2.0.weight", "body.layer3.22.conv2.1.weight", "body.layer3.22.conv2.1.bias", "body.layer3.22.conv2.1.running_mean", "body.layer3.22.conv2.1.running_var", "body.layer3.22.conv2.1.num_batches_tracked", "body.layer3.22.conv3.0.weight", "body.layer3.22.conv3.1.weight", "body.layer3.22.conv3.1.bias", "body.layer3.22.conv3.1.running_mean", "body.layer3.22.conv3.1.running_var", "body.layer3.22.conv3.1.num_batches_tracked", "body.layer3.22.se.fc1.weight", "body.layer3.22.se.fc1.bias", "body.layer3.22.se.fc2.weight", "body.layer3.22.se.fc2.bias". size mismatch for body.conv1.0.weight: copying a param with shape torch.Size([64, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([76, 48, 3, 3]). size mismatch for body.conv1.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.conv1.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.conv1.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.conv1.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.conv1.0.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([76, 76, 3, 3]). size mismatch for body.layer1.0.conv1.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.conv1.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.conv1.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.conv1.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.conv2.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([76, 76, 3, 3]). size mismatch for body.layer1.0.conv2.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.conv2.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.conv2.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.conv2.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.0.se.fc1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 76, 1, 1]). 
size mismatch for body.layer1.0.se.fc2.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([76, 64, 1, 1]). size mismatch for body.layer1.0.se.fc2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.conv1.0.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([76, 76, 3, 3]). size mismatch for body.layer1.1.conv1.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.conv1.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.conv1.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.conv1.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.conv2.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([76, 76, 3, 3]). size mismatch for body.layer1.1.conv2.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.conv2.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.conv2.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.conv2.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.1.se.fc1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 76, 1, 1]). size mismatch for body.layer1.1.se.fc2.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([76, 64, 1, 1]). size mismatch for body.layer1.1.se.fc2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.conv1.0.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([76, 76, 3, 3]). size mismatch for body.layer1.2.conv1.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.conv1.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.conv1.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.conv1.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.conv2.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([76, 76, 3, 3]). 
size mismatch for body.layer1.2.conv2.1.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.conv2.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.conv2.1.running_mean: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.conv2.1.running_var: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer1.2.se.fc1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 76, 1, 1]). size mismatch for body.layer1.2.se.fc2.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([76, 64, 1, 1]). size mismatch for body.layer1.2.se.fc2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([76]). size mismatch for body.layer2.0.downsample.1.0.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 76, 1, 1]). size mismatch for body.layer2.0.downsample.1.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.0.downsample.1.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.0.downsample.1.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.0.downsample.1.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.0.se.fc1.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 152, 1, 1]). size mismatch for body.layer2.0.se.fc2.weight: copying a param with shape torch.Size([128, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 64, 1, 1]). size mismatch for body.layer2.0.se.fc2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.conv1.0.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 152, 3, 3]). size mismatch for body.layer2.1.conv1.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.conv1.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.conv1.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.conv1.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.conv2.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([152, 152, 3, 3]). 
size mismatch for body.layer2.1.conv2.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.conv2.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.conv2.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.conv2.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.1.se.fc1.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 152, 1, 1]). size mismatch for body.layer2.1.se.fc2.weight: copying a param with shape torch.Size([128, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 64, 1, 1]). size mismatch for body.layer2.1.se.fc2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.conv1.0.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 152, 3, 3]). size mismatch for body.layer2.2.conv1.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.conv1.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.conv1.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.conv1.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.conv2.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([152, 152, 3, 3]). size mismatch for body.layer2.2.conv2.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.conv2.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.conv2.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.conv2.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.2.se.fc1.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 152, 1, 1]). size mismatch for body.layer2.2.se.fc2.weight: copying a param with shape torch.Size([128, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 64, 1, 1]). size mismatch for body.layer2.2.se.fc2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.conv1.0.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 152, 3, 3]). 
size mismatch for body.layer2.3.conv1.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.conv1.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.conv1.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.conv1.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.conv2.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([152, 152, 3, 3]). size mismatch for body.layer2.3.conv2.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.conv2.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.conv2.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.conv2.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer2.3.se.fc1.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 152, 1, 1]). size mismatch for body.layer2.3.se.fc2.weight: copying a param with shape torch.Size([128, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 64, 1, 1]). size mismatch for body.layer2.3.se.fc2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer3.0.conv1.0.weight: copying a param with shape torch.Size([256, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 152, 1, 1]). size mismatch for body.layer3.0.conv1.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.0.conv1.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.0.conv1.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.0.conv1.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.0.conv2.0.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([304, 304, 3, 3]). size mismatch for body.layer3.0.conv2.0.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.0.conv2.0.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.0.conv2.0.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). 
size mismatch for body.layer3.0.conv2.0.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.0.conv3.0.weight: copying a param with shape torch.Size([1024, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1216, 304, 1, 1]). size mismatch for body.layer3.0.conv3.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.0.conv3.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.0.conv3.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.0.conv3.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.0.downsample.1.0.weight: copying a param with shape torch.Size([1024, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([1216, 152, 1, 1]). size mismatch for body.layer3.0.downsample.1.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.0.downsample.1.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.0.downsample.1.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.0.downsample.1.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.0.se.fc1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 304, 1, 1]). size mismatch for body.layer3.0.se.fc1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer3.0.se.fc2.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 152, 1, 1]). size mismatch for body.layer3.0.se.fc2.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv1.0.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 1216, 1, 1]). size mismatch for body.layer3.1.conv1.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv1.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv1.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv1.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv2.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([304, 304, 3, 3]). 
size mismatch for body.layer3.1.conv2.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv2.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv2.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv2.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.1.conv3.0.weight: copying a param with shape torch.Size([1024, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1216, 304, 1, 1]). size mismatch for body.layer3.1.conv3.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.1.conv3.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.1.conv3.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.1.conv3.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.1.se.fc1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 304, 1, 1]). size mismatch for body.layer3.1.se.fc1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer3.1.se.fc2.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 152, 1, 1]). size mismatch for body.layer3.1.se.fc2.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.2.conv1.0.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 1216, 1, 1]). size mismatch for body.layer3.2.conv1.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.2.conv1.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.2.conv1.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.2.conv1.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.2.conv2.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([304, 304, 3, 3]). size mismatch for body.layer3.2.conv2.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.2.conv2.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). 
size mismatch for body.layer3.2.conv2.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.2.conv2.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.2.conv3.0.weight: copying a param with shape torch.Size([1024, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1216, 304, 1, 1]). size mismatch for body.layer3.2.conv3.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.2.conv3.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.2.conv3.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.2.conv3.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.2.se.fc1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 304, 1, 1]). size mismatch for body.layer3.2.se.fc1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer3.2.se.fc2.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 152, 1, 1]). size mismatch for body.layer3.2.se.fc2.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.3.conv1.0.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 1216, 1, 1]). size mismatch for body.layer3.3.conv1.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.3.conv1.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.3.conv1.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.3.conv1.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.3.conv2.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([304, 304, 3, 3]). size mismatch for body.layer3.3.conv2.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.3.conv2.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.3.conv2.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.3.conv2.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). 
size mismatch for body.layer3.3.conv3.0.weight: copying a param with shape torch.Size([1024, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1216, 304, 1, 1]). size mismatch for body.layer3.3.conv3.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.3.conv3.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.3.conv3.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.3.conv3.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.3.se.fc1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 304, 1, 1]). size mismatch for body.layer3.3.se.fc1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer3.3.se.fc2.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 152, 1, 1]). size mismatch for body.layer3.3.se.fc2.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv1.0.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 1216, 1, 1]). size mismatch for body.layer3.4.conv1.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv1.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv1.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv1.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv2.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([304, 304, 3, 3]). size mismatch for body.layer3.4.conv2.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv2.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv2.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv2.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.4.conv3.0.weight: copying a param with shape torch.Size([1024, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1216, 304, 1, 1]). size mismatch for body.layer3.4.conv3.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). 
size mismatch for body.layer3.4.conv3.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.4.conv3.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.4.conv3.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.4.se.fc1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 304, 1, 1]). size mismatch for body.layer3.4.se.fc1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([152]). size mismatch for body.layer3.4.se.fc2.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 152, 1, 1]). size mismatch for body.layer3.4.se.fc2.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv1.0.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 1216, 1, 1]). size mismatch for body.layer3.5.conv1.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv1.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv1.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv1.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv2.0.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([304, 304, 3, 3]). size mismatch for body.layer3.5.conv2.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv2.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv2.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv2.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([304]). size mismatch for body.layer3.5.conv3.0.weight: copying a param with shape torch.Size([1024, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1216, 304, 1, 1]). size mismatch for body.layer3.5.conv3.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.5.conv3.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). size mismatch for body.layer3.5.conv3.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]). 
size mismatch for body.layer3.5.conv3.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([1216]).
size mismatch for body.layer3.5.se.fc1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([152, 304, 1, 1]).
size mismatch for body.layer3.5.se.fc2.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([304, 152, 1, 1]).
[... the same mismatches repeat for every remaining layer3 block (body.layer3.6 through body.layer3.17): conv1/conv2 channels of 256 in the checkpoint vs. 304 in the current model, conv3 channels of 1024 vs. 1216, and SE channels of 128/256 vs. 152/304 ...]
size mismatch for body.layer4.0.conv1.0.weight: copying a param with shape torch.Size([512, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([608, 1216, 1, 1]).
size mismatch for body.layer4.0.conv3.0.weight: copying a param with shape torch.Size([2048, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([2432, 608, 1, 1]).
size mismatch for body.layer4.0.downsample.1.0.weight: copying a param with shape torch.Size([2048, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([2432, 1216, 1, 1]).
[... analogous mismatches for all layer4 blocks (body.layer4.0 through body.layer4.2): conv1/conv2 channels of 512 vs. 608, conv3 and downsample channels of 2048 vs. 2432 ...]
size mismatch for head.fc.embedding_generator.0.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([512, 2432]).

    Environment

    OS: Ubuntu 18.04
    PyTorch: 1.10.1
    CUDA: 10.2

    Command to Reproduce

    python infer.py --dataset_type=OpenImages --model_name=tresnet_l --model_path=ltresnet_v2_opim_87.34.pth --pic_path=test_img.jpg
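
The dump above is the characteristic symptom of instantiating one TResNet variant while loading weights saved from a different one: every checkpoint tensor is consistently narrower than its counterpart in the freshly built model (e.g. 256 vs. 304 channels). Below is a minimal diagnostic sketch using only plain PyTorch; the wrapping key of the checkpoint and the way `model` is constructed are assumptions, so adjust them to whatever infer.py actually builds for `--model_name`:

```python
import torch
import torch.nn as nn


def report_shape_mismatches(model: nn.Module, ckpt_path: str) -> None:
    """Print every tensor whose shape differs between `model` and the checkpoint."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints wrap the weights under a "model" or "state_dict" key
    # (an assumption here; adjust to the actual file layout).
    state = ckpt.get("model", ckpt.get("state_dict", ckpt)) if isinstance(ckpt, dict) else ckpt

    model_state = model.state_dict()
    for name, tensor in state.items():
        if not torch.is_tensor(tensor):
            continue  # skip non-tensor entries such as stored hyperparameters
        if name not in model_state:
            print(f"{name}: present in checkpoint only")
        elif model_state[name].shape != tensor.shape:
            print(f"{name}: checkpoint {tuple(tensor.shape)} vs model {tuple(model_state[name].shape)}")
```

Calling this with the model built for `--model_name=tresnet_l` and the downloaded `ltresnet_v2_opim_87.34.pth` lists only the mismatching layers instead of failing on load, which usually makes it clear whether the architecture flag and the checkpoint actually correspond.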

    opened by enesmsahin 2
  • Minor error fixes for performing inference

    Minor error fixes for performing inference

    • Modified the image destination path, since the current path requires root access when calling os.makedirs("/results").
    • Returned tensor_batch from the inference(im, model, class_list, args) function, since it is used in the example loss calculation.
    • Removed the double display of the output image.
    • Replaced torch._C.set_grad_enabled() calls with torch.set_grad_enabled(), since the former throws the following AttributeError with PyTorch 1.10.1 (see the sketch after the error message below):

    AttributeError: module 'torch._C' has no attribute 'set_grad_enabled'
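
A minimal sketch of the kinds of changes described above; the function and argument names here are illustrative only, not the repository's actual inference API:

```python
import os
import torch


def run_inference(im_tensor: torch.Tensor, model: torch.nn.Module):
    """Hypothetical example mirroring the fixes listed above."""
    # 1. Write outputs under a relative, user-writable directory instead of "/results",
    #    which fails with a permission error for non-root users.
    out_dir = os.path.join(".", "results")
    os.makedirs(out_dir, exist_ok=True)

    # 2. Use the public torch.set_grad_enabled instead of torch._C.set_grad_enabled,
    #    which is not exposed in PyTorch 1.10.1.
    with torch.set_grad_enabled(False):
        tensor_batch = im_tensor.unsqueeze(0)            # add a batch dimension
        output = torch.sigmoid(model(tensor_batch))      # per-class probabilities

    # 3. Return tensor_batch alongside the scores so callers can reuse it,
    #    e.g. for the example loss computation mentioned above.
    return output, tensor_batch
```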

    opened by enesmsahin 0