Load pretrained Model
0 BlockArgs(kernel_size=3, num_repeat=2, input_filters=32, output_filters=16, expand_ratio=1, id_skip=True, stride=[1], se_ratio=0.25)
0 BlockArgs(kernel_size=3, num_repeat=2, input_filters=16, output_filters=16, expand_ratio=1, id_skip=True, stride=1, se_ratio=0.25)
1 BlockArgs(kernel_size=3, num_repeat=3, input_filters=16, output_filters=24, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
1 BlockArgs(kernel_size=3, num_repeat=3, input_filters=24, output_filters=24, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
1 BlockArgs(kernel_size=3, num_repeat=3, input_filters=24, output_filters=24, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
2 BlockArgs(kernel_size=5, num_repeat=3, input_filters=24, output_filters=40, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
2 BlockArgs(kernel_size=5, num_repeat=3, input_filters=40, output_filters=40, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
2 BlockArgs(kernel_size=5, num_repeat=3, input_filters=40, output_filters=40, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
3 BlockArgs(kernel_size=3, num_repeat=4, input_filters=40, output_filters=80, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
3 BlockArgs(kernel_size=3, num_repeat=4, input_filters=80, output_filters=80, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
3 BlockArgs(kernel_size=3, num_repeat=4, input_filters=80, output_filters=80, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
3 BlockArgs(kernel_size=3, num_repeat=4, input_filters=80, output_filters=80, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
4 BlockArgs(kernel_size=5, num_repeat=4, input_filters=80, output_filters=112, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
4 BlockArgs(kernel_size=5, num_repeat=4, input_filters=112, output_filters=112, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
4 BlockArgs(kernel_size=5, num_repeat=4, input_filters=112, output_filters=112, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
4 BlockArgs(kernel_size=5, num_repeat=4, input_filters=112, output_filters=112, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
5 BlockArgs(kernel_size=5, num_repeat=5, input_filters=112, output_filters=192, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
5 BlockArgs(kernel_size=5, num_repeat=5, input_filters=192, output_filters=192, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
5 BlockArgs(kernel_size=5, num_repeat=5, input_filters=192, output_filters=192, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
5 BlockArgs(kernel_size=5, num_repeat=5, input_filters=192, output_filters=192, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
5 BlockArgs(kernel_size=5, num_repeat=5, input_filters=192, output_filters=192, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
6 BlockArgs(kernel_size=3, num_repeat=2, input_filters=192, output_filters=320, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
6 BlockArgs(kernel_size=3, num_repeat=2, input_filters=320, output_filters=320, expand_ratio=6, id_skip=True, stride=1, se_ratio=0.25)
Loaded pretrained weights for efficientnet-b1
BIFPN in_channels: [40, 80, 112, 192, 320]
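(For reference, the five BiFPN input widths printed here are exactly the output_filters of the last five backbone stages in the BlockArgs dump above; a minimal sanity check in plain Python, with the stage widths copied from that dump:)

    # output_filters per efficientnet-b1 stage, copied from the BlockArgs dump above
    stage_output_filters = [16, 24, 40, 80, 112, 192, 320]
    bifpn_in_channels = stage_output_filters[-5:]         # the printed in_channels equal the last five stage widths
    assert bifpn_in_channels == [40, 80, 112, 192, 320]   # matches the "BIFPN in_channels" line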
Traceback (most recent call last):
  File "demo.py", line 169, in <module>
    detect = Detect(weights = args.weight)
  File "demo.py", line 63, in __init__
    self.model.load_state_dict(state_dict)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 839, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for EfficientDet:
Unexpected key(s) in state_dict: "BIFPN.stack_bifpn_convs.1.w1", "BIFPN.stack_bifpn_convs.1.w2", "BIFPN.stack_bifpn_convs.1.bifpn_convs.0.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.0.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.0.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.0.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.1.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.1.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.1.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.1.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.2.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.2.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.2.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.2.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.3.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.3.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.3.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.3.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.4.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.4.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.4.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.4.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.5.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.5.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.5.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.5.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.6.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.6.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.6.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.6.1.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.7.0.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.7.0.conv.bias", "BIFPN.stack_bifpn_convs.1.bifpn_convs.7.1.conv.weight", "BIFPN.stack_bifpn_convs.1.bifpn_convs.7.1.conv.bias", "BIFPN.stack_bifpn_convs.2.w1", "BIFPN.stack_bifpn_convs.2.w2", "BIFPN.stack_bifpn_convs.2.bifpn_convs.0.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.0.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.0.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.0.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.1.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.1.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.1.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.1.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.2.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.2.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.2.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.2.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.3.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.3.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.3.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.3.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.4.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.4.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.4.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.4.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.5.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.5.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.5.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.5.1.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.6.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.6.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.6.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.6.1.conv.bias", 
"BIFPN.stack_bifpn_convs.2.bifpn_convs.7.0.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.7.0.conv.bias", "BIFPN.stack_bifpn_convs.2.bifpn_convs.7.1.conv.weight", "BIFPN.stack_bifpn_convs.2.bifpn_convs.7.1.conv.bias".
size mismatch for BIFPN.lateral_convs.0.conv.weight: copying a param with shape torch.Size([88, 40, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 40, 1, 1]).
size mismatch for BIFPN.lateral_convs.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.lateral_convs.1.conv.weight: copying a param with shape torch.Size([88, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 80, 1, 1]).
size mismatch for BIFPN.lateral_convs.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.lateral_convs.2.conv.weight: copying a param with shape torch.Size([88, 112, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 112, 1, 1]).
size mismatch for BIFPN.lateral_convs.2.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.lateral_convs.3.conv.weight: copying a param with shape torch.Size([88, 192, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 192, 1, 1]).
size mismatch for BIFPN.lateral_convs.3.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.lateral_convs.4.conv.weight: copying a param with shape torch.Size([88, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 320, 1, 1]).
size mismatch for BIFPN.lateral_convs.4.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.0.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.0.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.0.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.0.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.1.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.1.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.1.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.1.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.2.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.2.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.2.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.2.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.3.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.3.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.3.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.3.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.4.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.4.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.4.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.4.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.5.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.5.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.5.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.5.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.6.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.6.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.6.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.6.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.7.0.conv.weight: copying a param with shape torch.Size([88, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 1, 3, 3]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.7.0.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.7.1.conv.weight: copying a param with shape torch.Size([88, 88, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for BIFPN.stack_bifpn_convs.0.bifpn_convs.7.1.conv.bias: copying a param with shape torch.Size([88]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for regressionModel.conv1.weight: copying a param with shape torch.Size([256, 88, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for classificationModel.conv1.weight: copying a param with shape torch.Size([256, 88, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
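The failure itself is a configuration mismatch rather than a corrupt file: the checkpoint stores BiFPN convolutions with 88 channels and three stacked BiFPN layers (stack_bifpn_convs.0, .1 and .2), while the model built in demo.py expects 256-channel convolutions and a single stack, so load_state_dict reports both unexpected keys and shape mismatches. A quick way to confirm this kind of drift before loading is to diff the checkpoint against the freshly built model's state_dict. The sketch below uses only plain PyTorch; diff_state_dicts is a hypothetical helper name, and model stands for whatever EfficientDet instance demo.py constructs.

    import torch

    def diff_state_dicts(model, checkpoint_path):
        # Load on CPU; some training scripts save a wrapper dict such as
        # {"state_dict": ...}, so unwrap it if present.
        ckpt = torch.load(checkpoint_path, map_location="cpu")
        state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

        model_sd = model.state_dict()
        ckpt_keys, model_keys = set(state_dict), set(model_sd)

        for k in sorted(ckpt_keys - model_keys):
            print("unexpected in checkpoint:", k)
        for k in sorted(model_keys - ckpt_keys):
            print("missing from checkpoint: ", k)
        for k in sorted(ckpt_keys & model_keys):
            if state_dict[k].shape != model_sd[k].shape:
                print("shape mismatch for", k, ":",
                      tuple(state_dict[k].shape), "in checkpoint vs",
                      tuple(model_sd[k].shape), "in model")

Once the mismatch is confirmed, the fix is to rebuild the network with the BiFPN width and depth the checkpoint was trained with (or to pass a weights file that matches the configuration demo.py builds) before calling load_state_dict.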