My log is as follows:
2021-12-26 16:04:19,060 - mmtrack - INFO - Environment info:
sys.platform: linux
Python: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]
CUDA available: True
GPU 0,1,2: GeForce RTX 2080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.0, V10.0.130
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.5.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.3
- Magma 2.5.2
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
TorchVision: 0.6.0a0+82fd1c8
OpenCV: 4.5.4
MMCV: 1.4.1
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.1
MMTracking: 0.8.0+
2021-12-26 16:04:19,061 - mmtrack - INFO - Distributed training: True
2021-12-26 16:04:19,761 - mmtrack - INFO - Config:
model = dict(
detector=dict(
type='FasterRCNN',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(3, ),
strides=(1, 2, 2, 1),
dilations=(1, 1, 1, 2),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
style='pytorch'),
neck=dict(
type='ChannelMapper',
in_channels=[2048],
out_channels=512,
kernel_size=3),
rpn_head=dict(
type='RPNHead',
in_channels=512,
feat_channels=512,
anchor_generator=dict(
type='AnchorGenerator',
scales=[4, 8, 16, 32],
ratios=[0.5, 1.0, 2.0],
strides=[16]),
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[1.0, 1.0, 1.0, 1.0]),
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(
type='SmoothL1Loss', beta=0.1111111111111111,
loss_weight=1.0)),
roi_head=dict(
type='SelsaRoIHead',
bbox_roi_extractor=dict(
type='TemporalRoIAlign',
roi_layer=dict(
type='RoIAlign', output_size=7, sampling_ratio=2),
out_channels=512,
featmap_strides=[16],
num_most_similar_points=2,
num_temporal_attention_blocks=4),
bbox_head=dict(
type='SelsaBBoxHead',
in_channels=512,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=30,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.2, 0.2, 0.2, 0.2]),
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0),
num_shared_fcs=3,
aggregator=dict(
type='SelsaAggregator',
in_channels=1024,
num_attention_blocks=16))),
train_cfg=dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_pre=6000,
max_per_img=600,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)),
test_cfg=dict(
rpn=dict(
nms_pre=6000,
max_per_img=300,
nms=dict(type='nms', iou_threshold=0.7),
min_bbox_size=0),
rcnn=dict(
score_thr=0.0001,
nms=dict(type='nms', iou_threshold=0.5),
max_per_img=100))),
type='SELSA')
dataset_type = 'ImagenetVIDDataset'
data_root = 'data/FALD_VID/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadMultiImagesFromFile'),
dict(type='SeqLoadAnnotations', with_bbox=True, with_track=True),
dict(type='SeqResize', img_scale=(1000, 600), keep_ratio=True),
dict(type='SeqRandomFlip', share_params=True, flip_ratio=0.5),
dict(
type='SeqNormalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='SeqPad', size_divisor=16),
dict(
type='VideoCollect',
keys=['img', 'gt_bboxes', 'gt_labels', 'gt_instance_ids']),
dict(type='ConcatVideoReferences'),
dict(type='SeqDefaultFormatBundle', ref_prefix='ref')
]
test_pipeline = [
dict(type='LoadMultiImagesFromFile'),
dict(type='SeqResize', img_scale=(1000, 600), keep_ratio=True),
dict(type='SeqRandomFlip', share_params=True, flip_ratio=0.0),
dict(
type='SeqNormalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='SeqPad', size_divisor=16),
dict(
type='VideoCollect',
keys=['img'],
meta_keys=('num_left_ref_imgs', 'frame_stride')),
dict(type='ConcatVideoReferences'),
dict(type='MultiImagesToTensor', ref_prefix='ref'),
dict(type='ToList')
]
data = dict(
samples_per_gpu=1,
workers_per_gpu=2,
train=dict(
type='ImagenetVIDDataset',
ann_file=
'data/FALD_VID/COCOVIDannotations/imagenet_vid_train_every10frames.json',
img_prefix='data/FALD_VID/Data/VID',
ref_img_sampler=dict(
num_ref_imgs=2,
frame_range=9,
filter_key_img=False,
method='bilateral_uniform'),
pipeline=[
dict(type='LoadMultiImagesFromFile'),
dict(type='SeqLoadAnnotations', with_bbox=True, with_track=True),
dict(type='SeqResize', img_scale=(1000, 600), keep_ratio=True),
dict(type='SeqRandomFlip', share_params=True, flip_ratio=0.5),
dict(
type='SeqNormalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='SeqPad', size_divisor=16),
dict(
type='VideoCollect',
keys=['img', 'gt_bboxes', 'gt_labels', 'gt_instance_ids']),
dict(type='ConcatVideoReferences'),
dict(type='SeqDefaultFormatBundle', ref_prefix='ref')
]),
val=dict(
type='ImagenetVIDDataset',
ann_file='data/FALD_VID/annotations/imagenet_vid_val.json',
img_prefix='data/FALD_VID/Data/VID',
ref_img_sampler=dict(
num_ref_imgs=14,
frame_range=[-7, 7],
method='test_with_adaptive_stride'),
pipeline=[
dict(type='LoadMultiImagesFromFile'),
dict(type='SeqResize', img_scale=(1000, 600), keep_ratio=True),
dict(type='SeqRandomFlip', share_params=True, flip_ratio=0.0),
dict(
type='SeqNormalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='SeqPad', size_divisor=16),
dict(
type='VideoCollect',
keys=['img'],
meta_keys=('num_left_ref_imgs', 'frame_stride')),
dict(type='ConcatVideoReferences'),
dict(type='MultiImagesToTensor', ref_prefix='ref'),
dict(type='ToList')
],
test_mode=True),
test=dict(
type='ImagenetVIDDataset',
ann_file='data/FALD_VID/annotations/imagenet_vid_val.json',
img_prefix='data/FALD_VID/Data/VID',
ref_img_sampler=dict(
num_ref_imgs=14,
frame_range=[-7, 7],
method='test_with_adaptive_stride'),
pipeline=[
dict(type='LoadMultiImagesFromFile'),
dict(type='SeqResize', img_scale=(1000, 600), keep_ratio=True),
dict(type='SeqRandomFlip', share_params=True, flip_ratio=0.0),
dict(
type='SeqNormalize',
mean=[123.675, 116.28, 103.53],
std=[58.395, 57.12, 57.375],
to_rgb=True),
dict(type='SeqPad', size_divisor=16),
dict(
type='VideoCollect',
keys=['img'],
meta_keys=('num_left_ref_imgs', 'frame_stride')),
dict(type='ConcatVideoReferences'),
dict(type='MultiImagesToTensor', ref_prefix='ref'),
dict(type='ToList')
],
test_mode=True))
optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.3333333333333333,
step=[2, 5])
total_epochs = 4
evaluation = dict(metric=['bbox'], interval=4)
work_dir = './work_dirs/20211226_001_try3/'
gpu_ids = range(0, 1)
2021-12-26 16:04:24,438 - mmtrack - INFO - Set random seed to 2034425034, deterministic: False
2021-12-26 16:04:25,201 - mmtrack - INFO - initialize ResNet with init_cfg [{'type': 'Kaiming', 'layer': 'Conv2d'}, {'type': 'Constant', 'val': 1, 'layer': ['_BatchNorm', 'GroupNorm']}]
2021-12-26 16:04:25,466 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,467 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,468 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,470 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,471 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,472 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,473 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,475 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,477 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,479 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,481 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,482 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,484 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,490 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,496 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,500 - mmtrack - INFO - initialize Bottleneck with init_cfg {'type': 'Constant', 'val': 0, 'override': {'name': 'norm3'}}
2021-12-26 16:04:25,523 - mmtrack - INFO - initialize ChannelMapper with init_cfg {'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
2021-12-26 16:04:25,583 - mmtrack - INFO - initialize RPNHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01}
2021-12-26 16:04:25,637 - mmtrack - INFO - initialize SelsaBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'distribution': 'uniform', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
Name of parameter - Initialization information
detector.backbone.conv1.weight - torch.Size([64, 3, 7, 7]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.bn1.weight - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.bn1.bias - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.0.conv1.weight - torch.Size([64, 64, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.0.bn1.weight - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.0.bn1.bias - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.0.conv2.weight - torch.Size([64, 64, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.0.bn2.weight - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.0.bn2.bias - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.0.conv3.weight - torch.Size([256, 64, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.0.bn3.weight - torch.Size([256]):
ConstantInit: val=0, bias=0
detector.backbone.layer1.0.bn3.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.0.downsample.0.weight - torch.Size([256, 64, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.0.downsample.1.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.0.downsample.1.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.1.conv1.weight - torch.Size([64, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.1.bn1.weight - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.1.bn1.bias - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.1.conv2.weight - torch.Size([64, 64, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.1.bn2.weight - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.1.bn2.bias - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.1.conv3.weight - torch.Size([256, 64, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.1.bn3.weight - torch.Size([256]):
ConstantInit: val=0, bias=0
detector.backbone.layer1.1.bn3.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.2.conv1.weight - torch.Size([64, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.2.bn1.weight - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.2.bn1.bias - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.2.conv2.weight - torch.Size([64, 64, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.2.bn2.weight - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.2.bn2.bias - torch.Size([64]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer1.2.conv3.weight - torch.Size([256, 64, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer1.2.bn3.weight - torch.Size([256]):
ConstantInit: val=0, bias=0
detector.backbone.layer1.2.bn3.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.0.conv1.weight - torch.Size([128, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.0.bn1.weight - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.0.bn1.bias - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.0.conv2.weight - torch.Size([128, 128, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.0.bn2.weight - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.0.bn2.bias - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.0.conv3.weight - torch.Size([512, 128, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.0.bn3.weight - torch.Size([512]):
ConstantInit: val=0, bias=0
detector.backbone.layer2.0.bn3.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.0.downsample.0.weight - torch.Size([512, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.0.downsample.1.weight - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.0.downsample.1.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.1.conv1.weight - torch.Size([128, 512, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.1.bn1.weight - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.1.bn1.bias - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.1.conv2.weight - torch.Size([128, 128, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.1.bn2.weight - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.1.bn2.bias - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.1.conv3.weight - torch.Size([512, 128, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.1.bn3.weight - torch.Size([512]):
ConstantInit: val=0, bias=0
detector.backbone.layer2.1.bn3.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.2.conv1.weight - torch.Size([128, 512, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.2.bn1.weight - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.2.bn1.bias - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.2.conv2.weight - torch.Size([128, 128, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.2.bn2.weight - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.2.bn2.bias - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.2.conv3.weight - torch.Size([512, 128, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.2.bn3.weight - torch.Size([512]):
ConstantInit: val=0, bias=0
detector.backbone.layer2.2.bn3.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.3.conv1.weight - torch.Size([128, 512, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.3.bn1.weight - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.3.bn1.bias - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.3.conv2.weight - torch.Size([128, 128, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.3.bn2.weight - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.3.bn2.bias - torch.Size([128]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer2.3.conv3.weight - torch.Size([512, 128, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer2.3.bn3.weight - torch.Size([512]):
ConstantInit: val=0, bias=0
detector.backbone.layer2.3.bn3.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.0.conv1.weight - torch.Size([256, 512, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.0.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.0.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.0.conv2.weight - torch.Size([256, 256, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.0.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.0.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.0.conv3.weight - torch.Size([1024, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.0.bn3.weight - torch.Size([1024]):
ConstantInit: val=0, bias=0
detector.backbone.layer3.0.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.0.downsample.0.weight - torch.Size([1024, 512, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.0.downsample.1.weight - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.0.downsample.1.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.1.conv1.weight - torch.Size([256, 1024, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.1.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.1.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.1.conv2.weight - torch.Size([256, 256, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.1.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.1.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.1.conv3.weight - torch.Size([1024, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.1.bn3.weight - torch.Size([1024]):
ConstantInit: val=0, bias=0
detector.backbone.layer3.1.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.2.conv1.weight - torch.Size([256, 1024, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.2.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.2.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.2.conv2.weight - torch.Size([256, 256, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.2.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.2.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.2.conv3.weight - torch.Size([1024, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.2.bn3.weight - torch.Size([1024]):
ConstantInit: val=0, bias=0
detector.backbone.layer3.2.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.3.conv1.weight - torch.Size([256, 1024, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.3.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.3.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.3.conv2.weight - torch.Size([256, 256, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.3.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.3.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.3.conv3.weight - torch.Size([1024, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.3.bn3.weight - torch.Size([1024]):
ConstantInit: val=0, bias=0
detector.backbone.layer3.3.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.4.conv1.weight - torch.Size([256, 1024, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.4.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.4.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.4.conv2.weight - torch.Size([256, 256, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.4.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.4.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.4.conv3.weight - torch.Size([1024, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.4.bn3.weight - torch.Size([1024]):
ConstantInit: val=0, bias=0
detector.backbone.layer3.4.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.5.conv1.weight - torch.Size([256, 1024, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.5.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.5.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.5.conv2.weight - torch.Size([256, 256, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.5.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.5.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer3.5.conv3.weight - torch.Size([1024, 256, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer3.5.bn3.weight - torch.Size([1024]):
ConstantInit: val=0, bias=0
detector.backbone.layer3.5.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.0.conv1.weight - torch.Size([512, 1024, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.0.bn1.weight - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.0.bn1.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.0.conv2.weight - torch.Size([512, 512, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.0.bn2.weight - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.0.bn2.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.0.conv3.weight - torch.Size([2048, 512, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.0.bn3.weight - torch.Size([2048]):
ConstantInit: val=0, bias=0
detector.backbone.layer4.0.bn3.bias - torch.Size([2048]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.0.downsample.0.weight - torch.Size([2048, 1024, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.0.downsample.1.weight - torch.Size([2048]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.0.downsample.1.bias - torch.Size([2048]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.1.conv1.weight - torch.Size([512, 2048, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.1.bn1.weight - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.1.bn1.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.1.conv2.weight - torch.Size([512, 512, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.1.bn2.weight - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.1.bn2.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.1.conv3.weight - torch.Size([2048, 512, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.1.bn3.weight - torch.Size([2048]):
ConstantInit: val=0, bias=0
detector.backbone.layer4.1.bn3.bias - torch.Size([2048]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.2.conv1.weight - torch.Size([512, 2048, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.2.bn1.weight - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.2.bn1.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.2.conv2.weight - torch.Size([512, 512, 3, 3]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.2.bn2.weight - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.2.bn2.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.backbone.layer4.2.conv3.weight - torch.Size([2048, 512, 1, 1]):
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
detector.backbone.layer4.2.bn3.weight - torch.Size([2048]):
ConstantInit: val=0, bias=0
detector.backbone.layer4.2.bn3.bias - torch.Size([2048]):
The value is the same before and after calling init_weights
of SELSA
detector.neck.convs.0.conv.weight - torch.Size([512, 2048, 3, 3]):
XavierInit: gain=1, distribution=uniform, bias=0
detector.neck.convs.0.conv.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.rpn_head.rpn_conv.weight - torch.Size([512, 512, 3, 3]):
NormalInit: mean=0, std=0.01, bias=0
detector.rpn_head.rpn_conv.bias - torch.Size([512]):
NormalInit: mean=0, std=0.01, bias=0
detector.rpn_head.rpn_cls.weight - torch.Size([12, 512, 1, 1]):
NormalInit: mean=0, std=0.01, bias=0
detector.rpn_head.rpn_cls.bias - torch.Size([12]):
NormalInit: mean=0, std=0.01, bias=0
detector.rpn_head.rpn_reg.weight - torch.Size([48, 512, 1, 1]):
NormalInit: mean=0, std=0.01, bias=0
detector.rpn_head.rpn_reg.bias - torch.Size([48]):
NormalInit: mean=0, std=0.01, bias=0
detector.roi_head.bbox_roi_extractor.embed_network.conv.weight - torch.Size([512, 512, 3, 3]):
Initialized by user-defined init_weights
in ConvModule
detector.roi_head.bbox_roi_extractor.embed_network.conv.bias - torch.Size([512]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.fc_cls.weight - torch.Size([31, 1024]):
NormalInit: mean=0, std=0.01, bias=0
detector.roi_head.bbox_head.fc_cls.bias - torch.Size([31]):
NormalInit: mean=0, std=0.01, bias=0
detector.roi_head.bbox_head.fc_reg.weight - torch.Size([120, 1024]):
NormalInit: mean=0, std=0.001, bias=0
detector.roi_head.bbox_head.fc_reg.bias - torch.Size([120]):
NormalInit: mean=0, std=0.001, bias=0
detector.roi_head.bbox_head.shared_fcs.0.weight - torch.Size([1024, 25088]):
XavierInit: gain=1, distribution=uniform, bias=0
detector.roi_head.bbox_head.shared_fcs.0.bias - torch.Size([1024]):
XavierInit: gain=1, distribution=uniform, bias=0
detector.roi_head.bbox_head.shared_fcs.1.weight - torch.Size([1024, 1024]):
XavierInit: gain=1, distribution=uniform, bias=0
detector.roi_head.bbox_head.shared_fcs.1.bias - torch.Size([1024]):
XavierInit: gain=1, distribution=uniform, bias=0
detector.roi_head.bbox_head.shared_fcs.2.weight - torch.Size([1024, 1024]):
XavierInit: gain=1, distribution=uniform, bias=0
detector.roi_head.bbox_head.shared_fcs.2.bias - torch.Size([1024]):
XavierInit: gain=1, distribution=uniform, bias=0
detector.roi_head.bbox_head.aggregator.0.fc_embed.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.0.fc_embed.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.0.ref_fc_embed.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.0.ref_fc_embed.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.0.fc.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.0.fc.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.0.ref_fc.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.0.ref_fc.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.1.fc_embed.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.1.fc_embed.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.1.ref_fc_embed.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.1.ref_fc_embed.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.1.fc.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.1.fc.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.1.ref_fc.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.1.ref_fc.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.2.fc_embed.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.2.fc_embed.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.2.ref_fc_embed.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.2.ref_fc_embed.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.2.fc.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.2.fc.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.2.ref_fc.weight - torch.Size([1024, 1024]):
The value is the same before and after calling init_weights
of SELSA
detector.roi_head.bbox_head.aggregator.2.ref_fc.bias - torch.Size([1024]):
The value is the same before and after calling init_weights
of SELSA
2021-12-26 16:04:28,460 - mmtrack - INFO - Start running, host: [email protected], work_dir: /data/yangjiahui/VIDProject/mmtracking/work_dirs/20211226_001_try3
2021-12-26 16:04:28,461 - mmtrack - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) CheckpointHook
(NORMAL ) DistEvalHook
(VERY_LOW ) TextLoggerHook
before_train_epoch:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) DistSamplerSeedHook
(NORMAL ) DistEvalHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
before_train_iter:
(VERY_HIGH ) StepLrUpdaterHook
(NORMAL ) DistEvalHook
(LOW ) IterTimerHook
after_train_iter:
(ABOVE_NORMAL) OptimizerHook
(NORMAL ) CheckpointHook
(NORMAL ) DistEvalHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
after_train_epoch:
(NORMAL ) CheckpointHook
(NORMAL ) DistEvalHook
(VERY_LOW ) TextLoggerHook
before_val_epoch:
(NORMAL ) DistSamplerSeedHook
(LOW ) IterTimerHook
(VERY_LOW ) TextLoggerHook
before_val_iter:
(LOW ) IterTimerHook
after_val_iter:
(LOW ) IterTimerHook
after_val_epoch:
(VERY_LOW ) TextLoggerHook
after_run:
(VERY_LOW ) TextLoggerHook
2021-12-26 16:04:28,461 - mmtrack - INFO - workflow: [('train', 1)], max: 4 epochs
2021-12-26 16:04:28,461 - mmtrack - INFO - Checkpoints will be saved to /data/yangjiahui/VIDProject/mmtracking/work_dirs/20211226_001_try3 by HardDiskBackend.
2021-12-26 16:05:00,501 - mmtrack - INFO - Saving checkpoint at 1 epochs
2021-12-26 16:05:32,658 - mmtrack - INFO - Saving checkpoint at 2 epochs
2021-12-26 16:06:04,769 - mmtrack - INFO - Saving checkpoint at 3 epochs
2021-12-26 16:06:37,068 - mmtrack - INFO - Saving checkpoint at 4 epochs
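For reference: each of the four epochs above finishes in roughly 32 seconds and no iteration loss lines appear at all (the TextLoggerHook interval is 50), which may indicate that the train split loaded only a handful of samples. Below is a minimal sketch, not part of the log, for checking how many entries the train annotation file actually contains. It assumes the ann_file path taken from the config above and the COCO-VID style top-level keys that ImagenetVIDDataset expects; adjust both if your layout differs.

import json

# Path copied from the config above; change it if your data layout differs.
ann_file = 'data/FALD_VID/COCOVIDannotations/imagenet_vid_train_every10frames.json'

with open(ann_file) as f:
    ann = json.load(f)

# COCO-VID style files typically keep videos/images/annotations as top-level lists.
print('videos:     ', len(ann.get('videos', [])))
print('images:     ', len(ann.get('images', [])))
print('annotations:', len(ann.get('annotations', [])))

If the image or annotation counts are unexpectedly small, the short epochs above would be consistent with that rather than with a training bug.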