2022/10/12 10:59:09 - mmengine - INFO - ------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
    CUDA available: True
    numpy_random_seed: 153894046
    GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /mnt/petrelfs/share/cuda-11.3
    NVCC: Cuda compilation tools, release 11.3, V11.3.109
    GCC: gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
    PyTorch: 1.12.0+cu113
    PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.2.1
    - Built with CuDNN 8.3.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

    TorchVision: 0.13.0+cu113
    OpenCV: 4.6.0
    MMEngine: 0.1.0

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    Distributed launcher: slurm
    Distributed training: True
    GPU number: 8
------------------------------------------------------------
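The block above is the standard MMEngine environment report. A minimal sketch of how such a report can be regenerated, assuming mmengine exposes collect_env under mmengine.utils.dl_utils (PyTorch's bundled collector is shown as a fallback):

# Sketch: print an environment report similar to the block above.
# Assumes `collect_env` lives in `mmengine.utils.dl_utils`; if not,
# PyTorch's own collector gives a comparable summary.
try:
    from mmengine.utils.dl_utils import collect_env
    for name, value in collect_env().items():
        print(f'{name}: {value}')
except ImportError:
    # equivalent to running `python -m torch.utils.collect_env`
    from torch.utils.collect_env import main as torch_collect_env
    torch_collect_env()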
2022/10/12 10:59:10 - mmengine - INFO - Config:
default_scope = 'mmpose'
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(
        type='CheckpointHook',
        interval=10,
        max_keep_ckpts=1,
        save_best='coco/AP',
        rule='greater'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='PoseVisualizationHook', enable=False))
custom_hooks = [dict(type='SyncBuffersHook')]
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='PoseLocalVisualizer',
    vis_backends=[dict(type='LocalVisBackend')],
    name='visualizer')
log_processor = dict(
    type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
log_level = 'INFO'
load_from = None
resume = False
file_client_args = dict(
    backend='petrel',
    path_mapping=dict({
        './data/': 's3://openmmlab/datasets/detection/',
        'data/': 's3://openmmlab/datasets/detection/'
    }))
train_cfg = dict(by_epoch=True, max_epochs=210, val_interval=10)
val_cfg = dict()
test_cfg = dict()
optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.005))
param_scheduler = [
    dict(type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
    dict(type='MultiStepLR', begin=0, end=210, milestones=[170, 200], gamma=0.1, by_epoch=True)
]
auto_scale_lr = dict(base_batch_size=256)
kernel_sizes = [15, 11, 9, 7, 5]
codec = [
    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=15),
    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=11),
    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=9),
    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=7),
    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=5)
]
model = dict(
    type='TopdownPoseEstimator',
    data_preprocessor=dict(
        type='PoseDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True),
    backbone=dict(
        type='RSN',
        unit_channels=256,
        num_stages=3,
        num_units=4,
        num_blocks=[3, 4, 6, 3],
        num_steps=4,
        norm_cfg=dict(type='BN')),
    head=dict(
        type='MSPNHead',
        out_shape=(64, 48),
        unit_channels=256,
        out_channels=17,
        num_stages=3,
        num_units=4,
        norm_cfg=dict(type='BN'),
        level_indices=[0, 1, 2, 3, 0, 1, 2, 3, 1, 2, 3, 4],
        loss=[
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointOHKMMSELoss', use_target_weight=True, loss_weight=1.0),
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointOHKMMSELoss', use_target_weight=True, loss_weight=1.0),
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25),
            dict(type='KeypointOHKMMSELoss', use_target_weight=True, loss_weight=1.0)
        ],
        decoder=dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=5)),
    test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=False))
dataset_type = 'CocoDataset'
data_mode = 'topdown'
data_root = 'data/coco/'
train_pipeline = [
    dict(
        type='LoadImage',
        file_client_args=dict(
            backend='petrel',
            path_mapping=dict({
                './data/': 's3://openmmlab/datasets/detection/',
                'data/': 's3://openmmlab/datasets/detection/'
            }))),
    dict(type='GetBBoxCenterScale'),
    dict(type='RandomFlip', direction='horizontal'),
    dict(type='RandomHalfBody'),
    dict(type='RandomBBoxTransform'),
    dict(type='TopdownAffine', input_size=(192, 256)),
    dict(
        type='GenerateTarget',
        target_type='multilevel_heatmap',
        encoder=[
            dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=15),
            dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=11),
            dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=9),
            dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=7),
            dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=5)
        ]),
    dict(type='PackPoseInputs')
]
val_pipeline = [
    dict(
        type='LoadImage',
        file_client_args=dict(
            backend='petrel',
            path_mapping=dict({
                './data/': 's3://openmmlab/datasets/detection/',
                'data/': 's3://openmmlab/datasets/detection/'
            }))),
    dict(type='GetBBoxCenterScale'),
    dict(type='TopdownAffine', input_size=(192, 256)),
    dict(type='PackPoseInputs')
]
train_dataloader = dict(
    batch_size=32,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='CocoDataset',
        data_root='data/coco/',
        data_mode='topdown',
        ann_file='annotations/person_keypoints_train2017.json',
        data_prefix=dict(img='train2017/'),
        pipeline=[
            dict(
                type='LoadImage',
                file_client_args=dict(
                    backend='petrel',
                    path_mapping=dict({
                        './data/': 's3://openmmlab/datasets/detection/',
                        'data/': 's3://openmmlab/datasets/detection/'
                    }))),
            dict(type='GetBBoxCenterScale'),
            dict(type='RandomFlip', direction='horizontal'),
            dict(type='RandomHalfBody'),
            dict(type='RandomBBoxTransform'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(
                type='GenerateTarget',
                target_type='multilevel_heatmap',
                encoder=[
                    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=15),
                    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=11),
                    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=9),
                    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=7),
                    dict(type='MegviiHeatmap', input_size=(192, 256), heatmap_size=(48, 64), kernel_size=5)
                ]),
            dict(type='PackPoseInputs')
        ]))
val_dataloader = dict(
    batch_size=32,
    num_workers=4,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type='CocoDataset',
        data_root='data/coco/',
        data_mode='topdown',
        ann_file='annotations/person_keypoints_val2017.json',
        bbox_file='data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json',
        data_prefix=dict(img='val2017/'),
        test_mode=True,
        pipeline=[
            dict(
                type='LoadImage',
                file_client_args=dict(
                    backend='petrel',
                    path_mapping=dict({
                        './data/': 's3://openmmlab/datasets/detection/',
                        'data/': 's3://openmmlab/datasets/detection/'
                    }))),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(type='PackPoseInputs')
        ]))
test_dataloader = dict(
    batch_size=32,
    num_workers=4,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
    dataset=dict(
        type='CocoDataset',
        data_root='data/coco/',
        data_mode='topdown',
        ann_file='annotations/person_keypoints_val2017.json',
        bbox_file='data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json',
        data_prefix=dict(img='val2017/'),
        test_mode=True,
        pipeline=[
            dict(
                type='LoadImage',
                file_client_args=dict(
                    backend='petrel',
                    path_mapping=dict({
                        './data/': 's3://openmmlab/datasets/detection/',
                        'data/': 's3://openmmlab/datasets/detection/'
                    }))),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(type='PackPoseInputs')
        ]))
val_evaluator = dict(
    type='CocoMetric',
    ann_file='data/coco/annotations/person_keypoints_val2017.json',
    nms_mode='none')
test_evaluator = dict(
    type='CocoMetric',
    ann_file='data/coco/annotations/person_keypoints_val2017.json',
    nms_mode='none')
fp16 = dict(loss_scale='dynamic')
launcher = 'slurm'
work_dir = 'work_dirs/20221012/rsn3x'
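The five-entry codec list and the twelve-entry head loss list in the config above follow a regular pattern. A sketch of how those lists could be generated programmatically; whether the source config file is actually written this way is an assumption, the expanded values are what the dump shows:

# Sketch: the repeated codec / loss entries from the dumped config as generated lists.
kernel_sizes = [15, 11, 9, 7, 5]
codec = [
    dict(
        type='MegviiHeatmap',
        input_size=(192, 256),
        heatmap_size=(48, 64),
        kernel_size=k) for k in kernel_sizes
]

# 3 stages x 4 units: plain MSE on the first three units of each stage,
# online hard keypoint mining (OHKM) on the last unit of each stage.
num_stages, num_units = 3, 4
loss = []
for _ in range(num_stages):
    loss += [
        dict(type='KeypointMSELoss', use_target_weight=True, loss_weight=0.25)
        for _ in range(num_units - 1)
    ]
    loss += [dict(type='KeypointOHKMMSELoss', use_target_weight=True, loss_weight=1.0)]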
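The optimizer and scheduler settings amount to a 500-iteration linear warmup followed by step decay at epochs 170 and 200. With 8 GPUs and batch_size 32 the effective batch size is 256, which equals auto_scale_lr.base_batch_size, so the base LR of 0.005 is used unscaled. A small sketch of the implied schedule; the values come from the config above, while the warmup interpolation is a simplification of MMEngine's exact per-step update:

# Sketch: approximate learning rate implied by the param_scheduler above.
BASE_LR = 0.005

def approx_lr(epoch: int, global_iter: int) -> float:
    # LinearLR: ramp from 0.001 * BASE_LR to BASE_LR over the first 500 iterations.
    warmup = 0.001 + (1.0 - 0.001) * min(global_iter, 500) / 500
    # MultiStepLR: multiply by 0.1 at epoch 170 and again at epoch 200.
    decay = 0.1 ** sum(epoch >= m for m in (170, 200))
    return BASE_LR * warmup * decay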
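The dumped config is complete enough to drive MMEngine's Runner directly. A minimal sketch, assuming the dump is saved to a hypothetical file rsn3x_coco.py and that MMEngine 0.x's Config.fromfile / Runner.from_cfg API is available; MMPose's modules must be registered first (see the note after the registry warnings below):

# Sketch: rebuild this training run from the dumped config.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('rsn3x_coco.py')       # hypothetical path to the saved dump
cfg.work_dir = 'work_dirs/20221012/rsn3x'    # same work_dir as in the log
runner = Runner.from_cfg(cfg)
runner.train()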
2022/10/12 10:59:57 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "data sampler" registry tree. As a workaround, the current "data sampler" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 10:59:57 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "optimizer wrapper constructor" registry tree. As a workaround, the current "optimizer wrapper constructor" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 10:59:57 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "optimizer" registry tree. As a workaround, the current "optimizer" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 10:59:57 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "optim_wrapper" registry tree. As a workaround, the current "optim_wrapper" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 10:59:57 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "parameter scheduler" registry tree. As a workaround, the current "parameter scheduler" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 10:59:57 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "parameter scheduler" registry tree. As a workaround, the current "parameter scheduler" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 10:59:57 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "parameter scheduler" registry tree. As a workaround, the current "parameter scheduler" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 10:59:57 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "parameter scheduler" registry tree. As a workaround, the current "parameter scheduler" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 11:00:01 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "data sampler" registry tree. As a workaround, the current "data sampler" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 11:00:03 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "weight initializer" registry tree. As a workaround, the current "weight initializer" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 11:00:03 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "weight initializer" registry tree. As a workaround, the current "weight initializer" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 11:00:03 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "weight initializer" registry tree. As a workaround, the current "weight initializer" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 11:00:03 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "weight initializer" registry tree. As a workaround, the current "weight initializer" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 11:00:03 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "weight initializer" registry tree. As a workaround, the current "weight initializer" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
2022/10/12 11:00:03 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "weight initializer" registry tree. As a workaround, the current "weight initializer" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
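The repeated warnings above indicate that the 'mmpose'-scoped registries had not been populated when the runner began building components, so MMEngine fell back to its own registries, exactly as the warning text says. A minimal sketch of the usual remedy, assuming the MMPose 1.x API that exposes register_all_modules:

# Sketch: register MMPose's modules before building the runner so that
# lookups in the 'mmpose' scope succeed. `register_all_modules` and its
# `init_default_scope` argument are assumed to exist as in MMPose 1.x.
from mmpose.utils import register_all_modules

register_all_modules(init_default_scope=True)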
Name of parameter - Initialization information backbone.top.top.0.conv.weight - torch.Size([64, 3, 7, 7]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.top.top.0.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.top.top.0.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.0.conv_bn3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: 
a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.1.conv_bn3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, 
mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer1.2.conv_bn3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.downsample.conv.weight - torch.Size([128, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.downsample.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.downsample.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, 
nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn3.conv.weight - torch.Size([128, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.0.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, 
mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.1.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, 
mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.2.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, 
mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer2.3.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.downsample.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.downsample.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.downsample.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, 
nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn3.conv.weight - torch.Size([256, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.0.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): 
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer3.1.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_3.conv.weight - 
torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.2.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.3.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.4.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_2_1.bn.weight - 
torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling 
`init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer3.5.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.downsample.conv.weight - torch.Size([512, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.downsample.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.downsample.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_2_1.bn.weight - 
torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling 
`init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn3.conv.weight - torch.Size([512, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.0.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu1.conv.weight - torch.Size([832, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu1.bn.weight - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu1.bn.bias - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_1_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_1_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_1_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_2_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_2_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_2_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_2_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_2_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_2_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_3_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_4.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_4.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn_relu2_4_4.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn3.conv.weight - torch.Size([512, 832, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.1.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu1.conv.weight - torch.Size([832, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu1.bn.weight - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu1.bn.bias - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_1_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_1_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_1_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_2_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_2_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_2_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_2_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_2_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_2_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_1.conv.weight - torch.Size([208, 208, 3, 
3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_3_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_4.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_4.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn_relu2_4_4.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn3.conv.weight - torch.Size([512, 832, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.downsample.layer4.2.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up1.in_skip.conv.weight - torch.Size([256, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up1.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up1.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up1.out_skip1.conv.weight - torch.Size([512, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up1.out_skip1.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up1.out_skip1.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up1.out_skip2.conv.weight - torch.Size([512, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up1.out_skip2.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up1.out_skip2.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up2.in_skip.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up2.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up2.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up2.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up2.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up2.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up2.out_skip1.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up2.out_skip1.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` 
of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up2.out_skip1.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up2.out_skip2.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up2.out_skip2.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up2.out_skip2.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up3.in_skip.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up3.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up3.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up3.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up3.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up3.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up3.out_skip1.conv.weight - torch.Size([128, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up3.out_skip1.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up3.out_skip1.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up3.out_skip2.conv.weight - torch.Size([128, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up3.out_skip2.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up3.out_skip2.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.in_skip.conv.weight - torch.Size([256, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up4.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up4.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.0.upsample.up4.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.out_skip1.conv.weight - torch.Size([64, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up4.out_skip1.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.out_skip1.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.out_skip2.conv.weight - torch.Size([64, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up4.out_skip2.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.out_skip2.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.cross_conv.conv.weight - torch.Size([64, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.0.upsample.up4.cross_conv.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.0.upsample.up4.cross_conv.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_3.bn.bias - 
torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.0.conv_bn3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn_relu2_4_4.bn.bias - 
torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.1.conv_bn3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer1.2.conv_bn3.bn.bias - torch.Size([64]): The value is the same 
before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.downsample.conv.weight - torch.Size([128, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.downsample.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.downsample.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn3.conv.weight - torch.Size([128, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.0.conv_bn3.bn.bias - torch.Size([128]): The value is the same 
before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.1.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before 
and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.2.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before 
and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer2.3.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.downsample.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.downsample.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.downsample.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling 
`init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn3.conv.weight - torch.Size([256, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.0.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same 
before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, 
bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.1.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The 
value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, 
distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.2.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_1.bn.bias - 
torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, 
nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.3.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.4.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu1.conv.weight - torch.Size([416, 
256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer3.5.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.downsample.conv.weight - torch.Size([512, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.downsample.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.downsample.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu1.conv.weight - torch.Size([416, 
256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn3.conv.weight - torch.Size([512, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.0.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu1.conv.weight - torch.Size([832, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu1.bn.weight - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu1.bn.bias - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_1_1.conv.weight - 
torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_1_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_1_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_2_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_2_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_2_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_2_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_2_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_2_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_3_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_1.bn.weight - torch.Size([208]): The value is the same before and after 
calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_4.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_4.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn_relu2_4_4.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn3.conv.weight - torch.Size([512, 832, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.1.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu1.conv.weight - torch.Size([832, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu1.bn.weight - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu1.bn.bias - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_1_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_1_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_1_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_2_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_2_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_2_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_2_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_2_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_2_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_3_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_4.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_4.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn_relu2_4_4.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn3.conv.weight - torch.Size([512, 832, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.downsample.layer4.2.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up1.in_skip.conv.weight - torch.Size([256, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up1.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up1.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up1.out_skip1.conv.weight - torch.Size([512, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up1.out_skip1.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up1.out_skip1.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up1.out_skip2.conv.weight - torch.Size([512, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up1.out_skip2.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up1.out_skip2.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.1.upsample.up2.in_skip.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up2.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up2.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up2.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up2.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up2.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up2.out_skip1.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up2.out_skip1.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up2.out_skip1.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up2.out_skip2.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up2.out_skip2.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up2.out_skip2.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up3.in_skip.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up3.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up3.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up3.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up3.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up3.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up3.out_skip1.conv.weight - torch.Size([128, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up3.out_skip1.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up3.out_skip1.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.1.upsample.up3.out_skip2.conv.weight - torch.Size([128, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up3.out_skip2.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up3.out_skip2.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.in_skip.conv.weight - torch.Size([256, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up4.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up4.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.out_skip1.conv.weight - torch.Size([64, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up4.out_skip1.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.out_skip1.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.out_skip2.conv.weight - torch.Size([64, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up4.out_skip2.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.out_skip2.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.cross_conv.conv.weight - torch.Size([64, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.1.upsample.up4.cross_conv.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.1.upsample.up4.cross_conv.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_1.bn.weight - 
torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.0.conv_bn3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_2.bn.weight - 
torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.1.conv_bn3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
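
The repeated "The value is the same before and after calling `init_weights` of TopdownPoseEstimator" entries all belong to BatchNorm weights and biases: no initializer in this log targets them, so they presumably keep PyTorch's BatchNorm defaults. A minimal check of that assumption (plain PyTorch, not part of this run):

    import torch.nn as nn

    # PyTorch initializes BatchNorm affine parameters to weight (gamma) = 1 and
    # bias (beta) = 0; these are the values the log reports as unchanged.
    bn = nn.BatchNorm2d(256)
    print(bn.weight.detach().unique())  # tensor([1.])
    print(bn.bias.detach().unique())    # tensor([0.])
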
backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_3.bn.weight - 
torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn3.conv.weight - torch.Size([64, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer1.2.conv_bn3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.downsample.conv.weight - torch.Size([128, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.downsample.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.downsample.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu1.conv.weight - torch.Size([104, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_1_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_1_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_1_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_2_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_2_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_2_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_2_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_2_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_2_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_3.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_3_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_1.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_1.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_1.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_2.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_2.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_2.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_3.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_3.bn.weight - 
torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_3.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_4.conv.weight - torch.Size([26, 26, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_4.bn.weight - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn_relu2_4_4.bn.bias - torch.Size([26]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn3.conv.weight - torch.Size([128, 104, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.0.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_4.bn.weight - 
torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.1.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn3.bn.weight - torch.Size([128]): The 
value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.2.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn3.conv.weight - torch.Size([128, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn3.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer2.3.conv_bn3.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.downsample.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.downsample.bn.weight - torch.Size([256]): The value is the same 
before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.downsample.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu1.conv.weight - torch.Size([208, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_1_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_1_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_1_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_2_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_2_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_2_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_2_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_2_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_2_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_3_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_1.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_1.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_1.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_2.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_2.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_2.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_3.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_3.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_3.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_4.conv.weight - torch.Size([52, 52, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_4.bn.weight - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn_relu2_4_4.bn.bias - torch.Size([52]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn3.conv.weight - torch.Size([256, 208, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.0.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the 
same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.1.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): 
The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.2.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_2_1.bn.weight - 
torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling 
`init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.3.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.4.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 
3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn3.conv.weight - torch.Size([256, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn3.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer3.5.conv_bn3.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.downsample.conv.weight - torch.Size([512, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.downsample.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.downsample.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu1.conv.weight - torch.Size([416, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu1.bn.weight - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu1.bn.bias - torch.Size([416]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_1_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_1_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_1_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_2_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_2_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_2_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_2_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_2_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_2_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_1.conv.weight - torch.Size([104, 104, 3, 
3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_3_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_1.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_1.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_1.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_2.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_2.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_2.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_3.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_3.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_3.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_4.conv.weight - torch.Size([104, 104, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_4.bn.weight - torch.Size([104]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn_relu2_4_4.bn.bias - torch.Size([104]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn3.conv.weight - torch.Size([512, 416, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.0.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu1.conv.weight - torch.Size([832, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu1.bn.weight - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu1.bn.bias - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_1_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_1_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_1_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_2_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_2_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_2_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_2_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_2_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_2_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_2.conv.weight - 
torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_3_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_4.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_4.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn_relu2_4_4.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn3.conv.weight - torch.Size([512, 832, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling 
`init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.1.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu1.conv.weight - torch.Size([832, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu1.bn.weight - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu1.bn.bias - torch.Size([832]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_1_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_1_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_1_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_2_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_2_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_2_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_2_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_2_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_2_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
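A pattern worth noting in the backbone shapes logged above: in each residual block the first 1×1 conv (`conv_bn_relu1`) expands the input channels by a factor of 1.625 (128→208, 256→416, 512→832), the 3×3 convs then run in four parallel branches named `conv_bn_relu2_1_*` through `conv_bn_relu2_4_*`, each a quarter of that expanded width (52, 104, 208 respectively), and the closing 1×1 conv (`conv_bn3`) projects back to the block's output width. A quick, hypothetical sanity check of that arithmetic using values read off the log:

```python
# Channel widths read off the init log above (a hypothetical consistency check).
blocks = {
    'layer3.0': dict(in_ch=128, expanded=208, branch=52,  out_ch=256),
    'layer3.1': dict(in_ch=256, expanded=416, branch=104, out_ch=256),
    'layer4.0': dict(in_ch=256, expanded=416, branch=104, out_ch=512),
    'layer4.1': dict(in_ch=512, expanded=832, branch=208, out_ch=512),
}
num_branches = 4  # conv_bn_relu2_1_* ... conv_bn_relu2_4_* in the parameter names

for name, b in blocks.items():
    assert b['expanded'] * 8 == b['in_ch'] * 13          # 1x1 expansion by 1.625x
    assert b['branch'] * num_branches == b['expanded']   # four parallel 3x3 branches
    print(f"{name}: {b['in_ch']} -> {b['expanded']} -> 4x{b['branch']} -> {b['out_ch']}")
```

The same ratios recur throughout the blocks shown in this dump.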
backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_3_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_1.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_1.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_1.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_2.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_2.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_2.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_3.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_3.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_3.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_4.conv.weight - torch.Size([208, 208, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_4.bn.weight - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn_relu2_4_4.bn.bias - torch.Size([208]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn3.conv.weight - torch.Size([512, 832, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn3.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.downsample.layer4.2.conv_bn3.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up1.in_skip.conv.weight - torch.Size([256, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.upsample.up1.in_skip.bn.weight - torch.Size([256]): The value is the same 
before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up1.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up2.in_skip.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.upsample.up2.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up2.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up2.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.upsample.up2.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up2.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up3.in_skip.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.upsample.up3.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up3.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up3.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.upsample.up3.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up3.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up4.in_skip.conv.weight - torch.Size([256, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.upsample.up4.in_skip.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up4.in_skip.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up4.up_conv.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.multi_stage_rsn.2.upsample.up4.up_conv.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator backbone.multi_stage_rsn.2.upsample.up4.up_conv.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.0.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.0.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
head.predict_layers.0.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.0.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.0.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.0.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.1.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.1.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.1.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.1.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.1.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.1.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.2.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.2.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.2.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.2.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.2.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.2.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.3.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.3.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.3.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.3.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.3.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.3.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.4.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
head.predict_layers.4.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.4.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.4.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.4.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.4.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.5.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.5.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.5.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.5.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.5.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.5.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.6.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.6.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.6.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.6.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.6.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.6.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.7.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.7.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.7.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.7.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.7.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.7.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 
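The `KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0` entries above record He (Kaiming) normal initialization of the convolution weights, while the batch-norm `weight`/`bias` parameters are reported as unchanged by `init_weights` (they keep PyTorch's defaults of 1 and 0). Below is a minimal sketch in plain PyTorch, not the mmengine initializer itself, of what that rule means for one of the 1x1 convolutions logged above; the target standard deviation is sqrt(2 / fan_out) with fan_out = out_channels * kernel_h * kernel_w. The 3x3 prediction convolutions with weight shape [17, 256, 3, 3] follow the same rule with fan_out = 17 * 3 * 3 = 153.

import math
import torch.nn as nn

# One of the 1x1 convs from the log, weight shape [256, 256, 1, 1].
conv = nn.Conv2d(256, 256, kernel_size=1, bias=False)
nn.init.kaiming_normal_(conv.weight, a=0, mode='fan_out', nonlinearity='relu')

# mode='fan_out' with ReLU gain sqrt(2): std = sqrt(2 / fan_out).
fan_out = conv.out_channels * conv.kernel_size[0] * conv.kernel_size[1]  # 256 * 1 * 1
expected_std = math.sqrt(2.0 / fan_out)  # about 0.0884
print(f"empirical std={conv.weight.std().item():.4f}  expected std={expected_std:.4f}")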
head.predict_layers.8.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.8.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.8.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.8.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.8.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.8.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.9.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.9.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.9.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.9.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.9.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.9.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.10.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.10.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.10.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.10.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.10.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.10.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.11.conv_layers.0.conv.weight - torch.Size([256, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.11.conv_layers.0.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.11.conv_layers.0.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator head.predict_layers.11.conv_layers.1.conv.weight - torch.Size([17, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 head.predict_layers.11.conv_layers.1.bn.weight - torch.Size([17]): The value is the same before and after calling `init_weights` of 
TopdownPoseEstimator head.predict_layers.11.conv_layers.1.bn.bias - torch.Size([17]): The value is the same before and after calling `init_weights` of TopdownPoseEstimator 2022/10/12 11:00:03 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x by HardDiskBackend. 2022/10/12 11:00:47 - mmengine - INFO - Epoch(train) [1][50/586] lr: 4.954910e-04 eta: 1 day, 5:52:08 time: 0.874144 data_time: 0.188337 memory: 12959 loss_kpt: 646.907001 acc_pose: 0.070390 loss: 646.907001 2022/10/12 11:01:20 - mmengine - INFO - Epoch(train) [1][100/586] lr: 9.959920e-04 eta: 1 day, 2:06:48 time: 0.654946 data_time: 0.056716 memory: 12959 loss_kpt: 638.082263 acc_pose: 0.038325 loss: 638.082263 2022/10/12 11:01:52 - mmengine - INFO - Epoch(train) [1][150/586] lr: 1.496493e-03 eta: 1 day, 0:52:15 time: 0.656300 data_time: 0.058954 memory: 12959 loss_kpt: 630.153047 acc_pose: 0.061773 loss: 630.153047 2022/10/12 11:02:25 - mmengine - INFO - Epoch(train) [1][200/586] lr: 1.996994e-03 eta: 1 day, 0:11:33 time: 0.650131 data_time: 0.062198 memory: 12959 loss_kpt: 642.415453 acc_pose: 0.111804 loss: 642.415453 2022/10/12 11:02:58 - mmengine - INFO - Epoch(train) [1][250/586] lr: 2.497495e-03 eta: 23:48:04 time: 0.652964 data_time: 0.054426 memory: 12959 loss_kpt: 635.540636 acc_pose: 0.054080 loss: 635.540636 2022/10/12 11:03:30 - mmengine - INFO - Epoch(train) [1][300/586] lr: 2.997996e-03 eta: 23:29:54 time: 0.646134 data_time: 0.054990 memory: 12959 loss_kpt: 628.529572 acc_pose: 0.102310 loss: 628.529572 2022/10/12 11:04:02 - mmengine - INFO - Epoch(train) [1][350/586] lr: 3.498497e-03 eta: 23:17:52 time: 0.649903 data_time: 0.055889 memory: 12959 loss_kpt: 629.459880 acc_pose: 0.072646 loss: 629.459880 2022/10/12 11:04:37 - mmengine - INFO - Epoch(train) [1][400/586] lr: 3.998998e-03 eta: 23:17:05 time: 0.682650 data_time: 0.050161 memory: 12959 loss_kpt: 624.767295 acc_pose: 0.089245 loss: 624.767295 2022/10/12 11:05:11 - mmengine - INFO - Epoch(train) [1][450/586] lr: 4.499499e-03 eta: 23:16:29 time: 0.683280 data_time: 0.055246 memory: 12959 loss_kpt: 629.353782 acc_pose: 0.035919 loss: 629.353782 2022/10/12 11:05:45 - mmengine - INFO - Epoch(train) [1][500/586] lr: 5.000000e-03 eta: 23:15:01 time: 0.679002 data_time: 0.053190 memory: 12959 loss_kpt: 621.782061 acc_pose: 0.059535 loss: 621.782061 2022/10/12 11:06:18 - mmengine - INFO - Epoch(train) [1][550/586] lr: 5.000000e-03 eta: 23:12:49 time: 0.674124 data_time: 0.053425 memory: 12959 loss_kpt: 623.942493 acc_pose: 0.120257 loss: 623.942493 2022/10/12 11:06:43 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:07:17 - mmengine - INFO - Epoch(train) [2][50/586] lr: 5.000000e-03 eta: 21:54:27 time: 0.690863 data_time: 0.066329 memory: 12959 loss_kpt: 618.058107 acc_pose: 0.121858 loss: 618.058107 2022/10/12 11:07:52 - mmengine - INFO - Epoch(train) [2][100/586] lr: 5.000000e-03 eta: 22:00:15 time: 0.686814 data_time: 0.058806 memory: 12959 loss_kpt: 621.090416 acc_pose: 0.150235 loss: 621.090416 2022/10/12 11:08:26 - mmengine - INFO - Epoch(train) [2][150/586] lr: 5.000000e-03 eta: 22:04:50 time: 0.684326 data_time: 0.054286 memory: 12959 loss_kpt: 619.319335 acc_pose: 0.165810 loss: 619.319335 2022/10/12 11:09:00 - mmengine - INFO - Epoch(train) [2][200/586] lr: 5.000000e-03 eta: 22:08:15 time: 0.680384 data_time: 0.060341 memory: 12959 loss_kpt: 612.902996 acc_pose: 0.119293 loss: 612.902996 2022/10/12 11:09:34 - mmengine - INFO - 
Epoch(train) [2][250/586] lr: 5.000000e-03 eta: 22:10:47 time: 0.676965 data_time: 0.057171 memory: 12959 loss_kpt: 615.555541 acc_pose: 0.155594 loss: 615.555541 2022/10/12 11:10:07 - mmengine - INFO - Epoch(train) [2][300/586] lr: 5.000000e-03 eta: 22:11:21 time: 0.662938 data_time: 0.058069 memory: 12959 loss_kpt: 613.031815 acc_pose: 0.173167 loss: 613.031815 2022/10/12 11:10:41 - mmengine - INFO - Epoch(train) [2][350/586] lr: 5.000000e-03 eta: 22:14:00 time: 0.683281 data_time: 0.059187 memory: 12959 loss_kpt: 621.259099 acc_pose: 0.204349 loss: 621.259099 2022/10/12 11:11:16 - mmengine - INFO - Epoch(train) [2][400/586] lr: 5.000000e-03 eta: 22:17:24 time: 0.693663 data_time: 0.060703 memory: 12959 loss_kpt: 625.046593 acc_pose: 0.167555 loss: 625.046593 2022/10/12 11:11:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:11:50 - mmengine - INFO - Epoch(train) [2][450/586] lr: 5.000000e-03 eta: 22:20:12 time: 0.691477 data_time: 0.062328 memory: 12959 loss_kpt: 616.394528 acc_pose: 0.151432 loss: 616.394528 2022/10/12 11:12:24 - mmengine - INFO - Epoch(train) [2][500/586] lr: 5.000000e-03 eta: 22:22:10 time: 0.685852 data_time: 0.055256 memory: 12959 loss_kpt: 620.234188 acc_pose: 0.153997 loss: 620.234188 2022/10/12 11:13:00 - mmengine - INFO - Epoch(train) [2][550/586] lr: 5.000000e-03 eta: 22:25:16 time: 0.700990 data_time: 0.061874 memory: 12959 loss_kpt: 620.735800 acc_pose: 0.243726 loss: 620.735800 2022/10/12 11:13:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:13:59 - mmengine - INFO - Epoch(train) [3][50/586] lr: 5.000000e-03 eta: 21:47:53 time: 0.700113 data_time: 0.070021 memory: 12959 loss_kpt: 618.459906 acc_pose: 0.203094 loss: 618.459906 2022/10/12 11:14:33 - mmengine - INFO - Epoch(train) [3][100/586] lr: 5.000000e-03 eta: 21:49:50 time: 0.675397 data_time: 0.058719 memory: 12959 loss_kpt: 616.896233 acc_pose: 0.176507 loss: 616.896233 2022/10/12 11:15:07 - mmengine - INFO - Epoch(train) [3][150/586] lr: 5.000000e-03 eta: 21:51:33 time: 0.674550 data_time: 0.056671 memory: 12959 loss_kpt: 623.448435 acc_pose: 0.205666 loss: 623.448435 2022/10/12 11:15:40 - mmengine - INFO - Epoch(train) [3][200/586] lr: 5.000000e-03 eta: 21:52:32 time: 0.667108 data_time: 0.062848 memory: 12959 loss_kpt: 607.443037 acc_pose: 0.266864 loss: 607.443037 2022/10/12 11:16:14 - mmengine - INFO - Epoch(train) [3][250/586] lr: 5.000000e-03 eta: 21:54:26 time: 0.681266 data_time: 0.056330 memory: 12959 loss_kpt: 607.214744 acc_pose: 0.205590 loss: 607.214744 2022/10/12 11:16:48 - mmengine - INFO - Epoch(train) [3][300/586] lr: 5.000000e-03 eta: 21:55:31 time: 0.672106 data_time: 0.057107 memory: 12959 loss_kpt: 618.186053 acc_pose: 0.210689 loss: 618.186053 2022/10/12 11:17:21 - mmengine - INFO - Epoch(train) [3][350/586] lr: 5.000000e-03 eta: 21:55:58 time: 0.664095 data_time: 0.056532 memory: 12959 loss_kpt: 607.414651 acc_pose: 0.236014 loss: 607.414651 2022/10/12 11:17:55 - mmengine - INFO - Epoch(train) [3][400/586] lr: 5.000000e-03 eta: 21:57:42 time: 0.684907 data_time: 0.062089 memory: 12959 loss_kpt: 603.069552 acc_pose: 0.262500 loss: 603.069552 2022/10/12 11:18:29 - mmengine - INFO - Epoch(train) [3][450/586] lr: 5.000000e-03 eta: 21:59:01 time: 0.680661 data_time: 0.057986 memory: 12959 loss_kpt: 615.804514 acc_pose: 0.209646 loss: 615.804514 2022/10/12 11:19:03 - mmengine - INFO - Epoch(train) [3][500/586] lr: 5.000000e-03 eta: 21:59:58 time: 0.676325 data_time: 0.057045 memory: 
12959 loss_kpt: 616.807836 acc_pose: 0.307634 loss: 616.807836 2022/10/12 11:19:37 - mmengine - INFO - Epoch(train) [3][550/586] lr: 5.000000e-03 eta: 22:01:26 time: 0.686782 data_time: 0.055655 memory: 12959 loss_kpt: 607.929637 acc_pose: 0.250988 loss: 607.929637 2022/10/12 11:20:01 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:20:36 - mmengine - INFO - Epoch(train) [4][50/586] lr: 5.000000e-03 eta: 21:36:17 time: 0.690552 data_time: 0.071079 memory: 12959 loss_kpt: 612.369235 acc_pose: 0.280050 loss: 612.369235 2022/10/12 11:21:10 - mmengine - INFO - Epoch(train) [4][100/586] lr: 5.000000e-03 eta: 21:37:46 time: 0.678461 data_time: 0.056765 memory: 12959 loss_kpt: 599.899174 acc_pose: 0.310842 loss: 599.899174 2022/10/12 11:21:44 - mmengine - INFO - Epoch(train) [4][150/586] lr: 5.000000e-03 eta: 21:39:56 time: 0.693705 data_time: 0.059335 memory: 12959 loss_kpt: 608.511879 acc_pose: 0.305229 loss: 608.511879 2022/10/12 11:22:19 - mmengine - INFO - Epoch(train) [4][200/586] lr: 5.000000e-03 eta: 21:42:20 time: 0.700750 data_time: 0.063827 memory: 12959 loss_kpt: 610.619547 acc_pose: 0.254034 loss: 610.619547 2022/10/12 11:22:49 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:22:54 - mmengine - INFO - Epoch(train) [4][250/586] lr: 5.000000e-03 eta: 21:44:13 time: 0.693185 data_time: 0.059529 memory: 12959 loss_kpt: 609.978231 acc_pose: 0.285889 loss: 609.978231 2022/10/12 11:23:28 - mmengine - INFO - Epoch(train) [4][300/586] lr: 5.000000e-03 eta: 21:45:41 time: 0.687501 data_time: 0.055463 memory: 12959 loss_kpt: 605.224452 acc_pose: 0.353712 loss: 605.224452 2022/10/12 11:24:03 - mmengine - INFO - Epoch(train) [4][350/586] lr: 5.000000e-03 eta: 21:47:15 time: 0.691587 data_time: 0.062374 memory: 12959 loss_kpt: 610.347941 acc_pose: 0.331719 loss: 610.347941 2022/10/12 11:24:38 - mmengine - INFO - Epoch(train) [4][400/586] lr: 5.000000e-03 eta: 21:49:07 time: 0.699995 data_time: 0.054430 memory: 12959 loss_kpt: 605.813918 acc_pose: 0.305517 loss: 605.813918 2022/10/12 11:25:13 - mmengine - INFO - Epoch(train) [4][450/586] lr: 5.000000e-03 eta: 21:50:51 time: 0.699401 data_time: 0.057147 memory: 12959 loss_kpt: 609.037544 acc_pose: 0.348477 loss: 609.037544 2022/10/12 11:25:48 - mmengine - INFO - Epoch(train) [4][500/586] lr: 5.000000e-03 eta: 21:52:16 time: 0.695027 data_time: 0.059103 memory: 12959 loss_kpt: 596.926635 acc_pose: 0.302115 loss: 596.926635 2022/10/12 11:26:23 - mmengine - INFO - Epoch(train) [4][550/586] lr: 5.000000e-03 eta: 21:54:07 time: 0.706493 data_time: 0.063299 memory: 12959 loss_kpt: 605.223877 acc_pose: 0.338600 loss: 605.223877 2022/10/12 11:26:48 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:27:22 - mmengine - INFO - Epoch(train) [5][50/586] lr: 5.000000e-03 eta: 21:34:51 time: 0.686854 data_time: 0.065602 memory: 12959 loss_kpt: 606.518718 acc_pose: 0.306457 loss: 606.518718 2022/10/12 11:27:55 - mmengine - INFO - Epoch(train) [5][100/586] lr: 5.000000e-03 eta: 21:35:14 time: 0.666250 data_time: 0.059545 memory: 12959 loss_kpt: 604.576855 acc_pose: 0.229530 loss: 604.576855 2022/10/12 11:28:30 - mmengine - INFO - Epoch(train) [5][150/586] lr: 5.000000e-03 eta: 21:36:42 time: 0.694004 data_time: 0.059212 memory: 12959 loss_kpt: 602.677938 acc_pose: 0.318101 loss: 602.677938 2022/10/12 11:29:04 - mmengine - INFO - Epoch(train) [5][200/586] lr: 5.000000e-03 eta: 21:37:28 time: 0.678188 data_time: 0.055199 
memory: 12959 loss_kpt: 598.660016 acc_pose: 0.322090 loss: 598.660016 2022/10/12 11:29:39 - mmengine - INFO - Epoch(train) [5][250/586] lr: 5.000000e-03 eta: 21:38:41 time: 0.691448 data_time: 0.058793 memory: 12959 loss_kpt: 601.966149 acc_pose: 0.386387 loss: 601.966149 2022/10/12 11:30:13 - mmengine - INFO - Epoch(train) [5][300/586] lr: 5.000000e-03 eta: 21:39:31 time: 0.682951 data_time: 0.056564 memory: 12959 loss_kpt: 595.253771 acc_pose: 0.392991 loss: 595.253771 2022/10/12 11:30:48 - mmengine - INFO - Epoch(train) [5][350/586] lr: 5.000000e-03 eta: 21:40:50 time: 0.697415 data_time: 0.065720 memory: 12959 loss_kpt: 601.312266 acc_pose: 0.381073 loss: 601.312266 2022/10/12 11:31:22 - mmengine - INFO - Epoch(train) [5][400/586] lr: 5.000000e-03 eta: 21:41:40 time: 0.685769 data_time: 0.058202 memory: 12959 loss_kpt: 608.194427 acc_pose: 0.331121 loss: 608.194427 2022/10/12 11:31:56 - mmengine - INFO - Epoch(train) [5][450/586] lr: 5.000000e-03 eta: 21:42:27 time: 0.686041 data_time: 0.064869 memory: 12959 loss_kpt: 602.137688 acc_pose: 0.357055 loss: 602.137688 2022/10/12 11:32:30 - mmengine - INFO - Epoch(train) [5][500/586] lr: 5.000000e-03 eta: 21:42:49 time: 0.675772 data_time: 0.057390 memory: 12959 loss_kpt: 606.174505 acc_pose: 0.364359 loss: 606.174505 2022/10/12 11:33:04 - mmengine - INFO - Epoch(train) [5][550/586] lr: 5.000000e-03 eta: 21:43:32 time: 0.686378 data_time: 0.061589 memory: 12959 loss_kpt: 593.510110 acc_pose: 0.384460 loss: 593.510110 2022/10/12 11:33:28 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:34:03 - mmengine - INFO - Epoch(train) [6][50/586] lr: 5.000000e-03 eta: 21:28:14 time: 0.691646 data_time: 0.069002 memory: 12959 loss_kpt: 602.299196 acc_pose: 0.326571 loss: 602.299196 2022/10/12 11:34:17 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:34:37 - mmengine - INFO - Epoch(train) [6][100/586] lr: 5.000000e-03 eta: 21:28:39 time: 0.672752 data_time: 0.055129 memory: 12959 loss_kpt: 601.513373 acc_pose: 0.319665 loss: 601.513373 2022/10/12 11:35:11 - mmengine - INFO - Epoch(train) [6][150/586] lr: 5.000000e-03 eta: 21:29:17 time: 0.680179 data_time: 0.060676 memory: 12959 loss_kpt: 596.680151 acc_pose: 0.322452 loss: 596.680151 2022/10/12 11:35:44 - mmengine - INFO - Epoch(train) [6][200/586] lr: 5.000000e-03 eta: 21:29:39 time: 0.673070 data_time: 0.056420 memory: 12959 loss_kpt: 577.457501 acc_pose: 0.365228 loss: 577.457501 2022/10/12 11:36:17 - mmengine - INFO - Epoch(train) [6][250/586] lr: 5.000000e-03 eta: 21:29:42 time: 0.664019 data_time: 0.062292 memory: 12959 loss_kpt: 591.674834 acc_pose: 0.342735 loss: 591.674834 2022/10/12 11:36:51 - mmengine - INFO - Epoch(train) [6][300/586] lr: 5.000000e-03 eta: 21:29:43 time: 0.663238 data_time: 0.055086 memory: 12959 loss_kpt: 599.314417 acc_pose: 0.370558 loss: 599.314417 2022/10/12 11:37:24 - mmengine - INFO - Epoch(train) [6][350/586] lr: 5.000000e-03 eta: 21:29:53 time: 0.669236 data_time: 0.060140 memory: 12959 loss_kpt: 596.790626 acc_pose: 0.333510 loss: 596.790626 2022/10/12 11:37:57 - mmengine - INFO - Epoch(train) [6][400/586] lr: 5.000000e-03 eta: 21:29:52 time: 0.663446 data_time: 0.053633 memory: 12959 loss_kpt: 600.353921 acc_pose: 0.383329 loss: 600.353921 2022/10/12 11:38:31 - mmengine - INFO - Epoch(train) [6][450/586] lr: 5.000000e-03 eta: 21:30:05 time: 0.671805 data_time: 0.061736 memory: 12959 loss_kpt: 599.097820 acc_pose: 0.349721 loss: 599.097820 2022/10/12 11:39:05 - 
mmengine - INFO - Epoch(train) [6][500/586] lr: 5.000000e-03 eta: 21:30:22 time: 0.675059 data_time: 0.057292 memory: 12959 loss_kpt: 602.296106 acc_pose: 0.372858 loss: 602.296106 2022/10/12 11:39:38 - mmengine - INFO - Epoch(train) [6][550/586] lr: 5.000000e-03 eta: 21:30:30 time: 0.671002 data_time: 0.059557 memory: 12959 loss_kpt: 592.364421 acc_pose: 0.408241 loss: 592.364421 2022/10/12 11:40:02 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:40:37 - mmengine - INFO - Epoch(train) [7][50/586] lr: 5.000000e-03 eta: 21:17:41 time: 0.687796 data_time: 0.071278 memory: 12959 loss_kpt: 591.492419 acc_pose: 0.368916 loss: 591.492419 2022/10/12 11:41:10 - mmengine - INFO - Epoch(train) [7][100/586] lr: 5.000000e-03 eta: 21:18:00 time: 0.672748 data_time: 0.058094 memory: 12959 loss_kpt: 600.574600 acc_pose: 0.361490 loss: 600.574600 2022/10/12 11:41:44 - mmengine - INFO - Epoch(train) [7][150/586] lr: 5.000000e-03 eta: 21:18:39 time: 0.685228 data_time: 0.066661 memory: 12959 loss_kpt: 599.409713 acc_pose: 0.385382 loss: 599.409713 2022/10/12 11:42:17 - mmengine - INFO - Epoch(train) [7][200/586] lr: 5.000000e-03 eta: 21:18:36 time: 0.660798 data_time: 0.056553 memory: 12959 loss_kpt: 592.389998 acc_pose: 0.326941 loss: 592.389998 2022/10/12 11:42:51 - mmengine - INFO - Epoch(train) [7][250/586] lr: 5.000000e-03 eta: 21:18:47 time: 0.669943 data_time: 0.064138 memory: 12959 loss_kpt: 602.557912 acc_pose: 0.385919 loss: 602.557912 2022/10/12 11:43:24 - mmengine - INFO - Epoch(train) [7][300/586] lr: 5.000000e-03 eta: 21:18:51 time: 0.666400 data_time: 0.058464 memory: 12959 loss_kpt: 590.583290 acc_pose: 0.445455 loss: 590.583290 2022/10/12 11:43:58 - mmengine - INFO - Epoch(train) [7][350/586] lr: 5.000000e-03 eta: 21:19:21 time: 0.684234 data_time: 0.062104 memory: 12959 loss_kpt: 588.903662 acc_pose: 0.454017 loss: 588.903662 2022/10/12 11:44:32 - mmengine - INFO - Epoch(train) [7][400/586] lr: 5.000000e-03 eta: 21:19:26 time: 0.667903 data_time: 0.054790 memory: 12959 loss_kpt: 592.220674 acc_pose: 0.434442 loss: 592.220674 2022/10/12 11:45:06 - mmengine - INFO - Epoch(train) [7][450/586] lr: 5.000000e-03 eta: 21:19:47 time: 0.680210 data_time: 0.057019 memory: 12959 loss_kpt: 593.382327 acc_pose: 0.400432 loss: 593.382327 2022/10/12 11:45:29 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:45:40 - mmengine - INFO - Epoch(train) [7][500/586] lr: 5.000000e-03 eta: 21:20:01 time: 0.675466 data_time: 0.062709 memory: 12959 loss_kpt: 592.575139 acc_pose: 0.461192 loss: 592.575139 2022/10/12 11:46:13 - mmengine - INFO - Epoch(train) [7][550/586] lr: 5.000000e-03 eta: 21:20:10 time: 0.673664 data_time: 0.058768 memory: 12959 loss_kpt: 584.162493 acc_pose: 0.448969 loss: 584.162493 2022/10/12 11:46:37 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:47:12 - mmengine - INFO - Epoch(train) [8][50/586] lr: 5.000000e-03 eta: 21:09:13 time: 0.690439 data_time: 0.068117 memory: 12959 loss_kpt: 589.175538 acc_pose: 0.424245 loss: 589.175538 2022/10/12 11:47:46 - mmengine - INFO - Epoch(train) [8][100/586] lr: 5.000000e-03 eta: 21:09:35 time: 0.678489 data_time: 0.058084 memory: 12959 loss_kpt: 592.003452 acc_pose: 0.539189 loss: 592.003452 2022/10/12 11:48:20 - mmengine - INFO - Epoch(train) [8][150/586] lr: 5.000000e-03 eta: 21:09:52 time: 0.675678 data_time: 0.061316 memory: 12959 loss_kpt: 598.182275 acc_pose: 0.461599 loss: 598.182275 2022/10/12 
11:48:53 - mmengine - INFO - Epoch(train) [8][200/586] lr: 5.000000e-03 eta: 21:10:05 time: 0.674142 data_time: 0.056998 memory: 12959 loss_kpt: 590.070454 acc_pose: 0.461175 loss: 590.070454 2022/10/12 11:49:27 - mmengine - INFO - Epoch(train) [8][250/586] lr: 5.000000e-03 eta: 21:10:18 time: 0.674728 data_time: 0.057430 memory: 12959 loss_kpt: 583.472473 acc_pose: 0.402858 loss: 583.472473 2022/10/12 11:50:00 - mmengine - INFO - Epoch(train) [8][300/586] lr: 5.000000e-03 eta: 21:10:11 time: 0.660637 data_time: 0.063234 memory: 12959 loss_kpt: 589.759462 acc_pose: 0.464723 loss: 589.759462 2022/10/12 11:50:34 - mmengine - INFO - Epoch(train) [8][350/586] lr: 5.000000e-03 eta: 21:10:16 time: 0.670515 data_time: 0.059668 memory: 12959 loss_kpt: 587.798624 acc_pose: 0.418781 loss: 587.798624 2022/10/12 11:51:08 - mmengine - INFO - Epoch(train) [8][400/586] lr: 5.000000e-03 eta: 21:10:32 time: 0.678512 data_time: 0.055885 memory: 12959 loss_kpt: 585.590450 acc_pose: 0.375380 loss: 585.590450 2022/10/12 11:51:41 - mmengine - INFO - Epoch(train) [8][450/586] lr: 5.000000e-03 eta: 21:10:33 time: 0.668787 data_time: 0.054865 memory: 12959 loss_kpt: 580.776527 acc_pose: 0.372873 loss: 580.776527 2022/10/12 11:52:14 - mmengine - INFO - Epoch(train) [8][500/586] lr: 5.000000e-03 eta: 21:10:30 time: 0.665903 data_time: 0.060319 memory: 12959 loss_kpt: 581.818882 acc_pose: 0.494098 loss: 581.818882 2022/10/12 11:52:48 - mmengine - INFO - Epoch(train) [8][550/586] lr: 5.000000e-03 eta: 21:10:37 time: 0.674284 data_time: 0.058980 memory: 12959 loss_kpt: 583.149187 acc_pose: 0.449668 loss: 583.149187 2022/10/12 11:53:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:53:47 - mmengine - INFO - Epoch(train) [9][50/586] lr: 5.000000e-03 eta: 21:01:23 time: 0.708254 data_time: 0.065205 memory: 12959 loss_kpt: 584.606997 acc_pose: 0.499396 loss: 584.606997 2022/10/12 11:54:22 - mmengine - INFO - Epoch(train) [9][100/586] lr: 5.000000e-03 eta: 21:01:55 time: 0.691529 data_time: 0.061661 memory: 12959 loss_kpt: 583.151190 acc_pose: 0.355048 loss: 583.151190 2022/10/12 11:54:56 - mmengine - INFO - Epoch(train) [9][150/586] lr: 5.000000e-03 eta: 21:02:11 time: 0.679089 data_time: 0.056675 memory: 12959 loss_kpt: 580.133348 acc_pose: 0.469060 loss: 580.133348 2022/10/12 11:55:30 - mmengine - INFO - Epoch(train) [9][200/586] lr: 5.000000e-03 eta: 21:02:32 time: 0.684723 data_time: 0.059269 memory: 12959 loss_kpt: 576.971773 acc_pose: 0.531803 loss: 576.971773 2022/10/12 11:56:04 - mmengine - INFO - Epoch(train) [9][250/586] lr: 5.000000e-03 eta: 21:02:46 time: 0.679879 data_time: 0.062014 memory: 12959 loss_kpt: 584.628852 acc_pose: 0.462550 loss: 584.628852 2022/10/12 11:56:38 - mmengine - INFO - Epoch(train) [9][300/586] lr: 5.000000e-03 eta: 21:03:02 time: 0.681415 data_time: 0.057201 memory: 12959 loss_kpt: 578.943719 acc_pose: 0.499733 loss: 578.943719 2022/10/12 11:56:46 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 11:57:13 - mmengine - INFO - Epoch(train) [9][350/586] lr: 5.000000e-03 eta: 21:03:30 time: 0.693315 data_time: 0.057467 memory: 12959 loss_kpt: 584.754683 acc_pose: 0.510858 loss: 584.754683 2022/10/12 11:57:47 - mmengine - INFO - Epoch(train) [9][400/586] lr: 5.000000e-03 eta: 21:03:43 time: 0.681625 data_time: 0.060110 memory: 12959 loss_kpt: 581.531654 acc_pose: 0.478703 loss: 581.531654 2022/10/12 11:58:21 - mmengine - INFO - Epoch(train) [9][450/586] lr: 5.000000e-03 eta: 21:04:00 time: 
0.685256 data_time: 0.061254 memory: 12959 loss_kpt: 580.052844 acc_pose: 0.426756 loss: 580.052844 2022/10/12 11:58:55 - mmengine - INFO - Epoch(train) [9][500/586] lr: 5.000000e-03 eta: 21:04:12 time: 0.681566 data_time: 0.058374 memory: 12959 loss_kpt: 569.580989 acc_pose: 0.392825 loss: 569.580989 2022/10/12 11:59:29 - mmengine - INFO - Epoch(train) [9][550/586] lr: 5.000000e-03 eta: 21:04:17 time: 0.677070 data_time: 0.057573 memory: 12959 loss_kpt: 576.786865 acc_pose: 0.399151 loss: 576.786865 2022/10/12 11:59:53 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:00:28 - mmengine - INFO - Epoch(train) [10][50/586] lr: 5.000000e-03 eta: 20:55:55 time: 0.703114 data_time: 0.069363 memory: 12959 loss_kpt: 581.663948 acc_pose: 0.458933 loss: 581.663948 2022/10/12 12:01:03 - mmengine - INFO - Epoch(train) [10][100/586] lr: 5.000000e-03 eta: 20:56:11 time: 0.683938 data_time: 0.061063 memory: 12959 loss_kpt: 573.936962 acc_pose: 0.450638 loss: 573.936962 2022/10/12 12:01:37 - mmengine - INFO - Epoch(train) [10][150/586] lr: 5.000000e-03 eta: 20:56:23 time: 0.681280 data_time: 0.059093 memory: 12959 loss_kpt: 581.687866 acc_pose: 0.484486 loss: 581.687866 2022/10/12 12:02:10 - mmengine - INFO - Epoch(train) [10][200/586] lr: 5.000000e-03 eta: 20:56:26 time: 0.673774 data_time: 0.058964 memory: 12959 loss_kpt: 575.373083 acc_pose: 0.442037 loss: 575.373083 2022/10/12 12:02:45 - mmengine - INFO - Epoch(train) [10][250/586] lr: 5.000000e-03 eta: 20:56:41 time: 0.684942 data_time: 0.063165 memory: 12959 loss_kpt: 575.548333 acc_pose: 0.550324 loss: 575.548333 2022/10/12 12:03:19 - mmengine - INFO - Epoch(train) [10][300/586] lr: 5.000000e-03 eta: 20:56:50 time: 0.680158 data_time: 0.058234 memory: 12959 loss_kpt: 579.193250 acc_pose: 0.451245 loss: 579.193250 2022/10/12 12:03:53 - mmengine - INFO - Epoch(train) [10][350/586] lr: 5.000000e-03 eta: 20:57:03 time: 0.685113 data_time: 0.067269 memory: 12959 loss_kpt: 578.634692 acc_pose: 0.522717 loss: 578.634692 2022/10/12 12:04:27 - mmengine - INFO - Epoch(train) [10][400/586] lr: 5.000000e-03 eta: 20:57:12 time: 0.682623 data_time: 0.059081 memory: 12959 loss_kpt: 576.661405 acc_pose: 0.487395 loss: 576.661405 2022/10/12 12:05:01 - mmengine - INFO - Epoch(train) [10][450/586] lr: 5.000000e-03 eta: 20:57:23 time: 0.684558 data_time: 0.065468 memory: 12959 loss_kpt: 571.120775 acc_pose: 0.517483 loss: 571.120775 2022/10/12 12:05:35 - mmengine - INFO - Epoch(train) [10][500/586] lr: 5.000000e-03 eta: 20:57:28 time: 0.678910 data_time: 0.060414 memory: 12959 loss_kpt: 570.939926 acc_pose: 0.500472 loss: 570.939926 2022/10/12 12:06:10 - mmengine - INFO - Epoch(train) [10][550/586] lr: 5.000000e-03 eta: 20:57:41 time: 0.688543 data_time: 0.062022 memory: 12959 loss_kpt: 570.588056 acc_pose: 0.472356 loss: 570.588056 2022/10/12 12:06:34 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:06:34 - mmengine - INFO - Saving checkpoint at 10 epochs 2022/10/12 12:06:57 - mmengine - INFO - Epoch(val) [10][50/407] eta: 0:02:13 time: 0.373387 data_time: 0.113284 memory: 12959 2022/10/12 12:07:10 - mmengine - INFO - Epoch(val) [10][100/407] eta: 0:01:20 time: 0.262303 data_time: 0.007569 memory: 2407 2022/10/12 12:07:23 - mmengine - INFO - Epoch(val) [10][150/407] eta: 0:01:06 time: 0.260165 data_time: 0.008120 memory: 2407 2022/10/12 12:07:36 - mmengine - INFO - Epoch(val) [10][200/407] eta: 0:00:53 time: 0.259031 data_time: 0.008168 memory: 2407 2022/10/12 12:07:49 - 
mmengine - INFO - Epoch(val) [10][250/407] eta: 0:00:40 time: 0.260280 data_time: 0.007842 memory: 2407
2022/10/12 12:08:02 - mmengine - INFO - Epoch(val) [10][300/407] eta: 0:00:27 time: 0.259945 data_time: 0.007877 memory: 2407
2022/10/12 12:08:15 - mmengine - INFO - Epoch(val) [10][350/407] eta: 0:00:14 time: 0.259522 data_time: 0.008119 memory: 2407
2022/10/12 12:08:28 - mmengine - INFO - Epoch(val) [10][400/407] eta: 0:00:01 time: 0.262190 data_time: 0.007707 memory: 2407
2022/10/12 12:08:43 - mmengine - INFO - Evaluating CocoMetric...
2022/10/12 12:08:59 - mmengine - INFO - Epoch(val) [10][407/407] coco/AP: 0.308372 coco/AP .5: 0.637885 coco/AP .75: 0.254501 coco/AP (M): 0.312849 coco/AP (L): 0.320518 coco/AR: 0.411571 coco/AR .5: 0.736461 coco/AR .75: 0.393577 coco/AR (M): 0.390412 coco/AR (L): 0.440877
2022/10/12 12:09:01 - mmengine - INFO - The best checkpoint with 0.3084 coco/AP at 10 epoch is saved to best_coco/AP_epoch_10.pth.
2022/10/12 12:09:37 - mmengine - INFO - Epoch(train) [11][50/586] lr: 5.000000e-03 eta: 20:50:10 time: 0.707980 data_time: 0.073684 memory: 12959 loss_kpt: 588.515753 acc_pose: 0.517104 loss: 588.515753
2022/10/12 12:10:11 - mmengine - INFO - Epoch(train) [11][100/586] lr: 5.000000e-03 eta: 20:50:31 time: 0.694764 data_time: 0.064263 memory: 12959 loss_kpt: 578.659808 acc_pose: 0.422536 loss: 578.659808
2022/10/12 12:10:39 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/12 12:10:46 - mmengine - INFO - Epoch(train) [11][150/586] lr: 5.000000e-03 eta: 20:50:50 time: 0.693034 data_time: 0.061948 memory: 12959 loss_kpt: 565.522850 acc_pose: 0.394364 loss: 565.522850
2022/10/12 12:11:21 - mmengine - INFO - Epoch(train) [11][200/586] lr: 5.000000e-03 eta: 20:51:07 time: 0.692052 data_time: 0.058732 memory: 12959 loss_kpt: 576.566829 acc_pose: 0.471796 loss: 576.566829
2022/10/12 12:11:56 - mmengine - INFO - Epoch(train) [11][250/586] lr: 5.000000e-03 eta: 20:51:43 time: 0.712181 data_time: 0.063658 memory: 12959 loss_kpt: 566.181039 acc_pose: 0.393033 loss: 566.181039
2022/10/12 12:12:32 - mmengine - INFO - Epoch(train) [11][300/586] lr: 5.000000e-03 eta: 20:52:12 time: 0.707061 data_time: 0.054107 memory: 12959 loss_kpt: 565.021821 acc_pose: 0.421854 loss: 565.021821
2022/10/12 12:13:07 - mmengine - INFO - Epoch(train) [11][350/586] lr: 5.000000e-03 eta: 20:52:42 time: 0.708878 data_time: 0.057864 memory: 12959 loss_kpt: 569.698096 acc_pose: 0.462271 loss: 569.698096
2022/10/12 12:13:43 - mmengine - INFO - Epoch(train) [11][400/586] lr: 5.000000e-03 eta: 20:53:13 time: 0.710690 data_time: 0.058900 memory: 12959 loss_kpt: 565.897324 acc_pose: 0.497513 loss: 565.897324
2022/10/12 12:14:18 - mmengine - INFO - Epoch(train) [11][450/586] lr: 5.000000e-03 eta: 20:53:33 time: 0.699730 data_time: 0.057723 memory: 12959 loss_kpt: 571.218083 acc_pose: 0.480106 loss: 571.218083
2022/10/12 12:14:53 - mmengine - INFO - Epoch(train) [11][500/586] lr: 5.000000e-03 eta: 20:54:01 time: 0.709637 data_time: 0.054482 memory: 12959 loss_kpt: 574.761190 acc_pose: 0.454930 loss: 574.761190
2022/10/12 12:15:29 - mmengine - INFO - Epoch(train) [11][550/586] lr: 5.000000e-03 eta: 20:54:28 time: 0.710396 data_time: 0.058002 memory: 12959 loss_kpt: 571.068317 acc_pose: 0.481397 loss: 571.068317
2022/10/12 12:15:53 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/12 12:16:28 - mmengine - INFO - Epoch(train) [12][50/586] lr: 5.000000e-03 eta: 20:47:27 time: 0.702814 data_time: 0.073295 memory: 12959
loss_kpt: 572.125095 acc_pose: 0.454084 loss: 572.125095 2022/10/12 12:17:02 - mmengine - INFO - Epoch(train) [12][100/586] lr: 5.000000e-03 eta: 20:47:25 time: 0.675615 data_time: 0.056585 memory: 12959 loss_kpt: 572.044880 acc_pose: 0.589328 loss: 572.044880 2022/10/12 12:17:36 - mmengine - INFO - Epoch(train) [12][150/586] lr: 5.000000e-03 eta: 20:47:26 time: 0.679772 data_time: 0.058985 memory: 12959 loss_kpt: 558.073892 acc_pose: 0.553507 loss: 558.073892 2022/10/12 12:18:10 - mmengine - INFO - Epoch(train) [12][200/586] lr: 5.000000e-03 eta: 20:47:27 time: 0.680837 data_time: 0.056509 memory: 12959 loss_kpt: 569.971371 acc_pose: 0.555261 loss: 569.971371 2022/10/12 12:18:45 - mmengine - INFO - Epoch(train) [12][250/586] lr: 5.000000e-03 eta: 20:47:41 time: 0.696163 data_time: 0.064341 memory: 12959 loss_kpt: 568.949432 acc_pose: 0.470151 loss: 568.949432 2022/10/12 12:19:18 - mmengine - INFO - Epoch(train) [12][300/586] lr: 5.000000e-03 eta: 20:47:33 time: 0.671142 data_time: 0.058097 memory: 12959 loss_kpt: 560.019890 acc_pose: 0.459408 loss: 560.019890 2022/10/12 12:19:53 - mmengine - INFO - Epoch(train) [12][350/586] lr: 5.000000e-03 eta: 20:47:34 time: 0.681728 data_time: 0.062460 memory: 12959 loss_kpt: 572.076285 acc_pose: 0.508335 loss: 572.076285 2022/10/12 12:20:27 - mmengine - INFO - Epoch(train) [12][400/586] lr: 5.000000e-03 eta: 20:47:38 time: 0.686909 data_time: 0.059585 memory: 12959 loss_kpt: 559.700165 acc_pose: 0.444702 loss: 559.700165 2022/10/12 12:21:02 - mmengine - INFO - Epoch(train) [12][450/586] lr: 5.000000e-03 eta: 20:47:50 time: 0.696561 data_time: 0.061124 memory: 12959 loss_kpt: 563.175969 acc_pose: 0.497673 loss: 563.175969 2022/10/12 12:21:36 - mmengine - INFO - Epoch(train) [12][500/586] lr: 5.000000e-03 eta: 20:47:47 time: 0.679686 data_time: 0.054140 memory: 12959 loss_kpt: 560.999859 acc_pose: 0.556363 loss: 560.999859 2022/10/12 12:22:10 - mmengine - INFO - Epoch(train) [12][550/586] lr: 5.000000e-03 eta: 20:47:41 time: 0.676419 data_time: 0.055574 memory: 12959 loss_kpt: 571.826974 acc_pose: 0.462189 loss: 571.826974 2022/10/12 12:22:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:22:34 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:23:08 - mmengine - INFO - Epoch(train) [13][50/586] lr: 5.000000e-03 eta: 20:40:49 time: 0.673919 data_time: 0.066390 memory: 12959 loss_kpt: 559.990811 acc_pose: 0.534898 loss: 559.990811 2022/10/12 12:23:41 - mmengine - INFO - Epoch(train) [13][100/586] lr: 5.000000e-03 eta: 20:40:37 time: 0.666652 data_time: 0.052842 memory: 12959 loss_kpt: 562.108455 acc_pose: 0.526795 loss: 562.108455 2022/10/12 12:24:15 - mmengine - INFO - Epoch(train) [13][150/586] lr: 5.000000e-03 eta: 20:40:26 time: 0.668259 data_time: 0.054016 memory: 12959 loss_kpt: 569.242734 acc_pose: 0.527233 loss: 569.242734 2022/10/12 12:24:48 - mmengine - INFO - Epoch(train) [13][200/586] lr: 5.000000e-03 eta: 20:40:11 time: 0.663939 data_time: 0.055555 memory: 12959 loss_kpt: 554.653876 acc_pose: 0.527325 loss: 554.653876 2022/10/12 12:25:21 - mmengine - INFO - Epoch(train) [13][250/586] lr: 5.000000e-03 eta: 20:40:01 time: 0.670007 data_time: 0.054735 memory: 12959 loss_kpt: 560.033596 acc_pose: 0.499943 loss: 560.033596 2022/10/12 12:25:56 - mmengine - INFO - Epoch(train) [13][300/586] lr: 5.000000e-03 eta: 20:40:03 time: 0.686124 data_time: 0.053576 memory: 12959 loss_kpt: 563.380361 acc_pose: 0.471190 loss: 563.380361 2022/10/12 12:26:31 - 
mmengine - INFO - Epoch(train) [13][350/586] lr: 5.000000e-03 eta: 20:40:18 time: 0.702684 data_time: 0.062304 memory: 12959 loss_kpt: 561.430033 acc_pose: 0.527138 loss: 561.430033 2022/10/12 12:27:06 - mmengine - INFO - Epoch(train) [13][400/586] lr: 5.000000e-03 eta: 20:40:30 time: 0.700165 data_time: 0.055773 memory: 12959 loss_kpt: 565.653715 acc_pose: 0.570960 loss: 565.653715 2022/10/12 12:27:40 - mmengine - INFO - Epoch(train) [13][450/586] lr: 5.000000e-03 eta: 20:40:27 time: 0.681409 data_time: 0.058967 memory: 12959 loss_kpt: 552.831988 acc_pose: 0.540475 loss: 552.831988 2022/10/12 12:28:15 - mmengine - INFO - Epoch(train) [13][500/586] lr: 5.000000e-03 eta: 20:40:34 time: 0.695643 data_time: 0.054753 memory: 12959 loss_kpt: 562.291141 acc_pose: 0.426255 loss: 562.291141 2022/10/12 12:28:49 - mmengine - INFO - Epoch(train) [13][550/586] lr: 5.000000e-03 eta: 20:40:36 time: 0.688856 data_time: 0.058669 memory: 12959 loss_kpt: 566.067608 acc_pose: 0.554876 loss: 566.067608 2022/10/12 12:29:14 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:29:48 - mmengine - INFO - Epoch(train) [14][50/586] lr: 5.000000e-03 eta: 20:34:17 time: 0.679148 data_time: 0.071241 memory: 12959 loss_kpt: 562.176396 acc_pose: 0.495806 loss: 562.176396 2022/10/12 12:30:22 - mmengine - INFO - Epoch(train) [14][100/586] lr: 5.000000e-03 eta: 20:34:14 time: 0.679518 data_time: 0.057708 memory: 12959 loss_kpt: 553.298203 acc_pose: 0.532754 loss: 553.298203 2022/10/12 12:30:56 - mmengine - INFO - Epoch(train) [14][150/586] lr: 5.000000e-03 eta: 20:34:08 time: 0.677665 data_time: 0.063733 memory: 12959 loss_kpt: 558.257964 acc_pose: 0.490576 loss: 558.257964 2022/10/12 12:31:30 - mmengine - INFO - Epoch(train) [14][200/586] lr: 5.000000e-03 eta: 20:33:55 time: 0.668662 data_time: 0.056333 memory: 12959 loss_kpt: 549.669988 acc_pose: 0.574485 loss: 549.669988 2022/10/12 12:32:04 - mmengine - INFO - Epoch(train) [14][250/586] lr: 5.000000e-03 eta: 20:33:50 time: 0.678506 data_time: 0.063151 memory: 12959 loss_kpt: 561.067533 acc_pose: 0.512589 loss: 561.067533 2022/10/12 12:32:37 - mmengine - INFO - Epoch(train) [14][300/586] lr: 5.000000e-03 eta: 20:33:41 time: 0.674695 data_time: 0.055319 memory: 12959 loss_kpt: 560.721327 acc_pose: 0.466250 loss: 560.721327 2022/10/12 12:33:12 - mmengine - INFO - Epoch(train) [14][350/586] lr: 5.000000e-03 eta: 20:33:45 time: 0.693889 data_time: 0.060804 memory: 12959 loss_kpt: 549.848649 acc_pose: 0.486655 loss: 549.848649 2022/10/12 12:33:35 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:33:47 - mmengine - INFO - Epoch(train) [14][400/586] lr: 5.000000e-03 eta: 20:34:00 time: 0.708412 data_time: 0.058786 memory: 12959 loss_kpt: 561.545074 acc_pose: 0.556160 loss: 561.545074 2022/10/12 12:34:23 - mmengine - INFO - Epoch(train) [14][450/586] lr: 5.000000e-03 eta: 20:34:19 time: 0.714911 data_time: 0.064680 memory: 12959 loss_kpt: 547.404692 acc_pose: 0.522403 loss: 547.404692 2022/10/12 12:34:58 - mmengine - INFO - Epoch(train) [14][500/586] lr: 5.000000e-03 eta: 20:34:29 time: 0.703719 data_time: 0.056825 memory: 12959 loss_kpt: 559.616879 acc_pose: 0.552103 loss: 559.616879 2022/10/12 12:35:34 - mmengine - INFO - Epoch(train) [14][550/586] lr: 5.000000e-03 eta: 20:34:43 time: 0.710264 data_time: 0.060261 memory: 12959 loss_kpt: 556.121232 acc_pose: 0.556179 loss: 556.121232 2022/10/12 12:35:59 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 
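Each `Epoch(train) [epoch][iter/586]` entry above repeats the same fields (lr, eta, per-iteration time, data_time, memory, loss_kpt, acc_pose, loss), so the training curve can be recovered from this log with a short script. A minimal sketch follows; the log file name is hypothetical and only `loss_kpt` and `acc_pose` are extracted.

import re

# Matches entries of the form:
#   Epoch(train) [3][200/586] lr: ... loss_kpt: 607.443037 acc_pose: 0.266864 loss: ...
PATTERN = re.compile(
    r"Epoch\(train\)\s+\[(\d+)\]\[(\d+)/\d+\]"
    r".*?loss_kpt:\s+([\d.]+)\s+acc_pose:\s+([\d.]+)"
)

records = []
with open("rsn3x_train.log") as f:  # hypothetical path to this log
    for line in f:
        # finditer, since one physical line may contain several entries
        for m in PATTERN.finditer(line):
            epoch, it, loss_kpt, acc_pose = m.groups()
            records.append((int(epoch), int(it), float(loss_kpt), float(acc_pose)))

# acc_pose of the last logged window in each epoch, as a quick trend check
last_acc = {epoch: acc for epoch, _, _, acc in records}
print(sorted(last_acc.items()))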
2022/10/12 12:36:34 - mmengine - INFO - Epoch(train) [15][50/586] lr: 5.000000e-03 eta: 20:29:00 time: 0.695588 data_time: 0.073237 memory: 12959 loss_kpt: 546.023463 acc_pose: 0.463560 loss: 546.023463 2022/10/12 12:37:08 - mmengine - INFO - Epoch(train) [15][100/586] lr: 5.000000e-03 eta: 20:28:55 time: 0.681526 data_time: 0.052713 memory: 12959 loss_kpt: 563.615240 acc_pose: 0.616131 loss: 563.615240 2022/10/12 12:37:42 - mmengine - INFO - Epoch(train) [15][150/586] lr: 5.000000e-03 eta: 20:28:50 time: 0.683054 data_time: 0.061109 memory: 12959 loss_kpt: 558.051663 acc_pose: 0.509477 loss: 558.051663 2022/10/12 12:38:16 - mmengine - INFO - Epoch(train) [15][200/586] lr: 5.000000e-03 eta: 20:28:44 time: 0.680692 data_time: 0.055547 memory: 12959 loss_kpt: 561.057578 acc_pose: 0.471866 loss: 561.057578 2022/10/12 12:38:50 - mmengine - INFO - Epoch(train) [15][250/586] lr: 5.000000e-03 eta: 20:28:43 time: 0.688185 data_time: 0.055116 memory: 12959 loss_kpt: 552.233857 acc_pose: 0.551424 loss: 552.233857 2022/10/12 12:39:25 - mmengine - INFO - Epoch(train) [15][300/586] lr: 5.000000e-03 eta: 20:28:49 time: 0.700183 data_time: 0.057381 memory: 12959 loss_kpt: 556.836634 acc_pose: 0.623788 loss: 556.836634 2022/10/12 12:40:00 - mmengine - INFO - Epoch(train) [15][350/586] lr: 5.000000e-03 eta: 20:28:50 time: 0.693056 data_time: 0.054346 memory: 12959 loss_kpt: 549.548114 acc_pose: 0.497311 loss: 549.548114 2022/10/12 12:40:34 - mmengine - INFO - Epoch(train) [15][400/586] lr: 5.000000e-03 eta: 20:28:49 time: 0.691910 data_time: 0.057404 memory: 12959 loss_kpt: 561.523577 acc_pose: 0.626832 loss: 561.523577 2022/10/12 12:41:09 - mmengine - INFO - Epoch(train) [15][450/586] lr: 5.000000e-03 eta: 20:28:50 time: 0.693356 data_time: 0.055059 memory: 12959 loss_kpt: 555.520908 acc_pose: 0.464272 loss: 555.520908 2022/10/12 12:41:44 - mmengine - INFO - Epoch(train) [15][500/586] lr: 5.000000e-03 eta: 20:28:48 time: 0.691161 data_time: 0.061206 memory: 12959 loss_kpt: 550.248541 acc_pose: 0.582443 loss: 550.248541 2022/10/12 12:42:18 - mmengine - INFO - Epoch(train) [15][550/586] lr: 5.000000e-03 eta: 20:28:45 time: 0.688925 data_time: 0.057141 memory: 12959 loss_kpt: 552.698350 acc_pose: 0.627780 loss: 552.698350 2022/10/12 12:42:42 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:43:17 - mmengine - INFO - Epoch(train) [16][50/586] lr: 5.000000e-03 eta: 20:23:16 time: 0.685680 data_time: 0.066907 memory: 12959 loss_kpt: 550.951338 acc_pose: 0.486925 loss: 550.951338 2022/10/12 12:43:50 - mmengine - INFO - Epoch(train) [16][100/586] lr: 5.000000e-03 eta: 20:22:54 time: 0.658812 data_time: 0.056956 memory: 12959 loss_kpt: 550.128355 acc_pose: 0.567613 loss: 550.128355 2022/10/12 12:44:23 - mmengine - INFO - Epoch(train) [16][150/586] lr: 5.000000e-03 eta: 20:22:37 time: 0.666231 data_time: 0.059075 memory: 12959 loss_kpt: 552.615527 acc_pose: 0.573845 loss: 552.615527 2022/10/12 12:44:57 - mmengine - INFO - Epoch(train) [16][200/586] lr: 5.000000e-03 eta: 20:22:29 time: 0.680285 data_time: 0.063380 memory: 12959 loss_kpt: 548.371406 acc_pose: 0.558917 loss: 548.371406 2022/10/12 12:45:04 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:45:30 - mmengine - INFO - Epoch(train) [16][250/586] lr: 5.000000e-03 eta: 20:22:13 time: 0.668801 data_time: 0.057975 memory: 12959 loss_kpt: 543.561987 acc_pose: 0.574409 loss: 543.561987 2022/10/12 12:46:04 - mmengine - INFO - Epoch(train) [16][300/586] lr: 5.000000e-03 
eta: 20:21:54 time: 0.665212 data_time: 0.053682 memory: 12959 loss_kpt: 550.714115 acc_pose: 0.454469 loss: 550.714115 2022/10/12 12:46:37 - mmengine - INFO - Epoch(train) [16][350/586] lr: 5.000000e-03 eta: 20:21:35 time: 0.663188 data_time: 0.055299 memory: 12959 loss_kpt: 554.056807 acc_pose: 0.508083 loss: 554.056807 2022/10/12 12:47:10 - mmengine - INFO - Epoch(train) [16][400/586] lr: 5.000000e-03 eta: 20:21:20 time: 0.671743 data_time: 0.060126 memory: 12959 loss_kpt: 553.944975 acc_pose: 0.532283 loss: 553.944975 2022/10/12 12:47:44 - mmengine - INFO - Epoch(train) [16][450/586] lr: 5.000000e-03 eta: 20:21:03 time: 0.668270 data_time: 0.059169 memory: 12959 loss_kpt: 546.755323 acc_pose: 0.614586 loss: 546.755323 2022/10/12 12:48:17 - mmengine - INFO - Epoch(train) [16][500/586] lr: 5.000000e-03 eta: 20:20:44 time: 0.665880 data_time: 0.054781 memory: 12959 loss_kpt: 549.305453 acc_pose: 0.581538 loss: 549.305453 2022/10/12 12:48:51 - mmengine - INFO - Epoch(train) [16][550/586] lr: 5.000000e-03 eta: 20:20:31 time: 0.675458 data_time: 0.056882 memory: 12959 loss_kpt: 550.292562 acc_pose: 0.576948 loss: 550.292562 2022/10/12 12:49:15 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:49:50 - mmengine - INFO - Epoch(train) [17][50/586] lr: 5.000000e-03 eta: 20:15:35 time: 0.707711 data_time: 0.065412 memory: 12959 loss_kpt: 554.228325 acc_pose: 0.551725 loss: 554.228325 2022/10/12 12:50:24 - mmengine - INFO - Epoch(train) [17][100/586] lr: 5.000000e-03 eta: 20:15:27 time: 0.682179 data_time: 0.057312 memory: 12959 loss_kpt: 548.613281 acc_pose: 0.541014 loss: 548.613281 2022/10/12 12:50:59 - mmengine - INFO - Epoch(train) [17][150/586] lr: 5.000000e-03 eta: 20:15:23 time: 0.689714 data_time: 0.060411 memory: 12959 loss_kpt: 538.778334 acc_pose: 0.515492 loss: 538.778334 2022/10/12 12:51:32 - mmengine - INFO - Epoch(train) [17][200/586] lr: 5.000000e-03 eta: 20:15:10 time: 0.674212 data_time: 0.053618 memory: 12959 loss_kpt: 550.298172 acc_pose: 0.587591 loss: 550.298172 2022/10/12 12:52:07 - mmengine - INFO - Epoch(train) [17][250/586] lr: 5.000000e-03 eta: 20:15:04 time: 0.686833 data_time: 0.062997 memory: 12959 loss_kpt: 550.725538 acc_pose: 0.571440 loss: 550.725538 2022/10/12 12:52:41 - mmengine - INFO - Epoch(train) [17][300/586] lr: 5.000000e-03 eta: 20:14:54 time: 0.679775 data_time: 0.056080 memory: 12959 loss_kpt: 542.230420 acc_pose: 0.632481 loss: 542.230420 2022/10/12 12:53:15 - mmengine - INFO - Epoch(train) [17][350/586] lr: 5.000000e-03 eta: 20:14:44 time: 0.681702 data_time: 0.058325 memory: 12959 loss_kpt: 543.466245 acc_pose: 0.564909 loss: 543.466245 2022/10/12 12:53:48 - mmengine - INFO - Epoch(train) [17][400/586] lr: 5.000000e-03 eta: 20:14:26 time: 0.667819 data_time: 0.055102 memory: 12959 loss_kpt: 535.686701 acc_pose: 0.652136 loss: 535.686701 2022/10/12 12:54:22 - mmengine - INFO - Epoch(train) [17][450/586] lr: 5.000000e-03 eta: 20:14:10 time: 0.671080 data_time: 0.059961 memory: 12959 loss_kpt: 544.584209 acc_pose: 0.653368 loss: 544.584209 2022/10/12 12:54:55 - mmengine - INFO - Epoch(train) [17][500/586] lr: 5.000000e-03 eta: 20:13:53 time: 0.669719 data_time: 0.055583 memory: 12959 loss_kpt: 536.874441 acc_pose: 0.586298 loss: 536.874441 2022/10/12 12:55:29 - mmengine - INFO - Epoch(train) [17][550/586] lr: 5.000000e-03 eta: 20:13:36 time: 0.669862 data_time: 0.057387 memory: 12959 loss_kpt: 539.515162 acc_pose: 0.399325 loss: 539.515162 2022/10/12 12:55:52 - mmengine - INFO - Exp name: 
td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:56:19 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 12:56:27 - mmengine - INFO - Epoch(train) [18][50/586] lr: 5.000000e-03 eta: 20:08:40 time: 0.680742 data_time: 0.068910 memory: 12959 loss_kpt: 542.584550 acc_pose: 0.551815 loss: 542.584550 2022/10/12 12:57:00 - mmengine - INFO - Epoch(train) [18][100/586] lr: 5.000000e-03 eta: 20:08:23 time: 0.668190 data_time: 0.061392 memory: 12959 loss_kpt: 544.901674 acc_pose: 0.541618 loss: 544.901674 2022/10/12 12:57:33 - mmengine - INFO - Epoch(train) [18][150/586] lr: 5.000000e-03 eta: 20:08:03 time: 0.663243 data_time: 0.052196 memory: 12959 loss_kpt: 544.734600 acc_pose: 0.517538 loss: 544.734600 2022/10/12 12:58:07 - mmengine - INFO - Epoch(train) [18][200/586] lr: 5.000000e-03 eta: 20:07:46 time: 0.668507 data_time: 0.058276 memory: 12959 loss_kpt: 540.890101 acc_pose: 0.648901 loss: 540.890101 2022/10/12 12:58:40 - mmengine - INFO - Epoch(train) [18][250/586] lr: 5.000000e-03 eta: 20:07:30 time: 0.670593 data_time: 0.060904 memory: 12959 loss_kpt: 550.059031 acc_pose: 0.533728 loss: 550.059031 2022/10/12 12:59:13 - mmengine - INFO - Epoch(train) [18][300/586] lr: 5.000000e-03 eta: 20:07:06 time: 0.658392 data_time: 0.058003 memory: 12959 loss_kpt: 549.767397 acc_pose: 0.641792 loss: 549.767397 2022/10/12 12:59:46 - mmengine - INFO - Epoch(train) [18][350/586] lr: 5.000000e-03 eta: 20:06:42 time: 0.656536 data_time: 0.055265 memory: 12959 loss_kpt: 546.980305 acc_pose: 0.597935 loss: 546.980305 2022/10/12 13:00:19 - mmengine - INFO - Epoch(train) [18][400/586] lr: 5.000000e-03 eta: 20:06:25 time: 0.669283 data_time: 0.053667 memory: 12959 loss_kpt: 536.310518 acc_pose: 0.605660 loss: 536.310518 2022/10/12 13:00:53 - mmengine - INFO - Epoch(train) [18][450/586] lr: 5.000000e-03 eta: 20:06:05 time: 0.664165 data_time: 0.053228 memory: 12959 loss_kpt: 544.700430 acc_pose: 0.585854 loss: 544.700430 2022/10/12 13:01:25 - mmengine - INFO - Epoch(train) [18][500/586] lr: 5.000000e-03 eta: 20:05:41 time: 0.659181 data_time: 0.055918 memory: 12959 loss_kpt: 537.407740 acc_pose: 0.596516 loss: 537.407740 2022/10/12 13:01:59 - mmengine - INFO - Epoch(train) [18][550/586] lr: 5.000000e-03 eta: 20:05:22 time: 0.665939 data_time: 0.055734 memory: 12959 loss_kpt: 543.175867 acc_pose: 0.458402 loss: 543.175867 2022/10/12 13:02:23 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:02:57 - mmengine - INFO - Epoch(train) [19][50/586] lr: 5.000000e-03 eta: 20:00:47 time: 0.690635 data_time: 0.068951 memory: 12959 loss_kpt: 541.308608 acc_pose: 0.583203 loss: 541.308608 2022/10/12 13:03:31 - mmengine - INFO - Epoch(train) [19][100/586] lr: 5.000000e-03 eta: 20:00:29 time: 0.668618 data_time: 0.057206 memory: 12959 loss_kpt: 539.957551 acc_pose: 0.596447 loss: 539.957551 2022/10/12 13:04:04 - mmengine - INFO - Epoch(train) [19][150/586] lr: 5.000000e-03 eta: 20:00:16 time: 0.675814 data_time: 0.055327 memory: 12959 loss_kpt: 542.524803 acc_pose: 0.579389 loss: 542.524803 2022/10/12 13:04:38 - mmengine - INFO - Epoch(train) [19][200/586] lr: 5.000000e-03 eta: 20:00:05 time: 0.680740 data_time: 0.057564 memory: 12959 loss_kpt: 546.479291 acc_pose: 0.510438 loss: 546.479291 2022/10/12 13:05:12 - mmengine - INFO - Epoch(train) [19][250/586] lr: 5.000000e-03 eta: 19:59:50 time: 0.674201 data_time: 0.054760 memory: 12959 loss_kpt: 536.482832 acc_pose: 0.478701 loss: 536.482832 2022/10/12 13:05:45 - mmengine 
- INFO - Epoch(train) [19][300/586] lr: 5.000000e-03 eta: 19:59:31 time: 0.667193 data_time: 0.055606 memory: 12959 loss_kpt: 540.436111 acc_pose: 0.550954 loss: 540.436111 2022/10/12 13:06:19 - mmengine - INFO - Epoch(train) [19][350/586] lr: 5.000000e-03 eta: 19:59:14 time: 0.670517 data_time: 0.058936 memory: 12959 loss_kpt: 547.041516 acc_pose: 0.433160 loss: 547.041516 2022/10/12 13:06:53 - mmengine - INFO - Epoch(train) [19][400/586] lr: 5.000000e-03 eta: 19:58:57 time: 0.671940 data_time: 0.053384 memory: 12959 loss_kpt: 538.110607 acc_pose: 0.525149 loss: 538.110607 2022/10/12 13:07:26 - mmengine - INFO - Epoch(train) [19][450/586] lr: 5.000000e-03 eta: 19:58:41 time: 0.672406 data_time: 0.054904 memory: 12959 loss_kpt: 537.354654 acc_pose: 0.406592 loss: 537.354654 2022/10/12 13:07:27 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:08:00 - mmengine - INFO - Epoch(train) [19][500/586] lr: 5.000000e-03 eta: 19:58:23 time: 0.670115 data_time: 0.053459 memory: 12959 loss_kpt: 536.382625 acc_pose: 0.637224 loss: 536.382625 2022/10/12 13:08:33 - mmengine - INFO - Epoch(train) [19][550/586] lr: 5.000000e-03 eta: 19:58:08 time: 0.674440 data_time: 0.060155 memory: 12959 loss_kpt: 538.649935 acc_pose: 0.524494 loss: 538.649935 2022/10/12 13:08:57 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:09:31 - mmengine - INFO - Epoch(train) [20][50/586] lr: 5.000000e-03 eta: 19:53:40 time: 0.679195 data_time: 0.072315 memory: 12959 loss_kpt: 543.778336 acc_pose: 0.557577 loss: 543.778336 2022/10/12 13:10:04 - mmengine - INFO - Epoch(train) [20][100/586] lr: 5.000000e-03 eta: 19:53:15 time: 0.655741 data_time: 0.056519 memory: 12959 loss_kpt: 543.006619 acc_pose: 0.624934 loss: 543.006619 2022/10/12 13:10:37 - mmengine - INFO - Epoch(train) [20][150/586] lr: 5.000000e-03 eta: 19:52:57 time: 0.666800 data_time: 0.061823 memory: 12959 loss_kpt: 546.720226 acc_pose: 0.528853 loss: 546.720226 2022/10/12 13:11:11 - mmengine - INFO - Epoch(train) [20][200/586] lr: 5.000000e-03 eta: 19:52:38 time: 0.667803 data_time: 0.056810 memory: 12959 loss_kpt: 535.312245 acc_pose: 0.622465 loss: 535.312245 2022/10/12 13:11:45 - mmengine - INFO - Epoch(train) [20][250/586] lr: 5.000000e-03 eta: 19:52:25 time: 0.679604 data_time: 0.060495 memory: 12959 loss_kpt: 528.404489 acc_pose: 0.608606 loss: 528.404489 2022/10/12 13:12:19 - mmengine - INFO - Epoch(train) [20][300/586] lr: 5.000000e-03 eta: 19:52:17 time: 0.689768 data_time: 0.059240 memory: 12959 loss_kpt: 535.154993 acc_pose: 0.551394 loss: 535.154993 2022/10/12 13:12:53 - mmengine - INFO - Epoch(train) [20][350/586] lr: 5.000000e-03 eta: 19:52:07 time: 0.686546 data_time: 0.060656 memory: 12959 loss_kpt: 541.794492 acc_pose: 0.573067 loss: 541.794492 2022/10/12 13:13:27 - mmengine - INFO - Epoch(train) [20][400/586] lr: 5.000000e-03 eta: 19:51:51 time: 0.672796 data_time: 0.055416 memory: 12959 loss_kpt: 527.765353 acc_pose: 0.644456 loss: 527.765353 2022/10/12 13:14:01 - mmengine - INFO - Epoch(train) [20][450/586] lr: 5.000000e-03 eta: 19:51:35 time: 0.674698 data_time: 0.060237 memory: 12959 loss_kpt: 528.621202 acc_pose: 0.602099 loss: 528.621202 2022/10/12 13:14:35 - mmengine - INFO - Epoch(train) [20][500/586] lr: 5.000000e-03 eta: 19:51:20 time: 0.677985 data_time: 0.055534 memory: 12959 loss_kpt: 531.085026 acc_pose: 0.508299 loss: 531.085026 2022/10/12 13:15:09 - mmengine - INFO - Epoch(train) [20][550/586] lr: 5.000000e-03 eta: 19:51:09 time: 0.684404 
data_time: 0.062063 memory: 12959 loss_kpt: 533.399329 acc_pose: 0.535407 loss: 533.399329
2022/10/12 13:15:33 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/12 13:15:33 - mmengine - INFO - Saving checkpoint at 20 epochs
2022/10/12 13:15:51 - mmengine - INFO - Epoch(val) [20][50/407] eta: 0:01:37 time: 0.273511 data_time: 0.013265 memory: 12959
2022/10/12 13:16:04 - mmengine - INFO - Epoch(val) [20][100/407] eta: 0:01:20 time: 0.261291 data_time: 0.007915 memory: 2407
2022/10/12 13:16:17 - mmengine - INFO - Epoch(val) [20][150/407] eta: 0:01:07 time: 0.264349 data_time: 0.008244 memory: 2407
2022/10/12 13:16:30 - mmengine - INFO - Epoch(val) [20][200/407] eta: 0:00:54 time: 0.263415 data_time: 0.008062 memory: 2407
2022/10/12 13:16:43 - mmengine - INFO - Epoch(val) [20][250/407] eta: 0:00:41 time: 0.267242 data_time: 0.008404 memory: 2407
2022/10/12 13:16:57 - mmengine - INFO - Epoch(val) [20][300/407] eta: 0:00:28 time: 0.266238 data_time: 0.007792 memory: 2407
2022/10/12 13:17:10 - mmengine - INFO - Epoch(val) [20][350/407] eta: 0:00:15 time: 0.263984 data_time: 0.008005 memory: 2407
2022/10/12 13:17:23 - mmengine - INFO - Epoch(val) [20][400/407] eta: 0:00:01 time: 0.255771 data_time: 0.007470 memory: 2407
2022/10/12 13:17:37 - mmengine - INFO - Evaluating CocoMetric...
2022/10/12 13:17:53 - mmengine - INFO - Epoch(val) [20][407/407] coco/AP: 0.433429 coco/AP .5: 0.740842 coco/AP .75: 0.439423 coco/AP (M): 0.423964 coco/AP (L): 0.467636 coco/AR: 0.532604 coco/AR .5: 0.818168 coco/AR .75: 0.564074 coco/AR (M): 0.498662 coco/AR (L): 0.579339
2022/10/12 13:17:53 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_10.pth is removed
2022/10/12 13:17:55 - mmengine - INFO - The best checkpoint with 0.4334 coco/AP at 20 epoch is saved to best_coco/AP_epoch_20.pth.
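The ten `coco/AP*` and `coco/AR*` values printed after `Evaluating CocoMetric...` are the standard COCO keypoint (OKS-based) summary: AP averaged over OKS thresholds 0.50:0.95, AP at OKS 0.50 and 0.75, AP for medium and large instances, and the corresponding recalls. Between the two validation rounds above, coco/AP rose from 0.308372 at epoch 10 to 0.433429 at epoch 20, so the earlier best checkpoint was removed in favor of best_coco/AP_epoch_20.pth. A minimal sketch of the same ten-number summary computed directly with pycocotools; both file paths are hypothetical, with results.json standing for keypoint predictions dumped in COCO result format.

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("person_keypoints_val2017.json")  # hypothetical ground-truth annotations
coco_dt = coco_gt.loadRes("results.json")        # hypothetical keypoint results

evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()

# For iouType="keypoints", evaluator.stats holds exactly ten values, in the order
# AP, AP .5, AP .75, AP (M), AP (L), AR, AR .5, AR .75, AR (M), AR (L).
names = ["AP", "AP .5", "AP .75", "AP (M)", "AP (L)",
         "AR", "AR .5", "AR .75", "AR (M)", "AR (L)"]
print(dict(zip(names, evaluator.stats)))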
2022/10/12 13:18:30 - mmengine - INFO - Epoch(train) [21][50/586] lr: 5.000000e-03 eta: 19:46:55 time: 0.683826 data_time: 0.069562 memory: 12959 loss_kpt: 533.768122 acc_pose: 0.674657 loss: 533.768122 2022/10/12 13:19:03 - mmengine - INFO - Epoch(train) [21][100/586] lr: 5.000000e-03 eta: 19:46:41 time: 0.678518 data_time: 0.054059 memory: 12959 loss_kpt: 536.599214 acc_pose: 0.684138 loss: 536.599214 2022/10/12 13:19:38 - mmengine - INFO - Epoch(train) [21][150/586] lr: 5.000000e-03 eta: 19:46:29 time: 0.683001 data_time: 0.062897 memory: 12959 loss_kpt: 529.505344 acc_pose: 0.519504 loss: 529.505344 2022/10/12 13:20:11 - mmengine - INFO - Epoch(train) [21][200/586] lr: 5.000000e-03 eta: 19:46:07 time: 0.662492 data_time: 0.053500 memory: 12959 loss_kpt: 537.532878 acc_pose: 0.568663 loss: 537.532878 2022/10/12 13:20:44 - mmengine - INFO - Epoch(train) [21][250/586] lr: 5.000000e-03 eta: 19:45:48 time: 0.668048 data_time: 0.058456 memory: 12959 loss_kpt: 537.842781 acc_pose: 0.520353 loss: 537.842781 2022/10/12 13:21:05 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:21:19 - mmengine - INFO - Epoch(train) [21][300/586] lr: 5.000000e-03 eta: 19:45:40 time: 0.692405 data_time: 0.059971 memory: 12959 loss_kpt: 533.423719 acc_pose: 0.635797 loss: 533.423719 2022/10/12 13:21:53 - mmengine - INFO - Epoch(train) [21][350/586] lr: 5.000000e-03 eta: 19:45:31 time: 0.689720 data_time: 0.058622 memory: 12959 loss_kpt: 534.116519 acc_pose: 0.596725 loss: 534.116519 2022/10/12 13:22:28 - mmengine - INFO - Epoch(train) [21][400/586] lr: 5.000000e-03 eta: 19:45:20 time: 0.686743 data_time: 0.056788 memory: 12959 loss_kpt: 521.046711 acc_pose: 0.504026 loss: 521.046711 2022/10/12 13:23:03 - mmengine - INFO - Epoch(train) [21][450/586] lr: 5.000000e-03 eta: 19:45:14 time: 0.697993 data_time: 0.061940 memory: 12959 loss_kpt: 529.519368 acc_pose: 0.534178 loss: 529.519368 2022/10/12 13:23:37 - mmengine - INFO - Epoch(train) [21][500/586] lr: 5.000000e-03 eta: 19:45:08 time: 0.699058 data_time: 0.054655 memory: 12959 loss_kpt: 530.695519 acc_pose: 0.543227 loss: 530.695519 2022/10/12 13:24:12 - mmengine - INFO - Epoch(train) [21][550/586] lr: 5.000000e-03 eta: 19:45:02 time: 0.700684 data_time: 0.061097 memory: 12959 loss_kpt: 532.765204 acc_pose: 0.567180 loss: 532.765204 2022/10/12 13:24:37 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:25:11 - mmengine - INFO - Epoch(train) [22][50/586] lr: 5.000000e-03 eta: 19:41:00 time: 0.686650 data_time: 0.070868 memory: 12959 loss_kpt: 531.913148 acc_pose: 0.671535 loss: 531.913148 2022/10/12 13:25:45 - mmengine - INFO - Epoch(train) [22][100/586] lr: 5.000000e-03 eta: 19:40:41 time: 0.668295 data_time: 0.058593 memory: 12959 loss_kpt: 536.702514 acc_pose: 0.561247 loss: 536.702514 2022/10/12 13:26:19 - mmengine - INFO - Epoch(train) [22][150/586] lr: 5.000000e-03 eta: 19:40:25 time: 0.676035 data_time: 0.054850 memory: 12959 loss_kpt: 531.680709 acc_pose: 0.663877 loss: 531.680709 2022/10/12 13:26:53 - mmengine - INFO - Epoch(train) [22][200/586] lr: 5.000000e-03 eta: 19:40:13 time: 0.685425 data_time: 0.054667 memory: 12959 loss_kpt: 528.546253 acc_pose: 0.680688 loss: 528.546253 2022/10/12 13:27:27 - mmengine - INFO - Epoch(train) [22][250/586] lr: 5.000000e-03 eta: 19:39:58 time: 0.679177 data_time: 0.057786 memory: 12959 loss_kpt: 538.317163 acc_pose: 0.605267 loss: 538.317163 2022/10/12 13:28:01 - mmengine - INFO - Epoch(train) [22][300/586] lr: 5.000000e-03 
eta: 19:39:42 time: 0.679116 data_time: 0.051379 memory: 12959 loss_kpt: 534.502632 acc_pose: 0.597621 loss: 534.502632 2022/10/12 13:28:35 - mmengine - INFO - Epoch(train) [22][350/586] lr: 5.000000e-03 eta: 19:39:27 time: 0.678165 data_time: 0.063774 memory: 12959 loss_kpt: 530.165953 acc_pose: 0.558015 loss: 530.165953 2022/10/12 13:29:08 - mmengine - INFO - Epoch(train) [22][400/586] lr: 5.000000e-03 eta: 19:39:03 time: 0.659353 data_time: 0.051721 memory: 12959 loss_kpt: 529.182390 acc_pose: 0.531769 loss: 529.182390 2022/10/12 13:29:41 - mmengine - INFO - Epoch(train) [22][450/586] lr: 5.000000e-03 eta: 19:38:42 time: 0.668580 data_time: 0.062115 memory: 12959 loss_kpt: 529.992943 acc_pose: 0.632573 loss: 529.992943 2022/10/12 13:30:15 - mmengine - INFO - Epoch(train) [22][500/586] lr: 5.000000e-03 eta: 19:38:24 time: 0.672375 data_time: 0.052477 memory: 12959 loss_kpt: 529.011321 acc_pose: 0.580213 loss: 529.011321 2022/10/12 13:30:48 - mmengine - INFO - Epoch(train) [22][550/586] lr: 5.000000e-03 eta: 19:38:05 time: 0.671609 data_time: 0.059716 memory: 12959 loss_kpt: 519.606116 acc_pose: 0.486470 loss: 519.606116 2022/10/12 13:31:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:31:46 - mmengine - INFO - Epoch(train) [23][50/586] lr: 5.000000e-03 eta: 19:34:10 time: 0.681458 data_time: 0.066555 memory: 12959 loss_kpt: 536.994760 acc_pose: 0.636167 loss: 536.994760 2022/10/12 13:32:19 - mmengine - INFO - Epoch(train) [23][100/586] lr: 5.000000e-03 eta: 19:33:47 time: 0.661741 data_time: 0.058594 memory: 12959 loss_kpt: 528.072195 acc_pose: 0.566302 loss: 528.072195 2022/10/12 13:32:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:32:53 - mmengine - INFO - Epoch(train) [23][150/586] lr: 5.000000e-03 eta: 19:33:27 time: 0.666576 data_time: 0.054719 memory: 12959 loss_kpt: 528.461757 acc_pose: 0.697600 loss: 528.461757 2022/10/12 13:33:27 - mmengine - INFO - Epoch(train) [23][200/586] lr: 5.000000e-03 eta: 19:33:12 time: 0.681770 data_time: 0.053886 memory: 12959 loss_kpt: 531.100217 acc_pose: 0.628214 loss: 531.100217 2022/10/12 13:34:00 - mmengine - INFO - Epoch(train) [23][250/586] lr: 5.000000e-03 eta: 19:32:52 time: 0.667796 data_time: 0.052960 memory: 12959 loss_kpt: 524.630475 acc_pose: 0.618790 loss: 524.630475 2022/10/12 13:34:33 - mmengine - INFO - Epoch(train) [23][300/586] lr: 5.000000e-03 eta: 19:32:30 time: 0.665411 data_time: 0.058262 memory: 12959 loss_kpt: 523.854091 acc_pose: 0.505164 loss: 523.854091 2022/10/12 13:35:07 - mmengine - INFO - Epoch(train) [23][350/586] lr: 5.000000e-03 eta: 19:32:14 time: 0.678572 data_time: 0.058970 memory: 12959 loss_kpt: 526.403533 acc_pose: 0.628287 loss: 526.403533 2022/10/12 13:35:41 - mmengine - INFO - Epoch(train) [23][400/586] lr: 5.000000e-03 eta: 19:31:53 time: 0.665481 data_time: 0.055037 memory: 12959 loss_kpt: 527.859456 acc_pose: 0.609710 loss: 527.859456 2022/10/12 13:36:14 - mmengine - INFO - Epoch(train) [23][450/586] lr: 5.000000e-03 eta: 19:31:29 time: 0.661545 data_time: 0.053090 memory: 12959 loss_kpt: 522.576792 acc_pose: 0.580988 loss: 522.576792 2022/10/12 13:36:47 - mmengine - INFO - Epoch(train) [23][500/586] lr: 5.000000e-03 eta: 19:31:06 time: 0.662046 data_time: 0.053585 memory: 12959 loss_kpt: 526.519984 acc_pose: 0.703369 loss: 526.519984 2022/10/12 13:37:20 - mmengine - INFO - Epoch(train) [23][550/586] lr: 5.000000e-03 eta: 19:30:44 time: 0.666186 data_time: 0.061829 memory: 12959 loss_kpt: 519.461428 
acc_pose: 0.561057 loss: 519.461428 2022/10/12 13:37:43 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:38:17 - mmengine - INFO - Epoch(train) [24][50/586] lr: 5.000000e-03 eta: 19:26:54 time: 0.670746 data_time: 0.067957 memory: 12959 loss_kpt: 522.661519 acc_pose: 0.563256 loss: 522.661519 2022/10/12 13:38:51 - mmengine - INFO - Epoch(train) [24][100/586] lr: 5.000000e-03 eta: 19:26:41 time: 0.685025 data_time: 0.056310 memory: 12959 loss_kpt: 532.661902 acc_pose: 0.568320 loss: 532.661902 2022/10/12 13:39:26 - mmengine - INFO - Epoch(train) [24][150/586] lr: 5.000000e-03 eta: 19:26:28 time: 0.687820 data_time: 0.063084 memory: 12959 loss_kpt: 530.160236 acc_pose: 0.637856 loss: 530.160236 2022/10/12 13:40:00 - mmengine - INFO - Epoch(train) [24][200/586] lr: 5.000000e-03 eta: 19:26:20 time: 0.697560 data_time: 0.052676 memory: 12959 loss_kpt: 524.030677 acc_pose: 0.587583 loss: 524.030677 2022/10/12 13:40:35 - mmengine - INFO - Epoch(train) [24][250/586] lr: 5.000000e-03 eta: 19:26:05 time: 0.683340 data_time: 0.058424 memory: 12959 loss_kpt: 522.527577 acc_pose: 0.734355 loss: 522.527577 2022/10/12 13:41:10 - mmengine - INFO - Epoch(train) [24][300/586] lr: 5.000000e-03 eta: 19:25:56 time: 0.698054 data_time: 0.056261 memory: 12959 loss_kpt: 525.938176 acc_pose: 0.686494 loss: 525.938176 2022/10/12 13:41:45 - mmengine - INFO - Epoch(train) [24][350/586] lr: 5.000000e-03 eta: 19:25:51 time: 0.707870 data_time: 0.062672 memory: 12959 loss_kpt: 524.371275 acc_pose: 0.670741 loss: 524.371275 2022/10/12 13:42:21 - mmengine - INFO - Epoch(train) [24][400/586] lr: 5.000000e-03 eta: 19:25:49 time: 0.718302 data_time: 0.055349 memory: 12959 loss_kpt: 516.740500 acc_pose: 0.542602 loss: 516.740500 2022/10/12 13:42:56 - mmengine - INFO - Epoch(train) [24][450/586] lr: 5.000000e-03 eta: 19:25:41 time: 0.701522 data_time: 0.058834 memory: 12959 loss_kpt: 520.603326 acc_pose: 0.615185 loss: 520.603326 2022/10/12 13:43:31 - mmengine - INFO - Epoch(train) [24][500/586] lr: 5.000000e-03 eta: 19:25:35 time: 0.707365 data_time: 0.058409 memory: 12959 loss_kpt: 528.226927 acc_pose: 0.620956 loss: 528.226927 2022/10/12 13:43:47 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:44:07 - mmengine - INFO - Epoch(train) [24][550/586] lr: 5.000000e-03 eta: 19:25:29 time: 0.708445 data_time: 0.057481 memory: 12959 loss_kpt: 523.914654 acc_pose: 0.688947 loss: 523.914654 2022/10/12 13:44:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:45:06 - mmengine - INFO - Epoch(train) [25][50/586] lr: 5.000000e-03 eta: 19:21:56 time: 0.692697 data_time: 0.067747 memory: 12959 loss_kpt: 522.921111 acc_pose: 0.539460 loss: 522.921111 2022/10/12 13:45:39 - mmengine - INFO - Epoch(train) [25][100/586] lr: 5.000000e-03 eta: 19:21:34 time: 0.668014 data_time: 0.055360 memory: 12959 loss_kpt: 526.564061 acc_pose: 0.604670 loss: 526.564061 2022/10/12 13:46:13 - mmengine - INFO - Epoch(train) [25][150/586] lr: 5.000000e-03 eta: 19:21:14 time: 0.670334 data_time: 0.058505 memory: 12959 loss_kpt: 529.113401 acc_pose: 0.545403 loss: 529.113401 2022/10/12 13:46:46 - mmengine - INFO - Epoch(train) [25][200/586] lr: 5.000000e-03 eta: 19:20:52 time: 0.667562 data_time: 0.053531 memory: 12959 loss_kpt: 529.076561 acc_pose: 0.607268 loss: 529.076561 2022/10/12 13:47:20 - mmengine - INFO - Epoch(train) [25][250/586] lr: 5.000000e-03 eta: 19:20:36 time: 0.682004 data_time: 0.054691 memory: 12959 
loss_kpt: 517.018801 acc_pose: 0.619476 loss: 517.018801 2022/10/12 13:47:55 - mmengine - INFO - Epoch(train) [25][300/586] lr: 5.000000e-03 eta: 19:20:22 time: 0.686840 data_time: 0.056112 memory: 12959 loss_kpt: 517.430286 acc_pose: 0.584952 loss: 517.430286 2022/10/12 13:48:29 - mmengine - INFO - Epoch(train) [25][350/586] lr: 5.000000e-03 eta: 19:20:09 time: 0.691469 data_time: 0.054970 memory: 12959 loss_kpt: 528.376899 acc_pose: 0.700771 loss: 528.376899 2022/10/12 13:49:03 - mmengine - INFO - Epoch(train) [25][400/586] lr: 5.000000e-03 eta: 19:19:49 time: 0.671515 data_time: 0.056953 memory: 12959 loss_kpt: 521.538528 acc_pose: 0.635847 loss: 521.538528 2022/10/12 13:49:38 - mmengine - INFO - Epoch(train) [25][450/586] lr: 5.000000e-03 eta: 19:19:36 time: 0.692009 data_time: 0.059077 memory: 12959 loss_kpt: 519.522828 acc_pose: 0.637847 loss: 519.522828 2022/10/12 13:50:12 - mmengine - INFO - Epoch(train) [25][500/586] lr: 5.000000e-03 eta: 19:19:25 time: 0.696996 data_time: 0.058381 memory: 12959 loss_kpt: 523.932492 acc_pose: 0.648157 loss: 523.932492 2022/10/12 13:50:47 - mmengine - INFO - Epoch(train) [25][550/586] lr: 5.000000e-03 eta: 19:19:08 time: 0.684199 data_time: 0.058057 memory: 12959 loss_kpt: 524.933693 acc_pose: 0.589770 loss: 524.933693 2022/10/12 13:51:11 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:51:46 - mmengine - INFO - Epoch(train) [26][50/586] lr: 5.000000e-03 eta: 19:15:45 time: 0.701855 data_time: 0.070597 memory: 12959 loss_kpt: 521.437955 acc_pose: 0.623471 loss: 521.437955 2022/10/12 13:52:21 - mmengine - INFO - Epoch(train) [26][100/586] lr: 5.000000e-03 eta: 19:15:33 time: 0.694330 data_time: 0.056895 memory: 12959 loss_kpt: 525.148550 acc_pose: 0.693632 loss: 525.148550 2022/10/12 13:52:56 - mmengine - INFO - Epoch(train) [26][150/586] lr: 5.000000e-03 eta: 19:15:21 time: 0.692545 data_time: 0.057134 memory: 12959 loss_kpt: 523.661539 acc_pose: 0.610597 loss: 523.661539 2022/10/12 13:53:30 - mmengine - INFO - Epoch(train) [26][200/586] lr: 5.000000e-03 eta: 19:15:06 time: 0.687983 data_time: 0.056087 memory: 12959 loss_kpt: 514.810879 acc_pose: 0.576280 loss: 514.810879 2022/10/12 13:54:05 - mmengine - INFO - Epoch(train) [26][250/586] lr: 5.000000e-03 eta: 19:14:51 time: 0.687837 data_time: 0.065898 memory: 12959 loss_kpt: 519.917422 acc_pose: 0.649799 loss: 519.917422 2022/10/12 13:54:39 - mmengine - INFO - Epoch(train) [26][300/586] lr: 5.000000e-03 eta: 19:14:39 time: 0.696196 data_time: 0.061605 memory: 12959 loss_kpt: 513.947951 acc_pose: 0.687870 loss: 513.947951 2022/10/12 13:55:14 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:55:14 - mmengine - INFO - Epoch(train) [26][350/586] lr: 5.000000e-03 eta: 19:14:25 time: 0.691675 data_time: 0.056758 memory: 12959 loss_kpt: 518.645370 acc_pose: 0.575915 loss: 518.645370 2022/10/12 13:55:49 - mmengine - INFO - Epoch(train) [26][400/586] lr: 5.000000e-03 eta: 19:14:13 time: 0.695301 data_time: 0.057093 memory: 12959 loss_kpt: 513.728502 acc_pose: 0.639958 loss: 513.728502 2022/10/12 13:56:24 - mmengine - INFO - Epoch(train) [26][450/586] lr: 5.000000e-03 eta: 19:14:00 time: 0.695578 data_time: 0.058545 memory: 12959 loss_kpt: 527.681747 acc_pose: 0.534933 loss: 527.681747 2022/10/12 13:56:58 - mmengine - INFO - Epoch(train) [26][500/586] lr: 5.000000e-03 eta: 19:13:43 time: 0.684107 data_time: 0.054588 memory: 12959 loss_kpt: 510.624272 acc_pose: 0.664107 loss: 510.624272 2022/10/12 13:57:33 - 
mmengine - INFO - Epoch(train) [26][550/586] lr: 5.000000e-03 eta: 19:13:33 time: 0.702764 data_time: 0.059334 memory: 12959 loss_kpt: 519.385111 acc_pose: 0.722673 loss: 519.385111 2022/10/12 13:57:58 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 13:58:32 - mmengine - INFO - Epoch(train) [27][50/586] lr: 5.000000e-03 eta: 19:10:09 time: 0.682749 data_time: 0.069848 memory: 12959 loss_kpt: 513.957566 acc_pose: 0.664910 loss: 513.957566 2022/10/12 13:59:06 - mmengine - INFO - Epoch(train) [27][100/586] lr: 5.000000e-03 eta: 19:09:51 time: 0.677917 data_time: 0.059908 memory: 12959 loss_kpt: 514.675607 acc_pose: 0.606904 loss: 514.675607 2022/10/12 13:59:40 - mmengine - INFO - Epoch(train) [27][150/586] lr: 5.000000e-03 eta: 19:09:35 time: 0.688123 data_time: 0.060653 memory: 12959 loss_kpt: 516.717473 acc_pose: 0.590184 loss: 516.717473 2022/10/12 14:00:14 - mmengine - INFO - Epoch(train) [27][200/586] lr: 5.000000e-03 eta: 19:09:17 time: 0.680112 data_time: 0.058475 memory: 12959 loss_kpt: 509.315278 acc_pose: 0.527329 loss: 509.315278 2022/10/12 14:00:48 - mmengine - INFO - Epoch(train) [27][250/586] lr: 5.000000e-03 eta: 19:09:02 time: 0.688408 data_time: 0.057710 memory: 12959 loss_kpt: 522.418299 acc_pose: 0.606241 loss: 522.418299 2022/10/12 14:01:22 - mmengine - INFO - Epoch(train) [27][300/586] lr: 5.000000e-03 eta: 19:08:42 time: 0.676341 data_time: 0.061673 memory: 12959 loss_kpt: 517.184221 acc_pose: 0.624247 loss: 517.184221 2022/10/12 14:01:57 - mmengine - INFO - Epoch(train) [27][350/586] lr: 5.000000e-03 eta: 19:08:28 time: 0.694909 data_time: 0.058499 memory: 12959 loss_kpt: 517.984861 acc_pose: 0.574177 loss: 517.984861 2022/10/12 14:02:31 - mmengine - INFO - Epoch(train) [27][400/586] lr: 5.000000e-03 eta: 19:08:12 time: 0.688386 data_time: 0.057073 memory: 12959 loss_kpt: 507.029854 acc_pose: 0.635091 loss: 507.029854 2022/10/12 14:03:06 - mmengine - INFO - Epoch(train) [27][450/586] lr: 5.000000e-03 eta: 19:07:55 time: 0.684081 data_time: 0.054295 memory: 12959 loss_kpt: 513.656194 acc_pose: 0.647355 loss: 513.656194 2022/10/12 14:03:40 - mmengine - INFO - Epoch(train) [27][500/586] lr: 5.000000e-03 eta: 19:07:39 time: 0.687441 data_time: 0.058544 memory: 12959 loss_kpt: 512.830812 acc_pose: 0.618124 loss: 512.830812 2022/10/12 14:04:15 - mmengine - INFO - Epoch(train) [27][550/586] lr: 5.000000e-03 eta: 19:07:23 time: 0.689509 data_time: 0.059422 memory: 12959 loss_kpt: 522.208792 acc_pose: 0.634398 loss: 522.208792 2022/10/12 14:04:39 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:05:12 - mmengine - INFO - Epoch(train) [28][50/586] lr: 5.000000e-03 eta: 19:04:03 time: 0.676084 data_time: 0.070837 memory: 12959 loss_kpt: 519.636661 acc_pose: 0.711442 loss: 519.636661 2022/10/12 14:05:45 - mmengine - INFO - Epoch(train) [28][100/586] lr: 5.000000e-03 eta: 19:03:38 time: 0.661458 data_time: 0.058983 memory: 12959 loss_kpt: 522.481890 acc_pose: 0.592134 loss: 522.481890 2022/10/12 14:06:19 - mmengine - INFO - Epoch(train) [28][150/586] lr: 5.000000e-03 eta: 19:03:17 time: 0.673268 data_time: 0.059787 memory: 12959 loss_kpt: 510.538447 acc_pose: 0.612793 loss: 510.538447 2022/10/12 14:06:38 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:06:52 - mmengine - INFO - Epoch(train) [28][200/586] lr: 5.000000e-03 eta: 19:02:53 time: 0.664355 data_time: 0.063584 memory: 12959 loss_kpt: 514.901353 acc_pose: 0.689474 loss: 514.901353 
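Each training record above also carries `time` (seconds per iteration, averaged over the 50-iteration logging window), `data_time` (the share of that spent waiting on the dataloader), and a running `eta`. A coarse cross-check of that `eta` is simply remaining iterations times a typical iteration time; the back-of-envelope sketch below does this for the point the log has reached here (epoch 28, iteration 200 of 586, 210 epochs planned, roughly 0.67 s per iteration). It is only an approximation read off the logged numbers, not mmengine's own ETA computation, which keeps its own running averages and therefore reports a somewhat different value (`eta: 19:02:53` at this point).

```python
# Back-of-envelope ETA from values visible in the surrounding log records.
# This is NOT mmengine's ETA logic, just remaining_iters * typical_iter_time.
iters_per_epoch = 586        # "[28][200/586]" -> 586 training iterations per epoch
max_epochs = 210             # "-210e" in the experiment name
current_epoch, current_iter = 28, 200
typical_iter_time = 0.67     # representative "time:" value from nearby entries, seconds

remaining_iters = (max_epochs - current_epoch) * iters_per_epoch + (iters_per_epoch - current_iter)
eta_seconds = int(remaining_iters * typical_iter_time)

hours, remainder = divmod(eta_seconds, 3600)
minutes, seconds = divmod(remainder, 60)
# Prints roughly "~19:55:xx", in the same range as the eta values logged around epoch 28.
print(f"~{hours}:{minutes:02d}:{seconds:02d} of training remaining")
```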
2022/10/12 14:07:26 - mmengine - INFO - Epoch(train) [28][250/586] lr: 5.000000e-03 eta: 19:02:31 time: 0.669589 data_time: 0.059986 memory: 12959 loss_kpt: 521.687710 acc_pose: 0.686500 loss: 521.687710 2022/10/12 14:07:59 - mmengine - INFO - Epoch(train) [28][300/586] lr: 5.000000e-03 eta: 19:02:06 time: 0.662174 data_time: 0.056010 memory: 12959 loss_kpt: 515.282176 acc_pose: 0.652665 loss: 515.282176 2022/10/12 14:08:32 - mmengine - INFO - Epoch(train) [28][350/586] lr: 5.000000e-03 eta: 19:01:42 time: 0.666389 data_time: 0.061100 memory: 12959 loss_kpt: 518.088631 acc_pose: 0.648927 loss: 518.088631 2022/10/12 14:09:05 - mmengine - INFO - Epoch(train) [28][400/586] lr: 5.000000e-03 eta: 19:01:14 time: 0.652168 data_time: 0.056589 memory: 12959 loss_kpt: 504.867592 acc_pose: 0.651311 loss: 504.867592 2022/10/12 14:09:38 - mmengine - INFO - Epoch(train) [28][450/586] lr: 5.000000e-03 eta: 19:00:50 time: 0.664555 data_time: 0.057867 memory: 12959 loss_kpt: 506.619093 acc_pose: 0.725408 loss: 506.619093 2022/10/12 14:10:12 - mmengine - INFO - Epoch(train) [28][500/586] lr: 5.000000e-03 eta: 19:00:28 time: 0.672545 data_time: 0.056541 memory: 12959 loss_kpt: 505.491956 acc_pose: 0.539380 loss: 505.491956 2022/10/12 14:10:45 - mmengine - INFO - Epoch(train) [28][550/586] lr: 5.000000e-03 eta: 19:00:06 time: 0.672082 data_time: 0.061096 memory: 12959 loss_kpt: 515.321855 acc_pose: 0.600547 loss: 515.321855 2022/10/12 14:11:09 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:11:43 - mmengine - INFO - Epoch(train) [29][50/586] lr: 5.000000e-03 eta: 18:56:55 time: 0.683522 data_time: 0.066222 memory: 12959 loss_kpt: 513.611379 acc_pose: 0.666720 loss: 513.611379 2022/10/12 14:12:17 - mmengine - INFO - Epoch(train) [29][100/586] lr: 5.000000e-03 eta: 18:56:33 time: 0.669495 data_time: 0.060720 memory: 12959 loss_kpt: 504.272493 acc_pose: 0.684223 loss: 504.272493 2022/10/12 14:12:50 - mmengine - INFO - Epoch(train) [29][150/586] lr: 5.000000e-03 eta: 18:56:08 time: 0.663063 data_time: 0.057587 memory: 12959 loss_kpt: 504.566888 acc_pose: 0.732624 loss: 504.566888 2022/10/12 14:13:24 - mmengine - INFO - Epoch(train) [29][200/586] lr: 5.000000e-03 eta: 18:55:49 time: 0.679753 data_time: 0.057169 memory: 12959 loss_kpt: 503.445027 acc_pose: 0.688022 loss: 503.445027 2022/10/12 14:13:57 - mmengine - INFO - Epoch(train) [29][250/586] lr: 5.000000e-03 eta: 18:55:26 time: 0.668391 data_time: 0.055673 memory: 12959 loss_kpt: 509.971625 acc_pose: 0.587132 loss: 509.971625 2022/10/12 14:14:31 - mmengine - INFO - Epoch(train) [29][300/586] lr: 5.000000e-03 eta: 18:55:03 time: 0.670996 data_time: 0.058186 memory: 12959 loss_kpt: 503.674867 acc_pose: 0.622424 loss: 503.674867 2022/10/12 14:15:05 - mmengine - INFO - Epoch(train) [29][350/586] lr: 5.000000e-03 eta: 18:54:43 time: 0.675890 data_time: 0.054656 memory: 12959 loss_kpt: 510.960264 acc_pose: 0.726051 loss: 510.960264 2022/10/12 14:15:38 - mmengine - INFO - Epoch(train) [29][400/586] lr: 5.000000e-03 eta: 18:54:20 time: 0.669031 data_time: 0.058363 memory: 12959 loss_kpt: 500.810154 acc_pose: 0.668307 loss: 500.810154 2022/10/12 14:16:11 - mmengine - INFO - Epoch(train) [29][450/586] lr: 5.000000e-03 eta: 18:53:55 time: 0.664522 data_time: 0.054866 memory: 12959 loss_kpt: 507.202364 acc_pose: 0.723952 loss: 507.202364 2022/10/12 14:16:45 - mmengine - INFO - Epoch(train) [29][500/586] lr: 5.000000e-03 eta: 18:53:35 time: 0.679006 data_time: 0.059485 memory: 12959 loss_kpt: 508.040562 acc_pose: 0.638488 
loss: 508.040562 2022/10/12 14:17:19 - mmengine - INFO - Epoch(train) [29][550/586] lr: 5.000000e-03 eta: 18:53:14 time: 0.675522 data_time: 0.054774 memory: 12959 loss_kpt: 512.815751 acc_pose: 0.683554 loss: 512.815751 2022/10/12 14:17:43 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:17:48 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:18:18 - mmengine - INFO - Epoch(train) [30][50/586] lr: 5.000000e-03 eta: 18:50:11 time: 0.689852 data_time: 0.068445 memory: 12959 loss_kpt: 504.410349 acc_pose: 0.662894 loss: 504.410349 2022/10/12 14:18:52 - mmengine - INFO - Epoch(train) [30][100/586] lr: 5.000000e-03 eta: 18:49:53 time: 0.686046 data_time: 0.058529 memory: 12959 loss_kpt: 506.161243 acc_pose: 0.650048 loss: 506.161243 2022/10/12 14:19:26 - mmengine - INFO - Epoch(train) [30][150/586] lr: 5.000000e-03 eta: 18:49:34 time: 0.680118 data_time: 0.056738 memory: 12959 loss_kpt: 505.901006 acc_pose: 0.738072 loss: 505.901006 2022/10/12 14:20:00 - mmengine - INFO - Epoch(train) [30][200/586] lr: 5.000000e-03 eta: 18:49:12 time: 0.673611 data_time: 0.056789 memory: 12959 loss_kpt: 494.904222 acc_pose: 0.637910 loss: 494.904222 2022/10/12 14:20:34 - mmengine - INFO - Epoch(train) [30][250/586] lr: 5.000000e-03 eta: 18:48:53 time: 0.680779 data_time: 0.066156 memory: 12959 loss_kpt: 495.035424 acc_pose: 0.639057 loss: 495.035424 2022/10/12 14:21:08 - mmengine - INFO - Epoch(train) [30][300/586] lr: 5.000000e-03 eta: 18:48:32 time: 0.677415 data_time: 0.053536 memory: 12959 loss_kpt: 504.762902 acc_pose: 0.650261 loss: 504.762902 2022/10/12 14:21:42 - mmengine - INFO - Epoch(train) [30][350/586] lr: 5.000000e-03 eta: 18:48:12 time: 0.678684 data_time: 0.060345 memory: 12959 loss_kpt: 497.862262 acc_pose: 0.642938 loss: 497.862262 2022/10/12 14:22:15 - mmengine - INFO - Epoch(train) [30][400/586] lr: 5.000000e-03 eta: 18:47:50 time: 0.674926 data_time: 0.055635 memory: 12959 loss_kpt: 497.229515 acc_pose: 0.607959 loss: 497.229515 2022/10/12 14:22:49 - mmengine - INFO - Epoch(train) [30][450/586] lr: 5.000000e-03 eta: 18:47:28 time: 0.671976 data_time: 0.059646 memory: 12959 loss_kpt: 504.060659 acc_pose: 0.657756 loss: 504.060659 2022/10/12 14:23:23 - mmengine - INFO - Epoch(train) [30][500/586] lr: 5.000000e-03 eta: 18:47:06 time: 0.674465 data_time: 0.052781 memory: 12959 loss_kpt: 498.822127 acc_pose: 0.562383 loss: 498.822127 2022/10/12 14:23:56 - mmengine - INFO - Epoch(train) [30][550/586] lr: 5.000000e-03 eta: 18:46:44 time: 0.673438 data_time: 0.063186 memory: 12959 loss_kpt: 497.096584 acc_pose: 0.680855 loss: 497.096584 2022/10/12 14:24:20 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:24:20 - mmengine - INFO - Saving checkpoint at 30 epochs 2022/10/12 14:24:38 - mmengine - INFO - Epoch(val) [30][50/407] eta: 0:01:35 time: 0.267895 data_time: 0.012192 memory: 12959 2022/10/12 14:24:51 - mmengine - INFO - Epoch(val) [30][100/407] eta: 0:01:20 time: 0.262779 data_time: 0.008153 memory: 2407 2022/10/12 14:25:04 - mmengine - INFO - Epoch(val) [30][150/407] eta: 0:01:07 time: 0.263045 data_time: 0.007756 memory: 2407 2022/10/12 14:25:17 - mmengine - INFO - Epoch(val) [30][200/407] eta: 0:00:54 time: 0.265580 data_time: 0.008175 memory: 2407 2022/10/12 14:25:30 - mmengine - INFO - Epoch(val) [30][250/407] eta: 0:00:41 time: 0.262010 data_time: 0.007766 memory: 2407 2022/10/12 14:25:44 - mmengine - INFO - Epoch(val) [30][300/407] eta: 0:00:28 
time: 0.264019 data_time: 0.008102 memory: 2407 2022/10/12 14:25:57 - mmengine - INFO - Epoch(val) [30][350/407] eta: 0:00:14 time: 0.261492 data_time: 0.007719 memory: 2407 2022/10/12 14:26:10 - mmengine - INFO - Epoch(val) [30][400/407] eta: 0:00:01 time: 0.262614 data_time: 0.007738 memory: 2407 2022/10/12 14:26:24 - mmengine - INFO - Evaluating CocoMetric... 2022/10/12 14:26:40 - mmengine - INFO - Epoch(val) [30][407/407] coco/AP: 0.498277 coco/AP .5: 0.773677 coco/AP .75: 0.541242 coco/AP (M): 0.509498 coco/AP (L): 0.510516 coco/AR: 0.605463 coco/AR .5: 0.856423 coco/AR .75: 0.659477 coco/AR (M): 0.577329 coco/AR (L): 0.644444 2022/10/12 14:26:40 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_20.pth is removed 2022/10/12 14:26:42 - mmengine - INFO - The best checkpoint with 0.4983 coco/AP at 30 epoch is saved to best_coco/AP_epoch_30.pth. 2022/10/12 14:27:16 - mmengine - INFO - Epoch(train) [31][50/586] lr: 5.000000e-03 eta: 18:43:43 time: 0.681337 data_time: 0.061782 memory: 12959 loss_kpt: 501.378475 acc_pose: 0.677936 loss: 501.378475 2022/10/12 14:27:50 - mmengine - INFO - Epoch(train) [31][100/586] lr: 5.000000e-03 eta: 18:43:24 time: 0.684013 data_time: 0.064451 memory: 12959 loss_kpt: 498.884962 acc_pose: 0.687230 loss: 498.884962 2022/10/12 14:28:25 - mmengine - INFO - Epoch(train) [31][150/586] lr: 5.000000e-03 eta: 18:43:07 time: 0.690181 data_time: 0.061396 memory: 12959 loss_kpt: 494.321825 acc_pose: 0.659821 loss: 494.321825 2022/10/12 14:28:59 - mmengine - INFO - Epoch(train) [31][200/586] lr: 5.000000e-03 eta: 18:42:46 time: 0.674705 data_time: 0.060483 memory: 12959 loss_kpt: 503.709810 acc_pose: 0.669683 loss: 503.709810 2022/10/12 14:29:32 - mmengine - INFO - Epoch(train) [31][250/586] lr: 5.000000e-03 eta: 18:42:24 time: 0.676790 data_time: 0.057547 memory: 12959 loss_kpt: 497.326025 acc_pose: 0.641728 loss: 497.326025 2022/10/12 14:30:07 - mmengine - INFO - Epoch(train) [31][300/586] lr: 5.000000e-03 eta: 18:42:05 time: 0.681306 data_time: 0.061148 memory: 12959 loss_kpt: 488.200339 acc_pose: 0.713436 loss: 488.200339 2022/10/12 14:30:40 - mmengine - INFO - Epoch(train) [31][350/586] lr: 5.000000e-03 eta: 18:41:44 time: 0.677767 data_time: 0.059314 memory: 12959 loss_kpt: 508.702582 acc_pose: 0.672452 loss: 508.702582 2022/10/12 14:31:15 - mmengine - INFO - Epoch(train) [31][400/586] lr: 5.000000e-03 eta: 18:41:24 time: 0.681859 data_time: 0.060027 memory: 12959 loss_kpt: 490.707431 acc_pose: 0.655633 loss: 490.707431 2022/10/12 14:31:28 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:31:49 - mmengine - INFO - Epoch(train) [31][450/586] lr: 5.000000e-03 eta: 18:41:06 time: 0.687976 data_time: 0.061717 memory: 12959 loss_kpt: 495.641984 acc_pose: 0.781246 loss: 495.641984 2022/10/12 14:32:22 - mmengine - INFO - Epoch(train) [31][500/586] lr: 5.000000e-03 eta: 18:40:42 time: 0.667968 data_time: 0.058496 memory: 12959 loss_kpt: 498.331005 acc_pose: 0.688737 loss: 498.331005 2022/10/12 14:32:56 - mmengine - INFO - Epoch(train) [31][550/586] lr: 5.000000e-03 eta: 18:40:22 time: 0.683061 data_time: 0.062893 memory: 12959 loss_kpt: 500.717766 acc_pose: 0.587225 loss: 500.717766 2022/10/12 14:33:21 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:33:56 - mmengine - INFO - Epoch(train) [32][50/586] lr: 5.000000e-03 eta: 18:37:33 time: 0.705422 data_time: 0.075745 memory: 12959 
loss_kpt: 505.995220 acc_pose: 0.689283 loss: 505.995220 2022/10/12 14:34:31 - mmengine - INFO - Epoch(train) [32][100/586] lr: 5.000000e-03 eta: 18:37:17 time: 0.696556 data_time: 0.057337 memory: 12959 loss_kpt: 493.869624 acc_pose: 0.690607 loss: 493.869624 2022/10/12 14:35:06 - mmengine - INFO - Epoch(train) [32][150/586] lr: 5.000000e-03 eta: 18:37:01 time: 0.693759 data_time: 0.060241 memory: 12959 loss_kpt: 498.463145 acc_pose: 0.696406 loss: 498.463145 2022/10/12 14:35:41 - mmengine - INFO - Epoch(train) [32][200/586] lr: 5.000000e-03 eta: 18:36:48 time: 0.707141 data_time: 0.055353 memory: 12959 loss_kpt: 486.670351 acc_pose: 0.610419 loss: 486.670351 2022/10/12 14:36:16 - mmengine - INFO - Epoch(train) [32][250/586] lr: 5.000000e-03 eta: 18:36:34 time: 0.705188 data_time: 0.063293 memory: 12959 loss_kpt: 485.537505 acc_pose: 0.580947 loss: 485.537505 2022/10/12 14:36:51 - mmengine - INFO - Epoch(train) [32][300/586] lr: 5.000000e-03 eta: 18:36:20 time: 0.701981 data_time: 0.061192 memory: 12959 loss_kpt: 498.561804 acc_pose: 0.727198 loss: 498.561804 2022/10/12 14:37:27 - mmengine - INFO - Epoch(train) [32][350/586] lr: 5.000000e-03 eta: 18:36:08 time: 0.711889 data_time: 0.058522 memory: 12959 loss_kpt: 493.809958 acc_pose: 0.603969 loss: 493.809958 2022/10/12 14:38:02 - mmengine - INFO - Epoch(train) [32][400/586] lr: 5.000000e-03 eta: 18:35:53 time: 0.702444 data_time: 0.057482 memory: 12959 loss_kpt: 489.170882 acc_pose: 0.686526 loss: 489.170882 2022/10/12 14:38:38 - mmengine - INFO - Epoch(train) [32][450/586] lr: 5.000000e-03 eta: 18:35:41 time: 0.709486 data_time: 0.060008 memory: 12959 loss_kpt: 481.455511 acc_pose: 0.657468 loss: 481.455511 2022/10/12 14:39:13 - mmengine - INFO - Epoch(train) [32][500/586] lr: 5.000000e-03 eta: 18:35:25 time: 0.698732 data_time: 0.057751 memory: 12959 loss_kpt: 486.781380 acc_pose: 0.656570 loss: 486.781380 2022/10/12 14:39:46 - mmengine - INFO - Epoch(train) [32][550/586] lr: 5.000000e-03 eta: 18:35:03 time: 0.678316 data_time: 0.056531 memory: 12959 loss_kpt: 484.771515 acc_pose: 0.765328 loss: 484.771515 2022/10/12 14:40:10 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:40:45 - mmengine - INFO - Epoch(train) [33][50/586] lr: 5.000000e-03 eta: 18:32:15 time: 0.697666 data_time: 0.066767 memory: 12959 loss_kpt: 490.122916 acc_pose: 0.677302 loss: 490.122916 2022/10/12 14:41:19 - mmengine - INFO - Epoch(train) [33][100/586] lr: 5.000000e-03 eta: 18:31:55 time: 0.681197 data_time: 0.054915 memory: 12959 loss_kpt: 487.850800 acc_pose: 0.749494 loss: 487.850800 2022/10/12 14:41:55 - mmengine - INFO - Epoch(train) [33][150/586] lr: 5.000000e-03 eta: 18:31:41 time: 0.707776 data_time: 0.063158 memory: 12959 loss_kpt: 482.211256 acc_pose: 0.741859 loss: 482.211256 2022/10/12 14:42:30 - mmengine - INFO - Epoch(train) [33][200/586] lr: 5.000000e-03 eta: 18:31:26 time: 0.699761 data_time: 0.056719 memory: 12959 loss_kpt: 483.873716 acc_pose: 0.573778 loss: 483.873716 2022/10/12 14:43:03 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:43:05 - mmengine - INFO - Epoch(train) [33][250/586] lr: 5.000000e-03 eta: 18:31:09 time: 0.697429 data_time: 0.060147 memory: 12959 loss_kpt: 487.515503 acc_pose: 0.643238 loss: 487.515503 2022/10/12 14:43:40 - mmengine - INFO - Epoch(train) [33][300/586] lr: 5.000000e-03 eta: 18:30:54 time: 0.700369 data_time: 0.060383 memory: 12959 loss_kpt: 492.660385 acc_pose: 0.668763 loss: 492.660385 2022/10/12 14:44:16 - 
mmengine - INFO - Epoch(train) [33][350/586] lr: 5.000000e-03 eta: 18:30:43 time: 0.721054 data_time: 0.056864 memory: 12959 loss_kpt: 492.315204 acc_pose: 0.664448 loss: 492.315204 2022/10/12 14:44:50 - mmengine - INFO - Epoch(train) [33][400/586] lr: 5.000000e-03 eta: 18:30:25 time: 0.689861 data_time: 0.056255 memory: 12959 loss_kpt: 481.200776 acc_pose: 0.623377 loss: 481.200776 2022/10/12 14:45:25 - mmengine - INFO - Epoch(train) [33][450/586] lr: 5.000000e-03 eta: 18:30:07 time: 0.693761 data_time: 0.059835 memory: 12959 loss_kpt: 475.524940 acc_pose: 0.647464 loss: 475.524940 2022/10/12 14:45:59 - mmengine - INFO - Epoch(train) [33][500/586] lr: 5.000000e-03 eta: 18:29:48 time: 0.691937 data_time: 0.063985 memory: 12959 loss_kpt: 487.018294 acc_pose: 0.719389 loss: 487.018294 2022/10/12 14:46:35 - mmengine - INFO - Epoch(train) [33][550/586] lr: 5.000000e-03 eta: 18:29:32 time: 0.700528 data_time: 0.064036 memory: 12959 loss_kpt: 474.666982 acc_pose: 0.708445 loss: 474.666982 2022/10/12 14:46:59 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:47:34 - mmengine - INFO - Epoch(train) [34][50/586] lr: 5.000000e-03 eta: 18:26:48 time: 0.695230 data_time: 0.072413 memory: 12959 loss_kpt: 473.981311 acc_pose: 0.729893 loss: 473.981311 2022/10/12 14:48:08 - mmengine - INFO - Epoch(train) [34][100/586] lr: 5.000000e-03 eta: 18:26:25 time: 0.674019 data_time: 0.056415 memory: 12959 loss_kpt: 484.099758 acc_pose: 0.714377 loss: 484.099758 2022/10/12 14:48:43 - mmengine - INFO - Epoch(train) [34][150/586] lr: 5.000000e-03 eta: 18:26:06 time: 0.692586 data_time: 0.063850 memory: 12959 loss_kpt: 482.875013 acc_pose: 0.668306 loss: 482.875013 2022/10/12 14:49:16 - mmengine - INFO - Epoch(train) [34][200/586] lr: 5.000000e-03 eta: 18:25:44 time: 0.679198 data_time: 0.056933 memory: 12959 loss_kpt: 484.176194 acc_pose: 0.664461 loss: 484.176194 2022/10/12 14:49:51 - mmengine - INFO - Epoch(train) [34][250/586] lr: 5.000000e-03 eta: 18:25:23 time: 0.680758 data_time: 0.058273 memory: 12959 loss_kpt: 482.302434 acc_pose: 0.700885 loss: 482.302434 2022/10/12 14:50:25 - mmengine - INFO - Epoch(train) [34][300/586] lr: 5.000000e-03 eta: 18:25:03 time: 0.685123 data_time: 0.061132 memory: 12959 loss_kpt: 480.210897 acc_pose: 0.675953 loss: 480.210897 2022/10/12 14:50:59 - mmengine - INFO - Epoch(train) [34][350/586] lr: 5.000000e-03 eta: 18:24:44 time: 0.691651 data_time: 0.056770 memory: 12959 loss_kpt: 476.721967 acc_pose: 0.641215 loss: 476.721967 2022/10/12 14:51:35 - mmengine - INFO - Epoch(train) [34][400/586] lr: 5.000000e-03 eta: 18:24:30 time: 0.712888 data_time: 0.056340 memory: 12959 loss_kpt: 479.176548 acc_pose: 0.726028 loss: 479.176548 2022/10/12 14:52:11 - mmengine - INFO - Epoch(train) [34][450/586] lr: 5.000000e-03 eta: 18:24:19 time: 0.720277 data_time: 0.060626 memory: 12959 loss_kpt: 480.480596 acc_pose: 0.738436 loss: 480.480596 2022/10/12 14:52:47 - mmengine - INFO - Epoch(train) [34][500/586] lr: 5.000000e-03 eta: 18:24:07 time: 0.720289 data_time: 0.053072 memory: 12959 loss_kpt: 481.653286 acc_pose: 0.697799 loss: 481.653286 2022/10/12 14:53:23 - mmengine - INFO - Epoch(train) [34][550/586] lr: 5.000000e-03 eta: 18:23:57 time: 0.725176 data_time: 0.060637 memory: 12959 loss_kpt: 477.787380 acc_pose: 0.736700 loss: 477.787380 2022/10/12 14:53:49 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:54:24 - mmengine - INFO - Epoch(train) [35][50/586] lr: 5.000000e-03 eta: 18:21:18 time: 
0.702141 data_time: 0.071378 memory: 12959 loss_kpt: 476.630956 acc_pose: 0.676839 loss: 476.630956 2022/10/12 14:54:42 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 14:54:58 - mmengine - INFO - Epoch(train) [35][100/586] lr: 5.000000e-03 eta: 18:20:58 time: 0.688911 data_time: 0.055206 memory: 12959 loss_kpt: 473.948318 acc_pose: 0.704670 loss: 473.948318 2022/10/12 14:55:32 - mmengine - INFO - Epoch(train) [35][150/586] lr: 5.000000e-03 eta: 18:20:35 time: 0.676837 data_time: 0.053434 memory: 12959 loss_kpt: 471.175948 acc_pose: 0.722947 loss: 471.175948 2022/10/12 14:56:06 - mmengine - INFO - Epoch(train) [35][200/586] lr: 5.000000e-03 eta: 18:20:12 time: 0.674308 data_time: 0.058458 memory: 12959 loss_kpt: 475.633196 acc_pose: 0.676178 loss: 475.633196 2022/10/12 14:56:40 - mmengine - INFO - Epoch(train) [35][250/586] lr: 5.000000e-03 eta: 18:19:48 time: 0.675268 data_time: 0.054251 memory: 12959 loss_kpt: 463.419778 acc_pose: 0.711386 loss: 463.419778 2022/10/12 14:57:13 - mmengine - INFO - Epoch(train) [35][300/586] lr: 5.000000e-03 eta: 18:19:25 time: 0.674390 data_time: 0.055224 memory: 12959 loss_kpt: 469.291793 acc_pose: 0.614013 loss: 469.291793 2022/10/12 14:57:47 - mmengine - INFO - Epoch(train) [35][350/586] lr: 5.000000e-03 eta: 18:18:59 time: 0.665321 data_time: 0.059736 memory: 12959 loss_kpt: 477.817021 acc_pose: 0.700512 loss: 477.817021 2022/10/12 14:58:20 - mmengine - INFO - Epoch(train) [35][400/586] lr: 5.000000e-03 eta: 18:18:34 time: 0.670336 data_time: 0.053629 memory: 12959 loss_kpt: 473.311945 acc_pose: 0.743267 loss: 473.311945 2022/10/12 14:58:54 - mmengine - INFO - Epoch(train) [35][450/586] lr: 5.000000e-03 eta: 18:18:10 time: 0.673540 data_time: 0.061525 memory: 12959 loss_kpt: 474.256144 acc_pose: 0.645494 loss: 474.256144 2022/10/12 14:59:27 - mmengine - INFO - Epoch(train) [35][500/586] lr: 5.000000e-03 eta: 18:17:44 time: 0.667473 data_time: 0.053994 memory: 12959 loss_kpt: 471.180860 acc_pose: 0.527385 loss: 471.180860 2022/10/12 15:00:01 - mmengine - INFO - Epoch(train) [35][550/586] lr: 5.000000e-03 eta: 18:17:19 time: 0.668629 data_time: 0.059144 memory: 12959 loss_kpt: 475.650176 acc_pose: 0.662147 loss: 475.650176 2022/10/12 15:00:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:01:00 - mmengine - INFO - Epoch(train) [36][50/586] lr: 5.000000e-03 eta: 18:14:44 time: 0.705233 data_time: 0.063938 memory: 12959 loss_kpt: 470.668145 acc_pose: 0.702284 loss: 470.668145 2022/10/12 15:01:34 - mmengine - INFO - Epoch(train) [36][100/586] lr: 5.000000e-03 eta: 18:14:24 time: 0.687803 data_time: 0.059408 memory: 12959 loss_kpt: 474.943735 acc_pose: 0.786755 loss: 474.943735 2022/10/12 15:02:08 - mmengine - INFO - Epoch(train) [36][150/586] lr: 5.000000e-03 eta: 18:14:00 time: 0.674973 data_time: 0.058064 memory: 12959 loss_kpt: 468.739288 acc_pose: 0.701768 loss: 468.739288 2022/10/12 15:02:42 - mmengine - INFO - Epoch(train) [36][200/586] lr: 5.000000e-03 eta: 18:13:37 time: 0.676763 data_time: 0.060378 memory: 12959 loss_kpt: 473.005687 acc_pose: 0.639702 loss: 473.005687 2022/10/12 15:03:17 - mmengine - INFO - Epoch(train) [36][250/586] lr: 5.000000e-03 eta: 18:13:21 time: 0.704215 data_time: 0.055896 memory: 12959 loss_kpt: 474.973496 acc_pose: 0.741117 loss: 474.973496 2022/10/12 15:03:52 - mmengine - INFO - Epoch(train) [36][300/586] lr: 5.000000e-03 eta: 18:13:03 time: 0.700764 data_time: 0.056989 memory: 12959 loss_kpt: 470.683171 acc_pose: 0.718347 
loss: 470.683171 2022/10/12 15:04:27 - mmengine - INFO - Epoch(train) [36][350/586] lr: 5.000000e-03 eta: 18:12:44 time: 0.694904 data_time: 0.056854 memory: 12959 loss_kpt: 473.820211 acc_pose: 0.709997 loss: 473.820211 2022/10/12 15:05:02 - mmengine - INFO - Epoch(train) [36][400/586] lr: 5.000000e-03 eta: 18:12:27 time: 0.703050 data_time: 0.061463 memory: 12959 loss_kpt: 469.478875 acc_pose: 0.677736 loss: 469.478875 2022/10/12 15:05:37 - mmengine - INFO - Epoch(train) [36][450/586] lr: 5.000000e-03 eta: 18:12:09 time: 0.697341 data_time: 0.054935 memory: 12959 loss_kpt: 458.573488 acc_pose: 0.731691 loss: 458.573488 2022/10/12 15:06:05 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:06:12 - mmengine - INFO - Epoch(train) [36][500/586] lr: 5.000000e-03 eta: 18:11:52 time: 0.707444 data_time: 0.066030 memory: 12959 loss_kpt: 463.665785 acc_pose: 0.682482 loss: 463.665785 2022/10/12 15:06:48 - mmengine - INFO - Epoch(train) [36][550/586] lr: 5.000000e-03 eta: 18:11:36 time: 0.707719 data_time: 0.054558 memory: 12959 loss_kpt: 461.966866 acc_pose: 0.803823 loss: 461.966866 2022/10/12 15:07:13 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:07:48 - mmengine - INFO - Epoch(train) [37][50/586] lr: 5.000000e-03 eta: 18:09:03 time: 0.699163 data_time: 0.067656 memory: 12959 loss_kpt: 461.517293 acc_pose: 0.641712 loss: 461.517293 2022/10/12 15:08:22 - mmengine - INFO - Epoch(train) [37][100/586] lr: 5.000000e-03 eta: 18:08:40 time: 0.677807 data_time: 0.059373 memory: 12959 loss_kpt: 463.663282 acc_pose: 0.716816 loss: 463.663282 2022/10/12 15:08:56 - mmengine - INFO - Epoch(train) [37][150/586] lr: 5.000000e-03 eta: 18:08:19 time: 0.685607 data_time: 0.060892 memory: 12959 loss_kpt: 466.218403 acc_pose: 0.687012 loss: 466.218403 2022/10/12 15:09:30 - mmengine - INFO - Epoch(train) [37][200/586] lr: 5.000000e-03 eta: 18:07:52 time: 0.665475 data_time: 0.058796 memory: 12959 loss_kpt: 459.669699 acc_pose: 0.699931 loss: 459.669699 2022/10/12 15:10:04 - mmengine - INFO - Epoch(train) [37][250/586] lr: 5.000000e-03 eta: 18:07:31 time: 0.684828 data_time: 0.059644 memory: 12959 loss_kpt: 458.277926 acc_pose: 0.697539 loss: 458.277926 2022/10/12 15:10:38 - mmengine - INFO - Epoch(train) [37][300/586] lr: 5.000000e-03 eta: 18:07:08 time: 0.681562 data_time: 0.063328 memory: 12959 loss_kpt: 457.373359 acc_pose: 0.559131 loss: 457.373359 2022/10/12 15:11:12 - mmengine - INFO - Epoch(train) [37][350/586] lr: 5.000000e-03 eta: 18:06:47 time: 0.688330 data_time: 0.061153 memory: 12959 loss_kpt: 464.327347 acc_pose: 0.621812 loss: 464.327347 2022/10/12 15:11:47 - mmengine - INFO - Epoch(train) [37][400/586] lr: 5.000000e-03 eta: 18:06:25 time: 0.684791 data_time: 0.054800 memory: 12959 loss_kpt: 457.153587 acc_pose: 0.736700 loss: 457.153587 2022/10/12 15:12:21 - mmengine - INFO - Epoch(train) [37][450/586] lr: 5.000000e-03 eta: 18:06:04 time: 0.688611 data_time: 0.057658 memory: 12959 loss_kpt: 455.704073 acc_pose: 0.666172 loss: 455.704073 2022/10/12 15:12:55 - mmengine - INFO - Epoch(train) [37][500/586] lr: 5.000000e-03 eta: 18:05:40 time: 0.677650 data_time: 0.056893 memory: 12959 loss_kpt: 459.170653 acc_pose: 0.665308 loss: 459.170653 2022/10/12 15:13:29 - mmengine - INFO - Epoch(train) [37][550/586] lr: 5.000000e-03 eta: 18:05:19 time: 0.689387 data_time: 0.055042 memory: 12959 loss_kpt: 463.382990 acc_pose: 0.731519 loss: 463.382990 2022/10/12 15:13:54 - mmengine - INFO - Exp name: 
td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:14:27 - mmengine - INFO - Epoch(train) [38][50/586] lr: 5.000000e-03 eta: 18:02:44 time: 0.675859 data_time: 0.071378 memory: 12959 loss_kpt: 459.962826 acc_pose: 0.739962 loss: 459.962826 2022/10/12 15:15:00 - mmengine - INFO - Epoch(train) [38][100/586] lr: 5.000000e-03 eta: 18:02:16 time: 0.657696 data_time: 0.054400 memory: 12959 loss_kpt: 450.129025 acc_pose: 0.706575 loss: 450.129025 2022/10/12 15:15:33 - mmengine - INFO - Epoch(train) [38][150/586] lr: 5.000000e-03 eta: 18:01:49 time: 0.661213 data_time: 0.061456 memory: 12959 loss_kpt: 453.338434 acc_pose: 0.706589 loss: 453.338434 2022/10/12 15:16:06 - mmengine - INFO - Epoch(train) [38][200/586] lr: 5.000000e-03 eta: 18:01:20 time: 0.655855 data_time: 0.057939 memory: 12959 loss_kpt: 454.649989 acc_pose: 0.699054 loss: 454.649989 2022/10/12 15:16:40 - mmengine - INFO - Epoch(train) [38][250/586] lr: 5.000000e-03 eta: 18:00:54 time: 0.668672 data_time: 0.058179 memory: 12959 loss_kpt: 452.756494 acc_pose: 0.741300 loss: 452.756494 2022/10/12 15:17:13 - mmengine - INFO - Epoch(train) [38][300/586] lr: 5.000000e-03 eta: 18:00:27 time: 0.664135 data_time: 0.058018 memory: 12959 loss_kpt: 453.311106 acc_pose: 0.761839 loss: 453.311106 2022/10/12 15:17:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:17:46 - mmengine - INFO - Epoch(train) [38][350/586] lr: 5.000000e-03 eta: 18:00:02 time: 0.671377 data_time: 0.057472 memory: 12959 loss_kpt: 447.444528 acc_pose: 0.722295 loss: 447.444528 2022/10/12 15:18:20 - mmengine - INFO - Epoch(train) [38][400/586] lr: 5.000000e-03 eta: 17:59:37 time: 0.672251 data_time: 0.058310 memory: 12959 loss_kpt: 449.573284 acc_pose: 0.738556 loss: 449.573284 2022/10/12 15:18:54 - mmengine - INFO - Epoch(train) [38][450/586] lr: 5.000000e-03 eta: 17:59:12 time: 0.672759 data_time: 0.066779 memory: 12959 loss_kpt: 449.030424 acc_pose: 0.783627 loss: 449.030424 2022/10/12 15:19:28 - mmengine - INFO - Epoch(train) [38][500/586] lr: 5.000000e-03 eta: 17:58:48 time: 0.677392 data_time: 0.060109 memory: 12959 loss_kpt: 451.604089 acc_pose: 0.644195 loss: 451.604089 2022/10/12 15:20:01 - mmengine - INFO - Epoch(train) [38][550/586] lr: 5.000000e-03 eta: 17:58:22 time: 0.666969 data_time: 0.057781 memory: 12959 loss_kpt: 454.565211 acc_pose: 0.701401 loss: 454.565211 2022/10/12 15:20:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:21:00 - mmengine - INFO - Epoch(train) [39][50/586] lr: 5.000000e-03 eta: 17:55:56 time: 0.699827 data_time: 0.065031 memory: 12959 loss_kpt: 450.405862 acc_pose: 0.704749 loss: 450.405862 2022/10/12 15:21:34 - mmengine - INFO - Epoch(train) [39][100/586] lr: 5.000000e-03 eta: 17:55:30 time: 0.669434 data_time: 0.055894 memory: 12959 loss_kpt: 446.067700 acc_pose: 0.735887 loss: 446.067700 2022/10/12 15:22:06 - mmengine - INFO - Epoch(train) [39][150/586] lr: 5.000000e-03 eta: 17:55:02 time: 0.658059 data_time: 0.059458 memory: 12959 loss_kpt: 456.466949 acc_pose: 0.740763 loss: 456.466949 2022/10/12 15:22:39 - mmengine - INFO - Epoch(train) [39][200/586] lr: 5.000000e-03 eta: 17:54:33 time: 0.654283 data_time: 0.056304 memory: 12959 loss_kpt: 444.589090 acc_pose: 0.774446 loss: 444.589090 2022/10/12 15:23:12 - mmengine - INFO - Epoch(train) [39][250/586] lr: 5.000000e-03 eta: 17:54:06 time: 0.664141 data_time: 0.059644 memory: 12959 loss_kpt: 446.778520 acc_pose: 0.758737 loss: 446.778520 2022/10/12 15:23:45 - mmengine 
- INFO - Epoch(train) [39][300/586] lr: 5.000000e-03 eta: 17:53:38 time: 0.659386 data_time: 0.056397 memory: 12959 loss_kpt: 443.907547 acc_pose: 0.666283 loss: 443.907547 2022/10/12 15:24:19 - mmengine - INFO - Epoch(train) [39][350/586] lr: 5.000000e-03 eta: 17:53:11 time: 0.666191 data_time: 0.061296 memory: 12959 loss_kpt: 442.830067 acc_pose: 0.667241 loss: 442.830067 2022/10/12 15:24:52 - mmengine - INFO - Epoch(train) [39][400/586] lr: 5.000000e-03 eta: 17:52:45 time: 0.667445 data_time: 0.056655 memory: 12959 loss_kpt: 456.735482 acc_pose: 0.705730 loss: 456.735482 2022/10/12 15:25:26 - mmengine - INFO - Epoch(train) [39][450/586] lr: 5.000000e-03 eta: 17:52:20 time: 0.673302 data_time: 0.062771 memory: 12959 loss_kpt: 446.287792 acc_pose: 0.735398 loss: 446.287792 2022/10/12 15:25:59 - mmengine - INFO - Epoch(train) [39][500/586] lr: 5.000000e-03 eta: 17:51:54 time: 0.668240 data_time: 0.058516 memory: 12959 loss_kpt: 442.593341 acc_pose: 0.714898 loss: 442.593341 2022/10/12 15:26:32 - mmengine - INFO - Epoch(train) [39][550/586] lr: 5.000000e-03 eta: 17:51:28 time: 0.666255 data_time: 0.058075 memory: 12959 loss_kpt: 442.366516 acc_pose: 0.687748 loss: 442.366516 2022/10/12 15:26:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:27:31 - mmengine - INFO - Epoch(train) [40][50/586] lr: 5.000000e-03 eta: 17:49:01 time: 0.685185 data_time: 0.068488 memory: 12959 loss_kpt: 447.210887 acc_pose: 0.577256 loss: 447.210887 2022/10/12 15:28:04 - mmengine - INFO - Epoch(train) [40][100/586] lr: 5.000000e-03 eta: 17:48:37 time: 0.674669 data_time: 0.054193 memory: 12959 loss_kpt: 448.614907 acc_pose: 0.793678 loss: 448.614907 2022/10/12 15:28:36 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:28:38 - mmengine - INFO - Epoch(train) [40][150/586] lr: 5.000000e-03 eta: 17:48:13 time: 0.680181 data_time: 0.061413 memory: 12959 loss_kpt: 447.682950 acc_pose: 0.633111 loss: 447.682950 2022/10/12 15:29:12 - mmengine - INFO - Epoch(train) [40][200/586] lr: 5.000000e-03 eta: 17:47:48 time: 0.673098 data_time: 0.059796 memory: 12959 loss_kpt: 442.410047 acc_pose: 0.740755 loss: 442.410047 2022/10/12 15:29:45 - mmengine - INFO - Epoch(train) [40][250/586] lr: 5.000000e-03 eta: 17:47:22 time: 0.666840 data_time: 0.059350 memory: 12959 loss_kpt: 438.439319 acc_pose: 0.720685 loss: 438.439319 2022/10/12 15:30:20 - mmengine - INFO - Epoch(train) [40][300/586] lr: 5.000000e-03 eta: 17:47:00 time: 0.688118 data_time: 0.059471 memory: 12959 loss_kpt: 442.284185 acc_pose: 0.775208 loss: 442.284185 2022/10/12 15:30:55 - mmengine - INFO - Epoch(train) [40][350/586] lr: 5.000000e-03 eta: 17:46:40 time: 0.694649 data_time: 0.060385 memory: 12959 loss_kpt: 442.374011 acc_pose: 0.733126 loss: 442.374011 2022/10/12 15:31:29 - mmengine - INFO - Epoch(train) [40][400/586] lr: 5.000000e-03 eta: 17:46:16 time: 0.682386 data_time: 0.057234 memory: 12959 loss_kpt: 437.342630 acc_pose: 0.755506 loss: 437.342630 2022/10/12 15:32:03 - mmengine - INFO - Epoch(train) [40][450/586] lr: 5.000000e-03 eta: 17:45:55 time: 0.691125 data_time: 0.066033 memory: 12959 loss_kpt: 443.256116 acc_pose: 0.725498 loss: 443.256116 2022/10/12 15:32:38 - mmengine - INFO - Epoch(train) [40][500/586] lr: 5.000000e-03 eta: 17:45:33 time: 0.690052 data_time: 0.056042 memory: 12959 loss_kpt: 438.839804 acc_pose: 0.614012 loss: 438.839804 2022/10/12 15:33:12 - mmengine - INFO - Epoch(train) [40][550/586] lr: 5.000000e-03 eta: 17:45:11 time: 0.685381 
data_time: 0.061606 memory: 12959 loss_kpt: 432.269681 acc_pose: 0.700424 loss: 432.269681 2022/10/12 15:33:37 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:33:37 - mmengine - INFO - Saving checkpoint at 40 epochs 2022/10/12 15:33:55 - mmengine - INFO - Epoch(val) [40][50/407] eta: 0:01:35 time: 0.268831 data_time: 0.012521 memory: 12959 2022/10/12 15:34:08 - mmengine - INFO - Epoch(val) [40][100/407] eta: 0:01:20 time: 0.261193 data_time: 0.007704 memory: 2407 2022/10/12 15:34:21 - mmengine - INFO - Epoch(val) [40][150/407] eta: 0:01:07 time: 0.263322 data_time: 0.008066 memory: 2407 2022/10/12 15:34:34 - mmengine - INFO - Epoch(val) [40][200/407] eta: 0:00:53 time: 0.260214 data_time: 0.007507 memory: 2407 2022/10/12 15:34:47 - mmengine - INFO - Epoch(val) [40][250/407] eta: 0:00:40 time: 0.259923 data_time: 0.007694 memory: 2407 2022/10/12 15:35:00 - mmengine - INFO - Epoch(val) [40][300/407] eta: 0:00:28 time: 0.266504 data_time: 0.007830 memory: 2407 2022/10/12 15:35:13 - mmengine - INFO - Epoch(val) [40][350/407] eta: 0:00:15 time: 0.264136 data_time: 0.008189 memory: 2407 2022/10/12 15:35:26 - mmengine - INFO - Epoch(val) [40][400/407] eta: 0:00:01 time: 0.254569 data_time: 0.007251 memory: 2407 2022/10/12 15:35:40 - mmengine - INFO - Evaluating CocoMetric... 2022/10/12 15:35:56 - mmengine - INFO - Epoch(val) [40][407/407] coco/AP: 0.561974 coco/AP .5: 0.794517 coco/AP .75: 0.625271 coco/AP (M): 0.563264 coco/AP (L): 0.604110 coco/AR: 0.675866 coco/AR .5: 0.880353 coco/AR .75: 0.736933 coco/AR (M): 0.640235 coco/AR (L): 0.725158 2022/10/12 15:35:56 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_30.pth is removed 2022/10/12 15:35:58 - mmengine - INFO - The best checkpoint with 0.5620 coco/AP at 40 epoch is saved to best_coco/AP_epoch_40.pth. 
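With the epoch-40 evaluation above, three validation rounds have completed and the best-checkpoint messages show `coco/AP` climbing from 0.4334 (epoch 20) to 0.4983 (epoch 30) to 0.5620 (epoch 40). A tiny sketch, using only those logged values, of how one might track the gain per validation interval while the 210-epoch run is still in progress:

```python
# coco/AP values copied from the Epoch(val) summary records above.
ap_by_epoch = {20: 0.433429, 30: 0.498277, 40: 0.561974}

epochs = sorted(ap_by_epoch)
for prev, cur in zip(epochs, epochs[1:]):
    gain = ap_by_epoch[cur] - ap_by_epoch[prev]
    print(f"epoch {prev:>3} -> {cur:>3}: "
          f"coco/AP {ap_by_epoch[prev]:.4f} -> {ap_by_epoch[cur]:.4f} (+{gain:.4f})")
```

The dictionary can be extended with later summaries (epoch 50 onward) as they appear, or fed to the parsing sketch shown after the epoch-20 evaluation.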
2022/10/12 15:36:32 - mmengine - INFO - Epoch(train) [41][50/586] lr: 5.000000e-03 eta: 17:42:45 time: 0.677590 data_time: 0.072953 memory: 12959 loss_kpt: 441.925480 acc_pose: 0.677048 loss: 441.925480 2022/10/12 15:37:06 - mmengine - INFO - Epoch(train) [41][100/586] lr: 5.000000e-03 eta: 17:42:20 time: 0.673583 data_time: 0.056498 memory: 12959 loss_kpt: 443.128983 acc_pose: 0.745490 loss: 443.128983 2022/10/12 15:37:40 - mmengine - INFO - Epoch(train) [41][150/586] lr: 5.000000e-03 eta: 17:41:56 time: 0.677039 data_time: 0.056357 memory: 12959 loss_kpt: 431.869011 acc_pose: 0.685665 loss: 431.869011 2022/10/12 15:38:14 - mmengine - INFO - Epoch(train) [41][200/586] lr: 5.000000e-03 eta: 17:41:34 time: 0.685269 data_time: 0.060427 memory: 12959 loss_kpt: 440.982635 acc_pose: 0.666626 loss: 440.982635 2022/10/12 15:38:48 - mmengine - INFO - Epoch(train) [41][250/586] lr: 5.000000e-03 eta: 17:41:08 time: 0.672450 data_time: 0.060195 memory: 12959 loss_kpt: 439.035817 acc_pose: 0.706060 loss: 439.035817 2022/10/12 15:39:21 - mmengine - INFO - Epoch(train) [41][300/586] lr: 5.000000e-03 eta: 17:40:42 time: 0.667214 data_time: 0.056974 memory: 12959 loss_kpt: 433.747269 acc_pose: 0.661621 loss: 433.747269 2022/10/12 15:39:55 - mmengine - INFO - Epoch(train) [41][350/586] lr: 5.000000e-03 eta: 17:40:17 time: 0.676628 data_time: 0.058749 memory: 12959 loss_kpt: 435.652637 acc_pose: 0.775924 loss: 435.652637 2022/10/12 15:40:28 - mmengine - INFO - Epoch(train) [41][400/586] lr: 5.000000e-03 eta: 17:39:51 time: 0.668692 data_time: 0.054311 memory: 12959 loss_kpt: 435.624501 acc_pose: 0.690632 loss: 435.624501 2022/10/12 15:41:02 - mmengine - INFO - Epoch(train) [41][450/586] lr: 5.000000e-03 eta: 17:39:24 time: 0.668250 data_time: 0.055288 memory: 12959 loss_kpt: 442.533871 acc_pose: 0.649246 loss: 442.533871 2022/10/12 15:41:35 - mmengine - INFO - Epoch(train) [41][500/586] lr: 5.000000e-03 eta: 17:38:57 time: 0.663454 data_time: 0.053147 memory: 12959 loss_kpt: 434.017281 acc_pose: 0.623276 loss: 434.017281 2022/10/12 15:42:09 - mmengine - INFO - Epoch(train) [41][550/586] lr: 5.000000e-03 eta: 17:38:32 time: 0.674993 data_time: 0.055058 memory: 12959 loss_kpt: 424.898047 acc_pose: 0.685152 loss: 424.898047 2022/10/12 15:42:15 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:42:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:43:06 - mmengine - INFO - Epoch(train) [42][50/586] lr: 5.000000e-03 eta: 17:36:09 time: 0.677011 data_time: 0.069240 memory: 12959 loss_kpt: 431.223109 acc_pose: 0.747301 loss: 431.223109 2022/10/12 15:43:39 - mmengine - INFO - Epoch(train) [42][100/586] lr: 5.000000e-03 eta: 17:35:42 time: 0.660347 data_time: 0.052600 memory: 12959 loss_kpt: 428.284581 acc_pose: 0.740813 loss: 428.284581 2022/10/12 15:44:13 - mmengine - INFO - Epoch(train) [42][150/586] lr: 5.000000e-03 eta: 17:35:16 time: 0.670442 data_time: 0.056588 memory: 12959 loss_kpt: 429.643207 acc_pose: 0.690342 loss: 429.643207 2022/10/12 15:44:45 - mmengine - INFO - Epoch(train) [42][200/586] lr: 5.000000e-03 eta: 17:34:46 time: 0.650429 data_time: 0.060963 memory: 12959 loss_kpt: 430.992556 acc_pose: 0.673739 loss: 430.992556 2022/10/12 15:45:18 - mmengine - INFO - Epoch(train) [42][250/586] lr: 5.000000e-03 eta: 17:34:17 time: 0.655685 data_time: 0.056213 memory: 12959 loss_kpt: 427.230310 acc_pose: 0.778528 loss: 427.230310 2022/10/12 15:45:50 - mmengine - INFO - Epoch(train) [42][300/586] lr: 5.000000e-03 
eta: 17:33:47 time: 0.650630 data_time: 0.052920 memory: 12959 loss_kpt: 424.777883 acc_pose: 0.793322 loss: 424.777883 2022/10/12 15:46:23 - mmengine - INFO - Epoch(train) [42][350/586] lr: 5.000000e-03 eta: 17:33:19 time: 0.660651 data_time: 0.055527 memory: 12959 loss_kpt: 426.739080 acc_pose: 0.747968 loss: 426.739080 2022/10/12 15:46:57 - mmengine - INFO - Epoch(train) [42][400/586] lr: 5.000000e-03 eta: 17:32:54 time: 0.675132 data_time: 0.054160 memory: 12959 loss_kpt: 430.035108 acc_pose: 0.679843 loss: 430.035108 2022/10/12 15:47:31 - mmengine - INFO - Epoch(train) [42][450/586] lr: 5.000000e-03 eta: 17:32:29 time: 0.676352 data_time: 0.059992 memory: 12959 loss_kpt: 434.771592 acc_pose: 0.769543 loss: 434.771592 2022/10/12 15:48:05 - mmengine - INFO - Epoch(train) [42][500/586] lr: 5.000000e-03 eta: 17:32:04 time: 0.674661 data_time: 0.060372 memory: 12959 loss_kpt: 430.605898 acc_pose: 0.635900 loss: 430.605898 2022/10/12 15:48:38 - mmengine - INFO - Epoch(train) [42][550/586] lr: 5.000000e-03 eta: 17:31:37 time: 0.667494 data_time: 0.060838 memory: 12959 loss_kpt: 430.141066 acc_pose: 0.697706 loss: 430.141066 2022/10/12 15:49:02 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:49:37 - mmengine - INFO - Epoch(train) [43][50/586] lr: 5.000000e-03 eta: 17:29:20 time: 0.688477 data_time: 0.070668 memory: 12959 loss_kpt: 432.425558 acc_pose: 0.764686 loss: 432.425558 2022/10/12 15:50:10 - mmengine - INFO - Epoch(train) [43][100/586] lr: 5.000000e-03 eta: 17:28:53 time: 0.667146 data_time: 0.057625 memory: 12959 loss_kpt: 430.556400 acc_pose: 0.756805 loss: 430.556400 2022/10/12 15:50:44 - mmengine - INFO - Epoch(train) [43][150/586] lr: 5.000000e-03 eta: 17:28:29 time: 0.681628 data_time: 0.063991 memory: 12959 loss_kpt: 416.927713 acc_pose: 0.766863 loss: 416.927713 2022/10/12 15:51:18 - mmengine - INFO - Epoch(train) [43][200/586] lr: 5.000000e-03 eta: 17:28:04 time: 0.670931 data_time: 0.059477 memory: 12959 loss_kpt: 423.092405 acc_pose: 0.716444 loss: 423.092405 2022/10/12 15:51:52 - mmengine - INFO - Epoch(train) [43][250/586] lr: 5.000000e-03 eta: 17:27:39 time: 0.679123 data_time: 0.060957 memory: 12959 loss_kpt: 427.871453 acc_pose: 0.675155 loss: 427.871453 2022/10/12 15:52:25 - mmengine - INFO - Epoch(train) [43][300/586] lr: 5.000000e-03 eta: 17:27:14 time: 0.671582 data_time: 0.056506 memory: 12959 loss_kpt: 411.637289 acc_pose: 0.739644 loss: 411.637289 2022/10/12 15:52:59 - mmengine - INFO - Epoch(train) [43][350/586] lr: 5.000000e-03 eta: 17:26:49 time: 0.676142 data_time: 0.065327 memory: 12959 loss_kpt: 421.632463 acc_pose: 0.754129 loss: 421.632463 2022/10/12 15:53:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:53:33 - mmengine - INFO - Epoch(train) [43][400/586] lr: 5.000000e-03 eta: 17:26:24 time: 0.676473 data_time: 0.055408 memory: 12959 loss_kpt: 428.767749 acc_pose: 0.730690 loss: 428.767749 2022/10/12 15:54:07 - mmengine - INFO - Epoch(train) [43][450/586] lr: 5.000000e-03 eta: 17:25:59 time: 0.677338 data_time: 0.060126 memory: 12959 loss_kpt: 424.703598 acc_pose: 0.788479 loss: 424.703598 2022/10/12 15:54:40 - mmengine - INFO - Epoch(train) [43][500/586] lr: 5.000000e-03 eta: 17:25:32 time: 0.668630 data_time: 0.057200 memory: 12959 loss_kpt: 422.080523 acc_pose: 0.686465 loss: 422.080523 2022/10/12 15:55:15 - mmengine - INFO - Epoch(train) [43][550/586] lr: 5.000000e-03 eta: 17:25:11 time: 0.693553 data_time: 0.065889 memory: 12959 loss_kpt: 424.775645 
acc_pose: 0.699506 loss: 424.775645 2022/10/12 15:55:39 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 15:56:14 - mmengine - INFO - Epoch(train) [44][50/586] lr: 5.000000e-03 eta: 17:22:56 time: 0.690608 data_time: 0.073875 memory: 12959 loss_kpt: 423.871884 acc_pose: 0.737959 loss: 423.871884 2022/10/12 15:56:48 - mmengine - INFO - Epoch(train) [44][100/586] lr: 5.000000e-03 eta: 17:22:32 time: 0.682551 data_time: 0.057397 memory: 12959 loss_kpt: 424.952068 acc_pose: 0.711531 loss: 424.952068 2022/10/12 15:57:22 - mmengine - INFO - Epoch(train) [44][150/586] lr: 5.000000e-03 eta: 17:22:08 time: 0.678684 data_time: 0.061247 memory: 12959 loss_kpt: 418.850891 acc_pose: 0.691885 loss: 418.850891 2022/10/12 15:57:55 - mmengine - INFO - Epoch(train) [44][200/586] lr: 5.000000e-03 eta: 17:21:42 time: 0.670158 data_time: 0.053933 memory: 12959 loss_kpt: 427.504423 acc_pose: 0.735918 loss: 427.504423 2022/10/12 15:58:29 - mmengine - INFO - Epoch(train) [44][250/586] lr: 5.000000e-03 eta: 17:21:16 time: 0.672538 data_time: 0.058204 memory: 12959 loss_kpt: 420.897604 acc_pose: 0.743588 loss: 420.897604 2022/10/12 15:59:03 - mmengine - INFO - Epoch(train) [44][300/586] lr: 5.000000e-03 eta: 17:20:51 time: 0.680289 data_time: 0.056740 memory: 12959 loss_kpt: 419.419407 acc_pose: 0.749287 loss: 419.419407 2022/10/12 15:59:36 - mmengine - INFO - Epoch(train) [44][350/586] lr: 5.000000e-03 eta: 17:20:26 time: 0.672949 data_time: 0.056463 memory: 12959 loss_kpt: 411.751969 acc_pose: 0.824575 loss: 411.751969 2022/10/12 16:00:11 - mmengine - INFO - Epoch(train) [44][400/586] lr: 5.000000e-03 eta: 17:20:02 time: 0.684214 data_time: 0.055724 memory: 12959 loss_kpt: 418.481215 acc_pose: 0.685922 loss: 418.481215 2022/10/12 16:00:45 - mmengine - INFO - Epoch(train) [44][450/586] lr: 5.000000e-03 eta: 17:19:38 time: 0.682590 data_time: 0.060467 memory: 12959 loss_kpt: 423.726756 acc_pose: 0.785356 loss: 423.726756 2022/10/12 16:01:18 - mmengine - INFO - Epoch(train) [44][500/586] lr: 5.000000e-03 eta: 17:19:12 time: 0.673477 data_time: 0.053358 memory: 12959 loss_kpt: 419.084317 acc_pose: 0.716056 loss: 419.084317 2022/10/12 16:01:52 - mmengine - INFO - Epoch(train) [44][550/586] lr: 5.000000e-03 eta: 17:18:48 time: 0.679518 data_time: 0.061633 memory: 12959 loss_kpt: 412.367452 acc_pose: 0.753783 loss: 412.367452 2022/10/12 16:02:17 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:02:50 - mmengine - INFO - Epoch(train) [45][50/586] lr: 5.000000e-03 eta: 17:16:33 time: 0.677573 data_time: 0.069863 memory: 12959 loss_kpt: 417.653167 acc_pose: 0.771490 loss: 417.653167 2022/10/12 16:03:23 - mmengine - INFO - Epoch(train) [45][100/586] lr: 5.000000e-03 eta: 17:16:05 time: 0.660218 data_time: 0.055894 memory: 12959 loss_kpt: 414.048835 acc_pose: 0.644933 loss: 414.048835 2022/10/12 16:03:58 - mmengine - INFO - Epoch(train) [45][150/586] lr: 5.000000e-03 eta: 17:15:40 time: 0.681028 data_time: 0.056128 memory: 12959 loss_kpt: 414.629269 acc_pose: 0.668080 loss: 414.629269 2022/10/12 16:04:32 - mmengine - INFO - Epoch(train) [45][200/586] lr: 5.000000e-03 eta: 17:15:17 time: 0.684263 data_time: 0.056123 memory: 12959 loss_kpt: 421.294224 acc_pose: 0.733769 loss: 421.294224 2022/10/12 16:04:43 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:05:06 - mmengine - INFO - Epoch(train) [45][250/586] lr: 5.000000e-03 eta: 17:14:54 time: 0.691538 data_time: 0.060807 memory: 12959 
loss_kpt: 421.754888 acc_pose: 0.856843 loss: 421.754888 2022/10/12 16:05:41 - mmengine - INFO - Epoch(train) [45][300/586] lr: 5.000000e-03 eta: 17:14:31 time: 0.686369 data_time: 0.058512 memory: 12959 loss_kpt: 421.415411 acc_pose: 0.802200 loss: 421.415411 2022/10/12 16:06:15 - mmengine - INFO - Epoch(train) [45][350/586] lr: 5.000000e-03 eta: 17:14:07 time: 0.682378 data_time: 0.058709 memory: 12959 loss_kpt: 411.921552 acc_pose: 0.781227 loss: 411.921552 2022/10/12 16:06:49 - mmengine - INFO - Epoch(train) [45][400/586] lr: 5.000000e-03 eta: 17:13:44 time: 0.690365 data_time: 0.064823 memory: 12959 loss_kpt: 423.843043 acc_pose: 0.792451 loss: 423.843043 2022/10/12 16:07:23 - mmengine - INFO - Epoch(train) [45][450/586] lr: 5.000000e-03 eta: 17:13:20 time: 0.681343 data_time: 0.060955 memory: 12959 loss_kpt: 420.486900 acc_pose: 0.785299 loss: 420.486900 2022/10/12 16:07:57 - mmengine - INFO - Epoch(train) [45][500/586] lr: 5.000000e-03 eta: 17:12:54 time: 0.673527 data_time: 0.058830 memory: 12959 loss_kpt: 422.374778 acc_pose: 0.717141 loss: 422.374778 2022/10/12 16:08:32 - mmengine - INFO - Epoch(train) [45][550/586] lr: 5.000000e-03 eta: 17:12:31 time: 0.692042 data_time: 0.064652 memory: 12959 loss_kpt: 422.209976 acc_pose: 0.813868 loss: 422.209976 2022/10/12 16:08:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:09:31 - mmengine - INFO - Epoch(train) [46][50/586] lr: 5.000000e-03 eta: 17:10:24 time: 0.706327 data_time: 0.073564 memory: 12959 loss_kpt: 413.870266 acc_pose: 0.753253 loss: 413.870266 2022/10/12 16:10:06 - mmengine - INFO - Epoch(train) [46][100/586] lr: 5.000000e-03 eta: 17:10:00 time: 0.682518 data_time: 0.058080 memory: 12959 loss_kpt: 415.711887 acc_pose: 0.767857 loss: 415.711887 2022/10/12 16:10:40 - mmengine - INFO - Epoch(train) [46][150/586] lr: 5.000000e-03 eta: 17:09:37 time: 0.689893 data_time: 0.058383 memory: 12959 loss_kpt: 416.550113 acc_pose: 0.780185 loss: 416.550113 2022/10/12 16:11:14 - mmengine - INFO - Epoch(train) [46][200/586] lr: 5.000000e-03 eta: 17:09:12 time: 0.678487 data_time: 0.062071 memory: 12959 loss_kpt: 413.137875 acc_pose: 0.744361 loss: 413.137875 2022/10/12 16:11:48 - mmengine - INFO - Epoch(train) [46][250/586] lr: 5.000000e-03 eta: 17:08:47 time: 0.679841 data_time: 0.059525 memory: 12959 loss_kpt: 417.169800 acc_pose: 0.797747 loss: 417.169800 2022/10/12 16:12:22 - mmengine - INFO - Epoch(train) [46][300/586] lr: 5.000000e-03 eta: 17:08:23 time: 0.681403 data_time: 0.059780 memory: 12959 loss_kpt: 408.986459 acc_pose: 0.618381 loss: 408.986459 2022/10/12 16:12:55 - mmengine - INFO - Epoch(train) [46][350/586] lr: 5.000000e-03 eta: 17:07:55 time: 0.663135 data_time: 0.058358 memory: 12959 loss_kpt: 415.572836 acc_pose: 0.738584 loss: 415.572836 2022/10/12 16:13:29 - mmengine - INFO - Epoch(train) [46][400/586] lr: 5.000000e-03 eta: 17:07:29 time: 0.673046 data_time: 0.063539 memory: 12959 loss_kpt: 413.088605 acc_pose: 0.715128 loss: 413.088605 2022/10/12 16:14:02 - mmengine - INFO - Epoch(train) [46][450/586] lr: 5.000000e-03 eta: 17:07:01 time: 0.664073 data_time: 0.058666 memory: 12959 loss_kpt: 409.596448 acc_pose: 0.814987 loss: 409.596448 2022/10/12 16:14:36 - mmengine - INFO - Epoch(train) [46][500/586] lr: 5.000000e-03 eta: 17:06:35 time: 0.675089 data_time: 0.062401 memory: 12959 loss_kpt: 412.826152 acc_pose: 0.787391 loss: 412.826152 2022/10/12 16:15:10 - mmengine - INFO - Epoch(train) [46][550/586] lr: 5.000000e-03 eta: 17:06:09 time: 0.674598 data_time: 
0.059817 memory: 12959 loss_kpt: 410.966251 acc_pose: 0.803118 loss: 410.966251 2022/10/12 16:15:34 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:16:04 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:16:08 - mmengine - INFO - Epoch(train) [47][50/586] lr: 5.000000e-03 eta: 17:04:00 time: 0.686138 data_time: 0.067754 memory: 12959 loss_kpt: 418.431311 acc_pose: 0.700784 loss: 418.431311 2022/10/12 16:16:42 - mmengine - INFO - Epoch(train) [47][100/586] lr: 5.000000e-03 eta: 17:03:34 time: 0.672203 data_time: 0.060930 memory: 12959 loss_kpt: 416.407558 acc_pose: 0.788188 loss: 416.407558 2022/10/12 16:17:16 - mmengine - INFO - Epoch(train) [47][150/586] lr: 5.000000e-03 eta: 17:03:12 time: 0.695339 data_time: 0.061398 memory: 12959 loss_kpt: 415.548994 acc_pose: 0.747954 loss: 415.548994 2022/10/12 16:17:51 - mmengine - INFO - Epoch(train) [47][200/586] lr: 5.000000e-03 eta: 17:02:49 time: 0.691906 data_time: 0.059876 memory: 12959 loss_kpt: 408.105694 acc_pose: 0.777426 loss: 408.105694 2022/10/12 16:18:26 - mmengine - INFO - Epoch(train) [47][250/586] lr: 5.000000e-03 eta: 17:02:27 time: 0.696676 data_time: 0.060156 memory: 12959 loss_kpt: 411.665782 acc_pose: 0.774196 loss: 411.665782 2022/10/12 16:19:00 - mmengine - INFO - Epoch(train) [47][300/586] lr: 5.000000e-03 eta: 17:02:04 time: 0.691680 data_time: 0.057955 memory: 12959 loss_kpt: 407.490366 acc_pose: 0.741336 loss: 407.490366 2022/10/12 16:19:34 - mmengine - INFO - Epoch(train) [47][350/586] lr: 5.000000e-03 eta: 17:01:39 time: 0.676763 data_time: 0.058158 memory: 12959 loss_kpt: 406.555766 acc_pose: 0.811523 loss: 406.555766 2022/10/12 16:20:09 - mmengine - INFO - Epoch(train) [47][400/586] lr: 5.000000e-03 eta: 17:01:15 time: 0.686783 data_time: 0.057470 memory: 12959 loss_kpt: 422.965192 acc_pose: 0.703344 loss: 422.965192 2022/10/12 16:20:43 - mmengine - INFO - Epoch(train) [47][450/586] lr: 5.000000e-03 eta: 17:00:52 time: 0.690651 data_time: 0.064575 memory: 12959 loss_kpt: 408.444103 acc_pose: 0.685181 loss: 408.444103 2022/10/12 16:21:17 - mmengine - INFO - Epoch(train) [47][500/586] lr: 5.000000e-03 eta: 17:00:26 time: 0.679902 data_time: 0.055199 memory: 12959 loss_kpt: 408.630378 acc_pose: 0.769907 loss: 408.630378 2022/10/12 16:21:51 - mmengine - INFO - Epoch(train) [47][550/586] lr: 5.000000e-03 eta: 17:00:00 time: 0.674119 data_time: 0.056145 memory: 12959 loss_kpt: 414.520182 acc_pose: 0.755559 loss: 414.520182 2022/10/12 16:22:15 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:22:49 - mmengine - INFO - Epoch(train) [48][50/586] lr: 5.000000e-03 eta: 16:57:54 time: 0.690501 data_time: 0.069604 memory: 12959 loss_kpt: 411.160942 acc_pose: 0.643061 loss: 411.160942 2022/10/12 16:23:24 - mmengine - INFO - Epoch(train) [48][100/586] lr: 5.000000e-03 eta: 16:57:30 time: 0.688017 data_time: 0.058836 memory: 12959 loss_kpt: 408.577163 acc_pose: 0.752536 loss: 408.577163 2022/10/12 16:23:58 - mmengine - INFO - Epoch(train) [48][150/586] lr: 5.000000e-03 eta: 16:57:06 time: 0.686411 data_time: 0.059578 memory: 12959 loss_kpt: 403.530419 acc_pose: 0.773506 loss: 403.530419 2022/10/12 16:24:32 - mmengine - INFO - Epoch(train) [48][200/586] lr: 5.000000e-03 eta: 16:56:42 time: 0.685672 data_time: 0.060845 memory: 12959 loss_kpt: 408.011502 acc_pose: 0.714472 loss: 408.011502 2022/10/12 16:25:07 - mmengine - INFO - Epoch(train) [48][250/586] lr: 5.000000e-03 eta: 16:56:19 time: 
0.693674 data_time: 0.063326 memory: 12959 loss_kpt: 408.797499 acc_pose: 0.735891 loss: 408.797499 2022/10/12 16:25:41 - mmengine - INFO - Epoch(train) [48][300/586] lr: 5.000000e-03 eta: 16:55:54 time: 0.681114 data_time: 0.059132 memory: 12959 loss_kpt: 410.259208 acc_pose: 0.773035 loss: 410.259208 2022/10/12 16:26:16 - mmengine - INFO - Epoch(train) [48][350/586] lr: 5.000000e-03 eta: 16:55:31 time: 0.690530 data_time: 0.056378 memory: 12959 loss_kpt: 406.098549 acc_pose: 0.793240 loss: 406.098549 2022/10/12 16:26:50 - mmengine - INFO - Epoch(train) [48][400/586] lr: 5.000000e-03 eta: 16:55:08 time: 0.690965 data_time: 0.057469 memory: 12959 loss_kpt: 410.808177 acc_pose: 0.717278 loss: 410.808177 2022/10/12 16:27:24 - mmengine - INFO - Epoch(train) [48][450/586] lr: 5.000000e-03 eta: 16:54:43 time: 0.682430 data_time: 0.063329 memory: 12959 loss_kpt: 409.971422 acc_pose: 0.817413 loss: 409.971422 2022/10/12 16:27:30 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:27:59 - mmengine - INFO - Epoch(train) [48][500/586] lr: 5.000000e-03 eta: 16:54:19 time: 0.686938 data_time: 0.054145 memory: 12959 loss_kpt: 408.887878 acc_pose: 0.720084 loss: 408.887878 2022/10/12 16:28:33 - mmengine - INFO - Epoch(train) [48][550/586] lr: 5.000000e-03 eta: 16:53:54 time: 0.682631 data_time: 0.060353 memory: 12959 loss_kpt: 409.257802 acc_pose: 0.698416 loss: 409.257802 2022/10/12 16:28:57 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:29:31 - mmengine - INFO - Epoch(train) [49][50/586] lr: 5.000000e-03 eta: 16:51:49 time: 0.687840 data_time: 0.070478 memory: 12959 loss_kpt: 409.229086 acc_pose: 0.801288 loss: 409.229086 2022/10/12 16:30:05 - mmengine - INFO - Epoch(train) [49][100/586] lr: 5.000000e-03 eta: 16:51:23 time: 0.673683 data_time: 0.060209 memory: 12959 loss_kpt: 406.925676 acc_pose: 0.800297 loss: 406.925676 2022/10/12 16:30:39 - mmengine - INFO - Epoch(train) [49][150/586] lr: 5.000000e-03 eta: 16:50:57 time: 0.675144 data_time: 0.058830 memory: 12959 loss_kpt: 400.484367 acc_pose: 0.784223 loss: 400.484367 2022/10/12 16:31:12 - mmengine - INFO - Epoch(train) [49][200/586] lr: 5.000000e-03 eta: 16:50:31 time: 0.676304 data_time: 0.054808 memory: 12959 loss_kpt: 403.442782 acc_pose: 0.774377 loss: 403.442782 2022/10/12 16:31:46 - mmengine - INFO - Epoch(train) [49][250/586] lr: 5.000000e-03 eta: 16:50:03 time: 0.668789 data_time: 0.055123 memory: 12959 loss_kpt: 409.136854 acc_pose: 0.773278 loss: 409.136854 2022/10/12 16:32:19 - mmengine - INFO - Epoch(train) [49][300/586] lr: 5.000000e-03 eta: 16:49:35 time: 0.662488 data_time: 0.060294 memory: 12959 loss_kpt: 407.332590 acc_pose: 0.818697 loss: 407.332590 2022/10/12 16:32:52 - mmengine - INFO - Epoch(train) [49][350/586] lr: 5.000000e-03 eta: 16:49:06 time: 0.659007 data_time: 0.056482 memory: 12959 loss_kpt: 401.143000 acc_pose: 0.762179 loss: 401.143000 2022/10/12 16:33:26 - mmengine - INFO - Epoch(train) [49][400/586] lr: 5.000000e-03 eta: 16:48:40 time: 0.675108 data_time: 0.059631 memory: 12959 loss_kpt: 404.009155 acc_pose: 0.707238 loss: 404.009155 2022/10/12 16:34:00 - mmengine - INFO - Epoch(train) [49][450/586] lr: 5.000000e-03 eta: 16:48:15 time: 0.679140 data_time: 0.054173 memory: 12959 loss_kpt: 399.424135 acc_pose: 0.679396 loss: 399.424135 2022/10/12 16:34:33 - mmengine - INFO - Epoch(train) [49][500/586] lr: 5.000000e-03 eta: 16:47:48 time: 0.674728 data_time: 0.054038 memory: 12959 loss_kpt: 403.813983 acc_pose: 0.757519 
loss: 403.813983 2022/10/12 16:35:07 - mmengine - INFO - Epoch(train) [49][550/586] lr: 5.000000e-03 eta: 16:47:20 time: 0.662402 data_time: 0.054833 memory: 12959 loss_kpt: 403.945895 acc_pose: 0.701114 loss: 403.945895 2022/10/12 16:35:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:36:04 - mmengine - INFO - Epoch(train) [50][50/586] lr: 5.000000e-03 eta: 16:45:14 time: 0.668607 data_time: 0.066123 memory: 12959 loss_kpt: 404.282418 acc_pose: 0.738904 loss: 404.282418 2022/10/12 16:36:38 - mmengine - INFO - Epoch(train) [50][100/586] lr: 5.000000e-03 eta: 16:44:47 time: 0.673846 data_time: 0.057258 memory: 12959 loss_kpt: 406.722194 acc_pose: 0.714855 loss: 406.722194 2022/10/12 16:37:11 - mmengine - INFO - Epoch(train) [50][150/586] lr: 5.000000e-03 eta: 16:44:20 time: 0.666020 data_time: 0.060632 memory: 12959 loss_kpt: 396.321326 acc_pose: 0.766039 loss: 396.321326 2022/10/12 16:37:44 - mmengine - INFO - Epoch(train) [50][200/586] lr: 5.000000e-03 eta: 16:43:51 time: 0.660268 data_time: 0.059377 memory: 12959 loss_kpt: 400.343294 acc_pose: 0.778751 loss: 400.343294 2022/10/12 16:38:17 - mmengine - INFO - Epoch(train) [50][250/586] lr: 5.000000e-03 eta: 16:43:23 time: 0.663414 data_time: 0.059160 memory: 12959 loss_kpt: 408.246719 acc_pose: 0.748769 loss: 408.246719 2022/10/12 16:38:41 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:38:51 - mmengine - INFO - Epoch(train) [50][300/586] lr: 5.000000e-03 eta: 16:42:56 time: 0.671668 data_time: 0.060117 memory: 12959 loss_kpt: 399.410269 acc_pose: 0.791933 loss: 399.410269 2022/10/12 16:39:24 - mmengine - INFO - Epoch(train) [50][350/586] lr: 5.000000e-03 eta: 16:42:28 time: 0.664692 data_time: 0.058683 memory: 12959 loss_kpt: 398.536768 acc_pose: 0.714278 loss: 398.536768 2022/10/12 16:39:57 - mmengine - INFO - Epoch(train) [50][400/586] lr: 5.000000e-03 eta: 16:41:59 time: 0.660498 data_time: 0.053325 memory: 12959 loss_kpt: 404.409314 acc_pose: 0.703218 loss: 404.409314 2022/10/12 16:40:30 - mmengine - INFO - Epoch(train) [50][450/586] lr: 5.000000e-03 eta: 16:41:31 time: 0.664828 data_time: 0.059357 memory: 12959 loss_kpt: 401.703517 acc_pose: 0.769364 loss: 401.703517 2022/10/12 16:41:04 - mmengine - INFO - Epoch(train) [50][500/586] lr: 5.000000e-03 eta: 16:41:04 time: 0.668798 data_time: 0.055142 memory: 12959 loss_kpt: 394.910056 acc_pose: 0.768531 loss: 394.910056 2022/10/12 16:41:37 - mmengine - INFO - Epoch(train) [50][550/586] lr: 5.000000e-03 eta: 16:40:36 time: 0.666736 data_time: 0.053477 memory: 12959 loss_kpt: 405.578397 acc_pose: 0.739093 loss: 405.578397 2022/10/12 16:42:01 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:42:01 - mmengine - INFO - Saving checkpoint at 50 epochs 2022/10/12 16:42:19 - mmengine - INFO - Epoch(val) [50][50/407] eta: 0:01:38 time: 0.274714 data_time: 0.013278 memory: 12959 2022/10/12 16:42:32 - mmengine - INFO - Epoch(val) [50][100/407] eta: 0:01:19 time: 0.260401 data_time: 0.007483 memory: 2407 2022/10/12 16:42:45 - mmengine - INFO - Epoch(val) [50][150/407] eta: 0:01:06 time: 0.259836 data_time: 0.007783 memory: 2407 2022/10/12 16:42:58 - mmengine - INFO - Epoch(val) [50][200/407] eta: 0:00:53 time: 0.260482 data_time: 0.007599 memory: 2407 2022/10/12 16:43:11 - mmengine - INFO - Epoch(val) [50][250/407] eta: 0:00:40 time: 0.259748 data_time: 0.007612 memory: 2407 2022/10/12 16:43:24 - mmengine - INFO - Epoch(val) [50][300/407] eta: 0:00:27 
time: 0.259217 data_time: 0.007613 memory: 2407 2022/10/12 16:43:37 - mmengine - INFO - Epoch(val) [50][350/407] eta: 0:00:14 time: 0.260949 data_time: 0.007733 memory: 2407 2022/10/12 16:43:50 - mmengine - INFO - Epoch(val) [50][400/407] eta: 0:00:01 time: 0.257966 data_time: 0.007478 memory: 2407 2022/10/12 16:44:04 - mmengine - INFO - Evaluating CocoMetric... 2022/10/12 16:44:20 - mmengine - INFO - Epoch(val) [50][407/407] coco/AP: 0.608710 coco/AP .5: 0.815690 coco/AP .75: 0.679295 coco/AP (M): 0.602110 coco/AP (L): 0.676103 coco/AR: 0.719285 coco/AR .5: 0.898929 coco/AR .75: 0.781486 coco/AR (M): 0.677438 coco/AR (L): 0.777072 2022/10/12 16:44:20 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_40.pth is removed 2022/10/12 16:44:22 - mmengine - INFO - The best checkpoint with 0.6087 coco/AP at 50 epoch is saved to best_coco/AP_epoch_50.pth. 2022/10/12 16:44:56 - mmengine - INFO - Epoch(train) [51][50/586] lr: 5.000000e-03 eta: 16:38:33 time: 0.675717 data_time: 0.062447 memory: 12959 loss_kpt: 394.847984 acc_pose: 0.633930 loss: 394.847984 2022/10/12 16:45:29 - mmengine - INFO - Epoch(train) [51][100/586] lr: 5.000000e-03 eta: 16:38:06 time: 0.666993 data_time: 0.053817 memory: 12959 loss_kpt: 408.921301 acc_pose: 0.726005 loss: 408.921301 2022/10/12 16:46:03 - mmengine - INFO - Epoch(train) [51][150/586] lr: 5.000000e-03 eta: 16:37:39 time: 0.673063 data_time: 0.057525 memory: 12959 loss_kpt: 395.725396 acc_pose: 0.772316 loss: 395.725396 2022/10/12 16:46:36 - mmengine - INFO - Epoch(train) [51][200/586] lr: 5.000000e-03 eta: 16:37:12 time: 0.666258 data_time: 0.055428 memory: 12959 loss_kpt: 396.713478 acc_pose: 0.779311 loss: 396.713478 2022/10/12 16:47:10 - mmengine - INFO - Epoch(train) [51][250/586] lr: 5.000000e-03 eta: 16:36:44 time: 0.665197 data_time: 0.063398 memory: 12959 loss_kpt: 396.600149 acc_pose: 0.724073 loss: 396.600149 2022/10/12 16:47:43 - mmengine - INFO - Epoch(train) [51][300/586] lr: 5.000000e-03 eta: 16:36:16 time: 0.664989 data_time: 0.058303 memory: 12959 loss_kpt: 391.974310 acc_pose: 0.739763 loss: 391.974310 2022/10/12 16:48:16 - mmengine - INFO - Epoch(train) [51][350/586] lr: 5.000000e-03 eta: 16:35:49 time: 0.671451 data_time: 0.059387 memory: 12959 loss_kpt: 403.714413 acc_pose: 0.833570 loss: 403.714413 2022/10/12 16:48:50 - mmengine - INFO - Epoch(train) [51][400/586] lr: 5.000000e-03 eta: 16:35:22 time: 0.676470 data_time: 0.058316 memory: 12959 loss_kpt: 394.373089 acc_pose: 0.813537 loss: 394.373089 2022/10/12 16:49:24 - mmengine - INFO - Epoch(train) [51][450/586] lr: 5.000000e-03 eta: 16:34:55 time: 0.667200 data_time: 0.061309 memory: 12959 loss_kpt: 398.219901 acc_pose: 0.802663 loss: 398.219901 2022/10/12 16:49:57 - mmengine - INFO - Epoch(train) [51][500/586] lr: 5.000000e-03 eta: 16:34:27 time: 0.667292 data_time: 0.059708 memory: 12959 loss_kpt: 395.079280 acc_pose: 0.818257 loss: 395.079280 2022/10/12 16:50:31 - mmengine - INFO - Epoch(train) [51][550/586] lr: 5.000000e-03 eta: 16:34:01 time: 0.676745 data_time: 0.058101 memory: 12959 loss_kpt: 400.451471 acc_pose: 0.711411 loss: 400.451471 2022/10/12 16:50:55 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:51:30 - mmengine - INFO - Epoch(train) [52][50/586] lr: 5.000000e-03 eta: 16:32:03 time: 0.698889 data_time: 0.075317 memory: 12959 loss_kpt: 398.552297 acc_pose: 0.760334 loss: 398.552297 2022/10/12 16:52:04 - mmengine - INFO - 
Epoch(train) [52][100/586] lr: 5.000000e-03 eta: 16:31:38 time: 0.682325 data_time: 0.064986 memory: 12959 loss_kpt: 402.589988 acc_pose: 0.733557 loss: 402.589988 2022/10/12 16:52:14 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:52:39 - mmengine - INFO - Epoch(train) [52][150/586] lr: 5.000000e-03 eta: 16:31:15 time: 0.698104 data_time: 0.062476 memory: 12959 loss_kpt: 393.836736 acc_pose: 0.758254 loss: 393.836736 2022/10/12 16:53:13 - mmengine - INFO - Epoch(train) [52][200/586] lr: 5.000000e-03 eta: 16:30:51 time: 0.687406 data_time: 0.066142 memory: 12959 loss_kpt: 395.754547 acc_pose: 0.822983 loss: 395.754547 2022/10/12 16:53:48 - mmengine - INFO - Epoch(train) [52][250/586] lr: 5.000000e-03 eta: 16:30:27 time: 0.691316 data_time: 0.065684 memory: 12959 loss_kpt: 400.213240 acc_pose: 0.789769 loss: 400.213240 2022/10/12 16:54:22 - mmengine - INFO - Epoch(train) [52][300/586] lr: 5.000000e-03 eta: 16:30:01 time: 0.682121 data_time: 0.067159 memory: 12959 loss_kpt: 391.307608 acc_pose: 0.758968 loss: 391.307608 2022/10/12 16:54:56 - mmengine - INFO - Epoch(train) [52][350/586] lr: 5.000000e-03 eta: 16:29:38 time: 0.694056 data_time: 0.063081 memory: 12959 loss_kpt: 400.288584 acc_pose: 0.742425 loss: 400.288584 2022/10/12 16:55:31 - mmengine - INFO - Epoch(train) [52][400/586] lr: 5.000000e-03 eta: 16:29:14 time: 0.692753 data_time: 0.060320 memory: 12959 loss_kpt: 393.077090 acc_pose: 0.665355 loss: 393.077090 2022/10/12 16:56:05 - mmengine - INFO - Epoch(train) [52][450/586] lr: 5.000000e-03 eta: 16:28:49 time: 0.683108 data_time: 0.060441 memory: 12959 loss_kpt: 398.965197 acc_pose: 0.758952 loss: 398.965197 2022/10/12 16:56:40 - mmengine - INFO - Epoch(train) [52][500/586] lr: 5.000000e-03 eta: 16:28:24 time: 0.690471 data_time: 0.063463 memory: 12959 loss_kpt: 402.371058 acc_pose: 0.758056 loss: 402.371058 2022/10/12 16:57:14 - mmengine - INFO - Epoch(train) [52][550/586] lr: 5.000000e-03 eta: 16:28:00 time: 0.689983 data_time: 0.059434 memory: 12959 loss_kpt: 391.102797 acc_pose: 0.797134 loss: 391.102797 2022/10/12 16:57:38 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 16:58:12 - mmengine - INFO - Epoch(train) [53][50/586] lr: 5.000000e-03 eta: 16:26:01 time: 0.679350 data_time: 0.066567 memory: 12959 loss_kpt: 398.698848 acc_pose: 0.778570 loss: 398.698848 2022/10/12 16:58:45 - mmengine - INFO - Epoch(train) [53][100/586] lr: 5.000000e-03 eta: 16:25:33 time: 0.665898 data_time: 0.052086 memory: 12959 loss_kpt: 399.853026 acc_pose: 0.751472 loss: 399.853026 2022/10/12 16:59:18 - mmengine - INFO - Epoch(train) [53][150/586] lr: 5.000000e-03 eta: 16:25:03 time: 0.653762 data_time: 0.056392 memory: 12959 loss_kpt: 395.465514 acc_pose: 0.711265 loss: 395.465514 2022/10/12 16:59:51 - mmengine - INFO - Epoch(train) [53][200/586] lr: 5.000000e-03 eta: 16:24:35 time: 0.664794 data_time: 0.056297 memory: 12959 loss_kpt: 391.152436 acc_pose: 0.794837 loss: 391.152436 2022/10/12 17:00:25 - mmengine - INFO - Epoch(train) [53][250/586] lr: 5.000000e-03 eta: 16:24:07 time: 0.666387 data_time: 0.059861 memory: 12959 loss_kpt: 396.790610 acc_pose: 0.823600 loss: 396.790610 2022/10/12 17:00:59 - mmengine - INFO - Epoch(train) [53][300/586] lr: 5.000000e-03 eta: 16:23:41 time: 0.675907 data_time: 0.052804 memory: 12959 loss_kpt: 397.172406 acc_pose: 0.808969 loss: 397.172406 2022/10/12 17:01:32 - mmengine - INFO - Epoch(train) [53][350/586] lr: 5.000000e-03 eta: 16:23:14 time: 0.678534 data_time: 
0.063047 memory: 12959 loss_kpt: 395.809503 acc_pose: 0.740925 loss: 395.809503 2022/10/12 17:02:07 - mmengine - INFO - Epoch(train) [53][400/586] lr: 5.000000e-03 eta: 16:22:49 time: 0.680492 data_time: 0.055516 memory: 12959 loss_kpt: 389.866491 acc_pose: 0.854343 loss: 389.866491 2022/10/12 17:02:40 - mmengine - INFO - Epoch(train) [53][450/586] lr: 5.000000e-03 eta: 16:22:22 time: 0.673602 data_time: 0.060105 memory: 12959 loss_kpt: 391.667894 acc_pose: 0.791140 loss: 391.667894 2022/10/12 17:03:14 - mmengine - INFO - Epoch(train) [53][500/586] lr: 5.000000e-03 eta: 16:21:54 time: 0.668788 data_time: 0.056740 memory: 12959 loss_kpt: 384.001694 acc_pose: 0.770291 loss: 384.001694 2022/10/12 17:03:33 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:03:48 - mmengine - INFO - Epoch(train) [53][550/586] lr: 5.000000e-03 eta: 16:21:29 time: 0.684203 data_time: 0.062611 memory: 12959 loss_kpt: 390.628820 acc_pose: 0.767706 loss: 390.628820 2022/10/12 17:04:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:04:45 - mmengine - INFO - Epoch(train) [54][50/586] lr: 5.000000e-03 eta: 16:19:30 time: 0.668585 data_time: 0.074668 memory: 12959 loss_kpt: 392.394604 acc_pose: 0.784041 loss: 392.394604 2022/10/12 17:05:18 - mmengine - INFO - Epoch(train) [54][100/586] lr: 5.000000e-03 eta: 16:18:59 time: 0.649186 data_time: 0.055464 memory: 12959 loss_kpt: 392.245458 acc_pose: 0.813487 loss: 392.245458 2022/10/12 17:05:51 - mmengine - INFO - Epoch(train) [54][150/586] lr: 5.000000e-03 eta: 16:18:31 time: 0.667591 data_time: 0.062758 memory: 12959 loss_kpt: 397.157755 acc_pose: 0.818991 loss: 397.157755 2022/10/12 17:06:25 - mmengine - INFO - Epoch(train) [54][200/586] lr: 5.000000e-03 eta: 16:18:04 time: 0.668022 data_time: 0.058318 memory: 12959 loss_kpt: 391.591561 acc_pose: 0.778598 loss: 391.591561 2022/10/12 17:06:58 - mmengine - INFO - Epoch(train) [54][250/586] lr: 5.000000e-03 eta: 16:17:35 time: 0.660226 data_time: 0.056855 memory: 12959 loss_kpt: 392.139146 acc_pose: 0.853190 loss: 392.139146 2022/10/12 17:07:31 - mmengine - INFO - Epoch(train) [54][300/586] lr: 5.000000e-03 eta: 16:17:08 time: 0.675739 data_time: 0.054274 memory: 12959 loss_kpt: 392.131678 acc_pose: 0.781522 loss: 392.131678 2022/10/12 17:08:05 - mmengine - INFO - Epoch(train) [54][350/586] lr: 5.000000e-03 eta: 16:16:42 time: 0.680699 data_time: 0.060630 memory: 12959 loss_kpt: 386.843641 acc_pose: 0.718564 loss: 386.843641 2022/10/12 17:08:39 - mmengine - INFO - Epoch(train) [54][400/586] lr: 5.000000e-03 eta: 16:16:15 time: 0.669088 data_time: 0.059155 memory: 12959 loss_kpt: 393.249504 acc_pose: 0.775829 loss: 393.249504 2022/10/12 17:09:13 - mmengine - INFO - Epoch(train) [54][450/586] lr: 5.000000e-03 eta: 16:15:49 time: 0.681655 data_time: 0.062483 memory: 12959 loss_kpt: 384.779200 acc_pose: 0.756132 loss: 384.779200 2022/10/12 17:09:47 - mmengine - INFO - Epoch(train) [54][500/586] lr: 5.000000e-03 eta: 16:15:22 time: 0.672464 data_time: 0.058661 memory: 12959 loss_kpt: 395.841636 acc_pose: 0.730068 loss: 395.841636 2022/10/12 17:10:20 - mmengine - INFO - Epoch(train) [54][550/586] lr: 5.000000e-03 eta: 16:14:54 time: 0.672036 data_time: 0.059006 memory: 12959 loss_kpt: 386.886852 acc_pose: 0.818179 loss: 386.886852 2022/10/12 17:10:44 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:11:20 - mmengine - INFO - Epoch(train) [55][50/586] lr: 5.000000e-03 eta: 16:13:03 time: 
0.708795 data_time: 0.070875 memory: 12959 loss_kpt: 383.572028 acc_pose: 0.734944 loss: 383.572028 2022/10/12 17:11:54 - mmengine - INFO - Epoch(train) [55][100/586] lr: 5.000000e-03 eta: 16:12:37 time: 0.680606 data_time: 0.059843 memory: 12959 loss_kpt: 391.597045 acc_pose: 0.807551 loss: 391.597045 2022/10/12 17:12:28 - mmengine - INFO - Epoch(train) [55][150/586] lr: 5.000000e-03 eta: 16:12:11 time: 0.682080 data_time: 0.056527 memory: 12959 loss_kpt: 387.625340 acc_pose: 0.772886 loss: 387.625340 2022/10/12 17:13:02 - mmengine - INFO - Epoch(train) [55][200/586] lr: 5.000000e-03 eta: 16:11:45 time: 0.679428 data_time: 0.060363 memory: 12959 loss_kpt: 397.154431 acc_pose: 0.738280 loss: 397.154431 2022/10/12 17:13:36 - mmengine - INFO - Epoch(train) [55][250/586] lr: 5.000000e-03 eta: 16:11:20 time: 0.687204 data_time: 0.056917 memory: 12959 loss_kpt: 392.776246 acc_pose: 0.770147 loss: 392.776246 2022/10/12 17:14:10 - mmengine - INFO - Epoch(train) [55][300/586] lr: 5.000000e-03 eta: 16:10:54 time: 0.683887 data_time: 0.061142 memory: 12959 loss_kpt: 386.875691 acc_pose: 0.778330 loss: 386.875691 2022/10/12 17:14:45 - mmengine - INFO - Epoch(train) [55][350/586] lr: 5.000000e-03 eta: 16:10:29 time: 0.686684 data_time: 0.055766 memory: 12959 loss_kpt: 383.724537 acc_pose: 0.779027 loss: 383.724537 2022/10/12 17:14:49 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:15:19 - mmengine - INFO - Epoch(train) [55][400/586] lr: 5.000000e-03 eta: 16:10:04 time: 0.685090 data_time: 0.059598 memory: 12959 loss_kpt: 390.728459 acc_pose: 0.756581 loss: 390.728459 2022/10/12 17:15:53 - mmengine - INFO - Epoch(train) [55][450/586] lr: 5.000000e-03 eta: 16:09:39 time: 0.691997 data_time: 0.059067 memory: 12959 loss_kpt: 390.066027 acc_pose: 0.729065 loss: 390.066027 2022/10/12 17:16:28 - mmengine - INFO - Epoch(train) [55][500/586] lr: 5.000000e-03 eta: 16:09:14 time: 0.689091 data_time: 0.059126 memory: 12959 loss_kpt: 391.081996 acc_pose: 0.770547 loss: 391.081996 2022/10/12 17:17:02 - mmengine - INFO - Epoch(train) [55][550/586] lr: 5.000000e-03 eta: 16:08:49 time: 0.690458 data_time: 0.058542 memory: 12959 loss_kpt: 396.851616 acc_pose: 0.830701 loss: 396.851616 2022/10/12 17:17:27 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:18:01 - mmengine - INFO - Epoch(train) [56][50/586] lr: 5.000000e-03 eta: 16:06:56 time: 0.685423 data_time: 0.070396 memory: 12959 loss_kpt: 387.574553 acc_pose: 0.752461 loss: 387.574553 2022/10/12 17:18:34 - mmengine - INFO - Epoch(train) [56][100/586] lr: 5.000000e-03 eta: 16:06:28 time: 0.667939 data_time: 0.055798 memory: 12959 loss_kpt: 386.497496 acc_pose: 0.852954 loss: 386.497496 2022/10/12 17:19:09 - mmengine - INFO - Epoch(train) [56][150/586] lr: 5.000000e-03 eta: 16:06:03 time: 0.691419 data_time: 0.054245 memory: 12959 loss_kpt: 389.236098 acc_pose: 0.804951 loss: 389.236098 2022/10/12 17:19:42 - mmengine - INFO - Epoch(train) [56][200/586] lr: 5.000000e-03 eta: 16:05:34 time: 0.661023 data_time: 0.052445 memory: 12959 loss_kpt: 385.208154 acc_pose: 0.815009 loss: 385.208154 2022/10/12 17:20:16 - mmengine - INFO - Epoch(train) [56][250/586] lr: 5.000000e-03 eta: 16:05:08 time: 0.676041 data_time: 0.057054 memory: 12959 loss_kpt: 386.076884 acc_pose: 0.864093 loss: 386.076884 2022/10/12 17:20:50 - mmengine - INFO - Epoch(train) [56][300/586] lr: 5.000000e-03 eta: 16:04:41 time: 0.679149 data_time: 0.058108 memory: 12959 loss_kpt: 392.251265 acc_pose: 0.826628 
loss: 392.251265 2022/10/12 17:21:24 - mmengine - INFO - Epoch(train) [56][350/586] lr: 5.000000e-03 eta: 16:04:15 time: 0.684182 data_time: 0.059358 memory: 12959 loss_kpt: 396.241312 acc_pose: 0.822022 loss: 396.241312 2022/10/12 17:21:57 - mmengine - INFO - Epoch(train) [56][400/586] lr: 5.000000e-03 eta: 16:03:47 time: 0.666069 data_time: 0.054720 memory: 12959 loss_kpt: 388.545718 acc_pose: 0.759973 loss: 388.545718 2022/10/12 17:22:32 - mmengine - INFO - Epoch(train) [56][450/586] lr: 5.000000e-03 eta: 16:03:22 time: 0.687327 data_time: 0.060249 memory: 12959 loss_kpt: 388.782935 acc_pose: 0.799051 loss: 388.782935 2022/10/12 17:23:05 - mmengine - INFO - Epoch(train) [56][500/586] lr: 5.000000e-03 eta: 16:02:53 time: 0.664467 data_time: 0.054905 memory: 12959 loss_kpt: 385.830283 acc_pose: 0.783794 loss: 385.830283 2022/10/12 17:23:38 - mmengine - INFO - Epoch(train) [56][550/586] lr: 5.000000e-03 eta: 16:02:26 time: 0.672043 data_time: 0.057679 memory: 12959 loss_kpt: 386.224183 acc_pose: 0.743144 loss: 386.224183 2022/10/12 17:24:02 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:24:38 - mmengine - INFO - Epoch(train) [57][50/586] lr: 5.000000e-03 eta: 16:00:38 time: 0.718383 data_time: 0.073000 memory: 12959 loss_kpt: 390.931409 acc_pose: 0.711880 loss: 390.931409 2022/10/12 17:25:14 - mmengine - INFO - Epoch(train) [57][100/586] lr: 5.000000e-03 eta: 16:00:15 time: 0.702674 data_time: 0.055034 memory: 12959 loss_kpt: 380.465743 acc_pose: 0.718972 loss: 380.465743 2022/10/12 17:25:47 - mmengine - INFO - Epoch(train) [57][150/586] lr: 5.000000e-03 eta: 15:59:48 time: 0.677490 data_time: 0.058285 memory: 12959 loss_kpt: 391.679967 acc_pose: 0.783892 loss: 391.679967 2022/10/12 17:26:11 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:26:22 - mmengine - INFO - Epoch(train) [57][200/586] lr: 5.000000e-03 eta: 15:59:24 time: 0.692105 data_time: 0.060886 memory: 12959 loss_kpt: 383.916414 acc_pose: 0.746950 loss: 383.916414 2022/10/12 17:26:56 - mmengine - INFO - Epoch(train) [57][250/586] lr: 5.000000e-03 eta: 15:58:58 time: 0.682890 data_time: 0.055942 memory: 12959 loss_kpt: 393.109219 acc_pose: 0.652482 loss: 393.109219 2022/10/12 17:27:30 - mmengine - INFO - Epoch(train) [57][300/586] lr: 5.000000e-03 eta: 15:58:32 time: 0.684236 data_time: 0.059264 memory: 12959 loss_kpt: 413.435916 acc_pose: 0.802582 loss: 413.435916 2022/10/12 17:28:05 - mmengine - INFO - Epoch(train) [57][350/586] lr: 5.000000e-03 eta: 15:58:06 time: 0.688311 data_time: 0.054482 memory: 12959 loss_kpt: 399.151553 acc_pose: 0.740737 loss: 399.151553 2022/10/12 17:28:39 - mmengine - INFO - Epoch(train) [57][400/586] lr: 5.000000e-03 eta: 15:57:42 time: 0.691945 data_time: 0.056348 memory: 12959 loss_kpt: 389.113871 acc_pose: 0.799291 loss: 389.113871 2022/10/12 17:29:14 - mmengine - INFO - Epoch(train) [57][450/586] lr: 5.000000e-03 eta: 15:57:17 time: 0.696241 data_time: 0.059217 memory: 12959 loss_kpt: 397.500326 acc_pose: 0.793136 loss: 397.500326 2022/10/12 17:29:49 - mmengine - INFO - Epoch(train) [57][500/586] lr: 5.000000e-03 eta: 15:56:52 time: 0.689574 data_time: 0.051560 memory: 12959 loss_kpt: 388.826499 acc_pose: 0.700288 loss: 388.826499 2022/10/12 17:30:24 - mmengine - INFO - Epoch(train) [57][550/586] lr: 5.000000e-03 eta: 15:56:28 time: 0.699267 data_time: 0.058556 memory: 12959 loss_kpt: 389.557642 acc_pose: 0.758644 loss: 389.557642 2022/10/12 17:30:48 - mmengine - INFO - Exp name: 
td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:31:23 - mmengine - INFO - Epoch(train) [58][50/586] lr: 5.000000e-03 eta: 15:54:37 time: 0.680605 data_time: 0.069747 memory: 12959 loss_kpt: 388.560258 acc_pose: 0.713400 loss: 388.560258 2022/10/12 17:31:56 - mmengine - INFO - Epoch(train) [58][100/586] lr: 5.000000e-03 eta: 15:54:09 time: 0.669984 data_time: 0.053028 memory: 12959 loss_kpt: 385.658880 acc_pose: 0.752552 loss: 385.658880 2022/10/12 17:32:29 - mmengine - INFO - Epoch(train) [58][150/586] lr: 5.000000e-03 eta: 15:53:39 time: 0.658189 data_time: 0.053911 memory: 12959 loss_kpt: 386.667945 acc_pose: 0.793943 loss: 386.667945 2022/10/12 17:33:02 - mmengine - INFO - Epoch(train) [58][200/586] lr: 5.000000e-03 eta: 15:53:10 time: 0.659213 data_time: 0.054222 memory: 12959 loss_kpt: 385.614037 acc_pose: 0.764011 loss: 385.614037 2022/10/12 17:33:35 - mmengine - INFO - Epoch(train) [58][250/586] lr: 5.000000e-03 eta: 15:52:42 time: 0.671261 data_time: 0.060584 memory: 12959 loss_kpt: 387.816154 acc_pose: 0.830903 loss: 387.816154 2022/10/12 17:34:09 - mmengine - INFO - Epoch(train) [58][300/586] lr: 5.000000e-03 eta: 15:52:14 time: 0.667809 data_time: 0.059489 memory: 12959 loss_kpt: 385.225804 acc_pose: 0.726906 loss: 385.225804 2022/10/12 17:34:42 - mmengine - INFO - Epoch(train) [58][350/586] lr: 5.000000e-03 eta: 15:51:46 time: 0.669549 data_time: 0.054072 memory: 12959 loss_kpt: 383.129811 acc_pose: 0.678730 loss: 383.129811 2022/10/12 17:35:16 - mmengine - INFO - Epoch(train) [58][400/586] lr: 5.000000e-03 eta: 15:51:18 time: 0.671296 data_time: 0.057695 memory: 12959 loss_kpt: 382.753859 acc_pose: 0.739187 loss: 382.753859 2022/10/12 17:35:50 - mmengine - INFO - Epoch(train) [58][450/586] lr: 5.000000e-03 eta: 15:50:51 time: 0.672118 data_time: 0.055142 memory: 12959 loss_kpt: 377.555308 acc_pose: 0.822700 loss: 377.555308 2022/10/12 17:36:23 - mmengine - INFO - Epoch(train) [58][500/586] lr: 5.000000e-03 eta: 15:50:23 time: 0.673063 data_time: 0.061260 memory: 12959 loss_kpt: 387.123460 acc_pose: 0.767936 loss: 387.123460 2022/10/12 17:36:57 - mmengine - INFO - Epoch(train) [58][550/586] lr: 5.000000e-03 eta: 15:49:55 time: 0.666548 data_time: 0.058590 memory: 12959 loss_kpt: 378.616777 acc_pose: 0.777703 loss: 378.616777 2022/10/12 17:37:20 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:37:29 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:37:55 - mmengine - INFO - Epoch(train) [59][50/586] lr: 5.000000e-03 eta: 15:48:06 time: 0.690140 data_time: 0.065371 memory: 12959 loss_kpt: 386.569353 acc_pose: 0.854544 loss: 386.569353 2022/10/12 17:38:29 - mmengine - INFO - Epoch(train) [59][100/586] lr: 5.000000e-03 eta: 15:47:40 time: 0.686257 data_time: 0.056069 memory: 12959 loss_kpt: 386.581885 acc_pose: 0.679471 loss: 386.581885 2022/10/12 17:39:04 - mmengine - INFO - Epoch(train) [59][150/586] lr: 5.000000e-03 eta: 15:47:15 time: 0.688278 data_time: 0.058241 memory: 12959 loss_kpt: 387.366718 acc_pose: 0.786472 loss: 387.366718 2022/10/12 17:39:39 - mmengine - INFO - Epoch(train) [59][200/586] lr: 5.000000e-03 eta: 15:46:50 time: 0.695587 data_time: 0.050831 memory: 12959 loss_kpt: 380.730861 acc_pose: 0.772678 loss: 380.730861 2022/10/12 17:40:13 - mmengine - INFO - Epoch(train) [59][250/586] lr: 5.000000e-03 eta: 15:46:24 time: 0.686181 data_time: 0.061159 memory: 12959 loss_kpt: 381.528701 acc_pose: 0.840672 loss: 381.528701 2022/10/12 17:40:47 - mmengine 
- INFO - Epoch(train) [59][300/586] lr: 5.000000e-03 eta: 15:45:58 time: 0.684236 data_time: 0.051637 memory: 12959 loss_kpt: 383.280720 acc_pose: 0.756692 loss: 383.280720 2022/10/12 17:41:22 - mmengine - INFO - Epoch(train) [59][350/586] lr: 5.000000e-03 eta: 15:45:34 time: 0.699711 data_time: 0.058385 memory: 12959 loss_kpt: 380.413424 acc_pose: 0.709822 loss: 380.413424 2022/10/12 17:41:56 - mmengine - INFO - Epoch(train) [59][400/586] lr: 5.000000e-03 eta: 15:45:07 time: 0.681556 data_time: 0.055700 memory: 12959 loss_kpt: 383.882862 acc_pose: 0.794592 loss: 383.882862 2022/10/12 17:42:30 - mmengine - INFO - Epoch(train) [59][450/586] lr: 5.000000e-03 eta: 15:44:40 time: 0.676830 data_time: 0.055860 memory: 12959 loss_kpt: 379.303741 acc_pose: 0.777162 loss: 379.303741 2022/10/12 17:43:04 - mmengine - INFO - Epoch(train) [59][500/586] lr: 5.000000e-03 eta: 15:44:14 time: 0.686021 data_time: 0.062215 memory: 12959 loss_kpt: 382.655638 acc_pose: 0.700631 loss: 382.655638 2022/10/12 17:43:39 - mmengine - INFO - Epoch(train) [59][550/586] lr: 5.000000e-03 eta: 15:43:49 time: 0.689473 data_time: 0.059947 memory: 12959 loss_kpt: 380.590988 acc_pose: 0.748282 loss: 380.590988 2022/10/12 17:44:03 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:44:38 - mmengine - INFO - Epoch(train) [60][50/586] lr: 5.000000e-03 eta: 15:42:04 time: 0.711398 data_time: 0.069414 memory: 12959 loss_kpt: 382.855569 acc_pose: 0.733478 loss: 382.855569 2022/10/12 17:45:13 - mmengine - INFO - Epoch(train) [60][100/586] lr: 5.000000e-03 eta: 15:41:40 time: 0.699693 data_time: 0.057552 memory: 12959 loss_kpt: 382.993041 acc_pose: 0.776082 loss: 382.993041 2022/10/12 17:45:48 - mmengine - INFO - Epoch(train) [60][150/586] lr: 5.000000e-03 eta: 15:41:14 time: 0.689167 data_time: 0.061545 memory: 12959 loss_kpt: 388.498896 acc_pose: 0.771102 loss: 388.498896 2022/10/12 17:46:22 - mmengine - INFO - Epoch(train) [60][200/586] lr: 5.000000e-03 eta: 15:40:49 time: 0.690114 data_time: 0.051785 memory: 12959 loss_kpt: 382.662488 acc_pose: 0.796461 loss: 382.662488 2022/10/12 17:46:56 - mmengine - INFO - Epoch(train) [60][250/586] lr: 5.000000e-03 eta: 15:40:22 time: 0.684847 data_time: 0.052588 memory: 12959 loss_kpt: 378.147358 acc_pose: 0.794352 loss: 378.147358 2022/10/12 17:47:30 - mmengine - INFO - Epoch(train) [60][300/586] lr: 5.000000e-03 eta: 15:39:56 time: 0.680373 data_time: 0.051875 memory: 12959 loss_kpt: 378.423998 acc_pose: 0.819116 loss: 378.423998 2022/10/12 17:48:05 - mmengine - INFO - Epoch(train) [60][350/586] lr: 5.000000e-03 eta: 15:39:29 time: 0.685291 data_time: 0.058826 memory: 12959 loss_kpt: 382.830932 acc_pose: 0.741854 loss: 382.830932 2022/10/12 17:48:39 - mmengine - INFO - Epoch(train) [60][400/586] lr: 5.000000e-03 eta: 15:39:03 time: 0.683623 data_time: 0.055500 memory: 12959 loss_kpt: 382.591516 acc_pose: 0.776073 loss: 382.591516 2022/10/12 17:48:57 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 17:49:13 - mmengine - INFO - Epoch(train) [60][450/586] lr: 5.000000e-03 eta: 15:38:36 time: 0.680348 data_time: 0.051800 memory: 12959 loss_kpt: 373.880756 acc_pose: 0.838408 loss: 373.880756 2022/10/12 17:49:47 - mmengine - INFO - Epoch(train) [60][500/586] lr: 5.000000e-03 eta: 15:38:09 time: 0.678208 data_time: 0.052122 memory: 12959 loss_kpt: 385.728840 acc_pose: 0.803383 loss: 385.728840 2022/10/12 17:50:21 - mmengine - INFO - Epoch(train) [60][550/586] lr: 5.000000e-03 eta: 15:37:42 time: 0.681419 
data_time: 0.051688 memory: 12959 loss_kpt: 381.205409 acc_pose: 0.836616 loss: 381.205409
2022/10/12 17:50:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/12 17:50:45 - mmengine - INFO - Saving checkpoint at 60 epochs
2022/10/12 17:51:03 - mmengine - INFO - Epoch(val) [60][50/407] eta: 0:01:36 time: 0.270407 data_time: 0.012285 memory: 12959
2022/10/12 17:51:16 - mmengine - INFO - Epoch(val) [60][100/407] eta: 0:01:20 time: 0.261019 data_time: 0.007762 memory: 2407
2022/10/12 17:51:29 - mmengine - INFO - Epoch(val) [60][150/407] eta: 0:01:06 time: 0.259952 data_time: 0.007670 memory: 2407
2022/10/12 17:51:42 - mmengine - INFO - Epoch(val) [60][200/407] eta: 0:00:53 time: 0.260439 data_time: 0.007627 memory: 2407
2022/10/12 17:51:55 - mmengine - INFO - Epoch(val) [60][250/407] eta: 0:00:41 time: 0.261321 data_time: 0.011240 memory: 2407
2022/10/12 17:52:08 - mmengine - INFO - Epoch(val) [60][300/407] eta: 0:00:27 time: 0.260741 data_time: 0.007756 memory: 2407
2022/10/12 17:52:21 - mmengine - INFO - Epoch(val) [60][350/407] eta: 0:00:14 time: 0.260816 data_time: 0.007823 memory: 2407
2022/10/12 17:52:34 - mmengine - INFO - Epoch(val) [60][400/407] eta: 0:00:01 time: 0.257202 data_time: 0.007370 memory: 2407
2022/10/12 17:52:48 - mmengine - INFO - Evaluating CocoMetric...
2022/10/12 17:53:04 - mmengine - INFO - Epoch(val) [60][407/407] coco/AP: 0.675215 coco/AP .5: 0.874581 coco/AP .75: 0.746765 coco/AP (M): 0.649867 coco/AP (L): 0.729118 coco/AR: 0.747985 coco/AR .5: 0.918293 coco/AR .75: 0.808249 coco/AR (M): 0.707348 coco/AR (L): 0.804162
2022/10/12 17:53:04 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_50.pth is removed
2022/10/12 17:53:06 - mmengine - INFO - The best checkpoint with 0.6752 coco/AP at 60 epoch is saved to best_coco/AP_epoch_60.pth.
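
The two validation summaries recorded so far report coco/AP 0.6087 at epoch 50 and 0.6752 at epoch 60. The short Python sketch below (not output of the training run itself) shows one way to pull these per-validation coco/AP values out of an mmengine log written in this format; it relies only on the "Epoch(val) [E][407/407] coco/AP: ..." summary-line layout seen above, and the log path in the example call is a placeholder, not the actual file of this run.

import re

# Matches only the end-of-validation summary records, e.g.
#   Epoch(val) [60][407/407] coco/AP: 0.675215 coco/AP .5: 0.874581 ...
# The per-iteration records ("Epoch(val) [60][50/407] eta: ...") carry no
# coco/AP field and are therefore skipped by the pattern.
VAL_SUMMARY = re.compile(r"Epoch\(val\)\s*\[(\d+)\]\[\d+/\d+\]\s*coco/AP:\s*([0-9.]+)")

def coco_ap_by_epoch(log_path):
    """Return a dict mapping validation epoch -> coco/AP for every summary in the log."""
    with open(log_path, encoding="utf-8") as f:
        text = f.read()
    return {int(epoch): float(ap) for epoch, ap in VAL_SUMMARY.findall(text)}

if __name__ == "__main__":
    # Placeholder path; point it at the actual .log file of the run.
    print(coco_ap_by_epoch("work_dirs/td-hm_3xrsn50_8xb32-210e_coco-256x192/train.log"))
    # For the portion of the run shown above this yields {50: 0.60871, 60: 0.675215}.
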
2022/10/12 17:53:40 - mmengine - INFO - Epoch(train) [61][50/586] lr: 5.000000e-03 eta: 15:35:55 time: 0.681281 data_time: 0.066120 memory: 12959 loss_kpt: 380.249757 acc_pose: 0.756501 loss: 380.249757 2022/10/12 17:54:14 - mmengine - INFO - Epoch(train) [61][100/586] lr: 5.000000e-03 eta: 15:35:28 time: 0.684301 data_time: 0.053163 memory: 12959 loss_kpt: 376.392896 acc_pose: 0.747611 loss: 376.392896 2022/10/12 17:54:49 - mmengine - INFO - Epoch(train) [61][150/586] lr: 5.000000e-03 eta: 15:35:03 time: 0.694788 data_time: 0.060729 memory: 12959 loss_kpt: 379.098731 acc_pose: 0.742462 loss: 379.098731 2022/10/12 17:55:24 - mmengine - INFO - Epoch(train) [61][200/586] lr: 5.000000e-03 eta: 15:34:39 time: 0.704091 data_time: 0.061012 memory: 12959 loss_kpt: 378.187842 acc_pose: 0.798015 loss: 378.187842 2022/10/12 17:55:59 - mmengine - INFO - Epoch(train) [61][250/586] lr: 5.000000e-03 eta: 15:34:14 time: 0.695134 data_time: 0.055874 memory: 12959 loss_kpt: 383.722998 acc_pose: 0.804273 loss: 383.722998 2022/10/12 17:56:34 - mmengine - INFO - Epoch(train) [61][300/586] lr: 5.000000e-03 eta: 15:33:50 time: 0.700715 data_time: 0.058428 memory: 12959 loss_kpt: 376.962156 acc_pose: 0.849373 loss: 376.962156 2022/10/12 17:57:10 - mmengine - INFO - Epoch(train) [61][350/586] lr: 5.000000e-03 eta: 15:33:27 time: 0.711011 data_time: 0.059977 memory: 12959 loss_kpt: 379.102601 acc_pose: 0.786055 loss: 379.102601 2022/10/12 17:57:45 - mmengine - INFO - Epoch(train) [61][400/586] lr: 5.000000e-03 eta: 15:33:03 time: 0.704196 data_time: 0.052251 memory: 12959 loss_kpt: 375.030012 acc_pose: 0.747434 loss: 375.030012 2022/10/12 17:58:20 - mmengine - INFO - Epoch(train) [61][450/586] lr: 5.000000e-03 eta: 15:32:39 time: 0.706362 data_time: 0.059003 memory: 12959 loss_kpt: 378.317408 acc_pose: 0.723865 loss: 378.317408 2022/10/12 17:58:55 - mmengine - INFO - Epoch(train) [61][500/586] lr: 5.000000e-03 eta: 15:32:14 time: 0.698362 data_time: 0.056527 memory: 12959 loss_kpt: 376.790579 acc_pose: 0.789079 loss: 376.790579 2022/10/12 17:59:31 - mmengine - INFO - Epoch(train) [61][550/586] lr: 5.000000e-03 eta: 15:31:52 time: 0.717152 data_time: 0.059553 memory: 12959 loss_kpt: 376.509933 acc_pose: 0.690793 loss: 376.509933 2022/10/12 17:59:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:00:30 - mmengine - INFO - Epoch(train) [62][50/586] lr: 5.000000e-03 eta: 15:30:05 time: 0.676139 data_time: 0.075198 memory: 12959 loss_kpt: 378.086052 acc_pose: 0.741036 loss: 378.086052 2022/10/12 18:01:04 - mmengine - INFO - Epoch(train) [62][100/586] lr: 5.000000e-03 eta: 15:29:36 time: 0.670014 data_time: 0.054312 memory: 12959 loss_kpt: 381.406196 acc_pose: 0.845629 loss: 381.406196 2022/10/12 18:01:37 - mmengine - INFO - Epoch(train) [62][150/586] lr: 5.000000e-03 eta: 15:29:07 time: 0.662995 data_time: 0.053836 memory: 12959 loss_kpt: 379.138574 acc_pose: 0.756365 loss: 379.138574 2022/10/12 18:02:10 - mmengine - INFO - Epoch(train) [62][200/586] lr: 5.000000e-03 eta: 15:28:39 time: 0.669636 data_time: 0.056200 memory: 12959 loss_kpt: 376.918912 acc_pose: 0.751606 loss: 376.918912 2022/10/12 18:02:44 - mmengine - INFO - Epoch(train) [62][250/586] lr: 5.000000e-03 eta: 15:28:11 time: 0.673075 data_time: 0.052961 memory: 12959 loss_kpt: 373.442556 acc_pose: 0.800538 loss: 373.442556 2022/10/12 18:02:47 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:03:18 - mmengine - INFO - Epoch(train) [62][300/586] lr: 5.000000e-03 
eta: 15:27:43 time: 0.672696 data_time: 0.051870 memory: 12959 loss_kpt: 380.159930 acc_pose: 0.806464 loss: 380.159930 2022/10/12 18:03:52 - mmengine - INFO - Epoch(train) [62][350/586] lr: 5.000000e-03 eta: 15:27:16 time: 0.684315 data_time: 0.057371 memory: 12959 loss_kpt: 374.205336 acc_pose: 0.708295 loss: 374.205336 2022/10/12 18:04:25 - mmengine - INFO - Epoch(train) [62][400/586] lr: 5.000000e-03 eta: 15:26:48 time: 0.667361 data_time: 0.052044 memory: 12959 loss_kpt: 382.635815 acc_pose: 0.800967 loss: 382.635815 2022/10/12 18:04:59 - mmengine - INFO - Epoch(train) [62][450/586] lr: 5.000000e-03 eta: 15:26:21 time: 0.680098 data_time: 0.057304 memory: 12959 loss_kpt: 377.610745 acc_pose: 0.781563 loss: 377.610745 2022/10/12 18:05:33 - mmengine - INFO - Epoch(train) [62][500/586] lr: 5.000000e-03 eta: 15:25:54 time: 0.682995 data_time: 0.056041 memory: 12959 loss_kpt: 382.598521 acc_pose: 0.757496 loss: 382.598521 2022/10/12 18:06:07 - mmengine - INFO - Epoch(train) [62][550/586] lr: 5.000000e-03 eta: 15:25:27 time: 0.681750 data_time: 0.056327 memory: 12959 loss_kpt: 377.153470 acc_pose: 0.656687 loss: 377.153470 2022/10/12 18:06:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:07:06 - mmengine - INFO - Epoch(train) [63][50/586] lr: 5.000000e-03 eta: 15:23:42 time: 0.686697 data_time: 0.070254 memory: 12959 loss_kpt: 374.948284 acc_pose: 0.763432 loss: 374.948284 2022/10/12 18:07:39 - mmengine - INFO - Epoch(train) [63][100/586] lr: 5.000000e-03 eta: 15:23:13 time: 0.663046 data_time: 0.055315 memory: 12959 loss_kpt: 371.199832 acc_pose: 0.818015 loss: 371.199832 2022/10/12 18:08:13 - mmengine - INFO - Epoch(train) [63][150/586] lr: 5.000000e-03 eta: 15:22:45 time: 0.671839 data_time: 0.054915 memory: 12959 loss_kpt: 381.568028 acc_pose: 0.760675 loss: 381.568028 2022/10/12 18:08:47 - mmengine - INFO - Epoch(train) [63][200/586] lr: 5.000000e-03 eta: 15:22:18 time: 0.676261 data_time: 0.058475 memory: 12959 loss_kpt: 369.153162 acc_pose: 0.777388 loss: 369.153162 2022/10/12 18:09:21 - mmengine - INFO - Epoch(train) [63][250/586] lr: 5.000000e-03 eta: 15:21:51 time: 0.683986 data_time: 0.059793 memory: 12959 loss_kpt: 380.301025 acc_pose: 0.836303 loss: 380.301025 2022/10/12 18:09:55 - mmengine - INFO - Epoch(train) [63][300/586] lr: 5.000000e-03 eta: 15:21:23 time: 0.678218 data_time: 0.056099 memory: 12959 loss_kpt: 373.131520 acc_pose: 0.788555 loss: 373.131520 2022/10/12 18:10:29 - mmengine - INFO - Epoch(train) [63][350/586] lr: 5.000000e-03 eta: 15:20:57 time: 0.687012 data_time: 0.054159 memory: 12959 loss_kpt: 382.301682 acc_pose: 0.838462 loss: 382.301682 2022/10/12 18:11:03 - mmengine - INFO - Epoch(train) [63][400/586] lr: 5.000000e-03 eta: 15:20:30 time: 0.680320 data_time: 0.057870 memory: 12959 loss_kpt: 380.964948 acc_pose: 0.747037 loss: 380.964948 2022/10/12 18:11:38 - mmengine - INFO - Epoch(train) [63][450/586] lr: 5.000000e-03 eta: 15:20:03 time: 0.688390 data_time: 0.056200 memory: 12959 loss_kpt: 384.075135 acc_pose: 0.817659 loss: 384.075135 2022/10/12 18:12:12 - mmengine - INFO - Epoch(train) [63][500/586] lr: 5.000000e-03 eta: 15:19:36 time: 0.681760 data_time: 0.056296 memory: 12959 loss_kpt: 369.684916 acc_pose: 0.834289 loss: 369.684916 2022/10/12 18:12:46 - mmengine - INFO - Epoch(train) [63][550/586] lr: 5.000000e-03 eta: 15:19:09 time: 0.678201 data_time: 0.056699 memory: 12959 loss_kpt: 382.852618 acc_pose: 0.813629 loss: 382.852618 2022/10/12 18:13:10 - mmengine - INFO - Exp name: 
td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:13:45 - mmengine - INFO - Epoch(train) [64][50/586] lr: 5.000000e-03 eta: 15:17:27 time: 0.696047 data_time: 0.068620 memory: 12959 loss_kpt: 373.892767 acc_pose: 0.767089 loss: 373.892767 2022/10/12 18:14:07 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:14:19 - mmengine - INFO - Epoch(train) [64][100/586] lr: 5.000000e-03 eta: 15:17:00 time: 0.681403 data_time: 0.056977 memory: 12959 loss_kpt: 375.104393 acc_pose: 0.841731 loss: 375.104393 2022/10/12 18:14:53 - mmengine - INFO - Epoch(train) [64][150/586] lr: 5.000000e-03 eta: 15:16:34 time: 0.692562 data_time: 0.062728 memory: 12959 loss_kpt: 374.269197 acc_pose: 0.776630 loss: 374.269197 2022/10/12 18:15:28 - mmengine - INFO - Epoch(train) [64][200/586] lr: 5.000000e-03 eta: 15:16:07 time: 0.680351 data_time: 0.055595 memory: 12959 loss_kpt: 378.126966 acc_pose: 0.778728 loss: 378.126966 2022/10/12 18:16:01 - mmengine - INFO - Epoch(train) [64][250/586] lr: 5.000000e-03 eta: 15:15:39 time: 0.677477 data_time: 0.055656 memory: 12959 loss_kpt: 371.104235 acc_pose: 0.773277 loss: 371.104235 2022/10/12 18:16:36 - mmengine - INFO - Epoch(train) [64][300/586] lr: 5.000000e-03 eta: 15:15:12 time: 0.685693 data_time: 0.061358 memory: 12959 loss_kpt: 374.404532 acc_pose: 0.711196 loss: 374.404532 2022/10/12 18:17:09 - mmengine - INFO - Epoch(train) [64][350/586] lr: 5.000000e-03 eta: 15:14:43 time: 0.666165 data_time: 0.058351 memory: 12959 loss_kpt: 369.739208 acc_pose: 0.777508 loss: 369.739208 2022/10/12 18:17:42 - mmengine - INFO - Epoch(train) [64][400/586] lr: 5.000000e-03 eta: 15:14:15 time: 0.668063 data_time: 0.053032 memory: 12959 loss_kpt: 378.078474 acc_pose: 0.760477 loss: 378.078474 2022/10/12 18:18:16 - mmengine - INFO - Epoch(train) [64][450/586] lr: 5.000000e-03 eta: 15:13:47 time: 0.676071 data_time: 0.057396 memory: 12959 loss_kpt: 369.196641 acc_pose: 0.793012 loss: 369.196641 2022/10/12 18:18:49 - mmengine - INFO - Epoch(train) [64][500/586] lr: 5.000000e-03 eta: 15:13:18 time: 0.663803 data_time: 0.051842 memory: 12959 loss_kpt: 367.804394 acc_pose: 0.760810 loss: 367.804394 2022/10/12 18:19:23 - mmengine - INFO - Epoch(train) [64][550/586] lr: 5.000000e-03 eta: 15:12:49 time: 0.672165 data_time: 0.057992 memory: 12959 loss_kpt: 378.921084 acc_pose: 0.814451 loss: 378.921084 2022/10/12 18:19:47 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:20:22 - mmengine - INFO - Epoch(train) [65][50/586] lr: 5.000000e-03 eta: 15:11:08 time: 0.696953 data_time: 0.064563 memory: 12959 loss_kpt: 370.953524 acc_pose: 0.754895 loss: 370.953524 2022/10/12 18:20:56 - mmengine - INFO - Epoch(train) [65][100/586] lr: 5.000000e-03 eta: 15:10:41 time: 0.682173 data_time: 0.057879 memory: 12959 loss_kpt: 376.208078 acc_pose: 0.789668 loss: 376.208078 2022/10/12 18:21:30 - mmengine - INFO - Epoch(train) [65][150/586] lr: 5.000000e-03 eta: 15:10:14 time: 0.683815 data_time: 0.056263 memory: 12959 loss_kpt: 367.347671 acc_pose: 0.773094 loss: 367.347671 2022/10/12 18:22:04 - mmengine - INFO - Epoch(train) [65][200/586] lr: 5.000000e-03 eta: 15:09:45 time: 0.664581 data_time: 0.058530 memory: 12959 loss_kpt: 378.613146 acc_pose: 0.784764 loss: 378.613146 2022/10/12 18:22:37 - mmengine - INFO - Epoch(train) [65][250/586] lr: 5.000000e-03 eta: 15:09:16 time: 0.668195 data_time: 0.059717 memory: 12959 loss_kpt: 378.886043 acc_pose: 0.755457 loss: 378.886043 2022/10/12 18:23:11 - mmengine 
- INFO - Epoch(train) [65][300/586] lr: 5.000000e-03 eta: 15:08:49 time: 0.676772 data_time: 0.054736 memory: 12959 loss_kpt: 372.279404 acc_pose: 0.770138 loss: 372.279404 2022/10/12 18:23:44 - mmengine - INFO - Epoch(train) [65][350/586] lr: 5.000000e-03 eta: 15:08:20 time: 0.670127 data_time: 0.060734 memory: 12959 loss_kpt: 375.817888 acc_pose: 0.776696 loss: 375.817888 2022/10/12 18:24:17 - mmengine - INFO - Epoch(train) [65][400/586] lr: 5.000000e-03 eta: 15:07:51 time: 0.662676 data_time: 0.056783 memory: 12959 loss_kpt: 370.533065 acc_pose: 0.810438 loss: 370.533065 2022/10/12 18:24:51 - mmengine - INFO - Epoch(train) [65][450/586] lr: 5.000000e-03 eta: 15:07:22 time: 0.671590 data_time: 0.059390 memory: 12959 loss_kpt: 370.306772 acc_pose: 0.718486 loss: 370.306772 2022/10/12 18:25:22 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:25:24 - mmengine - INFO - Epoch(train) [65][500/586] lr: 5.000000e-03 eta: 15:06:53 time: 0.665284 data_time: 0.056642 memory: 12959 loss_kpt: 373.960631 acc_pose: 0.709174 loss: 373.960631 2022/10/12 18:25:57 - mmengine - INFO - Epoch(train) [65][550/586] lr: 5.000000e-03 eta: 15:06:23 time: 0.657530 data_time: 0.058970 memory: 12959 loss_kpt: 369.945328 acc_pose: 0.773850 loss: 369.945328 2022/10/12 18:26:21 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:26:54 - mmengine - INFO - Epoch(train) [66][50/586] lr: 5.000000e-03 eta: 15:04:40 time: 0.672167 data_time: 0.062880 memory: 12959 loss_kpt: 374.581739 acc_pose: 0.719157 loss: 374.581739 2022/10/12 18:27:28 - mmengine - INFO - Epoch(train) [66][100/586] lr: 5.000000e-03 eta: 15:04:12 time: 0.667741 data_time: 0.053891 memory: 12959 loss_kpt: 370.510462 acc_pose: 0.818768 loss: 370.510462 2022/10/12 18:28:01 - mmengine - INFO - Epoch(train) [66][150/586] lr: 5.000000e-03 eta: 15:03:43 time: 0.669497 data_time: 0.056116 memory: 12959 loss_kpt: 371.781156 acc_pose: 0.774162 loss: 371.781156 2022/10/12 18:28:35 - mmengine - INFO - Epoch(train) [66][200/586] lr: 5.000000e-03 eta: 15:03:15 time: 0.673401 data_time: 0.053257 memory: 12959 loss_kpt: 371.349277 acc_pose: 0.787814 loss: 371.349277 2022/10/12 18:29:09 - mmengine - INFO - Epoch(train) [66][250/586] lr: 5.000000e-03 eta: 15:02:48 time: 0.683885 data_time: 0.052884 memory: 12959 loss_kpt: 380.227053 acc_pose: 0.773813 loss: 380.227053 2022/10/12 18:29:43 - mmengine - INFO - Epoch(train) [66][300/586] lr: 5.000000e-03 eta: 15:02:21 time: 0.685072 data_time: 0.057196 memory: 12959 loss_kpt: 375.860950 acc_pose: 0.759892 loss: 375.860950 2022/10/12 18:30:17 - mmengine - INFO - Epoch(train) [66][350/586] lr: 5.000000e-03 eta: 15:01:53 time: 0.678988 data_time: 0.055073 memory: 12959 loss_kpt: 361.700173 acc_pose: 0.787701 loss: 361.700173 2022/10/12 18:30:51 - mmengine - INFO - Epoch(train) [66][400/586] lr: 5.000000e-03 eta: 15:01:24 time: 0.663222 data_time: 0.060566 memory: 12959 loss_kpt: 377.823149 acc_pose: 0.746396 loss: 377.823149 2022/10/12 18:31:25 - mmengine - INFO - Epoch(train) [66][450/586] lr: 5.000000e-03 eta: 15:00:57 time: 0.682476 data_time: 0.054457 memory: 12959 loss_kpt: 368.584820 acc_pose: 0.794194 loss: 368.584820 2022/10/12 18:31:59 - mmengine - INFO - Epoch(train) [66][500/586] lr: 5.000000e-03 eta: 15:00:30 time: 0.690361 data_time: 0.053929 memory: 12959 loss_kpt: 371.815914 acc_pose: 0.832259 loss: 371.815914 2022/10/12 18:32:34 - mmengine - INFO - Epoch(train) [66][550/586] lr: 5.000000e-03 eta: 15:00:03 time: 0.687943 
data_time: 0.055647 memory: 12959 loss_kpt: 366.046403 acc_pose: 0.754508 loss: 366.046403 2022/10/12 18:32:58 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:33:33 - mmengine - INFO - Epoch(train) [67][50/586] lr: 5.000000e-03 eta: 14:58:24 time: 0.690809 data_time: 0.072215 memory: 12959 loss_kpt: 374.149658 acc_pose: 0.795387 loss: 374.149658 2022/10/12 18:34:07 - mmengine - INFO - Epoch(train) [67][100/586] lr: 5.000000e-03 eta: 14:57:57 time: 0.688953 data_time: 0.061399 memory: 12959 loss_kpt: 364.537190 acc_pose: 0.805890 loss: 364.537190 2022/10/12 18:34:41 - mmengine - INFO - Epoch(train) [67][150/586] lr: 5.000000e-03 eta: 14:57:29 time: 0.676560 data_time: 0.059503 memory: 12959 loss_kpt: 373.859470 acc_pose: 0.711560 loss: 373.859470 2022/10/12 18:35:15 - mmengine - INFO - Epoch(train) [67][200/586] lr: 5.000000e-03 eta: 14:57:02 time: 0.685743 data_time: 0.051948 memory: 12959 loss_kpt: 372.665668 acc_pose: 0.800122 loss: 372.665668 2022/10/12 18:35:49 - mmengine - INFO - Epoch(train) [67][250/586] lr: 5.000000e-03 eta: 14:56:36 time: 0.687459 data_time: 0.060801 memory: 12959 loss_kpt: 370.414160 acc_pose: 0.771188 loss: 370.414160 2022/10/12 18:36:24 - mmengine - INFO - Epoch(train) [67][300/586] lr: 5.000000e-03 eta: 14:56:08 time: 0.681487 data_time: 0.053837 memory: 12959 loss_kpt: 371.784211 acc_pose: 0.786222 loss: 371.784211 2022/10/12 18:36:40 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:36:58 - mmengine - INFO - Epoch(train) [67][350/586] lr: 5.000000e-03 eta: 14:55:41 time: 0.681811 data_time: 0.058935 memory: 12959 loss_kpt: 368.225693 acc_pose: 0.700914 loss: 368.225693 2022/10/12 18:37:32 - mmengine - INFO - Epoch(train) [67][400/586] lr: 5.000000e-03 eta: 14:55:15 time: 0.695538 data_time: 0.052276 memory: 12959 loss_kpt: 380.460808 acc_pose: 0.762891 loss: 380.460808 2022/10/12 18:38:07 - mmengine - INFO - Epoch(train) [67][450/586] lr: 5.000000e-03 eta: 14:54:49 time: 0.693477 data_time: 0.055473 memory: 12959 loss_kpt: 375.034836 acc_pose: 0.760675 loss: 375.034836 2022/10/12 18:38:41 - mmengine - INFO - Epoch(train) [67][500/586] lr: 5.000000e-03 eta: 14:54:21 time: 0.679538 data_time: 0.058212 memory: 12959 loss_kpt: 364.901292 acc_pose: 0.755751 loss: 364.901292 2022/10/12 18:39:16 - mmengine - INFO - Epoch(train) [67][550/586] lr: 5.000000e-03 eta: 14:53:54 time: 0.691203 data_time: 0.058741 memory: 12959 loss_kpt: 375.683624 acc_pose: 0.767025 loss: 375.683624 2022/10/12 18:39:40 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:40:15 - mmengine - INFO - Epoch(train) [68][50/586] lr: 5.000000e-03 eta: 14:52:16 time: 0.689420 data_time: 0.069079 memory: 12959 loss_kpt: 362.235563 acc_pose: 0.787164 loss: 362.235563 2022/10/12 18:40:48 - mmengine - INFO - Epoch(train) [68][100/586] lr: 5.000000e-03 eta: 14:51:46 time: 0.660797 data_time: 0.055552 memory: 12959 loss_kpt: 372.412927 acc_pose: 0.752375 loss: 372.412927 2022/10/12 18:41:22 - mmengine - INFO - Epoch(train) [68][150/586] lr: 5.000000e-03 eta: 14:51:19 time: 0.682305 data_time: 0.064398 memory: 12959 loss_kpt: 365.652660 acc_pose: 0.756061 loss: 365.652660 2022/10/12 18:41:55 - mmengine - INFO - Epoch(train) [68][200/586] lr: 5.000000e-03 eta: 14:50:50 time: 0.670135 data_time: 0.052867 memory: 12959 loss_kpt: 373.207609 acc_pose: 0.827942 loss: 373.207609 2022/10/12 18:42:29 - mmengine - INFO - Epoch(train) [68][250/586] lr: 5.000000e-03 eta: 
14:50:21 time: 0.670155 data_time: 0.056909 memory: 12959 loss_kpt: 370.683817 acc_pose: 0.801106 loss: 370.683817 2022/10/12 18:43:02 - mmengine - INFO - Epoch(train) [68][300/586] lr: 5.000000e-03 eta: 14:49:52 time: 0.663145 data_time: 0.057960 memory: 12959 loss_kpt: 368.458798 acc_pose: 0.793060 loss: 368.458798 2022/10/12 18:43:35 - mmengine - INFO - Epoch(train) [68][350/586] lr: 5.000000e-03 eta: 14:49:22 time: 0.663243 data_time: 0.056547 memory: 12959 loss_kpt: 368.143620 acc_pose: 0.730237 loss: 368.143620 2022/10/12 18:44:09 - mmengine - INFO - Epoch(train) [68][400/586] lr: 5.000000e-03 eta: 14:48:54 time: 0.671992 data_time: 0.051479 memory: 12959 loss_kpt: 371.013086 acc_pose: 0.801891 loss: 371.013086 2022/10/12 18:44:43 - mmengine - INFO - Epoch(train) [68][450/586] lr: 5.000000e-03 eta: 14:48:26 time: 0.678087 data_time: 0.056026 memory: 12959 loss_kpt: 363.758772 acc_pose: 0.806397 loss: 363.758772 2022/10/12 18:45:17 - mmengine - INFO - Epoch(train) [68][500/586] lr: 5.000000e-03 eta: 14:47:59 time: 0.684309 data_time: 0.053593 memory: 12959 loss_kpt: 366.142438 acc_pose: 0.672362 loss: 366.142438 2022/10/12 18:45:52 - mmengine - INFO - Epoch(train) [68][550/586] lr: 5.000000e-03 eta: 14:47:33 time: 0.705601 data_time: 0.059572 memory: 12959 loss_kpt: 360.819335 acc_pose: 0.805590 loss: 360.819335 2022/10/12 18:46:17 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:46:52 - mmengine - INFO - Epoch(train) [69][50/586] lr: 5.000000e-03 eta: 14:45:56 time: 0.691493 data_time: 0.072225 memory: 12959 loss_kpt: 367.827548 acc_pose: 0.746380 loss: 367.827548 2022/10/12 18:47:25 - mmengine - INFO - Epoch(train) [69][100/586] lr: 5.000000e-03 eta: 14:45:27 time: 0.671629 data_time: 0.057710 memory: 12959 loss_kpt: 369.299640 acc_pose: 0.807755 loss: 369.299640 2022/10/12 18:47:59 - mmengine - INFO - Epoch(train) [69][150/586] lr: 5.000000e-03 eta: 14:44:59 time: 0.679149 data_time: 0.060538 memory: 12959 loss_kpt: 368.927692 acc_pose: 0.813596 loss: 368.927692 2022/10/12 18:48:01 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:48:32 - mmengine - INFO - Epoch(train) [69][200/586] lr: 5.000000e-03 eta: 14:44:30 time: 0.659756 data_time: 0.055014 memory: 12959 loss_kpt: 371.062318 acc_pose: 0.774367 loss: 371.062318 2022/10/12 18:49:06 - mmengine - INFO - Epoch(train) [69][250/586] lr: 5.000000e-03 eta: 14:44:02 time: 0.680420 data_time: 0.064347 memory: 12959 loss_kpt: 367.966414 acc_pose: 0.812903 loss: 367.966414 2022/10/12 18:49:40 - mmengine - INFO - Epoch(train) [69][300/586] lr: 5.000000e-03 eta: 14:43:33 time: 0.670421 data_time: 0.055903 memory: 12959 loss_kpt: 367.042664 acc_pose: 0.792519 loss: 367.042664 2022/10/12 18:50:14 - mmengine - INFO - Epoch(train) [69][350/586] lr: 5.000000e-03 eta: 14:43:05 time: 0.676013 data_time: 0.062045 memory: 12959 loss_kpt: 366.763885 acc_pose: 0.754798 loss: 366.763885 2022/10/12 18:50:47 - mmengine - INFO - Epoch(train) [69][400/586] lr: 5.000000e-03 eta: 14:42:36 time: 0.672529 data_time: 0.058708 memory: 12959 loss_kpt: 369.794189 acc_pose: 0.847465 loss: 369.794189 2022/10/12 18:51:21 - mmengine - INFO - Epoch(train) [69][450/586] lr: 5.000000e-03 eta: 14:42:08 time: 0.676973 data_time: 0.058579 memory: 12959 loss_kpt: 366.074528 acc_pose: 0.711415 loss: 366.074528 2022/10/12 18:51:55 - mmengine - INFO - Epoch(train) [69][500/586] lr: 5.000000e-03 eta: 14:41:39 time: 0.671434 data_time: 0.058102 memory: 12959 loss_kpt: 367.000068 
acc_pose: 0.775517 loss: 367.000068 2022/10/12 18:52:29 - mmengine - INFO - Epoch(train) [69][550/586] lr: 5.000000e-03 eta: 14:41:12 time: 0.679902 data_time: 0.063794 memory: 12959 loss_kpt: 366.280343 acc_pose: 0.796203 loss: 366.280343 2022/10/12 18:52:52 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:53:27 - mmengine - INFO - Epoch(train) [70][50/586] lr: 5.000000e-03 eta: 14:39:35 time: 0.694768 data_time: 0.075159 memory: 12959 loss_kpt: 376.776452 acc_pose: 0.826053 loss: 376.776452 2022/10/12 18:54:01 - mmengine - INFO - Epoch(train) [70][100/586] lr: 5.000000e-03 eta: 14:39:07 time: 0.673652 data_time: 0.055870 memory: 12959 loss_kpt: 361.266996 acc_pose: 0.790870 loss: 361.266996 2022/10/12 18:54:34 - mmengine - INFO - Epoch(train) [70][150/586] lr: 5.000000e-03 eta: 14:38:38 time: 0.667372 data_time: 0.054780 memory: 12959 loss_kpt: 362.602365 acc_pose: 0.783791 loss: 362.602365 2022/10/12 18:55:08 - mmengine - INFO - Epoch(train) [70][200/586] lr: 5.000000e-03 eta: 14:38:09 time: 0.672812 data_time: 0.054715 memory: 12959 loss_kpt: 372.540679 acc_pose: 0.834051 loss: 372.540679 2022/10/12 18:55:43 - mmengine - INFO - Epoch(train) [70][250/586] lr: 5.000000e-03 eta: 14:37:42 time: 0.690903 data_time: 0.059929 memory: 12959 loss_kpt: 368.282556 acc_pose: 0.836824 loss: 368.282556 2022/10/12 18:56:17 - mmengine - INFO - Epoch(train) [70][300/586] lr: 5.000000e-03 eta: 14:37:15 time: 0.686445 data_time: 0.058991 memory: 12959 loss_kpt: 364.435652 acc_pose: 0.841946 loss: 364.435652 2022/10/12 18:56:52 - mmengine - INFO - Epoch(train) [70][350/586] lr: 5.000000e-03 eta: 14:36:49 time: 0.695013 data_time: 0.059709 memory: 12959 loss_kpt: 363.106086 acc_pose: 0.785645 loss: 363.106086 2022/10/12 18:57:26 - mmengine - INFO - Epoch(train) [70][400/586] lr: 5.000000e-03 eta: 14:36:22 time: 0.691448 data_time: 0.056726 memory: 12959 loss_kpt: 363.105868 acc_pose: 0.794588 loss: 363.105868 2022/10/12 18:58:01 - mmengine - INFO - Epoch(train) [70][450/586] lr: 5.000000e-03 eta: 14:35:56 time: 0.694062 data_time: 0.054532 memory: 12959 loss_kpt: 365.662844 acc_pose: 0.722383 loss: 365.662844 2022/10/12 18:58:35 - mmengine - INFO - Epoch(train) [70][500/586] lr: 5.000000e-03 eta: 14:35:28 time: 0.682867 data_time: 0.052165 memory: 12959 loss_kpt: 365.226043 acc_pose: 0.732531 loss: 365.226043 2022/10/12 18:59:10 - mmengine - INFO - Epoch(train) [70][550/586] lr: 5.000000e-03 eta: 14:35:02 time: 0.695379 data_time: 0.056351 memory: 12959 loss_kpt: 369.125237 acc_pose: 0.823888 loss: 369.125237 2022/10/12 18:59:21 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:59:34 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 18:59:34 - mmengine - INFO - Saving checkpoint at 70 epochs 2022/10/12 18:59:52 - mmengine - INFO - Epoch(val) [70][50/407] eta: 0:01:36 time: 0.269305 data_time: 0.012693 memory: 12959 2022/10/12 19:00:05 - mmengine - INFO - Epoch(val) [70][100/407] eta: 0:01:20 time: 0.262431 data_time: 0.007679 memory: 2407 2022/10/12 19:00:18 - mmengine - INFO - Epoch(val) [70][150/407] eta: 0:01:07 time: 0.262195 data_time: 0.007698 memory: 2407 2022/10/12 19:00:31 - mmengine - INFO - Epoch(val) [70][200/407] eta: 0:00:53 time: 0.260347 data_time: 0.007476 memory: 2407 2022/10/12 19:00:44 - mmengine - INFO - Epoch(val) [70][250/407] eta: 0:00:40 time: 0.260336 data_time: 0.007921 memory: 2407 2022/10/12 19:00:57 - mmengine - INFO - Epoch(val) 
[70][300/407] eta: 0:00:27 time: 0.260252 data_time: 0.007695 memory: 2407 2022/10/12 19:01:10 - mmengine - INFO - Epoch(val) [70][350/407] eta: 0:00:15 time: 0.263389 data_time: 0.007629 memory: 2407 2022/10/12 19:01:23 - mmengine - INFO - Epoch(val) [70][400/407] eta: 0:00:01 time: 0.260630 data_time: 0.007656 memory: 2407 2022/10/12 19:01:37 - mmengine - INFO - Evaluating CocoMetric... 2022/10/12 19:01:53 - mmengine - INFO - Epoch(val) [70][407/407] coco/AP: 0.655995 coco/AP .5: 0.849755 coco/AP .75: 0.726844 coco/AP (M): 0.632181 coco/AP (L): 0.711099 coco/AR: 0.739657 coco/AR .5: 0.908533 coco/AR .75: 0.801008 coco/AR (M): 0.696094 coco/AR (L): 0.799628 2022/10/12 19:02:28 - mmengine - INFO - Epoch(train) [71][50/586] lr: 5.000000e-03 eta: 14:33:27 time: 0.700739 data_time: 0.063328 memory: 12959 loss_kpt: 362.160930 acc_pose: 0.811866 loss: 362.160930 2022/10/12 19:03:02 - mmengine - INFO - Epoch(train) [71][100/586] lr: 5.000000e-03 eta: 14:32:59 time: 0.679664 data_time: 0.057281 memory: 12959 loss_kpt: 363.566205 acc_pose: 0.828486 loss: 363.566205 2022/10/12 19:03:36 - mmengine - INFO - Epoch(train) [71][150/586] lr: 5.000000e-03 eta: 14:32:31 time: 0.679439 data_time: 0.053100 memory: 12959 loss_kpt: 364.196933 acc_pose: 0.814853 loss: 364.196933 2022/10/12 19:04:09 - mmengine - INFO - Epoch(train) [71][200/586] lr: 5.000000e-03 eta: 14:32:01 time: 0.663917 data_time: 0.053613 memory: 12959 loss_kpt: 363.912985 acc_pose: 0.774256 loss: 363.912985 2022/10/12 19:04:43 - mmengine - INFO - Epoch(train) [71][250/586] lr: 5.000000e-03 eta: 14:31:33 time: 0.676966 data_time: 0.060573 memory: 12959 loss_kpt: 367.963654 acc_pose: 0.762420 loss: 367.963654 2022/10/12 19:05:17 - mmengine - INFO - Epoch(train) [71][300/586] lr: 5.000000e-03 eta: 14:31:05 time: 0.678809 data_time: 0.058152 memory: 12959 loss_kpt: 369.783124 acc_pose: 0.820343 loss: 369.783124 2022/10/12 19:05:51 - mmengine - INFO - Epoch(train) [71][350/586] lr: 5.000000e-03 eta: 14:30:37 time: 0.683173 data_time: 0.057592 memory: 12959 loss_kpt: 367.086841 acc_pose: 0.797706 loss: 367.086841 2022/10/12 19:06:25 - mmengine - INFO - Epoch(train) [71][400/586] lr: 5.000000e-03 eta: 14:30:09 time: 0.672697 data_time: 0.057194 memory: 12959 loss_kpt: 359.370060 acc_pose: 0.761546 loss: 359.370060 2022/10/12 19:06:59 - mmengine - INFO - Epoch(train) [71][450/586] lr: 5.000000e-03 eta: 14:29:41 time: 0.677943 data_time: 0.060880 memory: 12959 loss_kpt: 366.425315 acc_pose: 0.744793 loss: 366.425315 2022/10/12 19:07:32 - mmengine - INFO - Epoch(train) [71][500/586] lr: 5.000000e-03 eta: 14:29:11 time: 0.668998 data_time: 0.056688 memory: 12959 loss_kpt: 368.154290 acc_pose: 0.784825 loss: 368.154290 2022/10/12 19:08:04 - mmengine - INFO - Epoch(train) [71][550/586] lr: 5.000000e-03 eta: 14:28:40 time: 0.645920 data_time: 0.055877 memory: 12959 loss_kpt: 366.102141 acc_pose: 0.838181 loss: 366.102141 2022/10/12 19:08:28 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:09:02 - mmengine - INFO - Epoch(train) [72][50/586] lr: 5.000000e-03 eta: 14:27:05 time: 0.687739 data_time: 0.072557 memory: 12959 loss_kpt: 370.015566 acc_pose: 0.819252 loss: 370.015566 2022/10/12 19:09:36 - mmengine - INFO - Epoch(train) [72][100/586] lr: 5.000000e-03 eta: 14:26:37 time: 0.683787 data_time: 0.058981 memory: 12959 loss_kpt: 367.874074 acc_pose: 0.786860 loss: 367.874074 2022/10/12 19:10:10 - mmengine - INFO - Epoch(train) [72][150/586] lr: 5.000000e-03 eta: 14:26:08 time: 0.666162 data_time: 0.060721 
memory: 12959 loss_kpt: 362.129153 acc_pose: 0.776589 loss: 362.129153 2022/10/12 19:10:43 - mmengine - INFO - Epoch(train) [72][200/586] lr: 5.000000e-03 eta: 14:25:39 time: 0.673669 data_time: 0.059323 memory: 12959 loss_kpt: 362.598898 acc_pose: 0.913979 loss: 362.598898 2022/10/12 19:11:18 - mmengine - INFO - Epoch(train) [72][250/586] lr: 5.000000e-03 eta: 14:25:12 time: 0.683774 data_time: 0.059571 memory: 12959 loss_kpt: 364.180983 acc_pose: 0.874940 loss: 364.180983 2022/10/12 19:11:51 - mmengine - INFO - Epoch(train) [72][300/586] lr: 5.000000e-03 eta: 14:24:43 time: 0.676925 data_time: 0.057084 memory: 12959 loss_kpt: 372.714778 acc_pose: 0.798656 loss: 372.714778 2022/10/12 19:12:27 - mmengine - INFO - Epoch(train) [72][350/586] lr: 5.000000e-03 eta: 14:24:17 time: 0.702036 data_time: 0.060120 memory: 12959 loss_kpt: 363.792662 acc_pose: 0.728348 loss: 363.792662 2022/10/12 19:12:57 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:13:01 - mmengine - INFO - Epoch(train) [72][400/586] lr: 5.000000e-03 eta: 14:23:50 time: 0.687501 data_time: 0.053008 memory: 12959 loss_kpt: 364.117118 acc_pose: 0.743396 loss: 364.117118 2022/10/12 19:13:35 - mmengine - INFO - Epoch(train) [72][450/586] lr: 5.000000e-03 eta: 14:23:22 time: 0.680182 data_time: 0.060872 memory: 12959 loss_kpt: 359.651525 acc_pose: 0.817372 loss: 359.651525 2022/10/12 19:14:09 - mmengine - INFO - Epoch(train) [72][500/586] lr: 5.000000e-03 eta: 14:22:54 time: 0.676339 data_time: 0.057181 memory: 12959 loss_kpt: 363.767078 acc_pose: 0.831392 loss: 363.767078 2022/10/12 19:14:43 - mmengine - INFO - Epoch(train) [72][550/586] lr: 5.000000e-03 eta: 14:22:26 time: 0.687042 data_time: 0.057944 memory: 12959 loss_kpt: 365.394777 acc_pose: 0.769024 loss: 365.394777 2022/10/12 19:15:07 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:15:41 - mmengine - INFO - Epoch(train) [73][50/586] lr: 5.000000e-03 eta: 14:20:50 time: 0.674109 data_time: 0.072176 memory: 12959 loss_kpt: 365.838180 acc_pose: 0.817919 loss: 365.838180 2022/10/12 19:16:14 - mmengine - INFO - Epoch(train) [73][100/586] lr: 5.000000e-03 eta: 14:20:20 time: 0.656667 data_time: 0.057921 memory: 12959 loss_kpt: 356.776691 acc_pose: 0.794157 loss: 356.776691 2022/10/12 19:16:47 - mmengine - INFO - Epoch(train) [73][150/586] lr: 5.000000e-03 eta: 14:19:50 time: 0.658678 data_time: 0.059706 memory: 12959 loss_kpt: 359.813015 acc_pose: 0.841586 loss: 359.813015 2022/10/12 19:17:20 - mmengine - INFO - Epoch(train) [73][200/586] lr: 5.000000e-03 eta: 14:19:20 time: 0.654267 data_time: 0.054797 memory: 12959 loss_kpt: 360.704420 acc_pose: 0.782983 loss: 360.704420 2022/10/12 19:17:53 - mmengine - INFO - Epoch(train) [73][250/586] lr: 5.000000e-03 eta: 14:18:50 time: 0.668293 data_time: 0.053366 memory: 12959 loss_kpt: 368.208755 acc_pose: 0.765156 loss: 368.208755 2022/10/12 19:18:25 - mmengine - INFO - Epoch(train) [73][300/586] lr: 5.000000e-03 eta: 14:18:19 time: 0.647060 data_time: 0.056219 memory: 12959 loss_kpt: 365.374675 acc_pose: 0.746384 loss: 365.374675 2022/10/12 19:18:57 - mmengine - INFO - Epoch(train) [73][350/586] lr: 5.000000e-03 eta: 14:17:48 time: 0.643654 data_time: 0.061049 memory: 12959 loss_kpt: 361.882802 acc_pose: 0.849738 loss: 361.882802 2022/10/12 19:19:30 - mmengine - INFO - Epoch(train) [73][400/586] lr: 5.000000e-03 eta: 14:17:17 time: 0.653792 data_time: 0.054522 memory: 12959 loss_kpt: 375.122629 acc_pose: 0.874862 loss: 375.122629 2022/10/12 
19:20:03 - mmengine - INFO - Epoch(train) [73][450/586] lr: 5.000000e-03 eta: 14:16:47 time: 0.660697 data_time: 0.055207 memory: 12959 loss_kpt: 360.319054 acc_pose: 0.804956 loss: 360.319054 2022/10/12 19:20:36 - mmengine - INFO - Epoch(train) [73][500/586] lr: 5.000000e-03 eta: 14:16:17 time: 0.657677 data_time: 0.057427 memory: 12959 loss_kpt: 359.553588 acc_pose: 0.869527 loss: 359.553588 2022/10/12 19:21:09 - mmengine - INFO - Epoch(train) [73][550/586] lr: 5.000000e-03 eta: 14:15:47 time: 0.659398 data_time: 0.054845 memory: 12959 loss_kpt: 361.619885 acc_pose: 0.776821 loss: 361.619885 2022/10/12 19:21:33 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:22:07 - mmengine - INFO - Epoch(train) [74][50/586] lr: 5.000000e-03 eta: 14:14:13 time: 0.685422 data_time: 0.065405 memory: 12959 loss_kpt: 361.902769 acc_pose: 0.779499 loss: 361.902769 2022/10/12 19:22:41 - mmengine - INFO - Epoch(train) [74][100/586] lr: 5.000000e-03 eta: 14:13:45 time: 0.680682 data_time: 0.055051 memory: 12959 loss_kpt: 357.740858 acc_pose: 0.832620 loss: 357.740858 2022/10/12 19:23:16 - mmengine - INFO - Epoch(train) [74][150/586] lr: 5.000000e-03 eta: 14:13:19 time: 0.702673 data_time: 0.056478 memory: 12959 loss_kpt: 361.366361 acc_pose: 0.782059 loss: 361.366361 2022/10/12 19:23:51 - mmengine - INFO - Epoch(train) [74][200/586] lr: 5.000000e-03 eta: 14:12:53 time: 0.705284 data_time: 0.056686 memory: 12959 loss_kpt: 370.198533 acc_pose: 0.830950 loss: 370.198533 2022/10/12 19:24:07 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:24:26 - mmengine - INFO - Epoch(train) [74][250/586] lr: 5.000000e-03 eta: 14:12:27 time: 0.702454 data_time: 0.053543 memory: 12959 loss_kpt: 358.824496 acc_pose: 0.744843 loss: 358.824496 2022/10/12 19:25:01 - mmengine - INFO - Epoch(train) [74][300/586] lr: 5.000000e-03 eta: 14:12:01 time: 0.699189 data_time: 0.055158 memory: 12959 loss_kpt: 365.764576 acc_pose: 0.834510 loss: 365.764576 2022/10/12 19:25:37 - mmengine - INFO - Epoch(train) [74][350/586] lr: 5.000000e-03 eta: 14:11:36 time: 0.711880 data_time: 0.060052 memory: 12959 loss_kpt: 363.925361 acc_pose: 0.775297 loss: 363.925361 2022/10/12 19:26:13 - mmengine - INFO - Epoch(train) [74][400/586] lr: 5.000000e-03 eta: 14:11:10 time: 0.712887 data_time: 0.054755 memory: 12959 loss_kpt: 354.284755 acc_pose: 0.817240 loss: 354.284755 2022/10/12 19:26:48 - mmengine - INFO - Epoch(train) [74][450/586] lr: 5.000000e-03 eta: 14:10:44 time: 0.700302 data_time: 0.055640 memory: 12959 loss_kpt: 362.880028 acc_pose: 0.815752 loss: 362.880028 2022/10/12 19:27:24 - mmengine - INFO - Epoch(train) [74][500/586] lr: 5.000000e-03 eta: 14:10:20 time: 0.726358 data_time: 0.058289 memory: 12959 loss_kpt: 361.032179 acc_pose: 0.768869 loss: 361.032179 2022/10/12 19:27:59 - mmengine - INFO - Epoch(train) [74][550/586] lr: 5.000000e-03 eta: 14:09:54 time: 0.708256 data_time: 0.059315 memory: 12959 loss_kpt: 360.014069 acc_pose: 0.736651 loss: 360.014069 2022/10/12 19:28:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:28:59 - mmengine - INFO - Epoch(train) [75][50/586] lr: 5.000000e-03 eta: 14:08:22 time: 0.689697 data_time: 0.069304 memory: 12959 loss_kpt: 360.888755 acc_pose: 0.848819 loss: 360.888755 2022/10/12 19:29:33 - mmengine - INFO - Epoch(train) [75][100/586] lr: 5.000000e-03 eta: 14:07:53 time: 0.677514 data_time: 0.059637 memory: 12959 loss_kpt: 359.351104 acc_pose: 0.819368 loss: 
359.351104 2022/10/12 19:30:07 - mmengine - INFO - Epoch(train) [75][150/586] lr: 5.000000e-03 eta: 14:07:25 time: 0.673970 data_time: 0.057083 memory: 12959 loss_kpt: 366.896975 acc_pose: 0.776111 loss: 366.896975 2022/10/12 19:30:41 - mmengine - INFO - Epoch(train) [75][200/586] lr: 5.000000e-03 eta: 14:06:57 time: 0.681940 data_time: 0.055276 memory: 12959 loss_kpt: 363.491968 acc_pose: 0.788150 loss: 363.491968 2022/10/12 19:31:16 - mmengine - INFO - Epoch(train) [75][250/586] lr: 5.000000e-03 eta: 14:06:30 time: 0.697446 data_time: 0.060657 memory: 12959 loss_kpt: 355.528713 acc_pose: 0.834539 loss: 355.528713 2022/10/12 19:31:50 - mmengine - INFO - Epoch(train) [75][300/586] lr: 5.000000e-03 eta: 14:06:01 time: 0.677308 data_time: 0.057257 memory: 12959 loss_kpt: 360.558844 acc_pose: 0.755249 loss: 360.558844 2022/10/12 19:32:24 - mmengine - INFO - Epoch(train) [75][350/586] lr: 5.000000e-03 eta: 14:05:33 time: 0.676902 data_time: 0.056326 memory: 12959 loss_kpt: 362.465709 acc_pose: 0.810466 loss: 362.465709 2022/10/12 19:32:58 - mmengine - INFO - Epoch(train) [75][400/586] lr: 5.000000e-03 eta: 14:05:05 time: 0.684161 data_time: 0.053395 memory: 12959 loss_kpt: 357.744916 acc_pose: 0.718158 loss: 357.744916 2022/10/12 19:33:33 - mmengine - INFO - Epoch(train) [75][450/586] lr: 5.000000e-03 eta: 14:04:38 time: 0.698089 data_time: 0.060655 memory: 12959 loss_kpt: 363.151032 acc_pose: 0.754191 loss: 363.151032 2022/10/12 19:34:07 - mmengine - INFO - Epoch(train) [75][500/586] lr: 5.000000e-03 eta: 14:04:10 time: 0.678217 data_time: 0.052230 memory: 12959 loss_kpt: 360.497371 acc_pose: 0.794064 loss: 360.497371 2022/10/12 19:34:41 - mmengine - INFO - Epoch(train) [75][550/586] lr: 5.000000e-03 eta: 14:03:41 time: 0.680502 data_time: 0.055276 memory: 12959 loss_kpt: 356.213578 acc_pose: 0.763854 loss: 356.213578 2022/10/12 19:35:05 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:35:40 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:35:40 - mmengine - INFO - Epoch(train) [76][50/586] lr: 5.000000e-03 eta: 14:02:09 time: 0.685786 data_time: 0.075497 memory: 12959 loss_kpt: 359.701725 acc_pose: 0.841449 loss: 359.701725 2022/10/12 19:36:12 - mmengine - INFO - Epoch(train) [76][100/586] lr: 5.000000e-03 eta: 14:01:38 time: 0.650890 data_time: 0.056677 memory: 12959 loss_kpt: 360.160083 acc_pose: 0.796536 loss: 360.160083 2022/10/12 19:36:45 - mmengine - INFO - Epoch(train) [76][150/586] lr: 5.000000e-03 eta: 14:01:09 time: 0.663701 data_time: 0.059660 memory: 12959 loss_kpt: 359.647435 acc_pose: 0.770734 loss: 359.647435 2022/10/12 19:37:18 - mmengine - INFO - Epoch(train) [76][200/586] lr: 5.000000e-03 eta: 14:00:38 time: 0.658515 data_time: 0.052314 memory: 12959 loss_kpt: 358.943378 acc_pose: 0.774702 loss: 358.943378 2022/10/12 19:37:52 - mmengine - INFO - Epoch(train) [76][250/586] lr: 5.000000e-03 eta: 14:00:09 time: 0.666420 data_time: 0.060256 memory: 12959 loss_kpt: 363.340919 acc_pose: 0.784069 loss: 363.340919 2022/10/12 19:38:24 - mmengine - INFO - Epoch(train) [76][300/586] lr: 5.000000e-03 eta: 13:59:38 time: 0.654730 data_time: 0.056932 memory: 12959 loss_kpt: 360.405720 acc_pose: 0.840955 loss: 360.405720 2022/10/12 19:38:58 - mmengine - INFO - Epoch(train) [76][350/586] lr: 5.000000e-03 eta: 13:59:08 time: 0.663634 data_time: 0.060658 memory: 12959 loss_kpt: 359.024650 acc_pose: 0.763898 loss: 359.024650 2022/10/12 19:39:31 - mmengine - INFO - Epoch(train) [76][400/586] lr: 
5.000000e-03 eta: 13:58:38 time: 0.659398 data_time: 0.057399 memory: 12959 loss_kpt: 360.869473 acc_pose: 0.837083 loss: 360.869473 2022/10/12 19:40:04 - mmengine - INFO - Epoch(train) [76][450/586] lr: 5.000000e-03 eta: 13:58:09 time: 0.673668 data_time: 0.064800 memory: 12959 loss_kpt: 363.328883 acc_pose: 0.739284 loss: 363.328883 2022/10/12 19:40:37 - mmengine - INFO - Epoch(train) [76][500/586] lr: 5.000000e-03 eta: 13:57:39 time: 0.660189 data_time: 0.057243 memory: 12959 loss_kpt: 355.410076 acc_pose: 0.778646 loss: 355.410076 2022/10/12 19:41:10 - mmengine - INFO - Epoch(train) [76][550/586] lr: 5.000000e-03 eta: 13:57:09 time: 0.663155 data_time: 0.059594 memory: 12959 loss_kpt: 356.035201 acc_pose: 0.814640 loss: 356.035201 2022/10/12 19:41:34 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:42:08 - mmengine - INFO - Epoch(train) [77][50/586] lr: 5.000000e-03 eta: 13:55:38 time: 0.688593 data_time: 0.067483 memory: 12959 loss_kpt: 360.043071 acc_pose: 0.825736 loss: 360.043071 2022/10/12 19:42:43 - mmengine - INFO - Epoch(train) [77][100/586] lr: 5.000000e-03 eta: 13:55:10 time: 0.681210 data_time: 0.056596 memory: 12959 loss_kpt: 362.661242 acc_pose: 0.775201 loss: 362.661242 2022/10/12 19:43:17 - mmengine - INFO - Epoch(train) [77][150/586] lr: 5.000000e-03 eta: 13:54:42 time: 0.679551 data_time: 0.061531 memory: 12959 loss_kpt: 359.975177 acc_pose: 0.735956 loss: 359.975177 2022/10/12 19:43:51 - mmengine - INFO - Epoch(train) [77][200/586] lr: 5.000000e-03 eta: 13:54:14 time: 0.687055 data_time: 0.057137 memory: 12959 loss_kpt: 363.145066 acc_pose: 0.750669 loss: 363.145066 2022/10/12 19:44:26 - mmengine - INFO - Epoch(train) [77][250/586] lr: 5.000000e-03 eta: 13:53:47 time: 0.697540 data_time: 0.065016 memory: 12959 loss_kpt: 359.642087 acc_pose: 0.823450 loss: 359.642087 2022/10/12 19:45:00 - mmengine - INFO - Epoch(train) [77][300/586] lr: 5.000000e-03 eta: 13:53:18 time: 0.677434 data_time: 0.057965 memory: 12959 loss_kpt: 360.982102 acc_pose: 0.846718 loss: 360.982102 2022/10/12 19:45:34 - mmengine - INFO - Epoch(train) [77][350/586] lr: 5.000000e-03 eta: 13:52:51 time: 0.688373 data_time: 0.061103 memory: 12959 loss_kpt: 359.812200 acc_pose: 0.818359 loss: 359.812200 2022/10/12 19:46:09 - mmengine - INFO - Epoch(train) [77][400/586] lr: 5.000000e-03 eta: 13:52:23 time: 0.693645 data_time: 0.058142 memory: 12959 loss_kpt: 363.053136 acc_pose: 0.823987 loss: 363.053136 2022/10/12 19:46:43 - mmengine - INFO - Epoch(train) [77][450/586] lr: 5.000000e-03 eta: 13:51:56 time: 0.690627 data_time: 0.057528 memory: 12959 loss_kpt: 352.978801 acc_pose: 0.709170 loss: 352.978801 2022/10/12 19:46:53 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:47:17 - mmengine - INFO - Epoch(train) [77][500/586] lr: 5.000000e-03 eta: 13:51:26 time: 0.668315 data_time: 0.054890 memory: 12959 loss_kpt: 351.693942 acc_pose: 0.864397 loss: 351.693942 2022/10/12 19:47:51 - mmengine - INFO - Epoch(train) [77][550/586] lr: 5.000000e-03 eta: 13:50:59 time: 0.691296 data_time: 0.060581 memory: 12959 loss_kpt: 354.376161 acc_pose: 0.749549 loss: 354.376161 2022/10/12 19:48:15 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:48:50 - mmengine - INFO - Epoch(train) [78][50/586] lr: 5.000000e-03 eta: 13:49:29 time: 0.696907 data_time: 0.071916 memory: 12959 loss_kpt: 361.511690 acc_pose: 0.766994 loss: 361.511690 2022/10/12 19:49:23 - mmengine - INFO - 
Epoch(train) [78][100/586] lr: 5.000000e-03 eta: 13:48:59 time: 0.657847 data_time: 0.056172 memory: 12959 loss_kpt: 354.181773 acc_pose: 0.874881 loss: 354.181773 2022/10/12 19:49:56 - mmengine - INFO - Epoch(train) [78][150/586] lr: 5.000000e-03 eta: 13:48:30 time: 0.668651 data_time: 0.059059 memory: 12959 loss_kpt: 361.323589 acc_pose: 0.754470 loss: 361.323589 2022/10/12 19:50:30 - mmengine - INFO - Epoch(train) [78][200/586] lr: 5.000000e-03 eta: 13:48:00 time: 0.667845 data_time: 0.059448 memory: 12959 loss_kpt: 358.765678 acc_pose: 0.845599 loss: 358.765678 2022/10/12 19:51:04 - mmengine - INFO - Epoch(train) [78][250/586] lr: 5.000000e-03 eta: 13:47:32 time: 0.681277 data_time: 0.062077 memory: 12959 loss_kpt: 358.277454 acc_pose: 0.826595 loss: 358.277454 2022/10/12 19:51:38 - mmengine - INFO - Epoch(train) [78][300/586] lr: 5.000000e-03 eta: 13:47:04 time: 0.690473 data_time: 0.058110 memory: 12959 loss_kpt: 358.641984 acc_pose: 0.729839 loss: 358.641984 2022/10/12 19:52:13 - mmengine - INFO - Epoch(train) [78][350/586] lr: 5.000000e-03 eta: 13:46:37 time: 0.696152 data_time: 0.060001 memory: 12959 loss_kpt: 356.121677 acc_pose: 0.749238 loss: 356.121677 2022/10/12 19:52:47 - mmengine - INFO - Epoch(train) [78][400/586] lr: 5.000000e-03 eta: 13:46:09 time: 0.682710 data_time: 0.060640 memory: 12959 loss_kpt: 353.698967 acc_pose: 0.794227 loss: 353.698967 2022/10/12 19:53:22 - mmengine - INFO - Epoch(train) [78][450/586] lr: 5.000000e-03 eta: 13:45:41 time: 0.690527 data_time: 0.060588 memory: 12959 loss_kpt: 361.324830 acc_pose: 0.802094 loss: 361.324830 2022/10/12 19:53:57 - mmengine - INFO - Epoch(train) [78][500/586] lr: 5.000000e-03 eta: 13:45:14 time: 0.695409 data_time: 0.059085 memory: 12959 loss_kpt: 360.843755 acc_pose: 0.819354 loss: 360.843755 2022/10/12 19:54:31 - mmengine - INFO - Epoch(train) [78][550/586] lr: 5.000000e-03 eta: 13:44:47 time: 0.694556 data_time: 0.058681 memory: 12959 loss_kpt: 360.863104 acc_pose: 0.738191 loss: 360.863104 2022/10/12 19:54:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:55:31 - mmengine - INFO - Epoch(train) [79][50/586] lr: 5.000000e-03 eta: 13:43:18 time: 0.699820 data_time: 0.066840 memory: 12959 loss_kpt: 357.614698 acc_pose: 0.809211 loss: 357.614698 2022/10/12 19:56:06 - mmengine - INFO - Epoch(train) [79][100/586] lr: 5.000000e-03 eta: 13:42:50 time: 0.689531 data_time: 0.058058 memory: 12959 loss_kpt: 357.589064 acc_pose: 0.856182 loss: 357.589064 2022/10/12 19:56:39 - mmengine - INFO - Epoch(train) [79][150/586] lr: 5.000000e-03 eta: 13:42:21 time: 0.675124 data_time: 0.059953 memory: 12959 loss_kpt: 343.694610 acc_pose: 0.836377 loss: 343.694610 2022/10/12 19:57:13 - mmengine - INFO - Epoch(train) [79][200/586] lr: 5.000000e-03 eta: 13:41:52 time: 0.675472 data_time: 0.059231 memory: 12959 loss_kpt: 353.154432 acc_pose: 0.802162 loss: 353.154432 2022/10/12 19:57:47 - mmengine - INFO - Epoch(train) [79][250/586] lr: 5.000000e-03 eta: 13:41:24 time: 0.681120 data_time: 0.062932 memory: 12959 loss_kpt: 354.998850 acc_pose: 0.729869 loss: 354.998850 2022/10/12 19:58:16 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 19:58:21 - mmengine - INFO - Epoch(train) [79][300/586] lr: 5.000000e-03 eta: 13:40:55 time: 0.676754 data_time: 0.057696 memory: 12959 loss_kpt: 353.954482 acc_pose: 0.836577 loss: 353.954482 2022/10/12 19:58:55 - mmengine - INFO - Epoch(train) [79][350/586] lr: 5.000000e-03 eta: 13:40:27 time: 0.681127 data_time: 
0.059362 memory: 12959 loss_kpt: 352.264212 acc_pose: 0.824645 loss: 352.264212 2022/10/12 19:59:29 - mmengine - INFO - Epoch(train) [79][400/586] lr: 5.000000e-03 eta: 13:39:58 time: 0.677915 data_time: 0.057640 memory: 12959 loss_kpt: 358.722548 acc_pose: 0.817878 loss: 358.722548 2022/10/12 20:00:03 - mmengine - INFO - Epoch(train) [79][450/586] lr: 5.000000e-03 eta: 13:39:29 time: 0.670893 data_time: 0.063988 memory: 12959 loss_kpt: 362.008483 acc_pose: 0.762574 loss: 362.008483 2022/10/12 20:00:36 - mmengine - INFO - Epoch(train) [79][500/586] lr: 5.000000e-03 eta: 13:38:59 time: 0.672767 data_time: 0.058499 memory: 12959 loss_kpt: 352.998682 acc_pose: 0.803791 loss: 352.998682 2022/10/12 20:01:10 - mmengine - INFO - Epoch(train) [79][550/586] lr: 5.000000e-03 eta: 13:38:30 time: 0.668441 data_time: 0.060217 memory: 12959 loss_kpt: 355.207985 acc_pose: 0.791337 loss: 355.207985 2022/10/12 20:01:33 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:02:07 - mmengine - INFO - Epoch(train) [80][50/586] lr: 5.000000e-03 eta: 13:37:00 time: 0.679637 data_time: 0.072909 memory: 12959 loss_kpt: 355.990602 acc_pose: 0.797325 loss: 355.990602 2022/10/12 20:02:40 - mmengine - INFO - Epoch(train) [80][100/586] lr: 5.000000e-03 eta: 13:36:30 time: 0.660011 data_time: 0.056615 memory: 12959 loss_kpt: 354.741659 acc_pose: 0.904927 loss: 354.741659 2022/10/12 20:03:14 - mmengine - INFO - Epoch(train) [80][150/586] lr: 5.000000e-03 eta: 13:36:01 time: 0.671798 data_time: 0.056890 memory: 12959 loss_kpt: 353.405068 acc_pose: 0.759854 loss: 353.405068 2022/10/12 20:03:47 - mmengine - INFO - Epoch(train) [80][200/586] lr: 5.000000e-03 eta: 13:35:31 time: 0.664793 data_time: 0.053449 memory: 12959 loss_kpt: 355.918855 acc_pose: 0.797732 loss: 355.918855 2022/10/12 20:04:21 - mmengine - INFO - Epoch(train) [80][250/586] lr: 5.000000e-03 eta: 13:35:02 time: 0.681572 data_time: 0.064560 memory: 12959 loss_kpt: 358.586749 acc_pose: 0.818463 loss: 358.586749 2022/10/12 20:04:54 - mmengine - INFO - Epoch(train) [80][300/586] lr: 5.000000e-03 eta: 13:34:32 time: 0.663264 data_time: 0.056594 memory: 12959 loss_kpt: 359.287731 acc_pose: 0.843537 loss: 359.287731 2022/10/12 20:05:28 - mmengine - INFO - Epoch(train) [80][350/586] lr: 5.000000e-03 eta: 13:34:03 time: 0.673367 data_time: 0.060830 memory: 12959 loss_kpt: 349.606144 acc_pose: 0.785711 loss: 349.606144 2022/10/12 20:06:01 - mmengine - INFO - Epoch(train) [80][400/586] lr: 5.000000e-03 eta: 13:33:33 time: 0.665628 data_time: 0.059687 memory: 12959 loss_kpt: 347.783105 acc_pose: 0.801103 loss: 347.783105 2022/10/12 20:06:35 - mmengine - INFO - Epoch(train) [80][450/586] lr: 5.000000e-03 eta: 13:33:04 time: 0.669569 data_time: 0.061761 memory: 12959 loss_kpt: 358.747297 acc_pose: 0.773767 loss: 358.747297 2022/10/12 20:07:08 - mmengine - INFO - Epoch(train) [80][500/586] lr: 5.000000e-03 eta: 13:32:35 time: 0.672253 data_time: 0.053963 memory: 12959 loss_kpt: 355.895911 acc_pose: 0.789197 loss: 355.895911 2022/10/12 20:07:42 - mmengine - INFO - Epoch(train) [80][550/586] lr: 5.000000e-03 eta: 13:32:05 time: 0.673042 data_time: 0.062885 memory: 12959 loss_kpt: 355.787151 acc_pose: 0.844215 loss: 355.787151 2022/10/12 20:08:06 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:08:06 - mmengine - INFO - Saving checkpoint at 80 epochs 2022/10/12 20:08:24 - mmengine - INFO - Epoch(val) [80][50/407] eta: 0:01:36 time: 0.270528 data_time: 0.012440 memory: 12959 2022/10/12 
20:08:37 - mmengine - INFO - Epoch(val) [80][100/407] eta: 0:01:20 time: 0.263827 data_time: 0.007643 memory: 2407 2022/10/12 20:08:50 - mmengine - INFO - Epoch(val) [80][150/407] eta: 0:01:07 time: 0.264214 data_time: 0.007909 memory: 2407 2022/10/12 20:09:04 - mmengine - INFO - Epoch(val) [80][200/407] eta: 0:00:54 time: 0.262568 data_time: 0.007791 memory: 2407 2022/10/12 20:09:17 - mmengine - INFO - Epoch(val) [80][250/407] eta: 0:00:41 time: 0.262286 data_time: 0.007794 memory: 2407 2022/10/12 20:09:30 - mmengine - INFO - Epoch(val) [80][300/407] eta: 0:00:28 time: 0.262748 data_time: 0.007582 memory: 2407 2022/10/12 20:09:43 - mmengine - INFO - Epoch(val) [80][350/407] eta: 0:00:14 time: 0.261489 data_time: 0.007689 memory: 2407 2022/10/12 20:09:56 - mmengine - INFO - Epoch(val) [80][400/407] eta: 0:00:01 time: 0.260150 data_time: 0.007748 memory: 2407 2022/10/12 20:10:10 - mmengine - INFO - Evaluating CocoMetric... 2022/10/12 20:10:26 - mmengine - INFO - Epoch(val) [80][407/407] coco/AP: 0.696383 coco/AP .5: 0.883613 coco/AP .75: 0.771768 coco/AP (M): 0.667401 coco/AP (L): 0.752988 coco/AR: 0.766168 coco/AR .5: 0.925693 coco/AR .75: 0.828243 coco/AR (M): 0.724092 coco/AR (L): 0.824117 2022/10/12 20:10:26 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_60.pth is removed 2022/10/12 20:10:28 - mmengine - INFO - The best checkpoint with 0.6964 coco/AP at 80 epoch is saved to best_coco/AP_epoch_80.pth. 2022/10/12 20:11:02 - mmengine - INFO - Epoch(train) [81][50/586] lr: 5.000000e-03 eta: 13:30:35 time: 0.662986 data_time: 0.071107 memory: 12959 loss_kpt: 352.543550 acc_pose: 0.796523 loss: 352.543550 2022/10/12 20:11:34 - mmengine - INFO - Epoch(train) [81][100/586] lr: 5.000000e-03 eta: 13:30:04 time: 0.647721 data_time: 0.056206 memory: 12959 loss_kpt: 352.236925 acc_pose: 0.810192 loss: 352.236925 2022/10/12 20:11:47 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:12:06 - mmengine - INFO - Epoch(train) [81][150/586] lr: 5.000000e-03 eta: 13:29:32 time: 0.643583 data_time: 0.059526 memory: 12959 loss_kpt: 353.850946 acc_pose: 0.802509 loss: 353.850946 2022/10/12 20:12:39 - mmengine - INFO - Epoch(train) [81][200/586] lr: 5.000000e-03 eta: 13:29:01 time: 0.649585 data_time: 0.056670 memory: 12959 loss_kpt: 351.557247 acc_pose: 0.829891 loss: 351.557247 2022/10/12 20:13:11 - mmengine - INFO - Epoch(train) [81][250/586] lr: 5.000000e-03 eta: 13:28:30 time: 0.650089 data_time: 0.062786 memory: 12959 loss_kpt: 355.918621 acc_pose: 0.795630 loss: 355.918621 2022/10/12 20:13:44 - mmengine - INFO - Epoch(train) [81][300/586] lr: 5.000000e-03 eta: 13:27:59 time: 0.651167 data_time: 0.054429 memory: 12959 loss_kpt: 359.735709 acc_pose: 0.773605 loss: 359.735709 2022/10/12 20:14:17 - mmengine - INFO - Epoch(train) [81][350/586] lr: 5.000000e-03 eta: 13:27:29 time: 0.656790 data_time: 0.067448 memory: 12959 loss_kpt: 360.799346 acc_pose: 0.819786 loss: 360.799346 2022/10/12 20:14:49 - mmengine - INFO - Epoch(train) [81][400/586] lr: 5.000000e-03 eta: 13:26:57 time: 0.642689 data_time: 0.057277 memory: 12959 loss_kpt: 349.807739 acc_pose: 0.782947 loss: 349.807739 2022/10/12 20:15:21 - mmengine - INFO - Epoch(train) [81][450/586] lr: 5.000000e-03 eta: 13:26:26 time: 0.649088 data_time: 0.057106 memory: 12959 loss_kpt: 353.820177 acc_pose: 0.818949 loss: 353.820177 2022/10/12 20:15:54 - mmengine - INFO - Epoch(train) [81][500/586] lr: 5.000000e-03 eta: 
13:25:55 time: 0.657635 data_time: 0.055745 memory: 12959 loss_kpt: 356.309492 acc_pose: 0.759226 loss: 356.309492 2022/10/12 20:16:27 - mmengine - INFO - Epoch(train) [81][550/586] lr: 5.000000e-03 eta: 13:25:26 time: 0.665288 data_time: 0.062485 memory: 12959 loss_kpt: 355.476041 acc_pose: 0.748953 loss: 355.476041 2022/10/12 20:16:51 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:17:26 - mmengine - INFO - Epoch(train) [82][50/586] lr: 5.000000e-03 eta: 13:23:59 time: 0.696094 data_time: 0.068255 memory: 12959 loss_kpt: 352.522825 acc_pose: 0.767196 loss: 352.522825 2022/10/12 20:18:01 - mmengine - INFO - Epoch(train) [82][100/586] lr: 5.000000e-03 eta: 13:23:32 time: 0.702652 data_time: 0.062027 memory: 12959 loss_kpt: 353.528629 acc_pose: 0.865536 loss: 353.528629 2022/10/12 20:18:36 - mmengine - INFO - Epoch(train) [82][150/586] lr: 5.000000e-03 eta: 13:23:05 time: 0.700492 data_time: 0.059160 memory: 12959 loss_kpt: 355.043185 acc_pose: 0.879385 loss: 355.043185 2022/10/12 20:19:11 - mmengine - INFO - Epoch(train) [82][200/586] lr: 5.000000e-03 eta: 13:22:38 time: 0.702167 data_time: 0.052215 memory: 12959 loss_kpt: 347.938904 acc_pose: 0.788000 loss: 347.938904 2022/10/12 20:19:46 - mmengine - INFO - Epoch(train) [82][250/586] lr: 5.000000e-03 eta: 13:22:11 time: 0.700082 data_time: 0.062167 memory: 12959 loss_kpt: 351.007777 acc_pose: 0.757853 loss: 351.007777 2022/10/12 20:20:20 - mmengine - INFO - Epoch(train) [82][300/586] lr: 5.000000e-03 eta: 13:21:43 time: 0.688249 data_time: 0.053767 memory: 12959 loss_kpt: 361.704125 acc_pose: 0.804082 loss: 361.704125 2022/10/12 20:20:55 - mmengine - INFO - Epoch(train) [82][350/586] lr: 5.000000e-03 eta: 13:21:14 time: 0.683262 data_time: 0.056107 memory: 12959 loss_kpt: 359.683440 acc_pose: 0.846675 loss: 359.683440 2022/10/12 20:21:28 - mmengine - INFO - Epoch(train) [82][400/586] lr: 5.000000e-03 eta: 13:20:45 time: 0.670569 data_time: 0.058081 memory: 12959 loss_kpt: 353.410038 acc_pose: 0.838246 loss: 353.410038 2022/10/12 20:22:01 - mmengine - INFO - Epoch(train) [82][450/586] lr: 5.000000e-03 eta: 13:20:15 time: 0.666261 data_time: 0.060633 memory: 12959 loss_kpt: 353.536481 acc_pose: 0.736591 loss: 353.536481 2022/10/12 20:22:35 - mmengine - INFO - Epoch(train) [82][500/586] lr: 5.000000e-03 eta: 13:19:45 time: 0.669213 data_time: 0.063835 memory: 12959 loss_kpt: 356.089449 acc_pose: 0.808106 loss: 356.089449 2022/10/12 20:22:58 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:23:09 - mmengine - INFO - Epoch(train) [82][550/586] lr: 5.000000e-03 eta: 13:19:16 time: 0.672526 data_time: 0.057898 memory: 12959 loss_kpt: 353.411013 acc_pose: 0.796159 loss: 353.411013 2022/10/12 20:23:33 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:24:07 - mmengine - INFO - Epoch(train) [83][50/586] lr: 5.000000e-03 eta: 13:17:49 time: 0.681846 data_time: 0.073666 memory: 12959 loss_kpt: 358.416975 acc_pose: 0.825226 loss: 358.416975 2022/10/12 20:24:40 - mmengine - INFO - Epoch(train) [83][100/586] lr: 5.000000e-03 eta: 13:17:19 time: 0.667901 data_time: 0.059563 memory: 12959 loss_kpt: 348.292075 acc_pose: 0.890994 loss: 348.292075 2022/10/12 20:25:14 - mmengine - INFO - Epoch(train) [83][150/586] lr: 5.000000e-03 eta: 13:16:50 time: 0.672293 data_time: 0.056753 memory: 12959 loss_kpt: 351.522026 acc_pose: 0.687156 loss: 351.522026 2022/10/12 20:25:47 - mmengine - INFO - Epoch(train) [83][200/586] 
lr: 5.000000e-03 eta: 13:16:19 time: 0.661480 data_time: 0.057605 memory: 12959 loss_kpt: 356.541975 acc_pose: 0.829596 loss: 356.541975 2022/10/12 20:26:21 - mmengine - INFO - Epoch(train) [83][250/586] lr: 5.000000e-03 eta: 13:15:51 time: 0.679032 data_time: 0.062949 memory: 12959 loss_kpt: 348.479767 acc_pose: 0.802007 loss: 348.479767 2022/10/12 20:26:54 - mmengine - INFO - Epoch(train) [83][300/586] lr: 5.000000e-03 eta: 13:15:21 time: 0.665187 data_time: 0.054788 memory: 12959 loss_kpt: 348.926652 acc_pose: 0.788656 loss: 348.926652 2022/10/12 20:27:28 - mmengine - INFO - Epoch(train) [83][350/586] lr: 5.000000e-03 eta: 13:14:51 time: 0.672593 data_time: 0.054994 memory: 12959 loss_kpt: 359.054434 acc_pose: 0.854986 loss: 359.054434 2022/10/12 20:28:01 - mmengine - INFO - Epoch(train) [83][400/586] lr: 5.000000e-03 eta: 13:14:22 time: 0.669242 data_time: 0.059847 memory: 12959 loss_kpt: 355.991050 acc_pose: 0.786280 loss: 355.991050 2022/10/12 20:28:35 - mmengine - INFO - Epoch(train) [83][450/586] lr: 5.000000e-03 eta: 13:13:52 time: 0.668346 data_time: 0.062271 memory: 12959 loss_kpt: 363.881872 acc_pose: 0.833033 loss: 363.881872 2022/10/12 20:29:08 - mmengine - INFO - Epoch(train) [83][500/586] lr: 5.000000e-03 eta: 13:13:22 time: 0.666993 data_time: 0.058884 memory: 12959 loss_kpt: 352.118683 acc_pose: 0.809593 loss: 352.118683 2022/10/12 20:29:41 - mmengine - INFO - Epoch(train) [83][550/586] lr: 5.000000e-03 eta: 13:12:52 time: 0.664616 data_time: 0.056860 memory: 12959 loss_kpt: 358.068986 acc_pose: 0.776960 loss: 358.068986 2022/10/12 20:30:05 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:30:39 - mmengine - INFO - Epoch(train) [84][50/586] lr: 5.000000e-03 eta: 13:11:25 time: 0.671627 data_time: 0.067929 memory: 12959 loss_kpt: 337.433140 acc_pose: 0.806620 loss: 337.433140 2022/10/12 20:31:12 - mmengine - INFO - Epoch(train) [84][100/586] lr: 5.000000e-03 eta: 13:10:55 time: 0.661123 data_time: 0.060978 memory: 12959 loss_kpt: 351.647973 acc_pose: 0.823058 loss: 351.647973 2022/10/12 20:31:45 - mmengine - INFO - Epoch(train) [84][150/586] lr: 5.000000e-03 eta: 13:10:24 time: 0.660888 data_time: 0.061617 memory: 12959 loss_kpt: 360.949218 acc_pose: 0.804114 loss: 360.949218 2022/10/12 20:32:18 - mmengine - INFO - Epoch(train) [84][200/586] lr: 5.000000e-03 eta: 13:09:54 time: 0.659256 data_time: 0.055194 memory: 12959 loss_kpt: 353.681160 acc_pose: 0.780748 loss: 353.681160 2022/10/12 20:32:51 - mmengine - INFO - Epoch(train) [84][250/586] lr: 5.000000e-03 eta: 13:09:24 time: 0.664213 data_time: 0.056136 memory: 12959 loss_kpt: 355.208433 acc_pose: 0.702466 loss: 355.208433 2022/10/12 20:33:24 - mmengine - INFO - Epoch(train) [84][300/586] lr: 5.000000e-03 eta: 13:08:53 time: 0.655906 data_time: 0.053007 memory: 12959 loss_kpt: 355.723800 acc_pose: 0.775317 loss: 355.723800 2022/10/12 20:33:57 - mmengine - INFO - Epoch(train) [84][350/586] lr: 5.000000e-03 eta: 13:08:24 time: 0.667968 data_time: 0.061848 memory: 12959 loss_kpt: 345.746394 acc_pose: 0.837262 loss: 345.746394 2022/10/12 20:34:06 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:34:31 - mmengine - INFO - Epoch(train) [84][400/586] lr: 5.000000e-03 eta: 13:07:54 time: 0.665095 data_time: 0.051844 memory: 12959 loss_kpt: 344.670143 acc_pose: 0.757425 loss: 344.670143 2022/10/12 20:35:04 - mmengine - INFO - Epoch(train) [84][450/586] lr: 5.000000e-03 eta: 13:07:24 time: 0.673091 data_time: 0.059338 memory: 12959 
loss_kpt: 349.646451 acc_pose: 0.823403 loss: 349.646451 2022/10/12 20:35:37 - mmengine - INFO - Epoch(train) [84][500/586] lr: 5.000000e-03 eta: 13:06:54 time: 0.661957 data_time: 0.057609 memory: 12959 loss_kpt: 355.132839 acc_pose: 0.828422 loss: 355.132839 2022/10/12 20:36:11 - mmengine - INFO - Epoch(train) [84][550/586] lr: 5.000000e-03 eta: 13:06:24 time: 0.668110 data_time: 0.057353 memory: 12959 loss_kpt: 354.989224 acc_pose: 0.782846 loss: 354.989224 2022/10/12 20:36:35 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:37:10 - mmengine - INFO - Epoch(train) [85][50/586] lr: 5.000000e-03 eta: 13:05:00 time: 0.697916 data_time: 0.070971 memory: 12959 loss_kpt: 353.581971 acc_pose: 0.755266 loss: 353.581971 2022/10/12 20:37:44 - mmengine - INFO - Epoch(train) [85][100/586] lr: 5.000000e-03 eta: 13:04:31 time: 0.681642 data_time: 0.060951 memory: 12959 loss_kpt: 346.756003 acc_pose: 0.814975 loss: 346.756003 2022/10/12 20:38:18 - mmengine - INFO - Epoch(train) [85][150/586] lr: 5.000000e-03 eta: 13:04:02 time: 0.677310 data_time: 0.058334 memory: 12959 loss_kpt: 356.523247 acc_pose: 0.827953 loss: 356.523247 2022/10/12 20:38:51 - mmengine - INFO - Epoch(train) [85][200/586] lr: 5.000000e-03 eta: 13:03:33 time: 0.676161 data_time: 0.058650 memory: 12959 loss_kpt: 347.125334 acc_pose: 0.768078 loss: 347.125334 2022/10/12 20:39:25 - mmengine - INFO - Epoch(train) [85][250/586] lr: 5.000000e-03 eta: 13:03:04 time: 0.683118 data_time: 0.063651 memory: 12959 loss_kpt: 347.957312 acc_pose: 0.822945 loss: 347.957312 2022/10/12 20:39:59 - mmengine - INFO - Epoch(train) [85][300/586] lr: 5.000000e-03 eta: 13:02:35 time: 0.676010 data_time: 0.057432 memory: 12959 loss_kpt: 342.840377 acc_pose: 0.791364 loss: 342.840377 2022/10/12 20:40:33 - mmengine - INFO - Epoch(train) [85][350/586] lr: 5.000000e-03 eta: 13:02:06 time: 0.675997 data_time: 0.055989 memory: 12959 loss_kpt: 354.092534 acc_pose: 0.834288 loss: 354.092534 2022/10/12 20:41:06 - mmengine - INFO - Epoch(train) [85][400/586] lr: 5.000000e-03 eta: 13:01:36 time: 0.663862 data_time: 0.055629 memory: 12959 loss_kpt: 350.721612 acc_pose: 0.840359 loss: 350.721612 2022/10/12 20:41:40 - mmengine - INFO - Epoch(train) [85][450/586] lr: 5.000000e-03 eta: 13:01:06 time: 0.672741 data_time: 0.058472 memory: 12959 loss_kpt: 352.426840 acc_pose: 0.896096 loss: 352.426840 2022/10/12 20:42:14 - mmengine - INFO - Epoch(train) [85][500/586] lr: 5.000000e-03 eta: 13:00:37 time: 0.676051 data_time: 0.057370 memory: 12959 loss_kpt: 356.192423 acc_pose: 0.736351 loss: 356.192423 2022/10/12 20:42:48 - mmengine - INFO - Epoch(train) [85][550/586] lr: 5.000000e-03 eta: 13:00:09 time: 0.683224 data_time: 0.059023 memory: 12959 loss_kpt: 354.595645 acc_pose: 0.759254 loss: 354.595645 2022/10/12 20:43:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:43:46 - mmengine - INFO - Epoch(train) [86][50/586] lr: 5.000000e-03 eta: 12:58:43 time: 0.682180 data_time: 0.067803 memory: 12959 loss_kpt: 350.551981 acc_pose: 0.866821 loss: 350.551981 2022/10/12 20:44:19 - mmengine - INFO - Epoch(train) [86][100/586] lr: 5.000000e-03 eta: 12:58:13 time: 0.661338 data_time: 0.052168 memory: 12959 loss_kpt: 345.881340 acc_pose: 0.786221 loss: 345.881340 2022/10/12 20:44:52 - mmengine - INFO - Epoch(train) [86][150/586] lr: 5.000000e-03 eta: 12:57:42 time: 0.656419 data_time: 0.054592 memory: 12959 loss_kpt: 357.294609 acc_pose: 0.815760 loss: 357.294609 2022/10/12 20:45:19 - 
mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:45:25 - mmengine - INFO - Epoch(train) [86][200/586] lr: 5.000000e-03 eta: 12:57:12 time: 0.662847 data_time: 0.061078 memory: 12959 loss_kpt: 348.393511 acc_pose: 0.826579 loss: 348.393511 2022/10/12 20:45:59 - mmengine - INFO - Epoch(train) [86][250/586] lr: 5.000000e-03 eta: 12:56:43 time: 0.673932 data_time: 0.054021 memory: 12959 loss_kpt: 358.107990 acc_pose: 0.804394 loss: 358.107990 2022/10/12 20:46:32 - mmengine - INFO - Epoch(train) [86][300/586] lr: 5.000000e-03 eta: 12:56:13 time: 0.666092 data_time: 0.052900 memory: 12959 loss_kpt: 349.815898 acc_pose: 0.796246 loss: 349.815898 2022/10/12 20:47:06 - mmengine - INFO - Epoch(train) [86][350/586] lr: 5.000000e-03 eta: 12:55:43 time: 0.670672 data_time: 0.056179 memory: 12959 loss_kpt: 351.270371 acc_pose: 0.812368 loss: 351.270371 2022/10/12 20:47:39 - mmengine - INFO - Epoch(train) [86][400/586] lr: 5.000000e-03 eta: 12:55:13 time: 0.656600 data_time: 0.057573 memory: 12959 loss_kpt: 361.840717 acc_pose: 0.829718 loss: 361.840717 2022/10/12 20:48:11 - mmengine - INFO - Epoch(train) [86][450/586] lr: 5.000000e-03 eta: 12:54:42 time: 0.654828 data_time: 0.059322 memory: 12959 loss_kpt: 353.953891 acc_pose: 0.783059 loss: 353.953891 2022/10/12 20:48:44 - mmengine - INFO - Epoch(train) [86][500/586] lr: 5.000000e-03 eta: 12:54:11 time: 0.656568 data_time: 0.058599 memory: 12959 loss_kpt: 347.027106 acc_pose: 0.788251 loss: 347.027106 2022/10/12 20:49:17 - mmengine - INFO - Epoch(train) [86][550/586] lr: 5.000000e-03 eta: 12:53:41 time: 0.664213 data_time: 0.066724 memory: 12959 loss_kpt: 352.289912 acc_pose: 0.844208 loss: 352.289912 2022/10/12 20:49:41 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:50:15 - mmengine - INFO - Epoch(train) [87][50/586] lr: 5.000000e-03 eta: 12:52:17 time: 0.695591 data_time: 0.072682 memory: 12959 loss_kpt: 347.499402 acc_pose: 0.786536 loss: 347.499402 2022/10/12 20:50:49 - mmengine - INFO - Epoch(train) [87][100/586] lr: 5.000000e-03 eta: 12:51:48 time: 0.674794 data_time: 0.058734 memory: 12959 loss_kpt: 347.844767 acc_pose: 0.850301 loss: 347.844767 2022/10/12 20:51:23 - mmengine - INFO - Epoch(train) [87][150/586] lr: 5.000000e-03 eta: 12:51:19 time: 0.675470 data_time: 0.058441 memory: 12959 loss_kpt: 349.933715 acc_pose: 0.869350 loss: 349.933715 2022/10/12 20:51:56 - mmengine - INFO - Epoch(train) [87][200/586] lr: 5.000000e-03 eta: 12:50:49 time: 0.662167 data_time: 0.053741 memory: 12959 loss_kpt: 357.617854 acc_pose: 0.827497 loss: 357.617854 2022/10/12 20:52:30 - mmengine - INFO - Epoch(train) [87][250/586] lr: 5.000000e-03 eta: 12:50:20 time: 0.678378 data_time: 0.060188 memory: 12959 loss_kpt: 349.990446 acc_pose: 0.824022 loss: 349.990446 2022/10/12 20:53:04 - mmengine - INFO - Epoch(train) [87][300/586] lr: 5.000000e-03 eta: 12:49:50 time: 0.672662 data_time: 0.060414 memory: 12959 loss_kpt: 352.208086 acc_pose: 0.820113 loss: 352.208086 2022/10/12 20:53:37 - mmengine - INFO - Epoch(train) [87][350/586] lr: 5.000000e-03 eta: 12:49:21 time: 0.673370 data_time: 0.060699 memory: 12959 loss_kpt: 355.590037 acc_pose: 0.802790 loss: 355.590037 2022/10/12 20:54:11 - mmengine - INFO - Epoch(train) [87][400/586] lr: 5.000000e-03 eta: 12:48:51 time: 0.673392 data_time: 0.053959 memory: 12959 loss_kpt: 346.967239 acc_pose: 0.841549 loss: 346.967239 2022/10/12 20:54:45 - mmengine - INFO - Epoch(train) [87][450/586] lr: 5.000000e-03 eta: 12:48:22 time: 
0.674927 data_time: 0.056850 memory: 12959 loss_kpt: 344.091317 acc_pose: 0.797854 loss: 344.091317 2022/10/12 20:55:18 - mmengine - INFO - Epoch(train) [87][500/586] lr: 5.000000e-03 eta: 12:47:52 time: 0.669219 data_time: 0.057248 memory: 12959 loss_kpt: 353.844293 acc_pose: 0.801440 loss: 353.844293 2022/10/12 20:55:52 - mmengine - INFO - Epoch(train) [87][550/586] lr: 5.000000e-03 eta: 12:47:22 time: 0.668145 data_time: 0.061172 memory: 12959 loss_kpt: 346.482980 acc_pose: 0.804547 loss: 346.482980 2022/10/12 20:56:15 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:56:28 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 20:56:50 - mmengine - INFO - Epoch(train) [88][50/586] lr: 5.000000e-03 eta: 12:45:59 time: 0.694193 data_time: 0.074451 memory: 12959 loss_kpt: 349.759736 acc_pose: 0.819878 loss: 349.759736 2022/10/12 20:57:23 - mmengine - INFO - Epoch(train) [88][100/586] lr: 5.000000e-03 eta: 12:45:29 time: 0.668350 data_time: 0.058425 memory: 12959 loss_kpt: 343.109140 acc_pose: 0.855736 loss: 343.109140 2022/10/12 20:57:58 - mmengine - INFO - Epoch(train) [88][150/586] lr: 5.000000e-03 eta: 12:45:01 time: 0.682916 data_time: 0.057765 memory: 12959 loss_kpt: 344.209196 acc_pose: 0.774814 loss: 344.209196 2022/10/12 20:58:31 - mmengine - INFO - Epoch(train) [88][200/586] lr: 5.000000e-03 eta: 12:44:30 time: 0.664293 data_time: 0.053223 memory: 12959 loss_kpt: 345.343218 acc_pose: 0.855607 loss: 345.343218 2022/10/12 20:59:05 - mmengine - INFO - Epoch(train) [88][250/586] lr: 5.000000e-03 eta: 12:44:01 time: 0.677370 data_time: 0.060998 memory: 12959 loss_kpt: 355.687377 acc_pose: 0.779456 loss: 355.687377 2022/10/12 20:59:38 - mmengine - INFO - Epoch(train) [88][300/586] lr: 5.000000e-03 eta: 12:43:32 time: 0.674202 data_time: 0.058100 memory: 12959 loss_kpt: 348.319647 acc_pose: 0.858672 loss: 348.319647 2022/10/12 21:00:13 - mmengine - INFO - Epoch(train) [88][350/586] lr: 5.000000e-03 eta: 12:43:03 time: 0.683678 data_time: 0.065329 memory: 12959 loss_kpt: 349.852385 acc_pose: 0.812945 loss: 349.852385 2022/10/12 21:00:46 - mmengine - INFO - Epoch(train) [88][400/586] lr: 5.000000e-03 eta: 12:42:34 time: 0.672172 data_time: 0.058321 memory: 12959 loss_kpt: 343.870698 acc_pose: 0.748496 loss: 343.870698 2022/10/12 21:01:20 - mmengine - INFO - Epoch(train) [88][450/586] lr: 5.000000e-03 eta: 12:42:05 time: 0.681222 data_time: 0.064818 memory: 12959 loss_kpt: 347.305269 acc_pose: 0.834034 loss: 347.305269 2022/10/12 21:01:54 - mmengine - INFO - Epoch(train) [88][500/586] lr: 5.000000e-03 eta: 12:41:35 time: 0.668833 data_time: 0.056249 memory: 12959 loss_kpt: 356.511694 acc_pose: 0.804127 loss: 356.511694 2022/10/12 21:02:27 - mmengine - INFO - Epoch(train) [88][550/586] lr: 5.000000e-03 eta: 12:41:05 time: 0.672213 data_time: 0.058892 memory: 12959 loss_kpt: 351.414564 acc_pose: 0.856019 loss: 351.414564 2022/10/12 21:02:51 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:03:26 - mmengine - INFO - Epoch(train) [89][50/586] lr: 5.000000e-03 eta: 12:39:42 time: 0.684521 data_time: 0.068941 memory: 12959 loss_kpt: 345.793096 acc_pose: 0.827989 loss: 345.793096 2022/10/12 21:03:59 - mmengine - INFO - Epoch(train) [89][100/586] lr: 5.000000e-03 eta: 12:39:12 time: 0.664141 data_time: 0.059106 memory: 12959 loss_kpt: 344.301232 acc_pose: 0.829498 loss: 344.301232 2022/10/12 21:04:33 - mmengine - INFO - Epoch(train) [89][150/586] lr: 5.000000e-03 
eta: 12:38:42 time: 0.674390 data_time: 0.059082 memory: 12959 loss_kpt: 350.783429 acc_pose: 0.787728 loss: 350.783429 2022/10/12 21:05:06 - mmengine - INFO - Epoch(train) [89][200/586] lr: 5.000000e-03 eta: 12:38:13 time: 0.670957 data_time: 0.060620 memory: 12959 loss_kpt: 350.746158 acc_pose: 0.783329 loss: 350.746158 2022/10/12 21:05:40 - mmengine - INFO - Epoch(train) [89][250/586] lr: 5.000000e-03 eta: 12:37:44 time: 0.681755 data_time: 0.060640 memory: 12959 loss_kpt: 355.430981 acc_pose: 0.759857 loss: 355.430981 2022/10/12 21:06:14 - mmengine - INFO - Epoch(train) [89][300/586] lr: 5.000000e-03 eta: 12:37:15 time: 0.680079 data_time: 0.053324 memory: 12959 loss_kpt: 348.694026 acc_pose: 0.732859 loss: 348.694026 2022/10/12 21:06:48 - mmengine - INFO - Epoch(train) [89][350/586] lr: 5.000000e-03 eta: 12:36:45 time: 0.673270 data_time: 0.055848 memory: 12959 loss_kpt: 343.147852 acc_pose: 0.855521 loss: 343.147852 2022/10/12 21:07:22 - mmengine - INFO - Epoch(train) [89][400/586] lr: 5.000000e-03 eta: 12:36:17 time: 0.690328 data_time: 0.055245 memory: 12959 loss_kpt: 346.485615 acc_pose: 0.856797 loss: 346.485615 2022/10/12 21:07:44 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:07:56 - mmengine - INFO - Epoch(train) [89][450/586] lr: 5.000000e-03 eta: 12:35:47 time: 0.675066 data_time: 0.055698 memory: 12959 loss_kpt: 348.058414 acc_pose: 0.803071 loss: 348.058414 2022/10/12 21:08:30 - mmengine - INFO - Epoch(train) [89][500/586] lr: 5.000000e-03 eta: 12:35:18 time: 0.672558 data_time: 0.058385 memory: 12959 loss_kpt: 346.446803 acc_pose: 0.849405 loss: 346.446803 2022/10/12 21:09:03 - mmengine - INFO - Epoch(train) [89][550/586] lr: 5.000000e-03 eta: 12:34:48 time: 0.667432 data_time: 0.062297 memory: 12959 loss_kpt: 351.382489 acc_pose: 0.733806 loss: 351.382489 2022/10/12 21:09:27 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:10:01 - mmengine - INFO - Epoch(train) [90][50/586] lr: 5.000000e-03 eta: 12:33:25 time: 0.688290 data_time: 0.065569 memory: 12959 loss_kpt: 349.651586 acc_pose: 0.826532 loss: 349.651586 2022/10/12 21:10:34 - mmengine - INFO - Epoch(train) [90][100/586] lr: 5.000000e-03 eta: 12:32:55 time: 0.663134 data_time: 0.058102 memory: 12959 loss_kpt: 350.394048 acc_pose: 0.776267 loss: 350.394048 2022/10/12 21:11:08 - mmengine - INFO - Epoch(train) [90][150/586] lr: 5.000000e-03 eta: 12:32:25 time: 0.661733 data_time: 0.057533 memory: 12959 loss_kpt: 353.799381 acc_pose: 0.815459 loss: 353.799381 2022/10/12 21:11:41 - mmengine - INFO - Epoch(train) [90][200/586] lr: 5.000000e-03 eta: 12:31:55 time: 0.668303 data_time: 0.054775 memory: 12959 loss_kpt: 346.548452 acc_pose: 0.794989 loss: 346.548452 2022/10/12 21:12:14 - mmengine - INFO - Epoch(train) [90][250/586] lr: 5.000000e-03 eta: 12:31:25 time: 0.666143 data_time: 0.057817 memory: 12959 loss_kpt: 354.778150 acc_pose: 0.808640 loss: 354.778150 2022/10/12 21:12:48 - mmengine - INFO - Epoch(train) [90][300/586] lr: 5.000000e-03 eta: 12:30:55 time: 0.674513 data_time: 0.055297 memory: 12959 loss_kpt: 346.125249 acc_pose: 0.833616 loss: 346.125249 2022/10/12 21:13:23 - mmengine - INFO - Epoch(train) [90][350/586] lr: 5.000000e-03 eta: 12:30:27 time: 0.694224 data_time: 0.054504 memory: 12959 loss_kpt: 351.413259 acc_pose: 0.862387 loss: 351.413259 2022/10/12 21:13:58 - mmengine - INFO - Epoch(train) [90][400/586] lr: 5.000000e-03 eta: 12:30:00 time: 0.705014 data_time: 0.056618 memory: 12959 loss_kpt: 349.905693 
acc_pose: 0.790384 loss: 349.905693
2022/10/12 21:14:34 - mmengine - INFO - Epoch(train) [90][450/586] lr: 5.000000e-03 eta: 12:29:33 time: 0.711884 data_time: 0.055444 memory: 12959 loss_kpt: 345.672224 acc_pose: 0.801719 loss: 345.672224
2022/10/12 21:15:09 - mmengine - INFO - Epoch(train) [90][500/586] lr: 5.000000e-03 eta: 12:29:06 time: 0.708755 data_time: 0.058272 memory: 12959 loss_kpt: 348.037971 acc_pose: 0.832641 loss: 348.037971
2022/10/12 21:15:45 - mmengine - INFO - Epoch(train) [90][550/586] lr: 5.000000e-03 eta: 12:28:39 time: 0.715134 data_time: 0.060252 memory: 12959 loss_kpt: 350.108597 acc_pose: 0.781115 loss: 350.108597
2022/10/12 21:16:10 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/12 21:16:10 - mmengine - INFO - Saving checkpoint at 90 epochs
2022/10/12 21:16:28 - mmengine - INFO - Epoch(val) [90][50/407] eta: 0:01:37 time: 0.273399 data_time: 0.012648 memory: 12959
2022/10/12 21:16:41 - mmengine - INFO - Epoch(val) [90][100/407] eta: 0:01:20 time: 0.261090 data_time: 0.007684 memory: 2407
2022/10/12 21:16:54 - mmengine - INFO - Epoch(val) [90][150/407] eta: 0:01:06 time: 0.260243 data_time: 0.007771 memory: 2407
2022/10/12 21:17:07 - mmengine - INFO - Epoch(val) [90][200/407] eta: 0:00:55 time: 0.267241 data_time: 0.008065 memory: 2407
2022/10/12 21:17:20 - mmengine - INFO - Epoch(val) [90][250/407] eta: 0:00:41 time: 0.263263 data_time: 0.007808 memory: 2407
2022/10/12 21:17:34 - mmengine - INFO - Epoch(val) [90][300/407] eta: 0:00:28 time: 0.264555 data_time: 0.007646 memory: 2407
2022/10/12 21:17:47 - mmengine - INFO - Epoch(val) [90][350/407] eta: 0:00:14 time: 0.261411 data_time: 0.007736 memory: 2407
2022/10/12 21:17:59 - mmengine - INFO - Epoch(val) [90][400/407] eta: 0:00:01 time: 0.254949 data_time: 0.007694 memory: 2407
2022/10/12 21:18:13 - mmengine - INFO - Evaluating CocoMetric...
2022/10/12 21:18:29 - mmengine - INFO - Epoch(val) [90][407/407] coco/AP: 0.705838 coco/AP .5: 0.885338 coco/AP .75: 0.779800 coco/AP (M): 0.677413 coco/AP (L): 0.763051 coco/AR: 0.774701 coco/AR .5: 0.927739 coco/AR .75: 0.837217 coco/AR (M): 0.732778 coco/AR (L): 0.832590
2022/10/12 21:18:29 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_80.pth is removed
2022/10/12 21:18:32 - mmengine - INFO - The best checkpoint with 0.7058 coco/AP at 90 epoch is saved to best_coco/AP_epoch_90.pth.
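The validation summaries above follow a fixed text pattern (timestamp, "Epoch(val) [epoch][407/407]", then the COCO metrics), so the AP curve of this run can be recovered directly from the log text. Below is a minimal parsing sketch; the filename "train.log" is an assumption for illustration, not something stated in the log.

import re

# Minimal sketch: collect coco/AP per validated epoch from a saved copy of this
# log. The path "train.log" is an assumed filename, not taken from the log.
AP_LINE = re.compile(
    r"Epoch\(val\) \[(?P<epoch>\d+)\]\[\d+/\d+\]\s+coco/AP: (?P<ap>[0-9.]+)")

ap_by_epoch = {}
with open("train.log") as f:
    for line in f:
        m = AP_LINE.search(line)
        if m:
            ap_by_epoch[int(m.group("epoch"))] = float(m.group("ap"))

# For the snippets shown here this yields e.g. {90: 0.705838, 100: 0.716412, ...}
best_epoch = max(ap_by_epoch, key=ap_by_epoch.get)
print(f"best coco/AP {ap_by_epoch[best_epoch]:.4f} at epoch {best_epoch}")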
2022/10/12 21:19:06 - mmengine - INFO - Epoch(train) [91][50/586] lr: 5.000000e-03 eta: 12:27:16 time: 0.680073 data_time: 0.069592 memory: 12959 loss_kpt: 346.591596 acc_pose: 0.661768 loss: 346.591596 2022/10/12 21:19:39 - mmengine - INFO - Epoch(train) [91][100/586] lr: 5.000000e-03 eta: 12:26:46 time: 0.667632 data_time: 0.057336 memory: 12959 loss_kpt: 347.744099 acc_pose: 0.860714 loss: 347.744099 2022/10/12 21:20:12 - mmengine - INFO - Epoch(train) [91][150/586] lr: 5.000000e-03 eta: 12:26:16 time: 0.662252 data_time: 0.063073 memory: 12959 loss_kpt: 339.700311 acc_pose: 0.834976 loss: 339.700311 2022/10/12 21:20:45 - mmengine - INFO - Epoch(train) [91][200/586] lr: 5.000000e-03 eta: 12:25:45 time: 0.652491 data_time: 0.057368 memory: 12959 loss_kpt: 345.661296 acc_pose: 0.756222 loss: 345.661296 2022/10/12 21:21:19 - mmengine - INFO - Epoch(train) [91][250/586] lr: 5.000000e-03 eta: 12:25:16 time: 0.676103 data_time: 0.063438 memory: 12959 loss_kpt: 347.530865 acc_pose: 0.853247 loss: 347.530865 2022/10/12 21:21:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:21:52 - mmengine - INFO - Epoch(train) [91][300/586] lr: 5.000000e-03 eta: 12:24:46 time: 0.667125 data_time: 0.055974 memory: 12959 loss_kpt: 347.892113 acc_pose: 0.798913 loss: 347.892113 2022/10/12 21:22:25 - mmengine - INFO - Epoch(train) [91][350/586] lr: 5.000000e-03 eta: 12:24:15 time: 0.663081 data_time: 0.060362 memory: 12959 loss_kpt: 345.925863 acc_pose: 0.833685 loss: 345.925863 2022/10/12 21:22:59 - mmengine - INFO - Epoch(train) [91][400/586] lr: 5.000000e-03 eta: 12:23:45 time: 0.668923 data_time: 0.056895 memory: 12959 loss_kpt: 343.938474 acc_pose: 0.786041 loss: 343.938474 2022/10/12 21:23:32 - mmengine - INFO - Epoch(train) [91][450/586] lr: 5.000000e-03 eta: 12:23:16 time: 0.669820 data_time: 0.064789 memory: 12959 loss_kpt: 347.297600 acc_pose: 0.836215 loss: 347.297600 2022/10/12 21:24:05 - mmengine - INFO - Epoch(train) [91][500/586] lr: 5.000000e-03 eta: 12:22:45 time: 0.665610 data_time: 0.057166 memory: 12959 loss_kpt: 346.191279 acc_pose: 0.843490 loss: 346.191279 2022/10/12 21:24:39 - mmengine - INFO - Epoch(train) [91][550/586] lr: 5.000000e-03 eta: 12:22:15 time: 0.667255 data_time: 0.062291 memory: 12959 loss_kpt: 342.829150 acc_pose: 0.772117 loss: 342.829150 2022/10/12 21:25:03 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:25:37 - mmengine - INFO - Epoch(train) [92][50/586] lr: 5.000000e-03 eta: 12:20:53 time: 0.675852 data_time: 0.072014 memory: 12959 loss_kpt: 351.015432 acc_pose: 0.810079 loss: 351.015432 2022/10/12 21:26:10 - mmengine - INFO - Epoch(train) [92][100/586] lr: 5.000000e-03 eta: 12:20:23 time: 0.670956 data_time: 0.059357 memory: 12959 loss_kpt: 340.284996 acc_pose: 0.829627 loss: 340.284996 2022/10/12 21:26:44 - mmengine - INFO - Epoch(train) [92][150/586] lr: 5.000000e-03 eta: 12:19:54 time: 0.676122 data_time: 0.056696 memory: 12959 loss_kpt: 351.805440 acc_pose: 0.783254 loss: 351.805440 2022/10/12 21:27:18 - mmengine - INFO - Epoch(train) [92][200/586] lr: 5.000000e-03 eta: 12:19:25 time: 0.685937 data_time: 0.058536 memory: 12959 loss_kpt: 347.094028 acc_pose: 0.802674 loss: 347.094028 2022/10/12 21:27:53 - mmengine - INFO - Epoch(train) [92][250/586] lr: 5.000000e-03 eta: 12:18:57 time: 0.693823 data_time: 0.057209 memory: 12959 loss_kpt: 350.654128 acc_pose: 0.780262 loss: 350.654128 2022/10/12 21:28:28 - mmengine - INFO - Epoch(train) [92][300/586] lr: 5.000000e-03 
eta: 12:18:28 time: 0.690116 data_time: 0.055343 memory: 12959 loss_kpt: 345.882812 acc_pose: 0.850451 loss: 345.882812 2022/10/12 21:29:02 - mmengine - INFO - Epoch(train) [92][350/586] lr: 5.000000e-03 eta: 12:18:00 time: 0.691586 data_time: 0.056384 memory: 12959 loss_kpt: 344.683716 acc_pose: 0.849212 loss: 344.683716 2022/10/12 21:29:36 - mmengine - INFO - Epoch(train) [92][400/586] lr: 5.000000e-03 eta: 12:17:31 time: 0.685205 data_time: 0.058309 memory: 12959 loss_kpt: 348.180786 acc_pose: 0.838781 loss: 348.180786 2022/10/12 21:30:11 - mmengine - INFO - Epoch(train) [92][450/586] lr: 5.000000e-03 eta: 12:17:03 time: 0.694565 data_time: 0.056096 memory: 12959 loss_kpt: 344.296090 acc_pose: 0.877213 loss: 344.296090 2022/10/12 21:30:46 - mmengine - INFO - Epoch(train) [92][500/586] lr: 5.000000e-03 eta: 12:16:35 time: 0.706113 data_time: 0.060679 memory: 12959 loss_kpt: 349.178370 acc_pose: 0.847354 loss: 349.178370 2022/10/12 21:31:22 - mmengine - INFO - Epoch(train) [92][550/586] lr: 5.000000e-03 eta: 12:16:08 time: 0.707333 data_time: 0.061063 memory: 12959 loss_kpt: 350.697567 acc_pose: 0.739680 loss: 350.697567 2022/10/12 21:31:47 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:32:22 - mmengine - INFO - Epoch(train) [93][50/586] lr: 5.000000e-03 eta: 12:14:47 time: 0.693180 data_time: 0.066602 memory: 12959 loss_kpt: 353.489626 acc_pose: 0.779624 loss: 353.489626 2022/10/12 21:32:49 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:32:57 - mmengine - INFO - Epoch(train) [93][100/586] lr: 5.000000e-03 eta: 12:14:19 time: 0.700690 data_time: 0.058432 memory: 12959 loss_kpt: 345.469435 acc_pose: 0.863640 loss: 345.469435 2022/10/12 21:33:32 - mmengine - INFO - Epoch(train) [93][150/586] lr: 5.000000e-03 eta: 12:13:51 time: 0.701773 data_time: 0.059640 memory: 12959 loss_kpt: 343.156248 acc_pose: 0.788013 loss: 343.156248 2022/10/12 21:34:06 - mmengine - INFO - Epoch(train) [93][200/586] lr: 5.000000e-03 eta: 12:13:22 time: 0.677060 data_time: 0.055943 memory: 12959 loss_kpt: 348.356527 acc_pose: 0.822415 loss: 348.356527 2022/10/12 21:34:41 - mmengine - INFO - Epoch(train) [93][250/586] lr: 5.000000e-03 eta: 12:12:53 time: 0.696173 data_time: 0.060789 memory: 12959 loss_kpt: 348.320388 acc_pose: 0.842438 loss: 348.320388 2022/10/12 21:35:15 - mmengine - INFO - Epoch(train) [93][300/586] lr: 5.000000e-03 eta: 12:12:24 time: 0.684284 data_time: 0.060270 memory: 12959 loss_kpt: 348.898852 acc_pose: 0.794232 loss: 348.898852 2022/10/12 21:35:50 - mmengine - INFO - Epoch(train) [93][350/586] lr: 5.000000e-03 eta: 12:11:56 time: 0.693779 data_time: 0.063569 memory: 12959 loss_kpt: 348.070328 acc_pose: 0.775070 loss: 348.070328 2022/10/12 21:36:24 - mmengine - INFO - Epoch(train) [93][400/586] lr: 5.000000e-03 eta: 12:11:27 time: 0.681241 data_time: 0.059380 memory: 12959 loss_kpt: 349.174781 acc_pose: 0.813508 loss: 349.174781 2022/10/12 21:36:58 - mmengine - INFO - Epoch(train) [93][450/586] lr: 5.000000e-03 eta: 12:10:58 time: 0.687010 data_time: 0.060291 memory: 12959 loss_kpt: 340.655511 acc_pose: 0.838661 loss: 340.655511 2022/10/12 21:37:32 - mmengine - INFO - Epoch(train) [93][500/586] lr: 5.000000e-03 eta: 12:10:29 time: 0.682076 data_time: 0.055642 memory: 12959 loss_kpt: 347.405117 acc_pose: 0.800604 loss: 347.405117 2022/10/12 21:38:07 - mmengine - INFO - Epoch(train) [93][550/586] lr: 5.000000e-03 eta: 12:10:00 time: 0.683408 data_time: 0.055513 memory: 12959 loss_kpt: 343.540972 
acc_pose: 0.831189 loss: 343.540972 2022/10/12 21:38:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:39:06 - mmengine - INFO - Epoch(train) [94][50/586] lr: 5.000000e-03 eta: 12:08:39 time: 0.694078 data_time: 0.069847 memory: 12959 loss_kpt: 341.330418 acc_pose: 0.803306 loss: 341.330418 2022/10/12 21:39:39 - mmengine - INFO - Epoch(train) [94][100/586] lr: 5.000000e-03 eta: 12:08:09 time: 0.662875 data_time: 0.058420 memory: 12959 loss_kpt: 343.171245 acc_pose: 0.853692 loss: 343.171245 2022/10/12 21:40:13 - mmengine - INFO - Epoch(train) [94][150/586] lr: 5.000000e-03 eta: 12:07:40 time: 0.685772 data_time: 0.054044 memory: 12959 loss_kpt: 349.127908 acc_pose: 0.827538 loss: 349.127908 2022/10/12 21:40:47 - mmengine - INFO - Epoch(train) [94][200/586] lr: 5.000000e-03 eta: 12:07:10 time: 0.675016 data_time: 0.058782 memory: 12959 loss_kpt: 346.694501 acc_pose: 0.764710 loss: 346.694501 2022/10/12 21:41:21 - mmengine - INFO - Epoch(train) [94][250/586] lr: 5.000000e-03 eta: 12:06:41 time: 0.685422 data_time: 0.061461 memory: 12959 loss_kpt: 342.855260 acc_pose: 0.778713 loss: 342.855260 2022/10/12 21:41:55 - mmengine - INFO - Epoch(train) [94][300/586] lr: 5.000000e-03 eta: 12:06:12 time: 0.685398 data_time: 0.056563 memory: 12959 loss_kpt: 340.593842 acc_pose: 0.795651 loss: 340.593842 2022/10/12 21:42:30 - mmengine - INFO - Epoch(train) [94][350/586] lr: 5.000000e-03 eta: 12:05:44 time: 0.694744 data_time: 0.056130 memory: 12959 loss_kpt: 343.485624 acc_pose: 0.844768 loss: 343.485624 2022/10/12 21:43:04 - mmengine - INFO - Epoch(train) [94][400/586] lr: 5.000000e-03 eta: 12:05:15 time: 0.679945 data_time: 0.056207 memory: 12959 loss_kpt: 344.495037 acc_pose: 0.788868 loss: 344.495037 2022/10/12 21:43:40 - mmengine - INFO - Epoch(train) [94][450/586] lr: 5.000000e-03 eta: 12:04:47 time: 0.711485 data_time: 0.060867 memory: 12959 loss_kpt: 343.179938 acc_pose: 0.790445 loss: 343.179938 2022/10/12 21:44:15 - mmengine - INFO - Epoch(train) [94][500/586] lr: 5.000000e-03 eta: 12:04:20 time: 0.708947 data_time: 0.056704 memory: 12959 loss_kpt: 347.029193 acc_pose: 0.734707 loss: 347.029193 2022/10/12 21:44:17 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:44:51 - mmengine - INFO - Epoch(train) [94][550/586] lr: 5.000000e-03 eta: 12:03:53 time: 0.717875 data_time: 0.059955 memory: 12959 loss_kpt: 347.196549 acc_pose: 0.839269 loss: 347.196549 2022/10/12 21:45:16 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:45:51 - mmengine - INFO - Epoch(train) [95][50/586] lr: 5.000000e-03 eta: 12:02:34 time: 0.711266 data_time: 0.063659 memory: 12959 loss_kpt: 345.683896 acc_pose: 0.870800 loss: 345.683896 2022/10/12 21:46:26 - mmengine - INFO - Epoch(train) [95][100/586] lr: 5.000000e-03 eta: 12:02:05 time: 0.689765 data_time: 0.061417 memory: 12959 loss_kpt: 344.184271 acc_pose: 0.872249 loss: 344.184271 2022/10/12 21:47:00 - mmengine - INFO - Epoch(train) [95][150/586] lr: 5.000000e-03 eta: 12:01:36 time: 0.684978 data_time: 0.058234 memory: 12959 loss_kpt: 342.920407 acc_pose: 0.809348 loss: 342.920407 2022/10/12 21:47:34 - mmengine - INFO - Epoch(train) [95][200/586] lr: 5.000000e-03 eta: 12:01:07 time: 0.681982 data_time: 0.056901 memory: 12959 loss_kpt: 340.977495 acc_pose: 0.801344 loss: 340.977495 2022/10/12 21:48:08 - mmengine - INFO - Epoch(train) [95][250/586] lr: 5.000000e-03 eta: 12:00:37 time: 0.676035 data_time: 0.058025 memory: 12959 
loss_kpt: 343.378185 acc_pose: 0.833667 loss: 343.378185 2022/10/12 21:48:42 - mmengine - INFO - Epoch(train) [95][300/586] lr: 5.000000e-03 eta: 12:00:08 time: 0.685558 data_time: 0.061424 memory: 12959 loss_kpt: 344.647465 acc_pose: 0.818559 loss: 344.647465 2022/10/12 21:49:17 - mmengine - INFO - Epoch(train) [95][350/586] lr: 5.000000e-03 eta: 11:59:39 time: 0.686357 data_time: 0.059662 memory: 12959 loss_kpt: 349.427944 acc_pose: 0.814084 loss: 349.427944 2022/10/12 21:49:51 - mmengine - INFO - Epoch(train) [95][400/586] lr: 5.000000e-03 eta: 11:59:10 time: 0.683334 data_time: 0.057554 memory: 12959 loss_kpt: 343.001049 acc_pose: 0.789035 loss: 343.001049 2022/10/12 21:50:25 - mmengine - INFO - Epoch(train) [95][450/586] lr: 5.000000e-03 eta: 11:58:41 time: 0.683292 data_time: 0.061181 memory: 12959 loss_kpt: 338.535515 acc_pose: 0.862196 loss: 338.535515 2022/10/12 21:51:00 - mmengine - INFO - Epoch(train) [95][500/586] lr: 5.000000e-03 eta: 11:58:13 time: 0.704101 data_time: 0.062131 memory: 12959 loss_kpt: 347.455883 acc_pose: 0.839169 loss: 347.455883 2022/10/12 21:51:35 - mmengine - INFO - Epoch(train) [95][550/586] lr: 5.000000e-03 eta: 11:57:44 time: 0.691170 data_time: 0.056860 memory: 12959 loss_kpt: 350.551259 acc_pose: 0.788475 loss: 350.551259 2022/10/12 21:51:59 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:52:35 - mmengine - INFO - Epoch(train) [96][50/586] lr: 5.000000e-03 eta: 11:56:26 time: 0.721172 data_time: 0.069569 memory: 12959 loss_kpt: 347.669612 acc_pose: 0.801612 loss: 347.669612 2022/10/12 21:53:11 - mmengine - INFO - Epoch(train) [96][100/586] lr: 5.000000e-03 eta: 11:55:59 time: 0.713437 data_time: 0.057612 memory: 12959 loss_kpt: 344.303869 acc_pose: 0.793596 loss: 344.303869 2022/10/12 21:53:47 - mmengine - INFO - Epoch(train) [96][150/586] lr: 5.000000e-03 eta: 11:55:32 time: 0.725580 data_time: 0.059263 memory: 12959 loss_kpt: 342.926140 acc_pose: 0.795355 loss: 342.926140 2022/10/12 21:54:23 - mmengine - INFO - Epoch(train) [96][200/586] lr: 5.000000e-03 eta: 11:55:05 time: 0.715831 data_time: 0.052440 memory: 12959 loss_kpt: 341.678012 acc_pose: 0.736867 loss: 341.678012 2022/10/12 21:54:59 - mmengine - INFO - Epoch(train) [96][250/586] lr: 5.000000e-03 eta: 11:54:38 time: 0.720339 data_time: 0.058638 memory: 12959 loss_kpt: 342.462298 acc_pose: 0.836272 loss: 342.462298 2022/10/12 21:55:35 - mmengine - INFO - Epoch(train) [96][300/586] lr: 5.000000e-03 eta: 11:54:11 time: 0.716309 data_time: 0.053540 memory: 12959 loss_kpt: 348.318797 acc_pose: 0.841012 loss: 348.318797 2022/10/12 21:55:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:56:11 - mmengine - INFO - Epoch(train) [96][350/586] lr: 5.000000e-03 eta: 11:53:43 time: 0.711212 data_time: 0.058345 memory: 12959 loss_kpt: 345.520070 acc_pose: 0.835250 loss: 345.520070 2022/10/12 21:56:47 - mmengine - INFO - Epoch(train) [96][400/586] lr: 5.000000e-03 eta: 11:53:16 time: 0.722476 data_time: 0.055565 memory: 12959 loss_kpt: 344.027982 acc_pose: 0.802906 loss: 344.027982 2022/10/12 21:57:23 - mmengine - INFO - Epoch(train) [96][450/586] lr: 5.000000e-03 eta: 11:52:49 time: 0.722004 data_time: 0.062975 memory: 12959 loss_kpt: 343.743082 acc_pose: 0.778087 loss: 343.743082 2022/10/12 21:57:58 - mmengine - INFO - Epoch(train) [96][500/586] lr: 5.000000e-03 eta: 11:52:22 time: 0.712455 data_time: 0.056997 memory: 12959 loss_kpt: 343.808406 acc_pose: 0.776859 loss: 343.808406 2022/10/12 21:58:34 - 
mmengine - INFO - Epoch(train) [96][550/586] lr: 5.000000e-03 eta: 11:51:54 time: 0.716753 data_time: 0.060579 memory: 12959 loss_kpt: 344.060941 acc_pose: 0.818759 loss: 344.060941 2022/10/12 21:58:59 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 21:59:33 - mmengine - INFO - Epoch(train) [97][50/586] lr: 5.000000e-03 eta: 11:50:34 time: 0.672604 data_time: 0.064358 memory: 12959 loss_kpt: 346.521581 acc_pose: 0.730553 loss: 346.521581 2022/10/12 22:00:07 - mmengine - INFO - Epoch(train) [97][100/586] lr: 5.000000e-03 eta: 11:50:04 time: 0.669408 data_time: 0.053582 memory: 12959 loss_kpt: 344.730748 acc_pose: 0.810465 loss: 344.730748 2022/10/12 22:00:40 - mmengine - INFO - Epoch(train) [97][150/586] lr: 5.000000e-03 eta: 11:49:34 time: 0.669654 data_time: 0.054784 memory: 12959 loss_kpt: 347.850404 acc_pose: 0.825772 loss: 347.850404 2022/10/12 22:01:13 - mmengine - INFO - Epoch(train) [97][200/586] lr: 5.000000e-03 eta: 11:49:04 time: 0.666370 data_time: 0.058371 memory: 12959 loss_kpt: 341.608533 acc_pose: 0.781063 loss: 341.608533 2022/10/12 22:01:47 - mmengine - INFO - Epoch(train) [97][250/586] lr: 5.000000e-03 eta: 11:48:33 time: 0.668238 data_time: 0.058125 memory: 12959 loss_kpt: 341.921602 acc_pose: 0.795175 loss: 341.921602 2022/10/12 22:02:21 - mmengine - INFO - Epoch(train) [97][300/586] lr: 5.000000e-03 eta: 11:48:04 time: 0.675293 data_time: 0.056509 memory: 12959 loss_kpt: 344.026385 acc_pose: 0.806327 loss: 344.026385 2022/10/12 22:02:54 - mmengine - INFO - Epoch(train) [97][350/586] lr: 5.000000e-03 eta: 11:47:34 time: 0.670149 data_time: 0.058148 memory: 12959 loss_kpt: 349.683076 acc_pose: 0.807378 loss: 349.683076 2022/10/12 22:03:27 - mmengine - INFO - Epoch(train) [97][400/586] lr: 5.000000e-03 eta: 11:47:03 time: 0.662880 data_time: 0.057487 memory: 12959 loss_kpt: 345.069043 acc_pose: 0.814209 loss: 345.069043 2022/10/12 22:04:01 - mmengine - INFO - Epoch(train) [97][450/586] lr: 5.000000e-03 eta: 11:46:33 time: 0.675311 data_time: 0.057246 memory: 12959 loss_kpt: 341.006755 acc_pose: 0.835137 loss: 341.006755 2022/10/12 22:04:36 - mmengine - INFO - Epoch(train) [97][500/586] lr: 5.000000e-03 eta: 11:46:05 time: 0.695167 data_time: 0.060478 memory: 12959 loss_kpt: 340.150479 acc_pose: 0.827198 loss: 340.150479 2022/10/12 22:05:10 - mmengine - INFO - Epoch(train) [97][550/586] lr: 5.000000e-03 eta: 11:45:35 time: 0.685947 data_time: 0.058934 memory: 12959 loss_kpt: 342.306262 acc_pose: 0.756978 loss: 342.306262 2022/10/12 22:05:35 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:06:10 - mmengine - INFO - Epoch(train) [98][50/586] lr: 5.000000e-03 eta: 11:44:17 time: 0.699800 data_time: 0.070473 memory: 12959 loss_kpt: 336.116865 acc_pose: 0.803097 loss: 336.116865 2022/10/12 22:06:45 - mmengine - INFO - Epoch(train) [98][100/586] lr: 5.000000e-03 eta: 11:43:49 time: 0.699408 data_time: 0.058178 memory: 12959 loss_kpt: 341.700639 acc_pose: 0.824743 loss: 341.700639 2022/10/12 22:07:20 - mmengine - INFO - Epoch(train) [98][150/586] lr: 5.000000e-03 eta: 11:43:20 time: 0.697485 data_time: 0.061111 memory: 12959 loss_kpt: 344.449824 acc_pose: 0.847361 loss: 344.449824 2022/10/12 22:07:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:07:54 - mmengine - INFO - Epoch(train) [98][200/586] lr: 5.000000e-03 eta: 11:42:52 time: 0.697716 data_time: 0.052232 memory: 12959 loss_kpt: 341.063903 acc_pose: 0.762061 loss: 341.063903 
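Each training record carries the same fields (lr, eta, time, data_time, memory, loss_kpt, acc_pose, loss), which makes it straightforward to track the slow drift of loss_kpt from roughly 350 at epoch 89 down to roughly 340 by epoch 98. A minimal per-epoch aggregator is sketched below; again, "train.log" is only an assumed filename.

import re
from collections import defaultdict

# Minimal sketch: per-epoch mean loss_kpt / acc_pose from training records of the
# form shown above. "train.log" is an assumed filename.
TRAIN_LINE = re.compile(
    r"Epoch\(train\) \[(?P<epoch>\d+)\]\[\d+/\d+\].*?"
    r"loss_kpt: (?P<loss_kpt>[0-9.]+) acc_pose: (?P<acc>[0-9.]+)")

stats = defaultdict(lambda: [0.0, 0.0, 0])  # epoch -> [sum loss_kpt, sum acc_pose, count]
with open("train.log") as f:
    for line in f:
        # finditer also copes with lines that hold several records run together
        for m in TRAIN_LINE.finditer(line):
            s = stats[int(m.group("epoch"))]
            s[0] += float(m.group("loss_kpt"))
            s[1] += float(m.group("acc"))
            s[2] += 1

for epoch in sorted(stats):
    loss_sum, acc_sum, n = stats[epoch]
    print(f"epoch {epoch:3d}: mean loss_kpt {loss_sum / n:7.2f}  mean acc_pose {acc_sum / n:.3f}")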
2022/10/12 22:08:29 - mmengine - INFO - Epoch(train) [98][250/586] lr: 5.000000e-03 eta: 11:42:23 time: 0.697354 data_time: 0.064658 memory: 12959 loss_kpt: 342.951306 acc_pose: 0.763823 loss: 342.951306 2022/10/12 22:09:04 - mmengine - INFO - Epoch(train) [98][300/586] lr: 5.000000e-03 eta: 11:41:54 time: 0.692061 data_time: 0.052677 memory: 12959 loss_kpt: 337.931292 acc_pose: 0.822741 loss: 337.931292 2022/10/12 22:09:39 - mmengine - INFO - Epoch(train) [98][350/586] lr: 5.000000e-03 eta: 11:41:26 time: 0.709644 data_time: 0.060779 memory: 12959 loss_kpt: 337.975538 acc_pose: 0.828641 loss: 337.975538 2022/10/12 22:10:15 - mmengine - INFO - Epoch(train) [98][400/586] lr: 5.000000e-03 eta: 11:40:58 time: 0.702493 data_time: 0.057228 memory: 12959 loss_kpt: 342.261409 acc_pose: 0.865361 loss: 342.261409 2022/10/12 22:10:50 - mmengine - INFO - Epoch(train) [98][450/586] lr: 5.000000e-03 eta: 11:40:30 time: 0.701917 data_time: 0.060600 memory: 12959 loss_kpt: 341.253531 acc_pose: 0.811883 loss: 341.253531 2022/10/12 22:11:25 - mmengine - INFO - Epoch(train) [98][500/586] lr: 5.000000e-03 eta: 11:40:01 time: 0.699946 data_time: 0.053767 memory: 12959 loss_kpt: 338.523911 acc_pose: 0.841160 loss: 338.523911 2022/10/12 22:12:00 - mmengine - INFO - Epoch(train) [98][550/586] lr: 5.000000e-03 eta: 11:39:33 time: 0.701522 data_time: 0.058222 memory: 12959 loss_kpt: 343.102422 acc_pose: 0.845815 loss: 343.102422 2022/10/12 22:12:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:12:59 - mmengine - INFO - Epoch(train) [99][50/586] lr: 5.000000e-03 eta: 11:38:15 time: 0.691251 data_time: 0.064686 memory: 12959 loss_kpt: 340.442128 acc_pose: 0.868249 loss: 340.442128 2022/10/12 22:13:33 - mmengine - INFO - Epoch(train) [99][100/586] lr: 5.000000e-03 eta: 11:37:45 time: 0.682361 data_time: 0.055818 memory: 12959 loss_kpt: 336.994177 acc_pose: 0.825924 loss: 336.994177 2022/10/12 22:14:08 - mmengine - INFO - Epoch(train) [99][150/586] lr: 5.000000e-03 eta: 11:37:16 time: 0.686928 data_time: 0.054619 memory: 12959 loss_kpt: 339.458544 acc_pose: 0.896116 loss: 339.458544 2022/10/12 22:14:41 - mmengine - INFO - Epoch(train) [99][200/586] lr: 5.000000e-03 eta: 11:36:46 time: 0.676624 data_time: 0.059402 memory: 12959 loss_kpt: 342.459470 acc_pose: 0.860734 loss: 342.459470 2022/10/12 22:15:16 - mmengine - INFO - Epoch(train) [99][250/586] lr: 5.000000e-03 eta: 11:36:17 time: 0.694030 data_time: 0.056141 memory: 12959 loss_kpt: 345.127239 acc_pose: 0.866999 loss: 345.127239 2022/10/12 22:15:51 - mmengine - INFO - Epoch(train) [99][300/586] lr: 5.000000e-03 eta: 11:35:48 time: 0.692807 data_time: 0.060196 memory: 12959 loss_kpt: 345.573888 acc_pose: 0.776524 loss: 345.573888 2022/10/12 22:16:26 - mmengine - INFO - Epoch(train) [99][350/586] lr: 5.000000e-03 eta: 11:35:20 time: 0.704139 data_time: 0.055814 memory: 12959 loss_kpt: 336.105991 acc_pose: 0.792317 loss: 336.105991 2022/10/12 22:17:01 - mmengine - INFO - Epoch(train) [99][400/586] lr: 5.000000e-03 eta: 11:34:51 time: 0.694444 data_time: 0.058047 memory: 12959 loss_kpt: 343.871274 acc_pose: 0.858858 loss: 343.871274 2022/10/12 22:17:36 - mmengine - INFO - Epoch(train) [99][450/586] lr: 5.000000e-03 eta: 11:34:23 time: 0.699423 data_time: 0.058919 memory: 12959 loss_kpt: 336.784152 acc_pose: 0.809388 loss: 336.784152 2022/10/12 22:18:11 - mmengine - INFO - Epoch(train) [99][500/586] lr: 5.000000e-03 eta: 11:33:54 time: 0.704485 data_time: 0.061952 memory: 12959 loss_kpt: 344.014087 acc_pose: 0.825251 
loss: 344.014087 2022/10/12 22:18:46 - mmengine - INFO - Epoch(train) [99][550/586] lr: 5.000000e-03 eta: 11:33:26 time: 0.693873 data_time: 0.057825 memory: 12959 loss_kpt: 343.793903 acc_pose: 0.878630 loss: 343.793903 2022/10/12 22:19:01 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:19:10 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:19:44 - mmengine - INFO - Epoch(train) [100][50/586] lr: 5.000000e-03 eta: 11:32:08 time: 0.692254 data_time: 0.068993 memory: 12959 loss_kpt: 343.396100 acc_pose: 0.822264 loss: 343.396100 2022/10/12 22:20:18 - mmengine - INFO - Epoch(train) [100][100/586] lr: 5.000000e-03 eta: 11:31:37 time: 0.667586 data_time: 0.054533 memory: 12959 loss_kpt: 344.012602 acc_pose: 0.811026 loss: 344.012602 2022/10/12 22:20:52 - mmengine - INFO - Epoch(train) [100][150/586] lr: 5.000000e-03 eta: 11:31:08 time: 0.683098 data_time: 0.064864 memory: 12959 loss_kpt: 337.700623 acc_pose: 0.751640 loss: 337.700623 2022/10/12 22:21:26 - mmengine - INFO - Epoch(train) [100][200/586] lr: 5.000000e-03 eta: 11:30:38 time: 0.682206 data_time: 0.052251 memory: 12959 loss_kpt: 340.379451 acc_pose: 0.773034 loss: 340.379451 2022/10/12 22:22:00 - mmengine - INFO - Epoch(train) [100][250/586] lr: 5.000000e-03 eta: 11:30:09 time: 0.677686 data_time: 0.058279 memory: 12959 loss_kpt: 341.509860 acc_pose: 0.866265 loss: 341.509860 2022/10/12 22:22:34 - mmengine - INFO - Epoch(train) [100][300/586] lr: 5.000000e-03 eta: 11:29:39 time: 0.678970 data_time: 0.060971 memory: 12959 loss_kpt: 343.634687 acc_pose: 0.828629 loss: 343.634687 2022/10/12 22:23:09 - mmengine - INFO - Epoch(train) [100][350/586] lr: 5.000000e-03 eta: 11:29:10 time: 0.691173 data_time: 0.060478 memory: 12959 loss_kpt: 345.641664 acc_pose: 0.742691 loss: 345.641664 2022/10/12 22:23:43 - mmengine - INFO - Epoch(train) [100][400/586] lr: 5.000000e-03 eta: 11:28:41 time: 0.686666 data_time: 0.055528 memory: 12959 loss_kpt: 350.571198 acc_pose: 0.768135 loss: 350.571198 2022/10/12 22:24:18 - mmengine - INFO - Epoch(train) [100][450/586] lr: 5.000000e-03 eta: 11:28:12 time: 0.695763 data_time: 0.053714 memory: 12959 loss_kpt: 346.123401 acc_pose: 0.842547 loss: 346.123401 2022/10/12 22:24:52 - mmengine - INFO - Epoch(train) [100][500/586] lr: 5.000000e-03 eta: 11:27:42 time: 0.686278 data_time: 0.058296 memory: 12959 loss_kpt: 342.956945 acc_pose: 0.863111 loss: 342.956945 2022/10/12 22:25:27 - mmengine - INFO - Epoch(train) [100][550/586] lr: 5.000000e-03 eta: 11:27:13 time: 0.691800 data_time: 0.055621 memory: 12959 loss_kpt: 337.606025 acc_pose: 0.774065 loss: 337.606025 2022/10/12 22:25:51 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:25:51 - mmengine - INFO - Saving checkpoint at 100 epochs 2022/10/12 22:26:09 - mmengine - INFO - Epoch(val) [100][50/407] eta: 0:01:39 time: 0.278113 data_time: 0.014772 memory: 12959 2022/10/12 22:26:22 - mmengine - INFO - Epoch(val) [100][100/407] eta: 0:01:19 time: 0.258837 data_time: 0.007487 memory: 2407 2022/10/12 22:26:35 - mmengine - INFO - Epoch(val) [100][150/407] eta: 0:01:07 time: 0.263076 data_time: 0.008076 memory: 2407 2022/10/12 22:26:48 - mmengine - INFO - Epoch(val) [100][200/407] eta: 0:00:54 time: 0.263349 data_time: 0.007702 memory: 2407 2022/10/12 22:27:02 - mmengine - INFO - Epoch(val) [100][250/407] eta: 0:00:41 time: 0.264080 data_time: 0.007926 memory: 2407 2022/10/12 22:27:15 - mmengine - INFO - Epoch(val) 
[100][300/407] eta: 0:00:27 time: 0.259388 data_time: 0.007530 memory: 2407
2022/10/12 22:27:28 - mmengine - INFO - Epoch(val) [100][350/407] eta: 0:00:14 time: 0.260697 data_time: 0.007721 memory: 2407
2022/10/12 22:27:40 - mmengine - INFO - Epoch(val) [100][400/407] eta: 0:00:01 time: 0.254805 data_time: 0.007328 memory: 2407
2022/10/12 22:27:55 - mmengine - INFO - Evaluating CocoMetric...
2022/10/12 22:28:11 - mmengine - INFO - Epoch(val) [100][407/407] coco/AP: 0.716412 coco/AP .5: 0.890429 coco/AP .75: 0.790253 coco/AP (M): 0.685409 coco/AP (L): 0.775302 coco/AR: 0.783328 coco/AR .5: 0.930730 coco/AR .75: 0.844301 coco/AR (M): 0.740535 coco/AR (L): 0.842178
2022/10/12 22:28:11 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_90.pth is removed
2022/10/12 22:28:13 - mmengine - INFO - The best checkpoint with 0.7164 coco/AP at 100 epoch is saved to best_coco/AP_epoch_100.pth.
2022/10/12 22:28:47 - mmengine - INFO - Epoch(train) [101][50/586] lr: 5.000000e-03 eta: 11:25:55 time: 0.679040 data_time: 0.063840 memory: 12959 loss_kpt: 344.334337 acc_pose: 0.815541 loss: 344.334337
2022/10/12 22:29:20 - mmengine - INFO - Epoch(train) [101][100/586] lr: 5.000000e-03 eta: 11:25:25 time: 0.666246 data_time: 0.059497 memory: 12959 loss_kpt: 341.930680 acc_pose: 0.837202 loss: 341.930680
2022/10/12 22:29:54 - mmengine - INFO - Epoch(train) [101][150/586] lr: 5.000000e-03 eta: 11:24:55 time: 0.679648 data_time: 0.063151 memory: 12959 loss_kpt: 347.867089 acc_pose: 0.864213 loss: 347.867089
2022/10/12 22:30:29 - mmengine - INFO - Epoch(train) [101][200/586] lr: 5.000000e-03 eta: 11:24:25 time: 0.683701 data_time: 0.066351 memory: 12959 loss_kpt: 336.235341 acc_pose: 0.801117 loss: 336.235341
2022/10/12 22:31:03 - mmengine - INFO - Epoch(train) [101][250/586] lr: 5.000000e-03 eta: 11:23:56 time: 0.679020 data_time: 0.064606 memory: 12959 loss_kpt: 345.637857 acc_pose: 0.830497 loss: 345.637857
2022/10/12 22:31:37 - mmengine - INFO - Epoch(train) [101][300/586] lr: 5.000000e-03 eta: 11:23:26 time: 0.681940 data_time: 0.059001 memory: 12959 loss_kpt: 337.970623 acc_pose: 0.696729 loss: 337.970623
2022/10/12 22:32:10 - mmengine - INFO - Epoch(train) [101][350/586] lr: 5.000000e-03 eta: 11:22:56 time: 0.674928 data_time: 0.065987 memory: 12959 loss_kpt: 336.067299 acc_pose: 0.786966 loss: 336.067299
2022/10/12 22:32:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/12 22:32:45 - mmengine - INFO - Epoch(train) [101][400/586] lr: 5.000000e-03 eta: 11:22:27 time: 0.683877 data_time: 0.065404 memory: 12959 loss_kpt: 335.227304 acc_pose: 0.797058 loss: 335.227304
2022/10/12 22:33:19 - mmengine - INFO - Epoch(train) [101][450/586] lr: 5.000000e-03 eta: 11:21:57 time: 0.683114 data_time: 0.064599 memory: 12959 loss_kpt: 340.300208 acc_pose: 0.833360 loss: 340.300208
2022/10/12 22:33:53 - mmengine - INFO - Epoch(train) [101][500/586] lr: 5.000000e-03 eta: 11:21:28 time: 0.688016 data_time: 0.069679 memory: 12959 loss_kpt: 342.593558 acc_pose: 0.786573 loss: 342.593558
2022/10/12 22:34:27 - mmengine - INFO - Epoch(train) [101][550/586] lr: 5.000000e-03 eta: 11:20:58 time: 0.682262 data_time: 0.061192 memory: 12959 loss_kpt: 342.343717 acc_pose: 0.904865 loss: 342.343717
2022/10/12 22:34:52 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/12 22:35:26 - mmengine - INFO - Epoch(train) [102][50/586] lr: 5.000000e-03 eta: 11:19:40 time:
0.679048 data_time: 0.071680 memory: 12959 loss_kpt: 338.404043 acc_pose: 0.899410 loss: 338.404043 2022/10/12 22:36:00 - mmengine - INFO - Epoch(train) [102][100/586] lr: 5.000000e-03 eta: 11:19:11 time: 0.682478 data_time: 0.058496 memory: 12959 loss_kpt: 347.680438 acc_pose: 0.820976 loss: 347.680438 2022/10/12 22:36:34 - mmengine - INFO - Epoch(train) [102][150/586] lr: 5.000000e-03 eta: 11:18:41 time: 0.678129 data_time: 0.064342 memory: 12959 loss_kpt: 336.441810 acc_pose: 0.808430 loss: 336.441810 2022/10/12 22:37:08 - mmengine - INFO - Epoch(train) [102][200/586] lr: 5.000000e-03 eta: 11:18:11 time: 0.677267 data_time: 0.060961 memory: 12959 loss_kpt: 338.912239 acc_pose: 0.839006 loss: 338.912239 2022/10/12 22:37:42 - mmengine - INFO - Epoch(train) [102][250/586] lr: 5.000000e-03 eta: 11:17:41 time: 0.679673 data_time: 0.063292 memory: 12959 loss_kpt: 340.489510 acc_pose: 0.877646 loss: 340.489510 2022/10/12 22:38:15 - mmengine - INFO - Epoch(train) [102][300/586] lr: 5.000000e-03 eta: 11:17:11 time: 0.674828 data_time: 0.057548 memory: 12959 loss_kpt: 338.412927 acc_pose: 0.814388 loss: 338.412927 2022/10/12 22:38:49 - mmengine - INFO - Epoch(train) [102][350/586] lr: 5.000000e-03 eta: 11:16:42 time: 0.683740 data_time: 0.064587 memory: 12959 loss_kpt: 342.525010 acc_pose: 0.837891 loss: 342.525010 2022/10/12 22:39:23 - mmengine - INFO - Epoch(train) [102][400/586] lr: 5.000000e-03 eta: 11:16:12 time: 0.678572 data_time: 0.059062 memory: 12959 loss_kpt: 344.537536 acc_pose: 0.765686 loss: 344.537536 2022/10/12 22:39:57 - mmengine - INFO - Epoch(train) [102][450/586] lr: 5.000000e-03 eta: 11:15:42 time: 0.678742 data_time: 0.066903 memory: 12959 loss_kpt: 339.483257 acc_pose: 0.729288 loss: 339.483257 2022/10/12 22:40:31 - mmengine - INFO - Epoch(train) [102][500/586] lr: 5.000000e-03 eta: 11:15:12 time: 0.671892 data_time: 0.063376 memory: 12959 loss_kpt: 344.496506 acc_pose: 0.782917 loss: 344.496506 2022/10/12 22:41:04 - mmengine - INFO - Epoch(train) [102][550/586] lr: 5.000000e-03 eta: 11:14:41 time: 0.662848 data_time: 0.059575 memory: 12959 loss_kpt: 340.003310 acc_pose: 0.809732 loss: 340.003310 2022/10/12 22:41:28 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:42:03 - mmengine - INFO - Epoch(train) [103][50/586] lr: 5.000000e-03 eta: 11:13:25 time: 0.696158 data_time: 0.073422 memory: 12959 loss_kpt: 341.688062 acc_pose: 0.844525 loss: 341.688062 2022/10/12 22:42:37 - mmengine - INFO - Epoch(train) [103][100/586] lr: 5.000000e-03 eta: 11:12:55 time: 0.684005 data_time: 0.060974 memory: 12959 loss_kpt: 342.034654 acc_pose: 0.823534 loss: 342.034654 2022/10/12 22:43:12 - mmengine - INFO - Epoch(train) [103][150/586] lr: 5.000000e-03 eta: 11:12:26 time: 0.687102 data_time: 0.061246 memory: 12959 loss_kpt: 335.376233 acc_pose: 0.859948 loss: 335.376233 2022/10/12 22:43:46 - mmengine - INFO - Epoch(train) [103][200/586] lr: 5.000000e-03 eta: 11:11:56 time: 0.683247 data_time: 0.058259 memory: 12959 loss_kpt: 342.112061 acc_pose: 0.835766 loss: 342.112061 2022/10/12 22:44:05 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:44:21 - mmengine - INFO - Epoch(train) [103][250/586] lr: 5.000000e-03 eta: 11:11:27 time: 0.700017 data_time: 0.062606 memory: 12959 loss_kpt: 342.739935 acc_pose: 0.836055 loss: 342.739935 2022/10/12 22:44:55 - mmengine - INFO - Epoch(train) [103][300/586] lr: 5.000000e-03 eta: 11:10:58 time: 0.690550 data_time: 0.060734 memory: 12959 loss_kpt: 344.689587 
acc_pose: 0.855028 loss: 344.689587 2022/10/12 22:45:30 - mmengine - INFO - Epoch(train) [103][350/586] lr: 5.000000e-03 eta: 11:10:29 time: 0.692474 data_time: 0.062496 memory: 12959 loss_kpt: 344.788526 acc_pose: 0.823753 loss: 344.788526 2022/10/12 22:46:04 - mmengine - INFO - Epoch(train) [103][400/586] lr: 5.000000e-03 eta: 11:09:59 time: 0.686786 data_time: 0.063472 memory: 12959 loss_kpt: 338.147404 acc_pose: 0.810546 loss: 338.147404 2022/10/12 22:46:39 - mmengine - INFO - Epoch(train) [103][450/586] lr: 5.000000e-03 eta: 11:09:30 time: 0.693076 data_time: 0.059203 memory: 12959 loss_kpt: 339.046437 acc_pose: 0.755146 loss: 339.046437 2022/10/12 22:47:14 - mmengine - INFO - Epoch(train) [103][500/586] lr: 5.000000e-03 eta: 11:09:01 time: 0.695946 data_time: 0.056398 memory: 12959 loss_kpt: 332.308077 acc_pose: 0.798197 loss: 332.308077 2022/10/12 22:47:48 - mmengine - INFO - Epoch(train) [103][550/586] lr: 5.000000e-03 eta: 11:08:32 time: 0.691768 data_time: 0.066893 memory: 12959 loss_kpt: 330.858527 acc_pose: 0.807730 loss: 330.858527 2022/10/12 22:48:13 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:48:47 - mmengine - INFO - Epoch(train) [104][50/586] lr: 5.000000e-03 eta: 11:07:15 time: 0.680724 data_time: 0.080040 memory: 12959 loss_kpt: 339.397628 acc_pose: 0.758671 loss: 339.397628 2022/10/12 22:49:20 - mmengine - INFO - Epoch(train) [104][100/586] lr: 5.000000e-03 eta: 11:06:44 time: 0.659400 data_time: 0.063194 memory: 12959 loss_kpt: 339.693675 acc_pose: 0.815111 loss: 339.693675 2022/10/12 22:49:53 - mmengine - INFO - Epoch(train) [104][150/586] lr: 5.000000e-03 eta: 11:06:14 time: 0.671289 data_time: 0.061062 memory: 12959 loss_kpt: 344.795260 acc_pose: 0.805546 loss: 344.795260 2022/10/12 22:50:27 - mmengine - INFO - Epoch(train) [104][200/586] lr: 5.000000e-03 eta: 11:05:44 time: 0.669368 data_time: 0.061061 memory: 12959 loss_kpt: 338.304917 acc_pose: 0.789567 loss: 338.304917 2022/10/12 22:51:00 - mmengine - INFO - Epoch(train) [104][250/586] lr: 5.000000e-03 eta: 11:05:13 time: 0.661039 data_time: 0.064779 memory: 12959 loss_kpt: 343.864925 acc_pose: 0.823615 loss: 343.864925 2022/10/12 22:51:34 - mmengine - INFO - Epoch(train) [104][300/586] lr: 5.000000e-03 eta: 11:04:43 time: 0.676118 data_time: 0.061492 memory: 12959 loss_kpt: 345.468979 acc_pose: 0.882081 loss: 345.468979 2022/10/12 22:52:08 - mmengine - INFO - Epoch(train) [104][350/586] lr: 5.000000e-03 eta: 11:04:13 time: 0.680909 data_time: 0.065704 memory: 12959 loss_kpt: 334.035621 acc_pose: 0.816403 loss: 334.035621 2022/10/12 22:52:42 - mmengine - INFO - Epoch(train) [104][400/586] lr: 5.000000e-03 eta: 11:03:43 time: 0.685957 data_time: 0.067351 memory: 12959 loss_kpt: 334.097851 acc_pose: 0.882782 loss: 334.097851 2022/10/12 22:53:17 - mmengine - INFO - Epoch(train) [104][450/586] lr: 5.000000e-03 eta: 11:03:14 time: 0.693119 data_time: 0.068581 memory: 12959 loss_kpt: 340.567218 acc_pose: 0.802294 loss: 340.567218 2022/10/12 22:53:50 - mmengine - INFO - Epoch(train) [104][500/586] lr: 5.000000e-03 eta: 11:02:44 time: 0.667825 data_time: 0.063680 memory: 12959 loss_kpt: 336.498926 acc_pose: 0.736359 loss: 336.498926 2022/10/12 22:54:25 - mmengine - INFO - Epoch(train) [104][550/586] lr: 5.000000e-03 eta: 11:02:15 time: 0.704668 data_time: 0.061502 memory: 12959 loss_kpt: 348.682903 acc_pose: 0.744762 loss: 348.682903 2022/10/12 22:54:50 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:55:25 - mmengine 
- INFO - Epoch(train) [105][50/586] lr: 5.000000e-03 eta: 11:00:59 time: 0.692586 data_time: 0.067551 memory: 12959 loss_kpt: 338.221608 acc_pose: 0.823209 loss: 338.221608 2022/10/12 22:55:29 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 22:55:59 - mmengine - INFO - Epoch(train) [105][100/586] lr: 5.000000e-03 eta: 11:00:30 time: 0.680659 data_time: 0.060544 memory: 12959 loss_kpt: 339.260405 acc_pose: 0.819101 loss: 339.260405 2022/10/12 22:56:33 - mmengine - INFO - Epoch(train) [105][150/586] lr: 5.000000e-03 eta: 11:00:00 time: 0.677181 data_time: 0.063956 memory: 12959 loss_kpt: 341.924800 acc_pose: 0.809764 loss: 341.924800 2022/10/12 22:57:06 - mmengine - INFO - Epoch(train) [105][200/586] lr: 5.000000e-03 eta: 10:59:29 time: 0.662658 data_time: 0.061755 memory: 12959 loss_kpt: 334.124042 acc_pose: 0.798883 loss: 334.124042 2022/10/12 22:57:40 - mmengine - INFO - Epoch(train) [105][250/586] lr: 5.000000e-03 eta: 10:58:59 time: 0.680992 data_time: 0.066817 memory: 12959 loss_kpt: 338.117964 acc_pose: 0.829085 loss: 338.117964 2022/10/12 22:58:13 - mmengine - INFO - Epoch(train) [105][300/586] lr: 5.000000e-03 eta: 10:58:28 time: 0.665153 data_time: 0.060709 memory: 12959 loss_kpt: 346.149771 acc_pose: 0.824480 loss: 346.149771 2022/10/12 22:58:47 - mmengine - INFO - Epoch(train) [105][350/586] lr: 5.000000e-03 eta: 10:57:58 time: 0.669456 data_time: 0.065799 memory: 12959 loss_kpt: 338.584747 acc_pose: 0.874083 loss: 338.584747 2022/10/12 22:59:20 - mmengine - INFO - Epoch(train) [105][400/586] lr: 5.000000e-03 eta: 10:57:28 time: 0.670995 data_time: 0.063430 memory: 12959 loss_kpt: 341.245284 acc_pose: 0.839949 loss: 341.245284 2022/10/12 22:59:54 - mmengine - INFO - Epoch(train) [105][450/586] lr: 5.000000e-03 eta: 10:56:58 time: 0.680021 data_time: 0.063996 memory: 12959 loss_kpt: 340.451821 acc_pose: 0.859947 loss: 340.451821 2022/10/12 23:00:28 - mmengine - INFO - Epoch(train) [105][500/586] lr: 5.000000e-03 eta: 10:56:28 time: 0.678294 data_time: 0.060422 memory: 12959 loss_kpt: 345.065721 acc_pose: 0.755737 loss: 345.065721 2022/10/12 23:01:02 - mmengine - INFO - Epoch(train) [105][550/586] lr: 5.000000e-03 eta: 10:55:58 time: 0.676211 data_time: 0.064220 memory: 12959 loss_kpt: 334.010165 acc_pose: 0.815656 loss: 334.010165 2022/10/12 23:01:26 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:02:00 - mmengine - INFO - Epoch(train) [106][50/586] lr: 5.000000e-03 eta: 10:54:42 time: 0.687849 data_time: 0.072060 memory: 12959 loss_kpt: 339.828406 acc_pose: 0.797530 loss: 339.828406 2022/10/12 23:02:34 - mmengine - INFO - Epoch(train) [106][100/586] lr: 5.000000e-03 eta: 10:54:12 time: 0.669307 data_time: 0.066743 memory: 12959 loss_kpt: 344.424224 acc_pose: 0.838900 loss: 344.424224 2022/10/12 23:03:08 - mmengine - INFO - Epoch(train) [106][150/586] lr: 5.000000e-03 eta: 10:53:42 time: 0.680356 data_time: 0.059957 memory: 12959 loss_kpt: 335.605074 acc_pose: 0.837496 loss: 335.605074 2022/10/12 23:03:42 - mmengine - INFO - Epoch(train) [106][200/586] lr: 5.000000e-03 eta: 10:53:12 time: 0.684542 data_time: 0.060785 memory: 12959 loss_kpt: 333.514070 acc_pose: 0.778328 loss: 333.514070 2022/10/12 23:04:17 - mmengine - INFO - Epoch(train) [106][250/586] lr: 5.000000e-03 eta: 10:52:42 time: 0.687268 data_time: 0.067108 memory: 12959 loss_kpt: 335.749053 acc_pose: 0.757441 loss: 335.749053 2022/10/12 23:04:51 - mmengine - INFO - Epoch(train) [106][300/586] lr: 5.000000e-03 eta: 10:52:13 
time: 0.691682 data_time: 0.058729 memory: 12959 loss_kpt: 336.462156 acc_pose: 0.803013 loss: 336.462156 2022/10/12 23:05:26 - mmengine - INFO - Epoch(train) [106][350/586] lr: 5.000000e-03 eta: 10:51:44 time: 0.697083 data_time: 0.064444 memory: 12959 loss_kpt: 338.918989 acc_pose: 0.817470 loss: 338.918989 2022/10/12 23:06:00 - mmengine - INFO - Epoch(train) [106][400/586] lr: 5.000000e-03 eta: 10:51:14 time: 0.678598 data_time: 0.065857 memory: 12959 loss_kpt: 340.949332 acc_pose: 0.783640 loss: 340.949332 2022/10/12 23:06:34 - mmengine - INFO - Epoch(train) [106][450/586] lr: 5.000000e-03 eta: 10:50:44 time: 0.686456 data_time: 0.065626 memory: 12959 loss_kpt: 344.273850 acc_pose: 0.774005 loss: 344.273850 2022/10/12 23:06:48 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:07:08 - mmengine - INFO - Epoch(train) [106][500/586] lr: 5.000000e-03 eta: 10:50:14 time: 0.671946 data_time: 0.058332 memory: 12959 loss_kpt: 333.038360 acc_pose: 0.796282 loss: 333.038360 2022/10/12 23:07:42 - mmengine - INFO - Epoch(train) [106][550/586] lr: 5.000000e-03 eta: 10:49:44 time: 0.674656 data_time: 0.063119 memory: 12959 loss_kpt: 337.758941 acc_pose: 0.843120 loss: 337.758941 2022/10/12 23:08:06 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:08:40 - mmengine - INFO - Epoch(train) [107][50/586] lr: 5.000000e-03 eta: 10:48:29 time: 0.692666 data_time: 0.075017 memory: 12959 loss_kpt: 339.014243 acc_pose: 0.815838 loss: 339.014243 2022/10/12 23:09:14 - mmengine - INFO - Epoch(train) [107][100/586] lr: 5.000000e-03 eta: 10:47:59 time: 0.680550 data_time: 0.065334 memory: 12959 loss_kpt: 331.102646 acc_pose: 0.802335 loss: 331.102646 2022/10/12 23:09:48 - mmengine - INFO - Epoch(train) [107][150/586] lr: 5.000000e-03 eta: 10:47:29 time: 0.670919 data_time: 0.058790 memory: 12959 loss_kpt: 337.140535 acc_pose: 0.794405 loss: 337.140535 2022/10/12 23:10:21 - mmengine - INFO - Epoch(train) [107][200/586] lr: 5.000000e-03 eta: 10:46:58 time: 0.668754 data_time: 0.067524 memory: 12959 loss_kpt: 337.161232 acc_pose: 0.774458 loss: 337.161232 2022/10/12 23:10:55 - mmengine - INFO - Epoch(train) [107][250/586] lr: 5.000000e-03 eta: 10:46:28 time: 0.676008 data_time: 0.062590 memory: 12959 loss_kpt: 340.615058 acc_pose: 0.853189 loss: 340.615058 2022/10/12 23:11:29 - mmengine - INFO - Epoch(train) [107][300/586] lr: 5.000000e-03 eta: 10:45:57 time: 0.667933 data_time: 0.065904 memory: 12959 loss_kpt: 338.293248 acc_pose: 0.859694 loss: 338.293248 2022/10/12 23:12:02 - mmengine - INFO - Epoch(train) [107][350/586] lr: 5.000000e-03 eta: 10:45:27 time: 0.663429 data_time: 0.063947 memory: 12959 loss_kpt: 342.217927 acc_pose: 0.805943 loss: 342.217927 2022/10/12 23:12:36 - mmengine - INFO - Epoch(train) [107][400/586] lr: 5.000000e-03 eta: 10:44:57 time: 0.690339 data_time: 0.068350 memory: 12959 loss_kpt: 333.933784 acc_pose: 0.845234 loss: 333.933784 2022/10/12 23:13:11 - mmengine - INFO - Epoch(train) [107][450/586] lr: 5.000000e-03 eta: 10:44:28 time: 0.688685 data_time: 0.064081 memory: 12959 loss_kpt: 337.405965 acc_pose: 0.762003 loss: 337.405965 2022/10/12 23:13:46 - mmengine - INFO - Epoch(train) [107][500/586] lr: 5.000000e-03 eta: 10:43:58 time: 0.696000 data_time: 0.067283 memory: 12959 loss_kpt: 341.182009 acc_pose: 0.861663 loss: 341.182009 2022/10/12 23:14:21 - mmengine - INFO - Epoch(train) [107][550/586] lr: 5.000000e-03 eta: 10:43:30 time: 0.705421 data_time: 0.064866 memory: 12959 loss_kpt: 
336.852318 acc_pose: 0.839650 loss: 336.852318 2022/10/12 23:14:46 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:15:21 - mmengine - INFO - Epoch(train) [108][50/586] lr: 5.000000e-03 eta: 10:42:15 time: 0.696928 data_time: 0.072300 memory: 12959 loss_kpt: 339.263883 acc_pose: 0.727451 loss: 339.263883 2022/10/12 23:15:55 - mmengine - INFO - Epoch(train) [108][100/586] lr: 5.000000e-03 eta: 10:41:45 time: 0.680320 data_time: 0.059047 memory: 12959 loss_kpt: 331.918419 acc_pose: 0.805788 loss: 331.918419 2022/10/12 23:16:29 - mmengine - INFO - Epoch(train) [108][150/586] lr: 5.000000e-03 eta: 10:41:15 time: 0.678986 data_time: 0.069983 memory: 12959 loss_kpt: 334.592836 acc_pose: 0.758659 loss: 334.592836 2022/10/12 23:17:02 - mmengine - INFO - Epoch(train) [108][200/586] lr: 5.000000e-03 eta: 10:40:45 time: 0.675199 data_time: 0.064348 memory: 12959 loss_kpt: 340.054490 acc_pose: 0.782051 loss: 340.054490 2022/10/12 23:17:37 - mmengine - INFO - Epoch(train) [108][250/586] lr: 5.000000e-03 eta: 10:40:16 time: 0.697333 data_time: 0.069500 memory: 12959 loss_kpt: 340.143035 acc_pose: 0.815728 loss: 340.143035 2022/10/12 23:18:11 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:18:12 - mmengine - INFO - Epoch(train) [108][300/586] lr: 5.000000e-03 eta: 10:39:47 time: 0.700509 data_time: 0.066700 memory: 12959 loss_kpt: 341.186780 acc_pose: 0.764605 loss: 341.186780 2022/10/12 23:18:47 - mmengine - INFO - Epoch(train) [108][350/586] lr: 5.000000e-03 eta: 10:39:17 time: 0.693754 data_time: 0.065291 memory: 12959 loss_kpt: 341.920066 acc_pose: 0.809800 loss: 341.920066 2022/10/12 23:19:21 - mmengine - INFO - Epoch(train) [108][400/586] lr: 5.000000e-03 eta: 10:38:48 time: 0.685609 data_time: 0.062964 memory: 12959 loss_kpt: 340.258552 acc_pose: 0.901511 loss: 340.258552 2022/10/12 23:19:57 - mmengine - INFO - Epoch(train) [108][450/586] lr: 5.000000e-03 eta: 10:38:19 time: 0.713982 data_time: 0.070583 memory: 12959 loss_kpt: 336.631507 acc_pose: 0.729878 loss: 336.631507 2022/10/12 23:20:33 - mmengine - INFO - Epoch(train) [108][500/586] lr: 5.000000e-03 eta: 10:37:51 time: 0.711619 data_time: 0.063601 memory: 12959 loss_kpt: 341.881693 acc_pose: 0.783025 loss: 341.881693 2022/10/12 23:21:09 - mmengine - INFO - Epoch(train) [108][550/586] lr: 5.000000e-03 eta: 10:37:23 time: 0.725827 data_time: 0.069346 memory: 12959 loss_kpt: 336.886630 acc_pose: 0.812884 loss: 336.886630 2022/10/12 23:21:35 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:22:10 - mmengine - INFO - Epoch(train) [109][50/586] lr: 5.000000e-03 eta: 10:36:09 time: 0.696158 data_time: 0.070832 memory: 12959 loss_kpt: 340.873187 acc_pose: 0.815567 loss: 340.873187 2022/10/12 23:22:45 - mmengine - INFO - Epoch(train) [109][100/586] lr: 5.000000e-03 eta: 10:35:39 time: 0.690847 data_time: 0.061017 memory: 12959 loss_kpt: 345.604568 acc_pose: 0.797830 loss: 345.604568 2022/10/12 23:23:19 - mmengine - INFO - Epoch(train) [109][150/586] lr: 5.000000e-03 eta: 10:35:09 time: 0.681325 data_time: 0.066351 memory: 12959 loss_kpt: 333.546989 acc_pose: 0.845123 loss: 333.546989 2022/10/12 23:23:53 - mmengine - INFO - Epoch(train) [109][200/586] lr: 5.000000e-03 eta: 10:34:40 time: 0.687014 data_time: 0.068619 memory: 12959 loss_kpt: 332.764315 acc_pose: 0.833743 loss: 332.764315 2022/10/12 23:24:27 - mmengine - INFO - Epoch(train) [109][250/586] lr: 5.000000e-03 eta: 10:34:10 time: 0.686091 
data_time: 0.063052 memory: 12959 loss_kpt: 335.037615 acc_pose: 0.755171 loss: 335.037615 2022/10/12 23:25:01 - mmengine - INFO - Epoch(train) [109][300/586] lr: 5.000000e-03 eta: 10:33:39 time: 0.670064 data_time: 0.064276 memory: 12959 loss_kpt: 337.499987 acc_pose: 0.891551 loss: 337.499987 2022/10/12 23:25:35 - mmengine - INFO - Epoch(train) [109][350/586] lr: 5.000000e-03 eta: 10:33:10 time: 0.692214 data_time: 0.065517 memory: 12959 loss_kpt: 335.096877 acc_pose: 0.819367 loss: 335.096877 2022/10/12 23:26:09 - mmengine - INFO - Epoch(train) [109][400/586] lr: 5.000000e-03 eta: 10:32:40 time: 0.678082 data_time: 0.067293 memory: 12959 loss_kpt: 342.853599 acc_pose: 0.797563 loss: 342.853599 2022/10/12 23:26:44 - mmengine - INFO - Epoch(train) [109][450/586] lr: 5.000000e-03 eta: 10:32:10 time: 0.687456 data_time: 0.062541 memory: 12959 loss_kpt: 335.506221 acc_pose: 0.809985 loss: 335.506221 2022/10/12 23:27:18 - mmengine - INFO - Epoch(train) [109][500/586] lr: 5.000000e-03 eta: 10:31:40 time: 0.676925 data_time: 0.065708 memory: 12959 loss_kpt: 334.572281 acc_pose: 0.834920 loss: 334.572281 2022/10/12 23:27:51 - mmengine - INFO - Epoch(train) [109][550/586] lr: 5.000000e-03 eta: 10:31:09 time: 0.677196 data_time: 0.067028 memory: 12959 loss_kpt: 335.125281 acc_pose: 0.849653 loss: 335.125281 2022/10/12 23:28:16 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:28:51 - mmengine - INFO - Epoch(train) [110][50/586] lr: 5.000000e-03 eta: 10:29:56 time: 0.710651 data_time: 0.072870 memory: 12959 loss_kpt: 343.064313 acc_pose: 0.860104 loss: 343.064313 2022/10/12 23:29:26 - mmengine - INFO - Epoch(train) [110][100/586] lr: 5.000000e-03 eta: 10:29:27 time: 0.697341 data_time: 0.059721 memory: 12959 loss_kpt: 338.113917 acc_pose: 0.895621 loss: 338.113917 2022/10/12 23:29:44 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:30:01 - mmengine - INFO - Epoch(train) [110][150/586] lr: 5.000000e-03 eta: 10:28:58 time: 0.700689 data_time: 0.062377 memory: 12959 loss_kpt: 338.891612 acc_pose: 0.816486 loss: 338.891612 2022/10/12 23:30:35 - mmengine - INFO - Epoch(train) [110][200/586] lr: 5.000000e-03 eta: 10:28:28 time: 0.678212 data_time: 0.063102 memory: 12959 loss_kpt: 338.898503 acc_pose: 0.773918 loss: 338.898503 2022/10/12 23:31:09 - mmengine - INFO - Epoch(train) [110][250/586] lr: 5.000000e-03 eta: 10:27:58 time: 0.687985 data_time: 0.062050 memory: 12959 loss_kpt: 337.166586 acc_pose: 0.810114 loss: 337.166586 2022/10/12 23:31:44 - mmengine - INFO - Epoch(train) [110][300/586] lr: 5.000000e-03 eta: 10:27:29 time: 0.693940 data_time: 0.061492 memory: 12959 loss_kpt: 339.089799 acc_pose: 0.873967 loss: 339.089799 2022/10/12 23:32:19 - mmengine - INFO - Epoch(train) [110][350/586] lr: 5.000000e-03 eta: 10:26:59 time: 0.692347 data_time: 0.061470 memory: 12959 loss_kpt: 339.104321 acc_pose: 0.832709 loss: 339.104321 2022/10/12 23:32:53 - mmengine - INFO - Epoch(train) [110][400/586] lr: 5.000000e-03 eta: 10:26:29 time: 0.684758 data_time: 0.063115 memory: 12959 loss_kpt: 341.522427 acc_pose: 0.876846 loss: 341.522427 2022/10/12 23:33:28 - mmengine - INFO - Epoch(train) [110][450/586] lr: 5.000000e-03 eta: 10:26:00 time: 0.694431 data_time: 0.060438 memory: 12959 loss_kpt: 343.547505 acc_pose: 0.796043 loss: 343.547505 2022/10/12 23:34:02 - mmengine - INFO - Epoch(train) [110][500/586] lr: 5.000000e-03 eta: 10:25:30 time: 0.680744 data_time: 0.064825 memory: 12959 loss_kpt: 336.069214 acc_pose: 
0.818267 loss: 336.069214
2022/10/12 23:34:37 - mmengine - INFO - Epoch(train) [110][550/586] lr: 5.000000e-03 eta: 10:25:00 time: 0.698291 data_time: 0.061607 memory: 12959 loss_kpt: 332.599840 acc_pose: 0.855405 loss: 332.599840
2022/10/12 23:35:01 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/12 23:35:01 - mmengine - INFO - Saving checkpoint at 110 epochs
2022/10/12 23:35:19 - mmengine - INFO - Epoch(val) [110][50/407] eta: 0:01:37 time: 0.273053 data_time: 0.013178 memory: 12959
2022/10/12 23:35:32 - mmengine - INFO - Epoch(val) [110][100/407] eta: 0:01:20 time: 0.262188 data_time: 0.008476 memory: 2407
2022/10/12 23:35:45 - mmengine - INFO - Epoch(val) [110][150/407] eta: 0:01:07 time: 0.262278 data_time: 0.008619 memory: 2407
2022/10/12 23:35:59 - mmengine - INFO - Epoch(val) [110][200/407] eta: 0:00:55 time: 0.268959 data_time: 0.008298 memory: 2407
2022/10/12 23:36:12 - mmengine - INFO - Epoch(val) [110][250/407] eta: 0:00:42 time: 0.268234 data_time: 0.008690 memory: 2407
2022/10/12 23:36:25 - mmengine - INFO - Epoch(val) [110][300/407] eta: 0:00:28 time: 0.263763 data_time: 0.008085 memory: 2407
2022/10/12 23:36:38 - mmengine - INFO - Epoch(val) [110][350/407] eta: 0:00:15 time: 0.265591 data_time: 0.008565 memory: 2407
2022/10/12 23:36:51 - mmengine - INFO - Epoch(val) [110][400/407] eta: 0:00:01 time: 0.258786 data_time: 0.007666 memory: 2407
2022/10/12 23:37:06 - mmengine - INFO - Evaluating CocoMetric...
2022/10/12 23:37:22 - mmengine - INFO - Epoch(val) [110][407/407] coco/AP: 0.717643 coco/AP .5: 0.892284 coco/AP .75: 0.794118 coco/AP (M): 0.686482 coco/AP (L): 0.778025 coco/AR: 0.782793 coco/AR .5: 0.931045 coco/AR .75: 0.847764 coco/AR (M): 0.738978 coco/AR (L): 0.843181
2022/10/12 23:37:22 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_100.pth is removed
2022/10/12 23:37:25 - mmengine - INFO - The best checkpoint with 0.7176 coco/AP at 110 epoch is saved to best_coco/AP_epoch_110.pth.
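By this point the tracked best checkpoint has moved from AP_epoch_90.pth through AP_epoch_100.pth to best_coco/AP_epoch_110.pth (coco/AP 0.7058 -> 0.7164 -> 0.7176), with each previous best removed as a new one is saved. A quick way to sanity-check the saved file is to load it on CPU and look at what it contains; the sketch below reuses the work_dir path visible in the removal messages, and the key layout inside the checkpoint is an mmengine convention that may differ between versions.

import torch

# Minimal sketch: inspect the best checkpoint reported above. Adjust the path to
# wherever the file actually lives on your machine.
ckpt_path = "work_dirs/20221012/rsn3x/best_coco/AP_epoch_110.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")

print(sorted(ckpt.keys()))                 # typically 'state_dict', 'meta', ... (version dependent)
state_dict = ckpt.get("state_dict", ckpt)  # fall back if the file is a bare state dict
print(f"{len(state_dict)} tensors in the state dict")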
2022/10/12 23:37:58 - mmengine - INFO - Epoch(train) [111][50/586] lr: 5.000000e-03 eta: 10:23:46 time: 0.671426 data_time: 0.068093 memory: 12959 loss_kpt: 334.846834 acc_pose: 0.778113 loss: 334.846834 2022/10/12 23:38:33 - mmengine - INFO - Epoch(train) [111][100/586] lr: 5.000000e-03 eta: 10:23:16 time: 0.683995 data_time: 0.069684 memory: 12959 loss_kpt: 332.133885 acc_pose: 0.811255 loss: 332.133885 2022/10/12 23:39:08 - mmengine - INFO - Epoch(train) [111][150/586] lr: 5.000000e-03 eta: 10:22:47 time: 0.700962 data_time: 0.061800 memory: 12959 loss_kpt: 330.519578 acc_pose: 0.839796 loss: 330.519578 2022/10/12 23:39:42 - mmengine - INFO - Epoch(train) [111][200/586] lr: 5.000000e-03 eta: 10:22:17 time: 0.685428 data_time: 0.059080 memory: 12959 loss_kpt: 333.476530 acc_pose: 0.807328 loss: 333.476530 2022/10/12 23:40:17 - mmengine - INFO - Epoch(train) [111][250/586] lr: 5.000000e-03 eta: 10:21:47 time: 0.692100 data_time: 0.060866 memory: 12959 loss_kpt: 335.066437 acc_pose: 0.751241 loss: 335.066437 2022/10/12 23:40:51 - mmengine - INFO - Epoch(train) [111][300/586] lr: 5.000000e-03 eta: 10:21:18 time: 0.694796 data_time: 0.062216 memory: 12959 loss_kpt: 341.913719 acc_pose: 0.794093 loss: 341.913719 2022/10/12 23:41:26 - mmengine - INFO - Epoch(train) [111][350/586] lr: 5.000000e-03 eta: 10:20:48 time: 0.691797 data_time: 0.065048 memory: 12959 loss_kpt: 341.234689 acc_pose: 0.775649 loss: 341.234689 2022/10/12 23:42:01 - mmengine - INFO - Epoch(train) [111][400/586] lr: 5.000000e-03 eta: 10:20:19 time: 0.696040 data_time: 0.062764 memory: 12959 loss_kpt: 331.982165 acc_pose: 0.844982 loss: 331.982165 2022/10/12 23:42:35 - mmengine - INFO - Epoch(train) [111][450/586] lr: 5.000000e-03 eta: 10:19:49 time: 0.688654 data_time: 0.068217 memory: 12959 loss_kpt: 332.716800 acc_pose: 0.897403 loss: 332.716800 2022/10/12 23:43:10 - mmengine - INFO - Epoch(train) [111][500/586] lr: 5.000000e-03 eta: 10:19:19 time: 0.688475 data_time: 0.059971 memory: 12959 loss_kpt: 343.861013 acc_pose: 0.883594 loss: 343.861013 2022/10/12 23:43:38 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:43:45 - mmengine - INFO - Epoch(train) [111][550/586] lr: 5.000000e-03 eta: 10:18:50 time: 0.705572 data_time: 0.061577 memory: 12959 loss_kpt: 331.936868 acc_pose: 0.864102 loss: 331.936868 2022/10/12 23:44:10 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:44:45 - mmengine - INFO - Epoch(train) [112][50/586] lr: 5.000000e-03 eta: 10:17:38 time: 0.701804 data_time: 0.073532 memory: 12959 loss_kpt: 332.497016 acc_pose: 0.831650 loss: 332.497016 2022/10/12 23:45:20 - mmengine - INFO - Epoch(train) [112][100/586] lr: 5.000000e-03 eta: 10:17:08 time: 0.700008 data_time: 0.061257 memory: 12959 loss_kpt: 336.612692 acc_pose: 0.854204 loss: 336.612692 2022/10/12 23:45:55 - mmengine - INFO - Epoch(train) [112][150/586] lr: 5.000000e-03 eta: 10:16:39 time: 0.708274 data_time: 0.059554 memory: 12959 loss_kpt: 337.975463 acc_pose: 0.881869 loss: 337.975463 2022/10/12 23:46:31 - mmengine - INFO - Epoch(train) [112][200/586] lr: 5.000000e-03 eta: 10:16:10 time: 0.700493 data_time: 0.063909 memory: 12959 loss_kpt: 332.025574 acc_pose: 0.787695 loss: 332.025574 2022/10/12 23:47:06 - mmengine - INFO - Epoch(train) [112][250/586] lr: 5.000000e-03 eta: 10:15:42 time: 0.719431 data_time: 0.062335 memory: 12959 loss_kpt: 339.284641 acc_pose: 0.775812 loss: 339.284641 2022/10/12 23:47:42 - mmengine - INFO - Epoch(train) [112][300/586] 
lr: 5.000000e-03 eta: 10:15:12 time: 0.700958 data_time: 0.063922 memory: 12959 loss_kpt: 343.534288 acc_pose: 0.837468 loss: 343.534288 2022/10/12 23:48:18 - mmengine - INFO - Epoch(train) [112][350/586] lr: 5.000000e-03 eta: 10:14:44 time: 0.722794 data_time: 0.063072 memory: 12959 loss_kpt: 331.718123 acc_pose: 0.818200 loss: 331.718123 2022/10/12 23:48:53 - mmengine - INFO - Epoch(train) [112][400/586] lr: 5.000000e-03 eta: 10:14:15 time: 0.707866 data_time: 0.063050 memory: 12959 loss_kpt: 335.276024 acc_pose: 0.816259 loss: 335.276024 2022/10/12 23:49:29 - mmengine - INFO - Epoch(train) [112][450/586] lr: 5.000000e-03 eta: 10:13:47 time: 0.720062 data_time: 0.068625 memory: 12959 loss_kpt: 338.769799 acc_pose: 0.861550 loss: 338.769799 2022/10/12 23:50:04 - mmengine - INFO - Epoch(train) [112][500/586] lr: 5.000000e-03 eta: 10:13:17 time: 0.698552 data_time: 0.063495 memory: 12959 loss_kpt: 336.077379 acc_pose: 0.839371 loss: 336.077379 2022/10/12 23:50:40 - mmengine - INFO - Epoch(train) [112][550/586] lr: 5.000000e-03 eta: 10:12:49 time: 0.715149 data_time: 0.061441 memory: 12959 loss_kpt: 332.132569 acc_pose: 0.794939 loss: 332.132569 2022/10/12 23:51:05 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:51:39 - mmengine - INFO - Epoch(train) [113][50/586] lr: 5.000000e-03 eta: 10:11:36 time: 0.688156 data_time: 0.078723 memory: 12959 loss_kpt: 342.883278 acc_pose: 0.761872 loss: 342.883278 2022/10/12 23:52:12 - mmengine - INFO - Epoch(train) [113][100/586] lr: 5.000000e-03 eta: 10:11:04 time: 0.657328 data_time: 0.057668 memory: 12959 loss_kpt: 335.607311 acc_pose: 0.803296 loss: 335.607311 2022/10/12 23:52:45 - mmengine - INFO - Epoch(train) [113][150/586] lr: 5.000000e-03 eta: 10:10:33 time: 0.663257 data_time: 0.065032 memory: 12959 loss_kpt: 330.058796 acc_pose: 0.891841 loss: 330.058796 2022/10/12 23:53:19 - mmengine - INFO - Epoch(train) [113][200/586] lr: 5.000000e-03 eta: 10:10:03 time: 0.668740 data_time: 0.057889 memory: 12959 loss_kpt: 337.589691 acc_pose: 0.866975 loss: 337.589691 2022/10/12 23:53:52 - mmengine - INFO - Epoch(train) [113][250/586] lr: 5.000000e-03 eta: 10:09:32 time: 0.659034 data_time: 0.064361 memory: 12959 loss_kpt: 336.879235 acc_pose: 0.745836 loss: 336.879235 2022/10/12 23:54:25 - mmengine - INFO - Epoch(train) [113][300/586] lr: 5.000000e-03 eta: 10:09:01 time: 0.670411 data_time: 0.061771 memory: 12959 loss_kpt: 337.649092 acc_pose: 0.780272 loss: 337.649092 2022/10/12 23:54:58 - mmengine - INFO - Epoch(train) [113][350/586] lr: 5.000000e-03 eta: 10:08:30 time: 0.664104 data_time: 0.061402 memory: 12959 loss_kpt: 331.819889 acc_pose: 0.856409 loss: 331.819889 2022/10/12 23:55:11 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:55:32 - mmengine - INFO - Epoch(train) [113][400/586] lr: 5.000000e-03 eta: 10:07:59 time: 0.672786 data_time: 0.062338 memory: 12959 loss_kpt: 337.107201 acc_pose: 0.831430 loss: 337.107201 2022/10/12 23:56:06 - mmengine - INFO - Epoch(train) [113][450/586] lr: 5.000000e-03 eta: 10:07:29 time: 0.687685 data_time: 0.061605 memory: 12959 loss_kpt: 333.796104 acc_pose: 0.804896 loss: 333.796104 2022/10/12 23:56:40 - mmengine - INFO - Epoch(train) [113][500/586] lr: 5.000000e-03 eta: 10:06:59 time: 0.667960 data_time: 0.062306 memory: 12959 loss_kpt: 336.998622 acc_pose: 0.854895 loss: 336.998622 2022/10/12 23:57:13 - mmengine - INFO - Epoch(train) [113][550/586] lr: 5.000000e-03 eta: 10:06:28 time: 0.671826 data_time: 0.066422 
memory: 12959 loss_kpt: 332.046750 acc_pose: 0.814024 loss: 332.046750 2022/10/12 23:57:37 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/12 23:58:12 - mmengine - INFO - Epoch(train) [114][50/586] lr: 5.000000e-03 eta: 10:05:16 time: 0.693664 data_time: 0.072169 memory: 12959 loss_kpt: 338.644813 acc_pose: 0.867771 loss: 338.644813 2022/10/12 23:58:46 - mmengine - INFO - Epoch(train) [114][100/586] lr: 5.000000e-03 eta: 10:04:45 time: 0.682178 data_time: 0.063222 memory: 12959 loss_kpt: 334.269085 acc_pose: 0.783757 loss: 334.269085 2022/10/12 23:59:20 - mmengine - INFO - Epoch(train) [114][150/586] lr: 5.000000e-03 eta: 10:04:15 time: 0.685548 data_time: 0.060705 memory: 12959 loss_kpt: 335.925394 acc_pose: 0.875332 loss: 335.925394 2022/10/12 23:59:54 - mmengine - INFO - Epoch(train) [114][200/586] lr: 5.000000e-03 eta: 10:03:45 time: 0.683203 data_time: 0.061229 memory: 12959 loss_kpt: 336.222348 acc_pose: 0.802940 loss: 336.222348 2022/10/13 00:00:28 - mmengine - INFO - Epoch(train) [114][250/586] lr: 5.000000e-03 eta: 10:03:15 time: 0.679864 data_time: 0.062462 memory: 12959 loss_kpt: 333.709258 acc_pose: 0.705407 loss: 333.709258 2022/10/13 00:01:01 - mmengine - INFO - Epoch(train) [114][300/586] lr: 5.000000e-03 eta: 10:02:44 time: 0.661982 data_time: 0.065338 memory: 12959 loss_kpt: 336.430815 acc_pose: 0.828625 loss: 336.430815 2022/10/13 00:01:35 - mmengine - INFO - Epoch(train) [114][350/586] lr: 5.000000e-03 eta: 10:02:14 time: 0.675445 data_time: 0.062370 memory: 12959 loss_kpt: 333.917643 acc_pose: 0.896098 loss: 333.917643 2022/10/13 00:02:09 - mmengine - INFO - Epoch(train) [114][400/586] lr: 5.000000e-03 eta: 10:01:43 time: 0.682345 data_time: 0.063381 memory: 12959 loss_kpt: 334.572825 acc_pose: 0.857471 loss: 334.572825 2022/10/13 00:02:43 - mmengine - INFO - Epoch(train) [114][450/586] lr: 5.000000e-03 eta: 10:01:13 time: 0.674480 data_time: 0.064171 memory: 12959 loss_kpt: 334.094083 acc_pose: 0.853859 loss: 334.094083 2022/10/13 00:03:17 - mmengine - INFO - Epoch(train) [114][500/586] lr: 5.000000e-03 eta: 10:00:43 time: 0.679413 data_time: 0.067447 memory: 12959 loss_kpt: 327.862691 acc_pose: 0.802739 loss: 327.862691 2022/10/13 00:03:51 - mmengine - INFO - Epoch(train) [114][550/586] lr: 5.000000e-03 eta: 10:00:13 time: 0.685201 data_time: 0.064830 memory: 12959 loss_kpt: 335.120322 acc_pose: 0.764991 loss: 335.120322 2022/10/13 00:04:16 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:04:51 - mmengine - INFO - Epoch(train) [115][50/586] lr: 5.000000e-03 eta: 9:59:01 time: 0.699452 data_time: 0.074117 memory: 12959 loss_kpt: 332.893324 acc_pose: 0.870705 loss: 332.893324 2022/10/13 00:05:25 - mmengine - INFO - Epoch(train) [115][100/586] lr: 5.000000e-03 eta: 9:58:31 time: 0.687541 data_time: 0.060377 memory: 12959 loss_kpt: 326.775087 acc_pose: 0.855947 loss: 326.775087 2022/10/13 00:06:00 - mmengine - INFO - Epoch(train) [115][150/586] lr: 5.000000e-03 eta: 9:58:01 time: 0.684311 data_time: 0.062016 memory: 12959 loss_kpt: 331.688240 acc_pose: 0.694183 loss: 331.688240 2022/10/13 00:06:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:06:34 - mmengine - INFO - Epoch(train) [115][200/586] lr: 5.000000e-03 eta: 9:57:31 time: 0.692252 data_time: 0.066101 memory: 12959 loss_kpt: 336.322112 acc_pose: 0.860517 loss: 336.322112 2022/10/13 00:07:09 - mmengine - INFO - Epoch(train) [115][250/586] lr: 5.000000e-03 eta: 9:57:01 
time: 0.688354 data_time: 0.061548 memory: 12959 loss_kpt: 337.860917 acc_pose: 0.743624 loss: 337.860917 2022/10/13 00:07:43 - mmengine - INFO - Epoch(train) [115][300/586] lr: 5.000000e-03 eta: 9:56:31 time: 0.679391 data_time: 0.062598 memory: 12959 loss_kpt: 337.115949 acc_pose: 0.790385 loss: 337.115949 2022/10/13 00:08:18 - mmengine - INFO - Epoch(train) [115][350/586] lr: 5.000000e-03 eta: 9:56:01 time: 0.699815 data_time: 0.059363 memory: 12959 loss_kpt: 334.508581 acc_pose: 0.795768 loss: 334.508581 2022/10/13 00:08:52 - mmengine - INFO - Epoch(train) [115][400/586] lr: 5.000000e-03 eta: 9:55:31 time: 0.689009 data_time: 0.067352 memory: 12959 loss_kpt: 336.043379 acc_pose: 0.804399 loss: 336.043379 2022/10/13 00:09:27 - mmengine - INFO - Epoch(train) [115][450/586] lr: 5.000000e-03 eta: 9:55:02 time: 0.704055 data_time: 0.060965 memory: 12959 loss_kpt: 333.301567 acc_pose: 0.781350 loss: 333.301567 2022/10/13 00:10:02 - mmengine - INFO - Epoch(train) [115][500/586] lr: 5.000000e-03 eta: 9:54:32 time: 0.687225 data_time: 0.063817 memory: 12959 loss_kpt: 330.164077 acc_pose: 0.832432 loss: 330.164077 2022/10/13 00:10:37 - mmengine - INFO - Epoch(train) [115][550/586] lr: 5.000000e-03 eta: 9:54:02 time: 0.699769 data_time: 0.061868 memory: 12959 loss_kpt: 332.348389 acc_pose: 0.769570 loss: 332.348389 2022/10/13 00:11:01 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:11:37 - mmengine - INFO - Epoch(train) [116][50/586] lr: 5.000000e-03 eta: 9:52:52 time: 0.722562 data_time: 0.079388 memory: 12959 loss_kpt: 324.754534 acc_pose: 0.826984 loss: 324.754534 2022/10/13 00:12:12 - mmengine - INFO - Epoch(train) [116][100/586] lr: 5.000000e-03 eta: 9:52:22 time: 0.699315 data_time: 0.064090 memory: 12959 loss_kpt: 331.832703 acc_pose: 0.834244 loss: 331.832703 2022/10/13 00:12:48 - mmengine - INFO - Epoch(train) [116][150/586] lr: 5.000000e-03 eta: 9:51:53 time: 0.716488 data_time: 0.064497 memory: 12959 loss_kpt: 333.632446 acc_pose: 0.842886 loss: 333.632446 2022/10/13 00:13:22 - mmengine - INFO - Epoch(train) [116][200/586] lr: 5.000000e-03 eta: 9:51:23 time: 0.689143 data_time: 0.062428 memory: 12959 loss_kpt: 327.969905 acc_pose: 0.841938 loss: 327.969905 2022/10/13 00:13:58 - mmengine - INFO - Epoch(train) [116][250/586] lr: 5.000000e-03 eta: 9:50:54 time: 0.707064 data_time: 0.069054 memory: 12959 loss_kpt: 337.720470 acc_pose: 0.799569 loss: 337.720470 2022/10/13 00:14:32 - mmengine - INFO - Epoch(train) [116][300/586] lr: 5.000000e-03 eta: 9:50:24 time: 0.683469 data_time: 0.061477 memory: 12959 loss_kpt: 337.609563 acc_pose: 0.859499 loss: 337.609563 2022/10/13 00:15:07 - mmengine - INFO - Epoch(train) [116][350/586] lr: 5.000000e-03 eta: 9:49:54 time: 0.697904 data_time: 0.065365 memory: 12959 loss_kpt: 335.357966 acc_pose: 0.883077 loss: 335.357966 2022/10/13 00:15:42 - mmengine - INFO - Epoch(train) [116][400/586] lr: 5.000000e-03 eta: 9:49:25 time: 0.699867 data_time: 0.065354 memory: 12959 loss_kpt: 335.586721 acc_pose: 0.842236 loss: 335.586721 2022/10/13 00:16:19 - mmengine - INFO - Epoch(train) [116][450/586] lr: 5.000000e-03 eta: 9:48:57 time: 0.732166 data_time: 0.059591 memory: 12959 loss_kpt: 329.083355 acc_pose: 0.814494 loss: 329.083355 2022/10/13 00:16:54 - mmengine - INFO - Epoch(train) [116][500/586] lr: 5.000000e-03 eta: 9:48:27 time: 0.705659 data_time: 0.065798 memory: 12959 loss_kpt: 336.388898 acc_pose: 0.779955 loss: 336.388898 2022/10/13 00:17:29 - mmengine - INFO - Epoch(train) [116][550/586] lr: 
5.000000e-03 eta: 9:47:58 time: 0.711068 data_time: 0.064306 memory: 12959 loss_kpt: 333.666833 acc_pose: 0.858017 loss: 333.666833 2022/10/13 00:17:55 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:18:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:18:31 - mmengine - INFO - Epoch(train) [117][50/586] lr: 5.000000e-03 eta: 9:46:47 time: 0.707115 data_time: 0.072758 memory: 12959 loss_kpt: 334.467968 acc_pose: 0.822798 loss: 334.467968 2022/10/13 00:19:05 - mmengine - INFO - Epoch(train) [117][100/586] lr: 5.000000e-03 eta: 9:46:17 time: 0.691031 data_time: 0.062711 memory: 12959 loss_kpt: 333.555102 acc_pose: 0.775503 loss: 333.555102 2022/10/13 00:19:39 - mmengine - INFO - Epoch(train) [117][150/586] lr: 5.000000e-03 eta: 9:45:47 time: 0.685537 data_time: 0.065180 memory: 12959 loss_kpt: 334.701323 acc_pose: 0.764134 loss: 334.701323 2022/10/13 00:20:14 - mmengine - INFO - Epoch(train) [117][200/586] lr: 5.000000e-03 eta: 9:45:17 time: 0.692805 data_time: 0.058654 memory: 12959 loss_kpt: 327.669481 acc_pose: 0.831561 loss: 327.669481 2022/10/13 00:20:49 - mmengine - INFO - Epoch(train) [117][250/586] lr: 5.000000e-03 eta: 9:44:47 time: 0.690577 data_time: 0.061284 memory: 12959 loss_kpt: 329.453314 acc_pose: 0.840574 loss: 329.453314 2022/10/13 00:21:23 - mmengine - INFO - Epoch(train) [117][300/586] lr: 5.000000e-03 eta: 9:44:17 time: 0.686851 data_time: 0.061464 memory: 12959 loss_kpt: 333.582543 acc_pose: 0.847025 loss: 333.582543 2022/10/13 00:21:58 - mmengine - INFO - Epoch(train) [117][350/586] lr: 5.000000e-03 eta: 9:43:48 time: 0.697376 data_time: 0.066236 memory: 12959 loss_kpt: 336.987353 acc_pose: 0.825884 loss: 336.987353 2022/10/13 00:22:33 - mmengine - INFO - Epoch(train) [117][400/586] lr: 5.000000e-03 eta: 9:43:18 time: 0.692590 data_time: 0.060276 memory: 12959 loss_kpt: 329.233501 acc_pose: 0.799100 loss: 329.233501 2022/10/13 00:23:07 - mmengine - INFO - Epoch(train) [117][450/586] lr: 5.000000e-03 eta: 9:42:48 time: 0.689305 data_time: 0.059611 memory: 12959 loss_kpt: 337.380193 acc_pose: 0.790458 loss: 337.380193 2022/10/13 00:23:40 - mmengine - INFO - Epoch(train) [117][500/586] lr: 5.000000e-03 eta: 9:42:17 time: 0.668591 data_time: 0.061838 memory: 12959 loss_kpt: 331.586450 acc_pose: 0.850625 loss: 331.586450 2022/10/13 00:24:14 - mmengine - INFO - Epoch(train) [117][550/586] lr: 5.000000e-03 eta: 9:41:46 time: 0.680023 data_time: 0.067281 memory: 12959 loss_kpt: 333.266492 acc_pose: 0.821409 loss: 333.266492 2022/10/13 00:24:38 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:25:14 - mmengine - INFO - Epoch(train) [118][50/586] lr: 5.000000e-03 eta: 9:40:35 time: 0.701265 data_time: 0.077356 memory: 12959 loss_kpt: 338.138341 acc_pose: 0.793553 loss: 338.138341 2022/10/13 00:25:47 - mmengine - INFO - Epoch(train) [118][100/586] lr: 5.000000e-03 eta: 9:40:05 time: 0.669836 data_time: 0.063488 memory: 12959 loss_kpt: 332.261506 acc_pose: 0.714246 loss: 332.261506 2022/10/13 00:26:22 - mmengine - INFO - Epoch(train) [118][150/586] lr: 5.000000e-03 eta: 9:39:35 time: 0.690857 data_time: 0.065957 memory: 12959 loss_kpt: 333.596665 acc_pose: 0.777338 loss: 333.596665 2022/10/13 00:26:56 - mmengine - INFO - Epoch(train) [118][200/586] lr: 5.000000e-03 eta: 9:39:05 time: 0.697733 data_time: 0.063482 memory: 12959 loss_kpt: 333.085223 acc_pose: 0.848196 loss: 333.085223 2022/10/13 00:27:32 - mmengine - INFO - Epoch(train) 
[118][250/586] lr: 5.000000e-03 eta: 9:38:35 time: 0.701068 data_time: 0.061887 memory: 12959 loss_kpt: 332.066301 acc_pose: 0.856650 loss: 332.066301 2022/10/13 00:28:07 - mmengine - INFO - Epoch(train) [118][300/586] lr: 5.000000e-03 eta: 9:38:06 time: 0.704059 data_time: 0.064428 memory: 12959 loss_kpt: 337.524133 acc_pose: 0.860294 loss: 337.524133 2022/10/13 00:28:41 - mmengine - INFO - Epoch(train) [118][350/586] lr: 5.000000e-03 eta: 9:37:36 time: 0.692705 data_time: 0.067349 memory: 12959 loss_kpt: 334.510305 acc_pose: 0.827210 loss: 334.510305 2022/10/13 00:29:17 - mmengine - INFO - Epoch(train) [118][400/586] lr: 5.000000e-03 eta: 9:37:07 time: 0.715115 data_time: 0.063582 memory: 12959 loss_kpt: 335.052141 acc_pose: 0.842808 loss: 335.052141 2022/10/13 00:29:44 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:29:52 - mmengine - INFO - Epoch(train) [118][450/586] lr: 5.000000e-03 eta: 9:36:37 time: 0.695324 data_time: 0.065514 memory: 12959 loss_kpt: 336.816284 acc_pose: 0.843360 loss: 336.816284 2022/10/13 00:30:27 - mmengine - INFO - Epoch(train) [118][500/586] lr: 5.000000e-03 eta: 9:36:08 time: 0.703841 data_time: 0.061534 memory: 12959 loss_kpt: 331.043174 acc_pose: 0.820927 loss: 331.043174 2022/10/13 00:31:02 - mmengine - INFO - Epoch(train) [118][550/586] lr: 5.000000e-03 eta: 9:35:38 time: 0.704961 data_time: 0.062382 memory: 12959 loss_kpt: 327.953466 acc_pose: 0.872878 loss: 327.953466 2022/10/13 00:31:27 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:32:02 - mmengine - INFO - Epoch(train) [119][50/586] lr: 5.000000e-03 eta: 9:34:27 time: 0.699611 data_time: 0.074176 memory: 12959 loss_kpt: 336.724539 acc_pose: 0.874192 loss: 336.724539 2022/10/13 00:32:36 - mmengine - INFO - Epoch(train) [119][100/586] lr: 5.000000e-03 eta: 9:33:57 time: 0.686023 data_time: 0.060220 memory: 12959 loss_kpt: 330.488085 acc_pose: 0.819715 loss: 330.488085 2022/10/13 00:33:11 - mmengine - INFO - Epoch(train) [119][150/586] lr: 5.000000e-03 eta: 9:33:27 time: 0.696356 data_time: 0.063052 memory: 12959 loss_kpt: 331.881061 acc_pose: 0.901048 loss: 331.881061 2022/10/13 00:33:46 - mmengine - INFO - Epoch(train) [119][200/586] lr: 5.000000e-03 eta: 9:32:57 time: 0.688844 data_time: 0.060946 memory: 12959 loss_kpt: 334.070550 acc_pose: 0.780285 loss: 334.070550 2022/10/13 00:34:20 - mmengine - INFO - Epoch(train) [119][250/586] lr: 5.000000e-03 eta: 9:32:27 time: 0.694305 data_time: 0.060902 memory: 12959 loss_kpt: 329.150614 acc_pose: 0.740318 loss: 329.150614 2022/10/13 00:34:55 - mmengine - INFO - Epoch(train) [119][300/586] lr: 5.000000e-03 eta: 9:31:57 time: 0.692904 data_time: 0.065757 memory: 12959 loss_kpt: 333.077371 acc_pose: 0.828850 loss: 333.077371 2022/10/13 00:35:31 - mmengine - INFO - Epoch(train) [119][350/586] lr: 5.000000e-03 eta: 9:31:28 time: 0.715552 data_time: 0.062974 memory: 12959 loss_kpt: 333.113510 acc_pose: 0.817737 loss: 333.113510 2022/10/13 00:36:06 - mmengine - INFO - Epoch(train) [119][400/586] lr: 5.000000e-03 eta: 9:30:58 time: 0.694589 data_time: 0.061983 memory: 12959 loss_kpt: 327.754233 acc_pose: 0.795188 loss: 327.754233 2022/10/13 00:36:41 - mmengine - INFO - Epoch(train) [119][450/586] lr: 5.000000e-03 eta: 9:30:29 time: 0.701935 data_time: 0.067207 memory: 12959 loss_kpt: 332.601226 acc_pose: 0.794258 loss: 332.601226 2022/10/13 00:37:15 - mmengine - INFO - Epoch(train) [119][500/586] lr: 5.000000e-03 eta: 9:29:59 time: 0.695826 data_time: 0.062976 
memory: 12959 loss_kpt: 335.805143 acc_pose: 0.806845 loss: 335.805143 2022/10/13 00:37:50 - mmengine - INFO - Epoch(train) [119][550/586] lr: 5.000000e-03 eta: 9:29:29 time: 0.701074 data_time: 0.066256 memory: 12959 loss_kpt: 334.834163 acc_pose: 0.822518 loss: 334.834163 2022/10/13 00:38:15 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:38:50 - mmengine - INFO - Epoch(train) [120][50/586] lr: 5.000000e-03 eta: 9:28:19 time: 0.694165 data_time: 0.070024 memory: 12959 loss_kpt: 332.543003 acc_pose: 0.863995 loss: 332.543003 2022/10/13 00:39:24 - mmengine - INFO - Epoch(train) [120][100/586] lr: 5.000000e-03 eta: 9:27:48 time: 0.680471 data_time: 0.062138 memory: 12959 loss_kpt: 340.967513 acc_pose: 0.840240 loss: 340.967513 2022/10/13 00:39:58 - mmengine - INFO - Epoch(train) [120][150/586] lr: 5.000000e-03 eta: 9:27:18 time: 0.679217 data_time: 0.063693 memory: 12959 loss_kpt: 331.672766 acc_pose: 0.895139 loss: 331.672766 2022/10/13 00:40:33 - mmengine - INFO - Epoch(train) [120][200/586] lr: 5.000000e-03 eta: 9:26:48 time: 0.694150 data_time: 0.059282 memory: 12959 loss_kpt: 332.252883 acc_pose: 0.773362 loss: 332.252883 2022/10/13 00:41:08 - mmengine - INFO - Epoch(train) [120][250/586] lr: 5.000000e-03 eta: 9:26:18 time: 0.704648 data_time: 0.069418 memory: 12959 loss_kpt: 329.972634 acc_pose: 0.824180 loss: 329.972634 2022/10/13 00:41:19 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:41:44 - mmengine - INFO - Epoch(train) [120][300/586] lr: 5.000000e-03 eta: 9:25:49 time: 0.714818 data_time: 0.064619 memory: 12959 loss_kpt: 338.850876 acc_pose: 0.785009 loss: 338.850876 2022/10/13 00:42:19 - mmengine - INFO - Epoch(train) [120][350/586] lr: 5.000000e-03 eta: 9:25:20 time: 0.712627 data_time: 0.061087 memory: 12959 loss_kpt: 332.255676 acc_pose: 0.864163 loss: 332.255676 2022/10/13 00:42:56 - mmengine - INFO - Epoch(train) [120][400/586] lr: 5.000000e-03 eta: 9:24:51 time: 0.725543 data_time: 0.058748 memory: 12959 loss_kpt: 335.194904 acc_pose: 0.779958 loss: 335.194904 2022/10/13 00:43:32 - mmengine - INFO - Epoch(train) [120][450/586] lr: 5.000000e-03 eta: 9:24:22 time: 0.722090 data_time: 0.065760 memory: 12959 loss_kpt: 325.548933 acc_pose: 0.853125 loss: 325.548933 2022/10/13 00:44:08 - mmengine - INFO - Epoch(train) [120][500/586] lr: 5.000000e-03 eta: 9:23:53 time: 0.729673 data_time: 0.059786 memory: 12959 loss_kpt: 329.167503 acc_pose: 0.767495 loss: 329.167503 2022/10/13 00:44:45 - mmengine - INFO - Epoch(train) [120][550/586] lr: 5.000000e-03 eta: 9:23:24 time: 0.724727 data_time: 0.075067 memory: 12959 loss_kpt: 331.867641 acc_pose: 0.796416 loss: 331.867641 2022/10/13 00:45:10 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:45:10 - mmengine - INFO - Saving checkpoint at 120 epochs 2022/10/13 00:45:27 - mmengine - INFO - Epoch(val) [120][50/407] eta: 0:01:36 time: 0.269086 data_time: 0.012961 memory: 12959 2022/10/13 00:45:40 - mmengine - INFO - Epoch(val) [120][100/407] eta: 0:01:20 time: 0.261187 data_time: 0.007562 memory: 2407 2022/10/13 00:45:54 - mmengine - INFO - Epoch(val) [120][150/407] eta: 0:01:07 time: 0.261911 data_time: 0.008301 memory: 2407 2022/10/13 00:46:07 - mmengine - INFO - Epoch(val) [120][200/407] eta: 0:00:53 time: 0.260571 data_time: 0.007323 memory: 2407 2022/10/13 00:46:20 - mmengine - INFO - Epoch(val) [120][250/407] eta: 0:00:41 time: 0.262018 data_time: 0.007856 memory: 2407 2022/10/13 
00:46:33 - mmengine - INFO - Epoch(val) [120][300/407] eta: 0:00:28 time: 0.261998 data_time: 0.007789 memory: 2407
2022/10/13 00:46:46 - mmengine - INFO - Epoch(val) [120][350/407] eta: 0:00:15 time: 0.266396 data_time: 0.007960 memory: 2407
2022/10/13 00:46:59 - mmengine - INFO - Epoch(val) [120][400/407] eta: 0:00:01 time: 0.261619 data_time: 0.007426 memory: 2407
2022/10/13 00:47:14 - mmengine - INFO - Evaluating CocoMetric...
2022/10/13 00:47:30 - mmengine - INFO - Epoch(val) [120][407/407] coco/AP: 0.726177 coco/AP .5: 0.892862 coco/AP .75: 0.801732 coco/AP (M): 0.694056 coco/AP (L): 0.788488 coco/AR: 0.793687 coco/AR .5: 0.934509 coco/AR .75: 0.857525 coco/AR (M): 0.749631 coco/AR (L): 0.854552
2022/10/13 00:47:30 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_110.pth is removed
2022/10/13 00:47:32 - mmengine - INFO - The best checkpoint with 0.7262 coco/AP at 120 epoch is saved to best_coco/AP_epoch_120.pth.
2022/10/13 00:48:07 - mmengine - INFO - Epoch(train) [121][50/586] lr: 5.000000e-03 eta: 9:22:14 time: 0.689960 data_time: 0.077242 memory: 12959 loss_kpt: 334.974288 acc_pose: 0.837445 loss: 334.974288
2022/10/13 00:48:40 - mmengine - INFO - Epoch(train) [121][100/586] lr: 5.000000e-03 eta: 9:21:43 time: 0.669105 data_time: 0.059899 memory: 12959 loss_kpt: 329.782350 acc_pose: 0.797765 loss: 329.782350
2022/10/13 00:49:15 - mmengine - INFO - Epoch(train) [121][150/586] lr: 5.000000e-03 eta: 9:21:13 time: 0.685826 data_time: 0.062011 memory: 12959 loss_kpt: 334.240648 acc_pose: 0.878632 loss: 334.240648
2022/10/13 00:49:49 - mmengine - INFO - Epoch(train) [121][200/586] lr: 5.000000e-03 eta: 9:20:42 time: 0.683843 data_time: 0.055505 memory: 12959 loss_kpt: 333.577126 acc_pose: 0.853015 loss: 333.577126
2022/10/13 00:50:23 - mmengine - INFO - Epoch(train) [121][250/586] lr: 5.000000e-03 eta: 9:20:12 time: 0.688042 data_time: 0.064639 memory: 12959 loss_kpt: 331.319567 acc_pose: 0.805839 loss: 331.319567
2022/10/13 00:50:57 - mmengine - INFO - Epoch(train) [121][300/586] lr: 5.000000e-03 eta: 9:19:41 time: 0.680658 data_time: 0.061569 memory: 12959 loss_kpt: 331.348970 acc_pose: 0.759618 loss: 331.348970
2022/10/13 00:51:32 - mmengine - INFO - Epoch(train) [121][350/586] lr: 5.000000e-03 eta: 9:19:12 time: 0.697309 data_time: 0.062104 memory: 12959 loss_kpt: 333.906569 acc_pose: 0.783406 loss: 333.906569
2022/10/13 00:52:06 - mmengine - INFO - Epoch(train) [121][400/586] lr: 5.000000e-03 eta: 9:18:41 time: 0.678644 data_time: 0.057937 memory: 12959 loss_kpt: 328.985339 acc_pose: 0.871143 loss: 328.985339
2022/10/13 00:52:40 - mmengine - INFO - Epoch(train) [121][450/586] lr: 5.000000e-03 eta: 9:18:10 time: 0.674398 data_time: 0.061615 memory: 12959 loss_kpt: 336.709753 acc_pose: 0.790334 loss: 336.709753
2022/10/13 00:53:13 - mmengine - INFO - Epoch(train) [121][500/586] lr: 5.000000e-03 eta: 9:17:39 time: 0.659335 data_time: 0.054597 memory: 12959 loss_kpt: 337.649299 acc_pose: 0.839225 loss: 337.649299
2022/10/13 00:53:46 - mmengine - INFO - Epoch(train) [121][550/586] lr: 5.000000e-03 eta: 9:17:08 time: 0.666414 data_time: 0.059162 memory: 12959 loss_kpt: 331.965460 acc_pose: 0.839664 loss: 331.965460
2022/10/13 00:54:10 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 00:54:44 - mmengine - INFO - Epoch(train) [122][50/586] lr: 5.000000e-03 eta: 9:15:57 time: 0.670090 data_time: 0.072833 memory: 12959 loss_kpt: 328.612739 acc_pose:
0.785770 loss: 328.612739 2022/10/13 00:55:13 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 00:55:17 - mmengine - INFO - Epoch(train) [122][100/586] lr: 5.000000e-03 eta: 9:15:26 time: 0.663075 data_time: 0.055318 memory: 12959 loss_kpt: 334.319794 acc_pose: 0.879048 loss: 334.319794 2022/10/13 00:55:51 - mmengine - INFO - Epoch(train) [122][150/586] lr: 5.000000e-03 eta: 9:14:55 time: 0.673090 data_time: 0.053636 memory: 12959 loss_kpt: 333.865206 acc_pose: 0.799015 loss: 333.865206 2022/10/13 00:56:24 - mmengine - INFO - Epoch(train) [122][200/586] lr: 5.000000e-03 eta: 9:14:24 time: 0.668892 data_time: 0.049867 memory: 12959 loss_kpt: 328.380428 acc_pose: 0.770348 loss: 328.380428 2022/10/13 00:56:59 - mmengine - INFO - Epoch(train) [122][250/586] lr: 5.000000e-03 eta: 9:13:54 time: 0.690995 data_time: 0.057262 memory: 12959 loss_kpt: 329.004733 acc_pose: 0.790125 loss: 329.004733 2022/10/13 00:57:32 - mmengine - INFO - Epoch(train) [122][300/586] lr: 5.000000e-03 eta: 9:13:23 time: 0.674812 data_time: 0.055660 memory: 12959 loss_kpt: 337.095895 acc_pose: 0.832508 loss: 337.095895 2022/10/13 00:58:06 - mmengine - INFO - Epoch(train) [122][350/586] lr: 5.000000e-03 eta: 9:12:52 time: 0.679130 data_time: 0.060872 memory: 12959 loss_kpt: 334.325019 acc_pose: 0.836495 loss: 334.325019 2022/10/13 00:58:40 - mmengine - INFO - Epoch(train) [122][400/586] lr: 5.000000e-03 eta: 9:12:22 time: 0.680772 data_time: 0.056196 memory: 12959 loss_kpt: 333.341512 acc_pose: 0.798548 loss: 333.341512 2022/10/13 00:59:14 - mmengine - INFO - Epoch(train) [122][450/586] lr: 5.000000e-03 eta: 9:11:51 time: 0.678055 data_time: 0.055159 memory: 12959 loss_kpt: 329.823762 acc_pose: 0.800246 loss: 329.823762 2022/10/13 00:59:48 - mmengine - INFO - Epoch(train) [122][500/586] lr: 5.000000e-03 eta: 9:11:20 time: 0.673925 data_time: 0.055153 memory: 12959 loss_kpt: 330.026862 acc_pose: 0.846499 loss: 330.026862 2022/10/13 01:00:22 - mmengine - INFO - Epoch(train) [122][550/586] lr: 5.000000e-03 eta: 9:10:50 time: 0.673412 data_time: 0.062975 memory: 12959 loss_kpt: 334.874437 acc_pose: 0.861251 loss: 334.874437 2022/10/13 01:00:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:01:20 - mmengine - INFO - Epoch(train) [123][50/586] lr: 5.000000e-03 eta: 9:09:39 time: 0.686028 data_time: 0.072370 memory: 12959 loss_kpt: 327.810189 acc_pose: 0.840055 loss: 327.810189 2022/10/13 01:01:53 - mmengine - INFO - Epoch(train) [123][100/586] lr: 5.000000e-03 eta: 9:09:08 time: 0.666707 data_time: 0.062013 memory: 12959 loss_kpt: 331.653708 acc_pose: 0.802628 loss: 331.653708 2022/10/13 01:02:26 - mmengine - INFO - Epoch(train) [123][150/586] lr: 5.000000e-03 eta: 9:08:37 time: 0.664992 data_time: 0.059884 memory: 12959 loss_kpt: 328.642037 acc_pose: 0.707952 loss: 328.642037 2022/10/13 01:03:00 - mmengine - INFO - Epoch(train) [123][200/586] lr: 5.000000e-03 eta: 9:08:06 time: 0.666483 data_time: 0.056023 memory: 12959 loss_kpt: 334.268983 acc_pose: 0.797621 loss: 334.268983 2022/10/13 01:03:33 - mmengine - INFO - Epoch(train) [123][250/586] lr: 5.000000e-03 eta: 9:07:35 time: 0.664203 data_time: 0.059020 memory: 12959 loss_kpt: 331.301099 acc_pose: 0.823378 loss: 331.301099 2022/10/13 01:04:06 - mmengine - INFO - Epoch(train) [123][300/586] lr: 5.000000e-03 eta: 9:07:04 time: 0.661316 data_time: 0.061636 memory: 12959 loss_kpt: 327.790484 acc_pose: 0.805151 loss: 327.790484 2022/10/13 01:04:40 - mmengine - INFO - Epoch(train) 
[123][350/586] lr: 5.000000e-03 eta: 9:06:33 time: 0.672055 data_time: 0.058631 memory: 12959 loss_kpt: 338.446599 acc_pose: 0.830118 loss: 338.446599 2022/10/13 01:05:13 - mmengine - INFO - Epoch(train) [123][400/586] lr: 5.000000e-03 eta: 9:06:02 time: 0.663082 data_time: 0.056068 memory: 12959 loss_kpt: 332.023259 acc_pose: 0.805757 loss: 332.023259 2022/10/13 01:05:46 - mmengine - INFO - Epoch(train) [123][450/586] lr: 5.000000e-03 eta: 9:05:31 time: 0.664182 data_time: 0.062053 memory: 12959 loss_kpt: 333.202145 acc_pose: 0.874932 loss: 333.202145 2022/10/13 01:06:19 - mmengine - INFO - Epoch(train) [123][500/586] lr: 5.000000e-03 eta: 9:04:59 time: 0.665504 data_time: 0.056369 memory: 12959 loss_kpt: 334.984786 acc_pose: 0.846005 loss: 334.984786 2022/10/13 01:06:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:06:52 - mmengine - INFO - Epoch(train) [123][550/586] lr: 5.000000e-03 eta: 9:04:28 time: 0.662097 data_time: 0.057055 memory: 12959 loss_kpt: 330.985030 acc_pose: 0.835400 loss: 330.985030 2022/10/13 01:07:16 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:07:50 - mmengine - INFO - Epoch(train) [124][50/586] lr: 5.000000e-03 eta: 9:03:19 time: 0.687676 data_time: 0.065995 memory: 12959 loss_kpt: 333.759680 acc_pose: 0.763965 loss: 333.759680 2022/10/13 01:08:24 - mmengine - INFO - Epoch(train) [124][100/586] lr: 5.000000e-03 eta: 9:02:48 time: 0.668459 data_time: 0.053451 memory: 12959 loss_kpt: 330.490938 acc_pose: 0.827553 loss: 330.490938 2022/10/13 01:08:57 - mmengine - INFO - Epoch(train) [124][150/586] lr: 5.000000e-03 eta: 9:02:17 time: 0.668276 data_time: 0.054310 memory: 12959 loss_kpt: 325.729788 acc_pose: 0.821869 loss: 325.729788 2022/10/13 01:09:31 - mmengine - INFO - Epoch(train) [124][200/586] lr: 5.000000e-03 eta: 9:01:46 time: 0.673683 data_time: 0.055107 memory: 12959 loss_kpt: 325.894151 acc_pose: 0.882947 loss: 325.894151 2022/10/13 01:10:05 - mmengine - INFO - Epoch(train) [124][250/586] lr: 5.000000e-03 eta: 9:01:15 time: 0.676680 data_time: 0.059350 memory: 12959 loss_kpt: 330.197552 acc_pose: 0.846481 loss: 330.197552 2022/10/13 01:10:38 - mmengine - INFO - Epoch(train) [124][300/586] lr: 5.000000e-03 eta: 9:00:44 time: 0.669488 data_time: 0.051253 memory: 12959 loss_kpt: 330.172652 acc_pose: 0.884463 loss: 330.172652 2022/10/13 01:11:12 - mmengine - INFO - Epoch(train) [124][350/586] lr: 5.000000e-03 eta: 9:00:13 time: 0.673752 data_time: 0.060546 memory: 12959 loss_kpt: 329.357742 acc_pose: 0.868390 loss: 329.357742 2022/10/13 01:11:45 - mmengine - INFO - Epoch(train) [124][400/586] lr: 5.000000e-03 eta: 8:59:42 time: 0.664084 data_time: 0.055404 memory: 12959 loss_kpt: 326.742775 acc_pose: 0.767256 loss: 326.742775 2022/10/13 01:12:19 - mmengine - INFO - Epoch(train) [124][450/586] lr: 5.000000e-03 eta: 8:59:11 time: 0.676494 data_time: 0.057390 memory: 12959 loss_kpt: 333.603444 acc_pose: 0.910144 loss: 333.603444 2022/10/13 01:12:53 - mmengine - INFO - Epoch(train) [124][500/586] lr: 5.000000e-03 eta: 8:58:41 time: 0.686771 data_time: 0.057646 memory: 12959 loss_kpt: 328.141016 acc_pose: 0.798413 loss: 328.141016 2022/10/13 01:13:28 - mmengine - INFO - Epoch(train) [124][550/586] lr: 5.000000e-03 eta: 8:58:11 time: 0.693385 data_time: 0.054575 memory: 12959 loss_kpt: 331.591005 acc_pose: 0.803422 loss: 331.591005 2022/10/13 01:13:52 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:14:26 - mmengine - 
INFO - Epoch(train) [125][50/586] lr: 5.000000e-03 eta: 8:57:01 time: 0.675956 data_time: 0.074538 memory: 12959 loss_kpt: 327.921687 acc_pose: 0.841946 loss: 327.921687 2022/10/13 01:14:59 - mmengine - INFO - Epoch(train) [125][100/586] lr: 5.000000e-03 eta: 8:56:30 time: 0.660079 data_time: 0.059264 memory: 12959 loss_kpt: 332.278740 acc_pose: 0.751894 loss: 332.278740 2022/10/13 01:15:33 - mmengine - INFO - Epoch(train) [125][150/586] lr: 5.000000e-03 eta: 8:55:59 time: 0.676865 data_time: 0.062046 memory: 12959 loss_kpt: 330.574273 acc_pose: 0.876675 loss: 330.574273 2022/10/13 01:16:06 - mmengine - INFO - Epoch(train) [125][200/586] lr: 5.000000e-03 eta: 8:55:27 time: 0.656846 data_time: 0.058439 memory: 12959 loss_kpt: 337.777919 acc_pose: 0.829250 loss: 337.777919 2022/10/13 01:16:39 - mmengine - INFO - Epoch(train) [125][250/586] lr: 5.000000e-03 eta: 8:54:56 time: 0.669300 data_time: 0.055055 memory: 12959 loss_kpt: 332.863204 acc_pose: 0.817523 loss: 332.863204 2022/10/13 01:17:12 - mmengine - INFO - Epoch(train) [125][300/586] lr: 5.000000e-03 eta: 8:54:25 time: 0.664565 data_time: 0.059816 memory: 12959 loss_kpt: 330.653738 acc_pose: 0.799203 loss: 330.653738 2022/10/13 01:17:36 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:17:46 - mmengine - INFO - Epoch(train) [125][350/586] lr: 5.000000e-03 eta: 8:53:54 time: 0.668641 data_time: 0.057760 memory: 12959 loss_kpt: 328.907089 acc_pose: 0.837377 loss: 328.907089 2022/10/13 01:18:19 - mmengine - INFO - Epoch(train) [125][400/586] lr: 5.000000e-03 eta: 8:53:23 time: 0.667902 data_time: 0.055624 memory: 12959 loss_kpt: 330.655156 acc_pose: 0.784951 loss: 330.655156 2022/10/13 01:18:53 - mmengine - INFO - Epoch(train) [125][450/586] lr: 5.000000e-03 eta: 8:52:52 time: 0.677824 data_time: 0.057625 memory: 12959 loss_kpt: 332.406403 acc_pose: 0.832007 loss: 332.406403 2022/10/13 01:19:26 - mmengine - INFO - Epoch(train) [125][500/586] lr: 5.000000e-03 eta: 8:52:21 time: 0.665594 data_time: 0.053652 memory: 12959 loss_kpt: 329.170171 acc_pose: 0.881081 loss: 329.170171 2022/10/13 01:20:00 - mmengine - INFO - Epoch(train) [125][550/586] lr: 5.000000e-03 eta: 8:51:51 time: 0.679979 data_time: 0.059915 memory: 12959 loss_kpt: 333.726651 acc_pose: 0.851450 loss: 333.726651 2022/10/13 01:20:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:20:59 - mmengine - INFO - Epoch(train) [126][50/586] lr: 5.000000e-03 eta: 8:50:42 time: 0.685898 data_time: 0.067565 memory: 12959 loss_kpt: 335.291901 acc_pose: 0.817844 loss: 335.291901 2022/10/13 01:21:33 - mmengine - INFO - Epoch(train) [126][100/586] lr: 5.000000e-03 eta: 8:50:11 time: 0.679591 data_time: 0.054936 memory: 12959 loss_kpt: 331.525322 acc_pose: 0.798297 loss: 331.525322 2022/10/13 01:22:07 - mmengine - INFO - Epoch(train) [126][150/586] lr: 5.000000e-03 eta: 8:49:40 time: 0.681229 data_time: 0.062283 memory: 12959 loss_kpt: 328.349910 acc_pose: 0.890312 loss: 328.349910 2022/10/13 01:22:41 - mmengine - INFO - Epoch(train) [126][200/586] lr: 5.000000e-03 eta: 8:49:09 time: 0.674936 data_time: 0.056845 memory: 12959 loss_kpt: 329.938557 acc_pose: 0.826243 loss: 329.938557 2022/10/13 01:23:16 - mmengine - INFO - Epoch(train) [126][250/586] lr: 5.000000e-03 eta: 8:48:39 time: 0.692506 data_time: 0.059551 memory: 12959 loss_kpt: 325.899155 acc_pose: 0.894402 loss: 325.899155 2022/10/13 01:23:50 - mmengine - INFO - Epoch(train) [126][300/586] lr: 5.000000e-03 eta: 8:48:09 time: 0.694120 
data_time: 0.060220 memory: 12959 loss_kpt: 325.701699 acc_pose: 0.847118 loss: 325.701699 2022/10/13 01:24:25 - mmengine - INFO - Epoch(train) [126][350/586] lr: 5.000000e-03 eta: 8:47:39 time: 0.688441 data_time: 0.061434 memory: 12959 loss_kpt: 323.923423 acc_pose: 0.873668 loss: 323.923423 2022/10/13 01:25:00 - mmengine - INFO - Epoch(train) [126][400/586] lr: 5.000000e-03 eta: 8:47:09 time: 0.699825 data_time: 0.056963 memory: 12959 loss_kpt: 330.953070 acc_pose: 0.830906 loss: 330.953070 2022/10/13 01:25:35 - mmengine - INFO - Epoch(train) [126][450/586] lr: 5.000000e-03 eta: 8:46:39 time: 0.706092 data_time: 0.056439 memory: 12959 loss_kpt: 328.610116 acc_pose: 0.869405 loss: 328.610116 2022/10/13 01:26:09 - mmengine - INFO - Epoch(train) [126][500/586] lr: 5.000000e-03 eta: 8:46:08 time: 0.687112 data_time: 0.055962 memory: 12959 loss_kpt: 335.759073 acc_pose: 0.897760 loss: 335.759073 2022/10/13 01:26:44 - mmengine - INFO - Epoch(train) [126][550/586] lr: 5.000000e-03 eta: 8:45:38 time: 0.693801 data_time: 0.059863 memory: 12959 loss_kpt: 328.654487 acc_pose: 0.852959 loss: 328.654487 2022/10/13 01:27:09 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:27:44 - mmengine - INFO - Epoch(train) [127][50/586] lr: 5.000000e-03 eta: 8:44:30 time: 0.700761 data_time: 0.069250 memory: 12959 loss_kpt: 330.631064 acc_pose: 0.841632 loss: 330.631064 2022/10/13 01:28:18 - mmengine - INFO - Epoch(train) [127][100/586] lr: 5.000000e-03 eta: 8:43:59 time: 0.682784 data_time: 0.060028 memory: 12959 loss_kpt: 328.868473 acc_pose: 0.800326 loss: 328.868473 2022/10/13 01:28:53 - mmengine - INFO - Epoch(train) [127][150/586] lr: 5.000000e-03 eta: 8:43:29 time: 0.693382 data_time: 0.056703 memory: 12959 loss_kpt: 330.421823 acc_pose: 0.819061 loss: 330.421823 2022/10/13 01:29:02 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:29:27 - mmengine - INFO - Epoch(train) [127][200/586] lr: 5.000000e-03 eta: 8:42:58 time: 0.678119 data_time: 0.061018 memory: 12959 loss_kpt: 330.865290 acc_pose: 0.792775 loss: 330.865290 2022/10/13 01:30:01 - mmengine - INFO - Epoch(train) [127][250/586] lr: 5.000000e-03 eta: 8:42:28 time: 0.691771 data_time: 0.057516 memory: 12959 loss_kpt: 328.382996 acc_pose: 0.839223 loss: 328.382996 2022/10/13 01:30:35 - mmengine - INFO - Epoch(train) [127][300/586] lr: 5.000000e-03 eta: 8:41:57 time: 0.676350 data_time: 0.061624 memory: 12959 loss_kpt: 331.395868 acc_pose: 0.775933 loss: 331.395868 2022/10/13 01:31:10 - mmengine - INFO - Epoch(train) [127][350/586] lr: 5.000000e-03 eta: 8:41:27 time: 0.695774 data_time: 0.063286 memory: 12959 loss_kpt: 333.382154 acc_pose: 0.839325 loss: 333.382154 2022/10/13 01:31:44 - mmengine - INFO - Epoch(train) [127][400/586] lr: 5.000000e-03 eta: 8:40:56 time: 0.681006 data_time: 0.064922 memory: 12959 loss_kpt: 337.776290 acc_pose: 0.857512 loss: 337.776290 2022/10/13 01:32:19 - mmengine - INFO - Epoch(train) [127][450/586] lr: 5.000000e-03 eta: 8:40:26 time: 0.694664 data_time: 0.060058 memory: 12959 loss_kpt: 337.810372 acc_pose: 0.763309 loss: 337.810372 2022/10/13 01:32:53 - mmengine - INFO - Epoch(train) [127][500/586] lr: 5.000000e-03 eta: 8:39:55 time: 0.676082 data_time: 0.059788 memory: 12959 loss_kpt: 324.299040 acc_pose: 0.821863 loss: 324.299040 2022/10/13 01:33:26 - mmengine - INFO - Epoch(train) [127][550/586] lr: 5.000000e-03 eta: 8:39:24 time: 0.676545 data_time: 0.060556 memory: 12959 loss_kpt: 336.260473 acc_pose: 0.858273 loss: 
336.260473 2022/10/13 01:33:50 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:34:24 - mmengine - INFO - Epoch(train) [128][50/586] lr: 5.000000e-03 eta: 8:38:16 time: 0.679306 data_time: 0.069473 memory: 12959 loss_kpt: 325.124969 acc_pose: 0.822779 loss: 325.124969 2022/10/13 01:34:58 - mmengine - INFO - Epoch(train) [128][100/586] lr: 5.000000e-03 eta: 8:37:44 time: 0.666655 data_time: 0.058519 memory: 12959 loss_kpt: 330.806279 acc_pose: 0.792215 loss: 330.806279 2022/10/13 01:35:31 - mmengine - INFO - Epoch(train) [128][150/586] lr: 5.000000e-03 eta: 8:37:13 time: 0.663778 data_time: 0.060110 memory: 12959 loss_kpt: 324.964523 acc_pose: 0.800127 loss: 324.964523 2022/10/13 01:36:04 - mmengine - INFO - Epoch(train) [128][200/586] lr: 5.000000e-03 eta: 8:36:42 time: 0.664585 data_time: 0.056124 memory: 12959 loss_kpt: 333.515107 acc_pose: 0.861329 loss: 333.515107 2022/10/13 01:36:38 - mmengine - INFO - Epoch(train) [128][250/586] lr: 5.000000e-03 eta: 8:36:11 time: 0.675459 data_time: 0.057567 memory: 12959 loss_kpt: 325.494754 acc_pose: 0.843675 loss: 325.494754 2022/10/13 01:37:11 - mmengine - INFO - Epoch(train) [128][300/586] lr: 5.000000e-03 eta: 8:35:40 time: 0.673718 data_time: 0.056807 memory: 12959 loss_kpt: 333.437723 acc_pose: 0.860356 loss: 333.437723 2022/10/13 01:37:46 - mmengine - INFO - Epoch(train) [128][350/586] lr: 5.000000e-03 eta: 8:35:10 time: 0.685043 data_time: 0.063292 memory: 12959 loss_kpt: 333.603553 acc_pose: 0.805725 loss: 333.603553 2022/10/13 01:38:20 - mmengine - INFO - Epoch(train) [128][400/586] lr: 5.000000e-03 eta: 8:34:39 time: 0.692255 data_time: 0.053558 memory: 12959 loss_kpt: 333.116563 acc_pose: 0.877948 loss: 333.116563 2022/10/13 01:38:55 - mmengine - INFO - Epoch(train) [128][450/586] lr: 5.000000e-03 eta: 8:34:09 time: 0.696544 data_time: 0.060554 memory: 12959 loss_kpt: 332.935114 acc_pose: 0.781491 loss: 332.935114 2022/10/13 01:39:30 - mmengine - INFO - Epoch(train) [128][500/586] lr: 5.000000e-03 eta: 8:33:39 time: 0.690496 data_time: 0.057622 memory: 12959 loss_kpt: 333.444930 acc_pose: 0.834576 loss: 333.444930 2022/10/13 01:40:05 - mmengine - INFO - Epoch(train) [128][550/586] lr: 5.000000e-03 eta: 8:33:09 time: 0.694795 data_time: 0.055898 memory: 12959 loss_kpt: 327.904942 acc_pose: 0.818444 loss: 327.904942 2022/10/13 01:40:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:40:29 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:41:04 - mmengine - INFO - Epoch(train) [129][50/586] lr: 5.000000e-03 eta: 8:32:00 time: 0.692823 data_time: 0.070891 memory: 12959 loss_kpt: 328.408619 acc_pose: 0.797727 loss: 328.408619 2022/10/13 01:41:37 - mmengine - INFO - Epoch(train) [129][100/586] lr: 5.000000e-03 eta: 8:31:30 time: 0.675112 data_time: 0.061346 memory: 12959 loss_kpt: 330.806234 acc_pose: 0.842704 loss: 330.806234 2022/10/13 01:42:11 - mmengine - INFO - Epoch(train) [129][150/586] lr: 5.000000e-03 eta: 8:30:58 time: 0.667586 data_time: 0.062472 memory: 12959 loss_kpt: 331.116858 acc_pose: 0.885081 loss: 331.116858 2022/10/13 01:42:44 - mmengine - INFO - Epoch(train) [129][200/586] lr: 5.000000e-03 eta: 8:30:27 time: 0.670411 data_time: 0.060524 memory: 12959 loss_kpt: 330.839846 acc_pose: 0.833903 loss: 330.839846 2022/10/13 01:43:17 - mmengine - INFO - Epoch(train) [129][250/586] lr: 5.000000e-03 eta: 8:29:56 time: 0.662148 data_time: 0.060466 memory: 12959 loss_kpt: 328.606331 
acc_pose: 0.789835 loss: 328.606331 2022/10/13 01:43:51 - mmengine - INFO - Epoch(train) [129][300/586] lr: 5.000000e-03 eta: 8:29:25 time: 0.662014 data_time: 0.056343 memory: 12959 loss_kpt: 331.288474 acc_pose: 0.782260 loss: 331.288474 2022/10/13 01:44:24 - mmengine - INFO - Epoch(train) [129][350/586] lr: 5.000000e-03 eta: 8:28:53 time: 0.661545 data_time: 0.060798 memory: 12959 loss_kpt: 330.455771 acc_pose: 0.788592 loss: 330.455771 2022/10/13 01:44:58 - mmengine - INFO - Epoch(train) [129][400/586] lr: 5.000000e-03 eta: 8:28:23 time: 0.678709 data_time: 0.056209 memory: 12959 loss_kpt: 329.234335 acc_pose: 0.832120 loss: 329.234335 2022/10/13 01:45:31 - mmengine - INFO - Epoch(train) [129][450/586] lr: 5.000000e-03 eta: 8:27:52 time: 0.674670 data_time: 0.057751 memory: 12959 loss_kpt: 333.235957 acc_pose: 0.772119 loss: 333.235957 2022/10/13 01:46:04 - mmengine - INFO - Epoch(train) [129][500/586] lr: 5.000000e-03 eta: 8:27:20 time: 0.660820 data_time: 0.055258 memory: 12959 loss_kpt: 328.320218 acc_pose: 0.828600 loss: 328.320218 2022/10/13 01:46:38 - mmengine - INFO - Epoch(train) [129][550/586] lr: 5.000000e-03 eta: 8:26:49 time: 0.675481 data_time: 0.062899 memory: 12959 loss_kpt: 333.002119 acc_pose: 0.840366 loss: 333.002119 2022/10/13 01:47:02 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:47:36 - mmengine - INFO - Epoch(train) [130][50/586] lr: 5.000000e-03 eta: 8:25:42 time: 0.688578 data_time: 0.069495 memory: 12959 loss_kpt: 330.259086 acc_pose: 0.855988 loss: 330.259086 2022/10/13 01:48:10 - mmengine - INFO - Epoch(train) [130][100/586] lr: 5.000000e-03 eta: 8:25:10 time: 0.668387 data_time: 0.059147 memory: 12959 loss_kpt: 328.444205 acc_pose: 0.859515 loss: 328.444205 2022/10/13 01:48:44 - mmengine - INFO - Epoch(train) [130][150/586] lr: 5.000000e-03 eta: 8:24:40 time: 0.680749 data_time: 0.056067 memory: 12959 loss_kpt: 329.397665 acc_pose: 0.822272 loss: 329.397665 2022/10/13 01:49:18 - mmengine - INFO - Epoch(train) [130][200/586] lr: 5.000000e-03 eta: 8:24:09 time: 0.676858 data_time: 0.055234 memory: 12959 loss_kpt: 326.961606 acc_pose: 0.818681 loss: 326.961606 2022/10/13 01:49:52 - mmengine - INFO - Epoch(train) [130][250/586] lr: 5.000000e-03 eta: 8:23:38 time: 0.682398 data_time: 0.054131 memory: 12959 loss_kpt: 329.363602 acc_pose: 0.722908 loss: 329.363602 2022/10/13 01:50:25 - mmengine - INFO - Epoch(train) [130][300/586] lr: 5.000000e-03 eta: 8:23:07 time: 0.670369 data_time: 0.054567 memory: 12959 loss_kpt: 330.348085 acc_pose: 0.889431 loss: 330.348085 2022/10/13 01:51:00 - mmengine - INFO - Epoch(train) [130][350/586] lr: 5.000000e-03 eta: 8:22:37 time: 0.691335 data_time: 0.060495 memory: 12959 loss_kpt: 333.817062 acc_pose: 0.819552 loss: 333.817062 2022/10/13 01:51:34 - mmengine - INFO - Epoch(train) [130][400/586] lr: 5.000000e-03 eta: 8:22:06 time: 0.686688 data_time: 0.057158 memory: 12959 loss_kpt: 331.003293 acc_pose: 0.824903 loss: 331.003293 2022/10/13 01:51:38 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 01:52:09 - mmengine - INFO - Epoch(train) [130][450/586] lr: 5.000000e-03 eta: 8:21:36 time: 0.688021 data_time: 0.055420 memory: 12959 loss_kpt: 327.027397 acc_pose: 0.813360 loss: 327.027397 2022/10/13 01:52:43 - mmengine - INFO - Epoch(train) [130][500/586] lr: 5.000000e-03 eta: 8:21:05 time: 0.683348 data_time: 0.053926 memory: 12959 loss_kpt: 329.883490 acc_pose: 0.790943 loss: 329.883490 2022/10/13 01:53:17 - mmengine - INFO - 
Epoch(train) [130][550/586] lr: 5.000000e-03 eta: 8:20:34 time: 0.681639 data_time: 0.055567 memory: 12959 loss_kpt: 334.427688 acc_pose: 0.865343 loss: 334.427688
2022/10/13 01:53:41 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 01:53:41 - mmengine - INFO - Saving checkpoint at 130 epochs
2022/10/13 01:53:59 - mmengine - INFO - Epoch(val) [130][50/407] eta: 0:01:35 time: 0.268204 data_time: 0.012847 memory: 12959
2022/10/13 01:54:12 - mmengine - INFO - Epoch(val) [130][100/407] eta: 0:01:21 time: 0.264124 data_time: 0.008218 memory: 2407
2022/10/13 01:54:25 - mmengine - INFO - Epoch(val) [130][150/407] eta: 0:01:08 time: 0.265076 data_time: 0.008483 memory: 2407
2022/10/13 01:54:38 - mmengine - INFO - Epoch(val) [130][200/407] eta: 0:00:54 time: 0.261458 data_time: 0.007886 memory: 2407
2022/10/13 01:54:51 - mmengine - INFO - Epoch(val) [130][250/407] eta: 0:00:41 time: 0.264565 data_time: 0.008257 memory: 2407
2022/10/13 01:55:05 - mmengine - INFO - Epoch(val) [130][300/407] eta: 0:00:28 time: 0.263019 data_time: 0.007952 memory: 2407
2022/10/13 01:55:18 - mmengine - INFO - Epoch(val) [130][350/407] eta: 0:00:15 time: 0.266719 data_time: 0.008191 memory: 2407
2022/10/13 01:55:31 - mmengine - INFO - Epoch(val) [130][400/407] eta: 0:00:01 time: 0.256841 data_time: 0.007576 memory: 2407
2022/10/13 01:55:45 - mmengine - INFO - Evaluating CocoMetric...
2022/10/13 01:56:01 - mmengine - INFO - Epoch(val) [130][407/407] coco/AP: 0.724475 coco/AP .5: 0.892779 coco/AP .75: 0.799990 coco/AP (M): 0.692900 coco/AP (L): 0.786202 coco/AR: 0.790664 coco/AR .5: 0.935926 coco/AR .75: 0.853589 coco/AR (M): 0.747064 coco/AR (L): 0.850725
2022/10/13 01:56:35 - mmengine - INFO - Epoch(train) [131][50/586] lr: 5.000000e-03 eta: 8:19:26 time: 0.689175 data_time: 0.069844 memory: 12959 loss_kpt: 328.316703 acc_pose: 0.849561 loss: 328.316703
2022/10/13 01:57:08 - mmengine - INFO - Epoch(train) [131][100/586] lr: 5.000000e-03 eta: 8:18:55 time: 0.663275 data_time: 0.054051 memory: 12959 loss_kpt: 329.639716 acc_pose: 0.829115 loss: 329.639716
2022/10/13 01:57:42 - mmengine - INFO - Epoch(train) [131][150/586] lr: 5.000000e-03 eta: 8:18:24 time: 0.672840 data_time: 0.054786 memory: 12959 loss_kpt: 327.774295 acc_pose: 0.841722 loss: 327.774295
2022/10/13 01:58:16 - mmengine - INFO - Epoch(train) [131][200/586] lr: 5.000000e-03 eta: 8:17:53 time: 0.670809 data_time: 0.052217 memory: 12959 loss_kpt: 328.390953 acc_pose: 0.873492 loss: 328.390953
2022/10/13 01:58:49 - mmengine - INFO - Epoch(train) [131][250/586] lr: 5.000000e-03 eta: 8:17:22 time: 0.673644 data_time: 0.055555 memory: 12959 loss_kpt: 323.034581 acc_pose: 0.846666 loss: 323.034581
2022/10/13 01:59:23 - mmengine - INFO - Epoch(train) [131][300/586] lr: 5.000000e-03 eta: 8:16:51 time: 0.670256 data_time: 0.054712 memory: 12959 loss_kpt: 327.664489 acc_pose: 0.834990 loss: 327.664489
2022/10/13 01:59:57 - mmengine - INFO - Epoch(train) [131][350/586] lr: 5.000000e-03 eta: 8:16:20 time: 0.673680 data_time: 0.058301 memory: 12959 loss_kpt: 329.691573 acc_pose: 0.839935 loss: 329.691573
2022/10/13 02:00:30 - mmengine - INFO - Epoch(train) [131][400/586] lr: 5.000000e-03 eta: 8:15:49 time: 0.669305 data_time: 0.051836 memory: 12959 loss_kpt: 324.421473 acc_pose: 0.862428 loss: 324.421473
2022/10/13 02:01:05 - mmengine - INFO - Epoch(train) [131][450/586] lr: 5.000000e-03 eta: 8:15:19 time: 0.691614 data_time: 0.057697 memory: 12959 loss_kpt: 327.204380 acc_pose: 0.807219 loss: 327.204380
2022/10/13 02:01:39 -
mmengine - INFO - Epoch(train) [131][500/586] lr: 5.000000e-03 eta: 8:14:48 time: 0.696239 data_time: 0.058166 memory: 12959 loss_kpt: 332.346636 acc_pose: 0.816003 loss: 332.346636 2022/10/13 02:02:14 - mmengine - INFO - Epoch(train) [131][550/586] lr: 5.000000e-03 eta: 8:14:18 time: 0.689692 data_time: 0.054876 memory: 12959 loss_kpt: 325.019801 acc_pose: 0.823144 loss: 325.019801 2022/10/13 02:02:39 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:03:14 - mmengine - INFO - Epoch(train) [132][50/586] lr: 5.000000e-03 eta: 8:13:11 time: 0.702014 data_time: 0.071648 memory: 12959 loss_kpt: 335.674483 acc_pose: 0.776667 loss: 335.674483 2022/10/13 02:03:48 - mmengine - INFO - Epoch(train) [132][100/586] lr: 5.000000e-03 eta: 8:12:40 time: 0.686689 data_time: 0.053745 memory: 12959 loss_kpt: 329.356161 acc_pose: 0.829523 loss: 329.356161 2022/10/13 02:04:22 - mmengine - INFO - Epoch(train) [132][150/586] lr: 5.000000e-03 eta: 8:12:09 time: 0.684423 data_time: 0.062440 memory: 12959 loss_kpt: 325.065120 acc_pose: 0.738421 loss: 325.065120 2022/10/13 02:04:56 - mmengine - INFO - Epoch(train) [132][200/586] lr: 5.000000e-03 eta: 8:11:39 time: 0.676617 data_time: 0.052645 memory: 12959 loss_kpt: 328.667270 acc_pose: 0.815168 loss: 328.667270 2022/10/13 02:05:19 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:05:30 - mmengine - INFO - Epoch(train) [132][250/586] lr: 5.000000e-03 eta: 8:11:08 time: 0.682488 data_time: 0.060054 memory: 12959 loss_kpt: 326.055508 acc_pose: 0.778074 loss: 326.055508 2022/10/13 02:06:04 - mmengine - INFO - Epoch(train) [132][300/586] lr: 5.000000e-03 eta: 8:10:37 time: 0.676528 data_time: 0.052514 memory: 12959 loss_kpt: 321.950679 acc_pose: 0.888725 loss: 321.950679 2022/10/13 02:06:39 - mmengine - INFO - Epoch(train) [132][350/586] lr: 5.000000e-03 eta: 8:10:06 time: 0.689591 data_time: 0.058838 memory: 12959 loss_kpt: 324.529280 acc_pose: 0.850650 loss: 324.529280 2022/10/13 02:07:13 - mmengine - INFO - Epoch(train) [132][400/586] lr: 5.000000e-03 eta: 8:09:36 time: 0.689337 data_time: 0.052007 memory: 12959 loss_kpt: 326.016945 acc_pose: 0.859669 loss: 326.016945 2022/10/13 02:07:47 - mmengine - INFO - Epoch(train) [132][450/586] lr: 5.000000e-03 eta: 8:09:05 time: 0.684886 data_time: 0.058552 memory: 12959 loss_kpt: 325.664018 acc_pose: 0.864662 loss: 325.664018 2022/10/13 02:08:21 - mmengine - INFO - Epoch(train) [132][500/586] lr: 5.000000e-03 eta: 8:08:34 time: 0.672039 data_time: 0.054450 memory: 12959 loss_kpt: 328.335020 acc_pose: 0.854837 loss: 328.335020 2022/10/13 02:08:55 - mmengine - INFO - Epoch(train) [132][550/586] lr: 5.000000e-03 eta: 8:08:03 time: 0.681091 data_time: 0.056509 memory: 12959 loss_kpt: 327.035894 acc_pose: 0.815343 loss: 327.035894 2022/10/13 02:09:19 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:09:53 - mmengine - INFO - Epoch(train) [133][50/586] lr: 5.000000e-03 eta: 8:06:56 time: 0.683021 data_time: 0.073105 memory: 12959 loss_kpt: 328.444910 acc_pose: 0.844511 loss: 328.444910 2022/10/13 02:10:27 - mmengine - INFO - Epoch(train) [133][100/586] lr: 5.000000e-03 eta: 8:06:25 time: 0.677311 data_time: 0.054179 memory: 12959 loss_kpt: 326.449395 acc_pose: 0.882210 loss: 326.449395 2022/10/13 02:11:02 - mmengine - INFO - Epoch(train) [133][150/586] lr: 5.000000e-03 eta: 8:05:55 time: 0.692605 data_time: 0.058237 memory: 12959 loss_kpt: 332.697569 acc_pose: 0.797851 loss: 332.697569 
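The metrics buried in the records above can be pulled out of a saved log programmatically. Below is a minimal sketch, not part of the training run: the log file name is hypothetical, and the regular expressions only assume the record layout visible above (Epoch(train) records carrying loss_kpt/acc_pose, and the final Epoch(val) [..][407/407] summary record carrying coco/AP). Applied to this log it would report coco/AP 0.726177 at epoch 120 and 0.724475 at epoch 130, which is consistent with best_coco/AP_epoch_120.pth remaining the best checkpoint after the epoch-130 validation.

import re

# Minimal sketch for extracting metrics from an mmengine-style training log.
# Assumptions: the log is saved locally (the path below is hypothetical) and
# each record follows the layout shown in this log.
TRAIN_RE = re.compile(
    r"Epoch\(train\)\s+\[(\d+)\]\[(\d+)/\d+\].*?"
    r"loss_kpt:\s+([\d.]+)\s+acc_pose:\s+([\d.]+)"
)
# Only the final validation summary record contains coco/AP.
VAL_RE = re.compile(r"Epoch\(val\)\s+\[(\d+)\]\[\d+/\d+\]\s+coco/AP:\s+([\d.]+)")

def parse_log(path="train.log"):  # hypothetical file name
    """Return per-iteration train metrics and per-validation coco/AP values."""
    train, val = [], []
    with open(path) as f:
        for line in f:
            m = TRAIN_RE.search(line)
            if m:
                epoch, it, loss_kpt, acc_pose = m.groups()
                train.append((int(epoch), int(it), float(loss_kpt), float(acc_pose)))
                continue
            m = VAL_RE.search(line)
            if m:
                val.append((int(m.group(1)), float(m.group(2))))
    return train, val

if __name__ == "__main__":
    train, val = parse_log()
    for epoch, ap in val:
        print(f"epoch {epoch}: coco/AP {ap:.6f}")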
2022/10/13 02:11:36 - mmengine - INFO - Epoch(train) [133][200/586] lr: 5.000000e-03 eta: 8:05:24 time: 0.682212 data_time: 0.055034 memory: 12959 loss_kpt: 325.972094 acc_pose: 0.840250 loss: 325.972094 2022/10/13 02:12:10 - mmengine - INFO - Epoch(train) [133][250/586] lr: 5.000000e-03 eta: 8:04:53 time: 0.684174 data_time: 0.057645 memory: 12959 loss_kpt: 328.336664 acc_pose: 0.861102 loss: 328.336664 2022/10/13 02:12:44 - mmengine - INFO - Epoch(train) [133][300/586] lr: 5.000000e-03 eta: 8:04:22 time: 0.681614 data_time: 0.056312 memory: 12959 loss_kpt: 330.596332 acc_pose: 0.850255 loss: 330.596332 2022/10/13 02:13:18 - mmengine - INFO - Epoch(train) [133][350/586] lr: 5.000000e-03 eta: 8:03:51 time: 0.666474 data_time: 0.055625 memory: 12959 loss_kpt: 327.789692 acc_pose: 0.820682 loss: 327.789692 2022/10/13 02:13:51 - mmengine - INFO - Epoch(train) [133][400/586] lr: 5.000000e-03 eta: 8:03:20 time: 0.669799 data_time: 0.063731 memory: 12959 loss_kpt: 331.788032 acc_pose: 0.790128 loss: 331.788032 2022/10/13 02:14:26 - mmengine - INFO - Epoch(train) [133][450/586] lr: 5.000000e-03 eta: 8:02:49 time: 0.689121 data_time: 0.057433 memory: 12959 loss_kpt: 331.552912 acc_pose: 0.848101 loss: 331.552912 2022/10/13 02:15:00 - mmengine - INFO - Epoch(train) [133][500/586] lr: 5.000000e-03 eta: 8:02:18 time: 0.676423 data_time: 0.052656 memory: 12959 loss_kpt: 326.663348 acc_pose: 0.728053 loss: 326.663348 2022/10/13 02:15:33 - mmengine - INFO - Epoch(train) [133][550/586] lr: 5.000000e-03 eta: 8:01:47 time: 0.675744 data_time: 0.053969 memory: 12959 loss_kpt: 328.480419 acc_pose: 0.825351 loss: 328.480419 2022/10/13 02:15:58 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:16:32 - mmengine - INFO - Epoch(train) [134][50/586] lr: 5.000000e-03 eta: 8:00:40 time: 0.681278 data_time: 0.065161 memory: 12959 loss_kpt: 330.747570 acc_pose: 0.850299 loss: 330.747570 2022/10/13 02:16:40 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:17:05 - mmengine - INFO - Epoch(train) [134][100/586] lr: 5.000000e-03 eta: 8:00:09 time: 0.656914 data_time: 0.056897 memory: 12959 loss_kpt: 328.749857 acc_pose: 0.713909 loss: 328.749857 2022/10/13 02:17:38 - mmengine - INFO - Epoch(train) [134][150/586] lr: 5.000000e-03 eta: 7:59:37 time: 0.663202 data_time: 0.053881 memory: 12959 loss_kpt: 324.531022 acc_pose: 0.812868 loss: 324.531022 2022/10/13 02:18:11 - mmengine - INFO - Epoch(train) [134][200/586] lr: 5.000000e-03 eta: 7:59:06 time: 0.667434 data_time: 0.060925 memory: 12959 loss_kpt: 332.258817 acc_pose: 0.766778 loss: 332.258817 2022/10/13 02:18:45 - mmengine - INFO - Epoch(train) [134][250/586] lr: 5.000000e-03 eta: 7:58:35 time: 0.675418 data_time: 0.057486 memory: 12959 loss_kpt: 327.437333 acc_pose: 0.823413 loss: 327.437333 2022/10/13 02:19:19 - mmengine - INFO - Epoch(train) [134][300/586] lr: 5.000000e-03 eta: 7:58:04 time: 0.670427 data_time: 0.060550 memory: 12959 loss_kpt: 326.394140 acc_pose: 0.829475 loss: 326.394140 2022/10/13 02:19:52 - mmengine - INFO - Epoch(train) [134][350/586] lr: 5.000000e-03 eta: 7:57:33 time: 0.665700 data_time: 0.058354 memory: 12959 loss_kpt: 330.915104 acc_pose: 0.827676 loss: 330.915104 2022/10/13 02:20:25 - mmengine - INFO - Epoch(train) [134][400/586] lr: 5.000000e-03 eta: 7:57:02 time: 0.662807 data_time: 0.060835 memory: 12959 loss_kpt: 335.148801 acc_pose: 0.770927 loss: 335.148801 2022/10/13 02:20:59 - mmengine - INFO - Epoch(train) [134][450/586] lr: 
5.000000e-03 eta: 7:56:30 time: 0.670339 data_time: 0.059236 memory: 12959 loss_kpt: 320.342338 acc_pose: 0.828256 loss: 320.342338 2022/10/13 02:21:33 - mmengine - INFO - Epoch(train) [134][500/586] lr: 5.000000e-03 eta: 7:56:00 time: 0.682078 data_time: 0.059604 memory: 12959 loss_kpt: 331.673156 acc_pose: 0.904023 loss: 331.673156 2022/10/13 02:22:07 - mmengine - INFO - Epoch(train) [134][550/586] lr: 5.000000e-03 eta: 7:55:29 time: 0.681474 data_time: 0.059656 memory: 12959 loss_kpt: 328.734487 acc_pose: 0.761368 loss: 328.734487 2022/10/13 02:22:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:23:06 - mmengine - INFO - Epoch(train) [135][50/586] lr: 5.000000e-03 eta: 7:54:22 time: 0.687949 data_time: 0.070814 memory: 12959 loss_kpt: 320.849775 acc_pose: 0.796423 loss: 320.849775 2022/10/13 02:23:39 - mmengine - INFO - Epoch(train) [135][100/586] lr: 5.000000e-03 eta: 7:53:51 time: 0.671412 data_time: 0.058003 memory: 12959 loss_kpt: 328.708691 acc_pose: 0.845725 loss: 328.708691 2022/10/13 02:24:13 - mmengine - INFO - Epoch(train) [135][150/586] lr: 5.000000e-03 eta: 7:53:20 time: 0.677871 data_time: 0.056956 memory: 12959 loss_kpt: 331.595873 acc_pose: 0.805951 loss: 331.595873 2022/10/13 02:24:47 - mmengine - INFO - Epoch(train) [135][200/586] lr: 5.000000e-03 eta: 7:52:49 time: 0.671824 data_time: 0.054091 memory: 12959 loss_kpt: 328.914733 acc_pose: 0.871535 loss: 328.914733 2022/10/13 02:25:21 - mmengine - INFO - Epoch(train) [135][250/586] lr: 5.000000e-03 eta: 7:52:18 time: 0.680447 data_time: 0.059530 memory: 12959 loss_kpt: 328.119353 acc_pose: 0.865363 loss: 328.119353 2022/10/13 02:25:55 - mmengine - INFO - Epoch(train) [135][300/586] lr: 5.000000e-03 eta: 7:51:47 time: 0.675741 data_time: 0.056820 memory: 12959 loss_kpt: 330.481927 acc_pose: 0.779498 loss: 330.481927 2022/10/13 02:26:29 - mmengine - INFO - Epoch(train) [135][350/586] lr: 5.000000e-03 eta: 7:51:16 time: 0.688038 data_time: 0.059939 memory: 12959 loss_kpt: 325.388111 acc_pose: 0.825905 loss: 325.388111 2022/10/13 02:27:03 - mmengine - INFO - Epoch(train) [135][400/586] lr: 5.000000e-03 eta: 7:50:46 time: 0.681967 data_time: 0.060665 memory: 12959 loss_kpt: 320.788275 acc_pose: 0.745597 loss: 320.788275 2022/10/13 02:27:37 - mmengine - INFO - Epoch(train) [135][450/586] lr: 5.000000e-03 eta: 7:50:15 time: 0.679875 data_time: 0.058595 memory: 12959 loss_kpt: 331.858045 acc_pose: 0.864311 loss: 331.858045 2022/10/13 02:27:55 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:28:11 - mmengine - INFO - Epoch(train) [135][500/586] lr: 5.000000e-03 eta: 7:49:44 time: 0.679321 data_time: 0.056856 memory: 12959 loss_kpt: 331.545073 acc_pose: 0.847142 loss: 331.545073 2022/10/13 02:28:47 - mmengine - INFO - Epoch(train) [135][550/586] lr: 5.000000e-03 eta: 7:49:14 time: 0.715943 data_time: 0.059849 memory: 12959 loss_kpt: 330.142544 acc_pose: 0.839112 loss: 330.142544 2022/10/13 02:29:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:29:46 - mmengine - INFO - Epoch(train) [136][50/586] lr: 5.000000e-03 eta: 7:48:07 time: 0.688106 data_time: 0.068322 memory: 12959 loss_kpt: 323.320941 acc_pose: 0.782443 loss: 323.320941 2022/10/13 02:30:20 - mmengine - INFO - Epoch(train) [136][100/586] lr: 5.000000e-03 eta: 7:47:36 time: 0.672180 data_time: 0.055882 memory: 12959 loss_kpt: 327.859724 acc_pose: 0.846218 loss: 327.859724 2022/10/13 02:30:54 - mmengine - INFO - Epoch(train) 
[136][150/586] lr: 5.000000e-03 eta: 7:47:06 time: 0.682827 data_time: 0.055733 memory: 12959 loss_kpt: 325.299861 acc_pose: 0.861007 loss: 325.299861 2022/10/13 02:31:29 - mmengine - INFO - Epoch(train) [136][200/586] lr: 5.000000e-03 eta: 7:46:35 time: 0.689732 data_time: 0.059130 memory: 12959 loss_kpt: 334.014954 acc_pose: 0.837790 loss: 334.014954 2022/10/13 02:32:03 - mmengine - INFO - Epoch(train) [136][250/586] lr: 5.000000e-03 eta: 7:46:04 time: 0.684598 data_time: 0.060052 memory: 12959 loss_kpt: 330.996097 acc_pose: 0.739483 loss: 330.996097 2022/10/13 02:32:37 - mmengine - INFO - Epoch(train) [136][300/586] lr: 5.000000e-03 eta: 7:45:33 time: 0.679430 data_time: 0.058705 memory: 12959 loss_kpt: 326.902953 acc_pose: 0.821704 loss: 326.902953 2022/10/13 02:33:11 - mmengine - INFO - Epoch(train) [136][350/586] lr: 5.000000e-03 eta: 7:45:02 time: 0.683045 data_time: 0.058903 memory: 12959 loss_kpt: 329.210349 acc_pose: 0.832339 loss: 329.210349 2022/10/13 02:33:45 - mmengine - INFO - Epoch(train) [136][400/586] lr: 5.000000e-03 eta: 7:44:32 time: 0.682692 data_time: 0.054873 memory: 12959 loss_kpt: 332.954189 acc_pose: 0.885559 loss: 332.954189 2022/10/13 02:34:20 - mmengine - INFO - Epoch(train) [136][450/586] lr: 5.000000e-03 eta: 7:44:01 time: 0.687763 data_time: 0.061796 memory: 12959 loss_kpt: 326.352742 acc_pose: 0.782434 loss: 326.352742 2022/10/13 02:34:53 - mmengine - INFO - Epoch(train) [136][500/586] lr: 5.000000e-03 eta: 7:43:30 time: 0.672054 data_time: 0.056830 memory: 12959 loss_kpt: 326.274044 acc_pose: 0.874364 loss: 326.274044 2022/10/13 02:35:29 - mmengine - INFO - Epoch(train) [136][550/586] lr: 5.000000e-03 eta: 7:43:00 time: 0.715649 data_time: 0.054413 memory: 12959 loss_kpt: 328.716028 acc_pose: 0.859505 loss: 328.716028 2022/10/13 02:35:54 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:36:29 - mmengine - INFO - Epoch(train) [137][50/586] lr: 5.000000e-03 eta: 7:41:54 time: 0.711056 data_time: 0.068484 memory: 12959 loss_kpt: 329.485392 acc_pose: 0.835140 loss: 329.485392 2022/10/13 02:37:04 - mmengine - INFO - Epoch(train) [137][100/586] lr: 5.000000e-03 eta: 7:41:24 time: 0.689276 data_time: 0.053368 memory: 12959 loss_kpt: 326.194886 acc_pose: 0.798328 loss: 326.194886 2022/10/13 02:37:39 - mmengine - INFO - Epoch(train) [137][150/586] lr: 5.000000e-03 eta: 7:40:53 time: 0.696737 data_time: 0.059806 memory: 12959 loss_kpt: 327.878428 acc_pose: 0.806893 loss: 327.878428 2022/10/13 02:38:13 - mmengine - INFO - Epoch(train) [137][200/586] lr: 5.000000e-03 eta: 7:40:23 time: 0.695762 data_time: 0.058960 memory: 12959 loss_kpt: 319.505090 acc_pose: 0.840430 loss: 319.505090 2022/10/13 02:38:49 - mmengine - INFO - Epoch(train) [137][250/586] lr: 5.000000e-03 eta: 7:39:53 time: 0.711566 data_time: 0.061943 memory: 12959 loss_kpt: 325.225123 acc_pose: 0.847609 loss: 325.225123 2022/10/13 02:39:24 - mmengine - INFO - Epoch(train) [137][300/586] lr: 5.000000e-03 eta: 7:39:22 time: 0.696262 data_time: 0.050485 memory: 12959 loss_kpt: 326.028784 acc_pose: 0.852619 loss: 326.028784 2022/10/13 02:39:26 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:39:58 - mmengine - INFO - Epoch(train) [137][350/586] lr: 5.000000e-03 eta: 7:38:51 time: 0.692695 data_time: 0.054861 memory: 12959 loss_kpt: 328.147505 acc_pose: 0.828459 loss: 328.147505 2022/10/13 02:40:33 - mmengine - INFO - Epoch(train) [137][400/586] lr: 5.000000e-03 eta: 7:38:21 time: 0.685990 data_time: 0.054819 
memory: 12959 loss_kpt: 329.404476 acc_pose: 0.885980 loss: 329.404476 2022/10/13 02:41:07 - mmengine - INFO - Epoch(train) [137][450/586] lr: 5.000000e-03 eta: 7:37:50 time: 0.694681 data_time: 0.058591 memory: 12959 loss_kpt: 326.976315 acc_pose: 0.847303 loss: 326.976315 2022/10/13 02:41:42 - mmengine - INFO - Epoch(train) [137][500/586] lr: 5.000000e-03 eta: 7:37:20 time: 0.691210 data_time: 0.057098 memory: 12959 loss_kpt: 322.123039 acc_pose: 0.877755 loss: 322.123039 2022/10/13 02:42:16 - mmengine - INFO - Epoch(train) [137][550/586] lr: 5.000000e-03 eta: 7:36:49 time: 0.684976 data_time: 0.064353 memory: 12959 loss_kpt: 325.541466 acc_pose: 0.798050 loss: 325.541466 2022/10/13 02:42:41 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:43:16 - mmengine - INFO - Epoch(train) [138][50/586] lr: 5.000000e-03 eta: 7:35:43 time: 0.701162 data_time: 0.072220 memory: 12959 loss_kpt: 325.027687 acc_pose: 0.805458 loss: 325.027687 2022/10/13 02:43:50 - mmengine - INFO - Epoch(train) [138][100/586] lr: 5.000000e-03 eta: 7:35:12 time: 0.684207 data_time: 0.054027 memory: 12959 loss_kpt: 327.762686 acc_pose: 0.859981 loss: 327.762686 2022/10/13 02:44:24 - mmengine - INFO - Epoch(train) [138][150/586] lr: 5.000000e-03 eta: 7:34:41 time: 0.684252 data_time: 0.057147 memory: 12959 loss_kpt: 319.904039 acc_pose: 0.804634 loss: 319.904039 2022/10/13 02:44:59 - mmengine - INFO - Epoch(train) [138][200/586] lr: 5.000000e-03 eta: 7:34:11 time: 0.701647 data_time: 0.058034 memory: 12959 loss_kpt: 332.028510 acc_pose: 0.882768 loss: 332.028510 2022/10/13 02:45:33 - mmengine - INFO - Epoch(train) [138][250/586] lr: 5.000000e-03 eta: 7:33:40 time: 0.678024 data_time: 0.058028 memory: 12959 loss_kpt: 324.571782 acc_pose: 0.835155 loss: 324.571782 2022/10/13 02:46:08 - mmengine - INFO - Epoch(train) [138][300/586] lr: 5.000000e-03 eta: 7:33:09 time: 0.692948 data_time: 0.052985 memory: 12959 loss_kpt: 320.729957 acc_pose: 0.806281 loss: 320.729957 2022/10/13 02:46:43 - mmengine - INFO - Epoch(train) [138][350/586] lr: 5.000000e-03 eta: 7:32:39 time: 0.690415 data_time: 0.060809 memory: 12959 loss_kpt: 322.783120 acc_pose: 0.797534 loss: 322.783120 2022/10/13 02:47:18 - mmengine - INFO - Epoch(train) [138][400/586] lr: 5.000000e-03 eta: 7:32:08 time: 0.702822 data_time: 0.054467 memory: 12959 loss_kpt: 324.291109 acc_pose: 0.767218 loss: 324.291109 2022/10/13 02:47:52 - mmengine - INFO - Epoch(train) [138][450/586] lr: 5.000000e-03 eta: 7:31:38 time: 0.685343 data_time: 0.055921 memory: 12959 loss_kpt: 325.311647 acc_pose: 0.882095 loss: 325.311647 2022/10/13 02:48:26 - mmengine - INFO - Epoch(train) [138][500/586] lr: 5.000000e-03 eta: 7:31:07 time: 0.685928 data_time: 0.051658 memory: 12959 loss_kpt: 324.123331 acc_pose: 0.833940 loss: 324.123331 2022/10/13 02:49:01 - mmengine - INFO - Epoch(train) [138][550/586] lr: 5.000000e-03 eta: 7:30:36 time: 0.695604 data_time: 0.058947 memory: 12959 loss_kpt: 325.556573 acc_pose: 0.813669 loss: 325.556573 2022/10/13 02:49:26 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:50:00 - mmengine - INFO - Epoch(train) [139][50/586] lr: 5.000000e-03 eta: 7:29:30 time: 0.687118 data_time: 0.068591 memory: 12959 loss_kpt: 329.242838 acc_pose: 0.875616 loss: 329.242838 2022/10/13 02:50:34 - mmengine - INFO - Epoch(train) [139][100/586] lr: 5.000000e-03 eta: 7:28:59 time: 0.684822 data_time: 0.055130 memory: 12959 loss_kpt: 326.676117 acc_pose: 0.833212 loss: 326.676117 2022/10/13 
02:50:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:51:08 - mmengine - INFO - Epoch(train) [139][150/586] lr: 5.000000e-03 eta: 7:28:29 time: 0.681796 data_time: 0.056447 memory: 12959 loss_kpt: 324.466219 acc_pose: 0.895317 loss: 324.466219 2022/10/13 02:51:42 - mmengine - INFO - Epoch(train) [139][200/586] lr: 5.000000e-03 eta: 7:27:57 time: 0.673810 data_time: 0.055564 memory: 12959 loss_kpt: 326.059268 acc_pose: 0.913360 loss: 326.059268 2022/10/13 02:52:17 - mmengine - INFO - Epoch(train) [139][250/586] lr: 5.000000e-03 eta: 7:27:27 time: 0.693332 data_time: 0.060794 memory: 12959 loss_kpt: 325.576715 acc_pose: 0.848466 loss: 325.576715 2022/10/13 02:52:51 - mmengine - INFO - Epoch(train) [139][300/586] lr: 5.000000e-03 eta: 7:26:56 time: 0.679646 data_time: 0.055016 memory: 12959 loss_kpt: 323.168688 acc_pose: 0.835036 loss: 323.168688 2022/10/13 02:53:25 - mmengine - INFO - Epoch(train) [139][350/586] lr: 5.000000e-03 eta: 7:26:25 time: 0.678569 data_time: 0.055178 memory: 12959 loss_kpt: 328.383922 acc_pose: 0.883366 loss: 328.383922 2022/10/13 02:53:59 - mmengine - INFO - Epoch(train) [139][400/586] lr: 5.000000e-03 eta: 7:25:54 time: 0.683792 data_time: 0.060264 memory: 12959 loss_kpt: 332.632448 acc_pose: 0.770068 loss: 332.632448 2022/10/13 02:54:33 - mmengine - INFO - Epoch(train) [139][450/586] lr: 5.000000e-03 eta: 7:25:23 time: 0.688380 data_time: 0.059411 memory: 12959 loss_kpt: 329.762017 acc_pose: 0.837425 loss: 329.762017 2022/10/13 02:55:07 - mmengine - INFO - Epoch(train) [139][500/586] lr: 5.000000e-03 eta: 7:24:52 time: 0.676615 data_time: 0.057107 memory: 12959 loss_kpt: 326.049396 acc_pose: 0.799931 loss: 326.049396 2022/10/13 02:55:41 - mmengine - INFO - Epoch(train) [139][550/586] lr: 5.000000e-03 eta: 7:24:21 time: 0.680550 data_time: 0.056998 memory: 12959 loss_kpt: 320.037159 acc_pose: 0.751306 loss: 320.037159 2022/10/13 02:56:05 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 02:56:40 - mmengine - INFO - Epoch(train) [140][50/586] lr: 5.000000e-03 eta: 7:23:16 time: 0.693179 data_time: 0.067955 memory: 12959 loss_kpt: 324.771972 acc_pose: 0.834858 loss: 324.771972 2022/10/13 02:57:14 - mmengine - INFO - Epoch(train) [140][100/586] lr: 5.000000e-03 eta: 7:22:45 time: 0.683915 data_time: 0.055944 memory: 12959 loss_kpt: 318.333192 acc_pose: 0.876268 loss: 318.333192 2022/10/13 02:57:49 - mmengine - INFO - Epoch(train) [140][150/586] lr: 5.000000e-03 eta: 7:22:14 time: 0.688764 data_time: 0.062098 memory: 12959 loss_kpt: 333.632842 acc_pose: 0.827891 loss: 333.632842 2022/10/13 02:58:23 - mmengine - INFO - Epoch(train) [140][200/586] lr: 5.000000e-03 eta: 7:21:43 time: 0.680049 data_time: 0.050489 memory: 12959 loss_kpt: 328.582648 acc_pose: 0.850402 loss: 328.582648 2022/10/13 02:58:57 - mmengine - INFO - Epoch(train) [140][250/586] lr: 5.000000e-03 eta: 7:21:12 time: 0.680802 data_time: 0.059824 memory: 12959 loss_kpt: 321.503730 acc_pose: 0.829696 loss: 321.503730 2022/10/13 02:59:31 - mmengine - INFO - Epoch(train) [140][300/586] lr: 5.000000e-03 eta: 7:20:41 time: 0.680603 data_time: 0.058920 memory: 12959 loss_kpt: 325.746556 acc_pose: 0.822979 loss: 325.746556 2022/10/13 03:00:05 - mmengine - INFO - Epoch(train) [140][350/586] lr: 5.000000e-03 eta: 7:20:10 time: 0.681324 data_time: 0.054957 memory: 12959 loss_kpt: 329.022579 acc_pose: 0.852447 loss: 329.022579 2022/10/13 03:00:38 - mmengine - INFO - Epoch(train) [140][400/586] lr: 5.000000e-03 eta: 
7:19:39 time: 0.668717 data_time: 0.051846 memory: 12959 loss_kpt: 324.381125 acc_pose: 0.884233 loss: 324.381125
2022/10/13 03:01:13 - mmengine - INFO - Epoch(train) [140][450/586] lr: 5.000000e-03 eta: 7:19:08 time: 0.685940 data_time: 0.058711 memory: 12959 loss_kpt: 330.203259 acc_pose: 0.873149 loss: 330.203259
2022/10/13 03:01:47 - mmengine - INFO - Epoch(train) [140][500/586] lr: 5.000000e-03 eta: 7:18:37 time: 0.676853 data_time: 0.051250 memory: 12959 loss_kpt: 323.211581 acc_pose: 0.807365 loss: 323.211581
2022/10/13 03:02:18 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 03:02:20 - mmengine - INFO - Epoch(train) [140][550/586] lr: 5.000000e-03 eta: 7:18:06 time: 0.675781 data_time: 0.059378 memory: 12959 loss_kpt: 324.487610 acc_pose: 0.821111 loss: 324.487610
2022/10/13 03:02:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 03:02:45 - mmengine - INFO - Saving checkpoint at 140 epochs
2022/10/13 03:03:03 - mmengine - INFO - Epoch(val) [140][50/407] eta: 0:01:39 time: 0.278098 data_time: 0.012147 memory: 12959
2022/10/13 03:03:16 - mmengine - INFO - Epoch(val) [140][100/407] eta: 0:01:19 time: 0.259498 data_time: 0.007316 memory: 2407
2022/10/13 03:03:29 - mmengine - INFO - Epoch(val) [140][150/407] eta: 0:01:06 time: 0.258018 data_time: 0.007745 memory: 2407
2022/10/13 03:03:42 - mmengine - INFO - Epoch(val) [140][200/407] eta: 0:00:53 time: 0.259181 data_time: 0.007585 memory: 2407
2022/10/13 03:03:54 - mmengine - INFO - Epoch(val) [140][250/407] eta: 0:00:40 time: 0.259227 data_time: 0.007554 memory: 2407
2022/10/13 03:04:08 - mmengine - INFO - Epoch(val) [140][300/407] eta: 0:00:27 time: 0.261418 data_time: 0.010425 memory: 2407
2022/10/13 03:04:21 - mmengine - INFO - Epoch(val) [140][350/407] eta: 0:00:14 time: 0.259153 data_time: 0.007599 memory: 2407
2022/10/13 03:04:34 - mmengine - INFO - Epoch(val) [140][400/407] eta: 0:00:01 time: 0.262696 data_time: 0.007517 memory: 2407
2022/10/13 03:04:47 - mmengine - INFO - Evaluating CocoMetric...
2022/10/13 03:05:03 - mmengine - INFO - Epoch(val) [140][407/407] coco/AP: 0.731326 coco/AP .5: 0.894523 coco/AP .75: 0.807108 coco/AP (M): 0.701644 coco/AP (L): 0.791307 coco/AR: 0.796741 coco/AR .5: 0.934981 coco/AR .75: 0.859887 coco/AR (M): 0.754466 coco/AR (L): 0.855258
2022/10/13 03:05:03 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_120.pth is removed
2022/10/13 03:05:06 - mmengine - INFO - The best checkpoint with 0.7313 coco/AP at 140 epoch is saved to best_coco/AP_epoch_140.pth.
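The two messages above show that only the single checkpoint with the highest coco/AP seen so far is kept on disk: the epoch-120 best is deleted once epoch 140 beats it. As a minimal offline sketch (nothing here is mmengine or MMPose API), the coco/AP history can be recovered from a saved copy of this log with a short script like the one below; the log path is hypothetical.

```python
# Minimal sketch: recover the per-validation coco/AP history from a saved copy of
# this log and report the best epoch, mirroring the "best checkpoint" messages above.
# The path below is hypothetical; nothing in this snippet is mmengine/MMPose API.
import re

LOG_PATH = "work_dirs/rsn3x/20221012_105904.log"  # hypothetical location of this log

# Matches validation summaries such as:
#   Epoch(val) [140][407/407]  coco/AP: 0.731326 coco/AP .5: 0.894523 ...
VAL_AP = re.compile(r"Epoch\(val\)\s+\[(\d+)\]\[\d+/\d+\]\s+coco/AP:\s+([0-9.]+)")

def ap_history(log_text):
    """Return [(epoch, coco/AP), ...] in the order the validations appear in the log."""
    return [(int(epoch), float(ap)) for epoch, ap in VAL_AP.findall(log_text)]

if __name__ == "__main__":
    with open(LOG_PATH, encoding="utf-8") as f:
        history = ap_history(f.read())
    if history:
        best_epoch, best_ap = max(history, key=lambda item: item[1])
        print(f"best coco/AP {best_ap:.4f} at epoch {best_epoch}")
```

Run against the log up to this point, it would print `best coco/AP 0.7313 at epoch 140`, consistent with the checkpoint message above.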
2022/10/13 03:05:40 - mmengine - INFO - Epoch(train) [141][50/586] lr: 5.000000e-03 eta: 7:17:00 time: 0.675638 data_time: 0.064431 memory: 12959 loss_kpt: 328.267717 acc_pose: 0.799754 loss: 328.267717 2022/10/13 03:06:13 - mmengine - INFO - Epoch(train) [141][100/586] lr: 5.000000e-03 eta: 7:16:29 time: 0.671557 data_time: 0.055440 memory: 12959 loss_kpt: 323.161193 acc_pose: 0.769960 loss: 323.161193 2022/10/13 03:06:47 - mmengine - INFO - Epoch(train) [141][150/586] lr: 5.000000e-03 eta: 7:15:58 time: 0.677417 data_time: 0.054979 memory: 12959 loss_kpt: 325.702737 acc_pose: 0.817284 loss: 325.702737 2022/10/13 03:07:20 - mmengine - INFO - Epoch(train) [141][200/586] lr: 5.000000e-03 eta: 7:15:26 time: 0.666076 data_time: 0.051581 memory: 12959 loss_kpt: 322.277883 acc_pose: 0.735901 loss: 322.277883 2022/10/13 03:07:54 - mmengine - INFO - Epoch(train) [141][250/586] lr: 5.000000e-03 eta: 7:14:55 time: 0.679551 data_time: 0.061948 memory: 12959 loss_kpt: 325.230352 acc_pose: 0.789488 loss: 325.230352 2022/10/13 03:08:28 - mmengine - INFO - Epoch(train) [141][300/586] lr: 5.000000e-03 eta: 7:14:24 time: 0.678746 data_time: 0.054827 memory: 12959 loss_kpt: 325.226656 acc_pose: 0.844128 loss: 325.226656 2022/10/13 03:09:03 - mmengine - INFO - Epoch(train) [141][350/586] lr: 5.000000e-03 eta: 7:13:54 time: 0.694967 data_time: 0.060877 memory: 12959 loss_kpt: 321.631973 acc_pose: 0.750779 loss: 321.631973 2022/10/13 03:09:38 - mmengine - INFO - Epoch(train) [141][400/586] lr: 5.000000e-03 eta: 7:13:23 time: 0.690731 data_time: 0.056514 memory: 12959 loss_kpt: 325.854281 acc_pose: 0.805425 loss: 325.854281 2022/10/13 03:10:12 - mmengine - INFO - Epoch(train) [141][450/586] lr: 5.000000e-03 eta: 7:12:52 time: 0.693307 data_time: 0.062632 memory: 12959 loss_kpt: 321.375697 acc_pose: 0.819719 loss: 321.375697 2022/10/13 03:10:47 - mmengine - INFO - Epoch(train) [141][500/586] lr: 5.000000e-03 eta: 7:12:22 time: 0.697460 data_time: 0.061276 memory: 12959 loss_kpt: 330.117503 acc_pose: 0.768135 loss: 330.117503 2022/10/13 03:11:22 - mmengine - INFO - Epoch(train) [141][550/586] lr: 5.000000e-03 eta: 7:11:51 time: 0.703999 data_time: 0.057963 memory: 12959 loss_kpt: 322.003525 acc_pose: 0.783406 loss: 322.003525 2022/10/13 03:11:47 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:12:22 - mmengine - INFO - Epoch(train) [142][50/586] lr: 5.000000e-03 eta: 7:10:46 time: 0.690642 data_time: 0.070248 memory: 12959 loss_kpt: 325.202761 acc_pose: 0.785421 loss: 325.202761 2022/10/13 03:12:55 - mmengine - INFO - Epoch(train) [142][100/586] lr: 5.000000e-03 eta: 7:10:15 time: 0.673295 data_time: 0.060096 memory: 12959 loss_kpt: 328.286368 acc_pose: 0.817033 loss: 328.286368 2022/10/13 03:13:30 - mmengine - INFO - Epoch(train) [142][150/586] lr: 5.000000e-03 eta: 7:09:44 time: 0.690111 data_time: 0.060797 memory: 12959 loss_kpt: 323.497666 acc_pose: 0.834251 loss: 323.497666 2022/10/13 03:14:04 - mmengine - INFO - Epoch(train) [142][200/586] lr: 5.000000e-03 eta: 7:09:13 time: 0.677086 data_time: 0.065189 memory: 12959 loss_kpt: 325.149091 acc_pose: 0.835388 loss: 325.149091 2022/10/13 03:14:37 - mmengine - INFO - Epoch(train) [142][250/586] lr: 5.000000e-03 eta: 7:08:41 time: 0.661820 data_time: 0.060640 memory: 12959 loss_kpt: 322.274464 acc_pose: 0.826407 loss: 322.274464 2022/10/13 03:15:11 - mmengine - INFO - Epoch(train) [142][300/586] lr: 5.000000e-03 eta: 7:08:10 time: 0.670030 data_time: 0.061608 memory: 12959 loss_kpt: 327.218400 acc_pose: 0.842314 
loss: 327.218400 2022/10/13 03:15:44 - mmengine - INFO - Epoch(train) [142][350/586] lr: 5.000000e-03 eta: 7:07:39 time: 0.676196 data_time: 0.060350 memory: 12959 loss_kpt: 334.278890 acc_pose: 0.788510 loss: 334.278890 2022/10/13 03:16:01 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:16:18 - mmengine - INFO - Epoch(train) [142][400/586] lr: 5.000000e-03 eta: 7:07:08 time: 0.673456 data_time: 0.057893 memory: 12959 loss_kpt: 322.188842 acc_pose: 0.790029 loss: 322.188842 2022/10/13 03:16:52 - mmengine - INFO - Epoch(train) [142][450/586] lr: 5.000000e-03 eta: 7:06:37 time: 0.672580 data_time: 0.054027 memory: 12959 loss_kpt: 325.495753 acc_pose: 0.820064 loss: 325.495753 2022/10/13 03:17:26 - mmengine - INFO - Epoch(train) [142][500/586] lr: 5.000000e-03 eta: 7:06:06 time: 0.687268 data_time: 0.058872 memory: 12959 loss_kpt: 324.791649 acc_pose: 0.747336 loss: 324.791649 2022/10/13 03:18:00 - mmengine - INFO - Epoch(train) [142][550/586] lr: 5.000000e-03 eta: 7:05:35 time: 0.678272 data_time: 0.053880 memory: 12959 loss_kpt: 324.035939 acc_pose: 0.882845 loss: 324.035939 2022/10/13 03:18:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:18:58 - mmengine - INFO - Epoch(train) [143][50/586] lr: 5.000000e-03 eta: 7:04:30 time: 0.680969 data_time: 0.062576 memory: 12959 loss_kpt: 328.145243 acc_pose: 0.870575 loss: 328.145243 2022/10/13 03:19:31 - mmengine - INFO - Epoch(train) [143][100/586] lr: 5.000000e-03 eta: 7:03:58 time: 0.662703 data_time: 0.056315 memory: 12959 loss_kpt: 323.470520 acc_pose: 0.887448 loss: 323.470520 2022/10/13 03:20:04 - mmengine - INFO - Epoch(train) [143][150/586] lr: 5.000000e-03 eta: 7:03:26 time: 0.651088 data_time: 0.057618 memory: 12959 loss_kpt: 323.353835 acc_pose: 0.834967 loss: 323.353835 2022/10/13 03:20:37 - mmengine - INFO - Epoch(train) [143][200/586] lr: 5.000000e-03 eta: 7:02:55 time: 0.659684 data_time: 0.055586 memory: 12959 loss_kpt: 325.649635 acc_pose: 0.760213 loss: 325.649635 2022/10/13 03:21:10 - mmengine - INFO - Epoch(train) [143][250/586] lr: 5.000000e-03 eta: 7:02:24 time: 0.672488 data_time: 0.061479 memory: 12959 loss_kpt: 330.346206 acc_pose: 0.784898 loss: 330.346206 2022/10/13 03:21:43 - mmengine - INFO - Epoch(train) [143][300/586] lr: 5.000000e-03 eta: 7:01:52 time: 0.652751 data_time: 0.058264 memory: 12959 loss_kpt: 325.235972 acc_pose: 0.860225 loss: 325.235972 2022/10/13 03:22:16 - mmengine - INFO - Epoch(train) [143][350/586] lr: 5.000000e-03 eta: 7:01:21 time: 0.667036 data_time: 0.061298 memory: 12959 loss_kpt: 322.409579 acc_pose: 0.827072 loss: 322.409579 2022/10/13 03:22:50 - mmengine - INFO - Epoch(train) [143][400/586] lr: 5.000000e-03 eta: 7:00:49 time: 0.669761 data_time: 0.055083 memory: 12959 loss_kpt: 323.114526 acc_pose: 0.867928 loss: 323.114526 2022/10/13 03:23:24 - mmengine - INFO - Epoch(train) [143][450/586] lr: 5.000000e-03 eta: 7:00:18 time: 0.675489 data_time: 0.058715 memory: 12959 loss_kpt: 323.504385 acc_pose: 0.845000 loss: 323.504385 2022/10/13 03:23:58 - mmengine - INFO - Epoch(train) [143][500/586] lr: 5.000000e-03 eta: 6:59:47 time: 0.682540 data_time: 0.051065 memory: 12959 loss_kpt: 329.002390 acc_pose: 0.849601 loss: 329.002390 2022/10/13 03:24:33 - mmengine - INFO - Epoch(train) [143][550/586] lr: 5.000000e-03 eta: 6:59:17 time: 0.705644 data_time: 0.059160 memory: 12959 loss_kpt: 326.080526 acc_pose: 0.855499 loss: 326.080526 2022/10/13 03:24:58 - mmengine - INFO - Exp name: 
td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:25:32 - mmengine - INFO - Epoch(train) [144][50/586] lr: 5.000000e-03 eta: 6:58:12 time: 0.685966 data_time: 0.070230 memory: 12959 loss_kpt: 329.093450 acc_pose: 0.850024 loss: 329.093450 2022/10/13 03:26:06 - mmengine - INFO - Epoch(train) [144][100/586] lr: 5.000000e-03 eta: 6:57:41 time: 0.670806 data_time: 0.060301 memory: 12959 loss_kpt: 328.010654 acc_pose: 0.782721 loss: 328.010654 2022/10/13 03:26:40 - mmengine - INFO - Epoch(train) [144][150/586] lr: 5.000000e-03 eta: 6:57:09 time: 0.671147 data_time: 0.052794 memory: 12959 loss_kpt: 320.762525 acc_pose: 0.807258 loss: 320.762525 2022/10/13 03:27:14 - mmengine - INFO - Epoch(train) [144][200/586] lr: 5.000000e-03 eta: 6:56:38 time: 0.681573 data_time: 0.051968 memory: 12959 loss_kpt: 329.722883 acc_pose: 0.733669 loss: 329.722883 2022/10/13 03:27:15 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:27:47 - mmengine - INFO - Epoch(train) [144][250/586] lr: 5.000000e-03 eta: 6:56:07 time: 0.669102 data_time: 0.053498 memory: 12959 loss_kpt: 329.324731 acc_pose: 0.775067 loss: 329.324731 2022/10/13 03:28:21 - mmengine - INFO - Epoch(train) [144][300/586] lr: 5.000000e-03 eta: 6:55:36 time: 0.686485 data_time: 0.058998 memory: 12959 loss_kpt: 323.720948 acc_pose: 0.836821 loss: 323.720948 2022/10/13 03:28:56 - mmengine - INFO - Epoch(train) [144][350/586] lr: 5.000000e-03 eta: 6:55:05 time: 0.686263 data_time: 0.054221 memory: 12959 loss_kpt: 319.378929 acc_pose: 0.833634 loss: 319.378929 2022/10/13 03:29:30 - mmengine - INFO - Epoch(train) [144][400/586] lr: 5.000000e-03 eta: 6:54:34 time: 0.677061 data_time: 0.053506 memory: 12959 loss_kpt: 325.003496 acc_pose: 0.847091 loss: 325.003496 2022/10/13 03:30:05 - mmengine - INFO - Epoch(train) [144][450/586] lr: 5.000000e-03 eta: 6:54:03 time: 0.697722 data_time: 0.062091 memory: 12959 loss_kpt: 324.143901 acc_pose: 0.863518 loss: 324.143901 2022/10/13 03:30:39 - mmengine - INFO - Epoch(train) [144][500/586] lr: 5.000000e-03 eta: 6:53:32 time: 0.681499 data_time: 0.056024 memory: 12959 loss_kpt: 327.287576 acc_pose: 0.878551 loss: 327.287576 2022/10/13 03:31:14 - mmengine - INFO - Epoch(train) [144][550/586] lr: 5.000000e-03 eta: 6:53:02 time: 0.704722 data_time: 0.054624 memory: 12959 loss_kpt: 321.008581 acc_pose: 0.845725 loss: 321.008581 2022/10/13 03:31:39 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:32:13 - mmengine - INFO - Epoch(train) [145][50/586] lr: 5.000000e-03 eta: 6:51:57 time: 0.687522 data_time: 0.064746 memory: 12959 loss_kpt: 327.535482 acc_pose: 0.883064 loss: 327.535482 2022/10/13 03:32:46 - mmengine - INFO - Epoch(train) [145][100/586] lr: 5.000000e-03 eta: 6:51:26 time: 0.661394 data_time: 0.059669 memory: 12959 loss_kpt: 327.326260 acc_pose: 0.872448 loss: 327.326260 2022/10/13 03:33:20 - mmengine - INFO - Epoch(train) [145][150/586] lr: 5.000000e-03 eta: 6:50:54 time: 0.668657 data_time: 0.057836 memory: 12959 loss_kpt: 325.232041 acc_pose: 0.883414 loss: 325.232041 2022/10/13 03:33:52 - mmengine - INFO - Epoch(train) [145][200/586] lr: 5.000000e-03 eta: 6:50:23 time: 0.658523 data_time: 0.052799 memory: 12959 loss_kpt: 321.982540 acc_pose: 0.780461 loss: 321.982540 2022/10/13 03:34:26 - mmengine - INFO - Epoch(train) [145][250/586] lr: 5.000000e-03 eta: 6:49:51 time: 0.667610 data_time: 0.054440 memory: 12959 loss_kpt: 320.059584 acc_pose: 0.837950 loss: 320.059584 2022/10/13 03:34:59 - mmengine 
- INFO - Epoch(train) [145][300/586] lr: 5.000000e-03 eta: 6:49:20 time: 0.665126 data_time: 0.053349 memory: 12959 loss_kpt: 322.351970 acc_pose: 0.830335 loss: 322.351970 2022/10/13 03:35:32 - mmengine - INFO - Epoch(train) [145][350/586] lr: 5.000000e-03 eta: 6:48:48 time: 0.664031 data_time: 0.055486 memory: 12959 loss_kpt: 324.814035 acc_pose: 0.817217 loss: 324.814035 2022/10/13 03:36:05 - mmengine - INFO - Epoch(train) [145][400/586] lr: 5.000000e-03 eta: 6:48:17 time: 0.651580 data_time: 0.057582 memory: 12959 loss_kpt: 320.781093 acc_pose: 0.860963 loss: 320.781093 2022/10/13 03:36:38 - mmengine - INFO - Epoch(train) [145][450/586] lr: 5.000000e-03 eta: 6:47:45 time: 0.659541 data_time: 0.056925 memory: 12959 loss_kpt: 326.660058 acc_pose: 0.840292 loss: 326.660058 2022/10/13 03:37:11 - mmengine - INFO - Epoch(train) [145][500/586] lr: 5.000000e-03 eta: 6:47:14 time: 0.662647 data_time: 0.055703 memory: 12959 loss_kpt: 322.869548 acc_pose: 0.874422 loss: 322.869548 2022/10/13 03:37:46 - mmengine - INFO - Epoch(train) [145][550/586] lr: 5.000000e-03 eta: 6:46:43 time: 0.699681 data_time: 0.054337 memory: 12959 loss_kpt: 319.073467 acc_pose: 0.757293 loss: 319.073467 2022/10/13 03:38:11 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:38:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:38:45 - mmengine - INFO - Epoch(train) [146][50/586] lr: 5.000000e-03 eta: 6:45:39 time: 0.687966 data_time: 0.064232 memory: 12959 loss_kpt: 329.339926 acc_pose: 0.825143 loss: 329.339926 2022/10/13 03:39:19 - mmengine - INFO - Epoch(train) [146][100/586] lr: 5.000000e-03 eta: 6:45:08 time: 0.675372 data_time: 0.053773 memory: 12959 loss_kpt: 326.813749 acc_pose: 0.768312 loss: 326.813749 2022/10/13 03:39:53 - mmengine - INFO - Epoch(train) [146][150/586] lr: 5.000000e-03 eta: 6:44:36 time: 0.673773 data_time: 0.057348 memory: 12959 loss_kpt: 326.690287 acc_pose: 0.829302 loss: 326.690287 2022/10/13 03:40:28 - mmengine - INFO - Epoch(train) [146][200/586] lr: 5.000000e-03 eta: 6:44:06 time: 0.697970 data_time: 0.053743 memory: 12959 loss_kpt: 328.473558 acc_pose: 0.882091 loss: 328.473558 2022/10/13 03:41:02 - mmengine - INFO - Epoch(train) [146][250/586] lr: 5.000000e-03 eta: 6:43:34 time: 0.677184 data_time: 0.058756 memory: 12959 loss_kpt: 326.328529 acc_pose: 0.866294 loss: 326.328529 2022/10/13 03:41:35 - mmengine - INFO - Epoch(train) [146][300/586] lr: 5.000000e-03 eta: 6:43:03 time: 0.677090 data_time: 0.056437 memory: 12959 loss_kpt: 323.844905 acc_pose: 0.753113 loss: 323.844905 2022/10/13 03:42:10 - mmengine - INFO - Epoch(train) [146][350/586] lr: 5.000000e-03 eta: 6:42:32 time: 0.695051 data_time: 0.058395 memory: 12959 loss_kpt: 320.156823 acc_pose: 0.887289 loss: 320.156823 2022/10/13 03:42:44 - mmengine - INFO - Epoch(train) [146][400/586] lr: 5.000000e-03 eta: 6:42:01 time: 0.685011 data_time: 0.061435 memory: 12959 loss_kpt: 321.786282 acc_pose: 0.792909 loss: 321.786282 2022/10/13 03:43:19 - mmengine - INFO - Epoch(train) [146][450/586] lr: 5.000000e-03 eta: 6:41:31 time: 0.690270 data_time: 0.056318 memory: 12959 loss_kpt: 327.142139 acc_pose: 0.844574 loss: 327.142139 2022/10/13 03:43:52 - mmengine - INFO - Epoch(train) [146][500/586] lr: 5.000000e-03 eta: 6:40:59 time: 0.670291 data_time: 0.051737 memory: 12959 loss_kpt: 323.739691 acc_pose: 0.845831 loss: 323.739691 2022/10/13 03:44:27 - mmengine - INFO - Epoch(train) [146][550/586] lr: 5.000000e-03 eta: 6:40:28 time: 0.691534 
data_time: 0.059562 memory: 12959 loss_kpt: 321.301147 acc_pose: 0.811376 loss: 321.301147 2022/10/13 03:44:52 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:45:26 - mmengine - INFO - Epoch(train) [147][50/586] lr: 5.000000e-03 eta: 6:39:24 time: 0.685070 data_time: 0.070210 memory: 12959 loss_kpt: 324.314167 acc_pose: 0.826938 loss: 324.314167 2022/10/13 03:46:00 - mmengine - INFO - Epoch(train) [147][100/586] lr: 5.000000e-03 eta: 6:38:53 time: 0.673430 data_time: 0.052028 memory: 12959 loss_kpt: 324.302077 acc_pose: 0.811971 loss: 324.302077 2022/10/13 03:46:33 - mmengine - INFO - Epoch(train) [147][150/586] lr: 5.000000e-03 eta: 6:38:22 time: 0.675313 data_time: 0.054763 memory: 12959 loss_kpt: 328.478683 acc_pose: 0.751310 loss: 328.478683 2022/10/13 03:47:07 - mmengine - INFO - Epoch(train) [147][200/586] lr: 5.000000e-03 eta: 6:37:50 time: 0.673189 data_time: 0.051750 memory: 12959 loss_kpt: 319.793625 acc_pose: 0.846513 loss: 319.793625 2022/10/13 03:47:41 - mmengine - INFO - Epoch(train) [147][250/586] lr: 5.000000e-03 eta: 6:37:19 time: 0.679265 data_time: 0.054927 memory: 12959 loss_kpt: 320.044127 acc_pose: 0.834451 loss: 320.044127 2022/10/13 03:48:14 - mmengine - INFO - Epoch(train) [147][300/586] lr: 5.000000e-03 eta: 6:36:48 time: 0.667118 data_time: 0.052321 memory: 12959 loss_kpt: 320.378120 acc_pose: 0.863682 loss: 320.378120 2022/10/13 03:48:49 - mmengine - INFO - Epoch(train) [147][350/586] lr: 5.000000e-03 eta: 6:36:17 time: 0.689619 data_time: 0.056950 memory: 12959 loss_kpt: 328.583073 acc_pose: 0.808533 loss: 328.583073 2022/10/13 03:49:24 - mmengine - INFO - Epoch(train) [147][400/586] lr: 5.000000e-03 eta: 6:35:46 time: 0.698993 data_time: 0.056929 memory: 12959 loss_kpt: 329.249329 acc_pose: 0.801334 loss: 329.249329 2022/10/13 03:49:54 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:49:59 - mmengine - INFO - Epoch(train) [147][450/586] lr: 5.000000e-03 eta: 6:35:16 time: 0.696517 data_time: 0.058776 memory: 12959 loss_kpt: 325.455363 acc_pose: 0.779211 loss: 325.455363 2022/10/13 03:50:33 - mmengine - INFO - Epoch(train) [147][500/586] lr: 5.000000e-03 eta: 6:34:45 time: 0.687569 data_time: 0.054243 memory: 12959 loss_kpt: 321.960425 acc_pose: 0.879845 loss: 321.960425 2022/10/13 03:51:07 - mmengine - INFO - Epoch(train) [147][550/586] lr: 5.000000e-03 eta: 6:34:13 time: 0.680143 data_time: 0.055059 memory: 12959 loss_kpt: 326.111451 acc_pose: 0.818299 loss: 326.111451 2022/10/13 03:51:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:52:06 - mmengine - INFO - Epoch(train) [148][50/586] lr: 5.000000e-03 eta: 6:33:09 time: 0.685805 data_time: 0.066738 memory: 12959 loss_kpt: 323.414437 acc_pose: 0.883236 loss: 323.414437 2022/10/13 03:52:40 - mmengine - INFO - Epoch(train) [148][100/586] lr: 5.000000e-03 eta: 6:32:38 time: 0.684496 data_time: 0.053441 memory: 12959 loss_kpt: 330.797272 acc_pose: 0.848694 loss: 330.797272 2022/10/13 03:53:14 - mmengine - INFO - Epoch(train) [148][150/586] lr: 5.000000e-03 eta: 6:32:07 time: 0.686398 data_time: 0.062413 memory: 12959 loss_kpt: 326.178558 acc_pose: 0.817855 loss: 326.178558 2022/10/13 03:53:48 - mmengine - INFO - Epoch(train) [148][200/586] lr: 5.000000e-03 eta: 6:31:36 time: 0.673379 data_time: 0.052857 memory: 12959 loss_kpt: 319.193797 acc_pose: 0.838051 loss: 319.193797 2022/10/13 03:54:22 - mmengine - INFO - Epoch(train) [148][250/586] lr: 5.000000e-03 eta: 
6:31:05 time: 0.689766 data_time: 0.061093 memory: 12959 loss_kpt: 318.006399 acc_pose: 0.850288 loss: 318.006399 2022/10/13 03:54:57 - mmengine - INFO - Epoch(train) [148][300/586] lr: 5.000000e-03 eta: 6:30:34 time: 0.681216 data_time: 0.060906 memory: 12959 loss_kpt: 323.354061 acc_pose: 0.900061 loss: 323.354061 2022/10/13 03:55:31 - mmengine - INFO - Epoch(train) [148][350/586] lr: 5.000000e-03 eta: 6:30:03 time: 0.682866 data_time: 0.061602 memory: 12959 loss_kpt: 329.477812 acc_pose: 0.831296 loss: 329.477812 2022/10/13 03:56:05 - mmengine - INFO - Epoch(train) [148][400/586] lr: 5.000000e-03 eta: 6:29:32 time: 0.677337 data_time: 0.056292 memory: 12959 loss_kpt: 324.591589 acc_pose: 0.848736 loss: 324.591589 2022/10/13 03:56:39 - mmengine - INFO - Epoch(train) [148][450/586] lr: 5.000000e-03 eta: 6:29:01 time: 0.681060 data_time: 0.059711 memory: 12959 loss_kpt: 325.391022 acc_pose: 0.761621 loss: 325.391022 2022/10/13 03:57:12 - mmengine - INFO - Epoch(train) [148][500/586] lr: 5.000000e-03 eta: 6:28:29 time: 0.672646 data_time: 0.054767 memory: 12959 loss_kpt: 327.027413 acc_pose: 0.904990 loss: 327.027413 2022/10/13 03:57:46 - mmengine - INFO - Epoch(train) [148][550/586] lr: 5.000000e-03 eta: 6:27:58 time: 0.675343 data_time: 0.056392 memory: 12959 loss_kpt: 324.200084 acc_pose: 0.838110 loss: 324.200084 2022/10/13 03:58:10 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 03:58:44 - mmengine - INFO - Epoch(train) [149][50/586] lr: 5.000000e-03 eta: 6:26:54 time: 0.679869 data_time: 0.070997 memory: 12959 loss_kpt: 330.840573 acc_pose: 0.835956 loss: 330.840573 2022/10/13 03:59:18 - mmengine - INFO - Epoch(train) [149][100/586] lr: 5.000000e-03 eta: 6:26:23 time: 0.682313 data_time: 0.053261 memory: 12959 loss_kpt: 325.352844 acc_pose: 0.813935 loss: 325.352844 2022/10/13 03:59:52 - mmengine - INFO - Epoch(train) [149][150/586] lr: 5.000000e-03 eta: 6:25:52 time: 0.663736 data_time: 0.056380 memory: 12959 loss_kpt: 328.051149 acc_pose: 0.840806 loss: 328.051149 2022/10/13 04:00:25 - mmengine - INFO - Epoch(train) [149][200/586] lr: 5.000000e-03 eta: 6:25:20 time: 0.665923 data_time: 0.055627 memory: 12959 loss_kpt: 322.428867 acc_pose: 0.766378 loss: 322.428867 2022/10/13 04:00:59 - mmengine - INFO - Epoch(train) [149][250/586] lr: 5.000000e-03 eta: 6:24:49 time: 0.676062 data_time: 0.059190 memory: 12959 loss_kpt: 318.463631 acc_pose: 0.825471 loss: 318.463631 2022/10/13 04:01:13 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:01:32 - mmengine - INFO - Epoch(train) [149][300/586] lr: 5.000000e-03 eta: 6:24:17 time: 0.666192 data_time: 0.054406 memory: 12959 loss_kpt: 322.576895 acc_pose: 0.797785 loss: 322.576895 2022/10/13 04:02:06 - mmengine - INFO - Epoch(train) [149][350/586] lr: 5.000000e-03 eta: 6:23:46 time: 0.677432 data_time: 0.059508 memory: 12959 loss_kpt: 321.359120 acc_pose: 0.812955 loss: 321.359120 2022/10/13 04:02:39 - mmengine - INFO - Epoch(train) [149][400/586] lr: 5.000000e-03 eta: 6:23:15 time: 0.666391 data_time: 0.055159 memory: 12959 loss_kpt: 320.272998 acc_pose: 0.879374 loss: 320.272998 2022/10/13 04:03:13 - mmengine - INFO - Epoch(train) [149][450/586] lr: 5.000000e-03 eta: 6:22:44 time: 0.676056 data_time: 0.058442 memory: 12959 loss_kpt: 322.769141 acc_pose: 0.843393 loss: 322.769141 2022/10/13 04:03:47 - mmengine - INFO - Epoch(train) [149][500/586] lr: 5.000000e-03 eta: 6:22:12 time: 0.672283 data_time: 0.056343 memory: 12959 loss_kpt: 323.852374 
acc_pose: 0.880482 loss: 323.852374
2022/10/13 04:04:20 - mmengine - INFO - Epoch(train) [149][550/586] lr: 5.000000e-03 eta: 6:21:41 time: 0.666746 data_time: 0.056850 memory: 12959 loss_kpt: 323.337684 acc_pose: 0.844530 loss: 323.337684
2022/10/13 04:04:44 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 04:05:18 - mmengine - INFO - Epoch(train) [150][50/586] lr: 5.000000e-03 eta: 6:20:37 time: 0.688353 data_time: 0.071488 memory: 12959 loss_kpt: 327.457889 acc_pose: 0.869466 loss: 327.457889
2022/10/13 04:05:52 - mmengine - INFO - Epoch(train) [150][100/586] lr: 5.000000e-03 eta: 6:20:06 time: 0.678318 data_time: 0.056919 memory: 12959 loss_kpt: 319.801887 acc_pose: 0.798114 loss: 319.801887
2022/10/13 04:06:26 - mmengine - INFO - Epoch(train) [150][150/586] lr: 5.000000e-03 eta: 6:19:35 time: 0.679262 data_time: 0.055931 memory: 12959 loss_kpt: 324.838303 acc_pose: 0.816638 loss: 324.838303
2022/10/13 04:07:00 - mmengine - INFO - Epoch(train) [150][200/586] lr: 5.000000e-03 eta: 6:19:04 time: 0.677072 data_time: 0.062787 memory: 12959 loss_kpt: 322.351672 acc_pose: 0.887373 loss: 322.351672
2022/10/13 04:07:35 - mmengine - INFO - Epoch(train) [150][250/586] lr: 5.000000e-03 eta: 6:18:33 time: 0.692252 data_time: 0.060308 memory: 12959 loss_kpt: 319.880917 acc_pose: 0.864477 loss: 319.880917
2022/10/13 04:08:09 - mmengine - INFO - Epoch(train) [150][300/586] lr: 5.000000e-03 eta: 6:18:02 time: 0.679703 data_time: 0.055358 memory: 12959 loss_kpt: 329.960563 acc_pose: 0.796028 loss: 329.960563
2022/10/13 04:08:43 - mmengine - INFO - Epoch(train) [150][350/586] lr: 5.000000e-03 eta: 6:17:30 time: 0.676325 data_time: 0.058126 memory: 12959 loss_kpt: 325.423505 acc_pose: 0.857116 loss: 325.423505
2022/10/13 04:09:17 - mmengine - INFO - Epoch(train) [150][400/586] lr: 5.000000e-03 eta: 6:16:59 time: 0.682040 data_time: 0.056930 memory: 12959 loss_kpt: 326.570866 acc_pose: 0.893368 loss: 326.570866
2022/10/13 04:09:51 - mmengine - INFO - Epoch(train) [150][450/586] lr: 5.000000e-03 eta: 6:16:28 time: 0.688639 data_time: 0.058104 memory: 12959 loss_kpt: 323.420163 acc_pose: 0.840400 loss: 323.420163
2022/10/13 04:10:25 - mmengine - INFO - Epoch(train) [150][500/586] lr: 5.000000e-03 eta: 6:15:57 time: 0.685724 data_time: 0.055535 memory: 12959 loss_kpt: 328.016970 acc_pose: 0.831443 loss: 328.016970
2022/10/13 04:11:00 - mmengine - INFO - Epoch(train) [150][550/586] lr: 5.000000e-03 eta: 6:15:26 time: 0.681508 data_time: 0.059171 memory: 12959 loss_kpt: 321.865462 acc_pose: 0.875293 loss: 321.865462
2022/10/13 04:11:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 04:11:24 - mmengine - INFO - Saving checkpoint at 150 epochs
2022/10/13 04:11:41 - mmengine - INFO - Epoch(val) [150][50/407] eta: 0:01:35 time: 0.267913 data_time: 0.012586 memory: 12959
2022/10/13 04:11:55 - mmengine - INFO - Epoch(val) [150][100/407] eta: 0:01:20 time: 0.262582 data_time: 0.007627 memory: 2407
2022/10/13 04:12:08 - mmengine - INFO - Epoch(val) [150][150/407] eta: 0:01:07 time: 0.262166 data_time: 0.007818 memory: 2407
2022/10/13 04:12:21 - mmengine - INFO - Epoch(val) [150][200/407] eta: 0:00:53 time: 0.259570 data_time: 0.007554 memory: 2407
2022/10/13 04:12:34 - mmengine - INFO - Epoch(val) [150][250/407] eta: 0:00:41 time: 0.264322 data_time: 0.007941 memory: 2407
2022/10/13 04:12:47 - mmengine - INFO - Epoch(val) [150][300/407] eta: 0:00:28 time: 0.265162 data_time: 0.007875 memory: 2407
2022/10/13 04:13:00 - mmengine - INFO - Epoch(val) [150][350/407] eta: 0:00:15 time: 0.265860 data_time: 0.011198 memory: 2407
2022/10/13 04:13:13 - mmengine - INFO - Epoch(val) [150][400/407] eta: 0:00:01 time: 0.261986 data_time: 0.007626 memory: 2407
2022/10/13 04:13:27 - mmengine - INFO - Evaluating CocoMetric...
2022/10/13 04:13:43 - mmengine - INFO - Epoch(val) [150][407/407] coco/AP: 0.727404 coco/AP .5: 0.891200 coco/AP .75: 0.802783 coco/AP (M): 0.695995 coco/AP (L): 0.789713 coco/AR: 0.793010 coco/AR .5: 0.933564 coco/AR .75: 0.853275 coco/AR (M): 0.749385 coco/AR (L): 0.853660
2022/10/13 04:14:18 - mmengine - INFO - Epoch(train) [151][50/586] lr: 5.000000e-03 eta: 6:14:23 time: 0.691353 data_time: 0.067425 memory: 12959 loss_kpt: 322.603275 acc_pose: 0.885017 loss: 322.603275
2022/10/13 04:14:52 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 04:14:52 - mmengine - INFO - Epoch(train) [151][100/586] lr: 5.000000e-03 eta: 6:13:52 time: 0.688022 data_time: 0.057492 memory: 12959 loss_kpt: 326.752994 acc_pose: 0.778115 loss: 326.752994
2022/10/13 04:15:28 - mmengine - INFO - Epoch(train) [151][150/586] lr: 5.000000e-03 eta: 6:13:21 time: 0.708322 data_time: 0.062180 memory: 12959 loss_kpt: 321.314929 acc_pose: 0.801110 loss: 321.314929
2022/10/13 04:16:03 - mmengine - INFO - Epoch(train) [151][200/586] lr: 5.000000e-03 eta: 6:12:50 time: 0.707754 data_time: 0.054831 memory: 12959 loss_kpt: 321.699341 acc_pose: 0.857170 loss: 321.699341
2022/10/13 04:16:38 - mmengine - INFO - Epoch(train) [151][250/586] lr: 5.000000e-03 eta: 6:12:20 time: 0.704030 data_time: 0.059299 memory: 12959 loss_kpt: 322.791173 acc_pose: 0.760693 loss: 322.791173
2022/10/13 04:17:14 - mmengine - INFO - Epoch(train) [151][300/586] lr: 5.000000e-03 eta: 6:11:49 time: 0.710874 data_time: 0.053119 memory: 12959 loss_kpt: 317.220128 acc_pose: 0.819423 loss: 317.220128
2022/10/13 04:17:49 - mmengine - INFO - Epoch(train) [151][350/586] lr: 5.000000e-03 eta: 6:11:18 time: 0.705385 data_time: 0.061036 memory: 12959 loss_kpt: 324.901867 acc_pose: 0.888419 loss: 324.901867
2022/10/13 04:18:25 - mmengine - INFO - Epoch(train) [151][400/586] lr: 5.000000e-03 eta: 6:10:48 time: 0.713482 data_time: 0.055263 memory: 12959 loss_kpt: 324.181086 acc_pose: 0.858192 loss: 324.181086
2022/10/13 04:19:00 - mmengine - INFO - Epoch(train) [151][450/586] lr: 5.000000e-03 eta: 6:10:17 time: 0.708892 data_time: 0.054538 memory: 12959 loss_kpt: 321.450041 acc_pose: 0.820124 loss: 321.450041
2022/10/13 04:19:36 - mmengine - INFO - Epoch(train) [151][500/586] lr: 5.000000e-03 eta: 6:09:47 time: 0.719265 data_time: 0.060243 memory: 12959 loss_kpt: 322.347137 acc_pose: 0.835862 loss: 322.347137
2022/10/13 04:20:12 - mmengine - INFO - Epoch(train) [151][550/586] lr: 5.000000e-03 eta: 6:09:16 time: 0.714159 data_time: 0.055476 memory: 12959 loss_kpt: 323.553749 acc_pose: 0.787540 loss: 323.553749
2022/10/13 04:20:37 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 04:21:11 - mmengine - INFO - Epoch(train) [152][50/586] lr: 5.000000e-03 eta: 6:08:13 time: 0.682766 data_time: 0.069617 memory: 12959 loss_kpt: 316.939993 acc_pose: 0.769714 loss: 316.939993
2022/10/13 04:21:45 - mmengine - INFO - Epoch(train) [152][100/586] lr: 5.000000e-03 eta: 6:07:41 time: 0.669721 data_time: 0.057742 memory: 12959 loss_kpt: 323.937488 acc_pose: 0.713872 loss: 323.937488
2022/10/13 04:22:19 - mmengine - INFO - Epoch(train) [152][150/586] lr: 5.000000e-03 eta: 6:07:10 time: 0.680331 data_time:
0.059506 memory: 12959 loss_kpt: 320.421122 acc_pose: 0.844000 loss: 320.421122 2022/10/13 04:22:53 - mmengine - INFO - Epoch(train) [152][200/586] lr: 5.000000e-03 eta: 6:06:39 time: 0.678205 data_time: 0.055028 memory: 12959 loss_kpt: 323.315334 acc_pose: 0.806295 loss: 323.315334 2022/10/13 04:23:27 - mmengine - INFO - Epoch(train) [152][250/586] lr: 5.000000e-03 eta: 6:06:08 time: 0.688652 data_time: 0.055689 memory: 12959 loss_kpt: 326.364282 acc_pose: 0.838106 loss: 326.364282 2022/10/13 04:24:01 - mmengine - INFO - Epoch(train) [152][300/586] lr: 5.000000e-03 eta: 6:05:37 time: 0.683109 data_time: 0.052485 memory: 12959 loss_kpt: 321.950767 acc_pose: 0.832269 loss: 321.950767 2022/10/13 04:24:36 - mmengine - INFO - Epoch(train) [152][350/586] lr: 5.000000e-03 eta: 6:05:06 time: 0.694818 data_time: 0.060137 memory: 12959 loss_kpt: 323.545387 acc_pose: 0.823251 loss: 323.545387 2022/10/13 04:25:10 - mmengine - INFO - Epoch(train) [152][400/586] lr: 5.000000e-03 eta: 6:04:34 time: 0.674776 data_time: 0.058628 memory: 12959 loss_kpt: 325.546262 acc_pose: 0.803677 loss: 325.546262 2022/10/13 04:25:44 - mmengine - INFO - Epoch(train) [152][450/586] lr: 5.000000e-03 eta: 6:04:03 time: 0.674996 data_time: 0.053612 memory: 12959 loss_kpt: 317.426962 acc_pose: 0.853067 loss: 317.426962 2022/10/13 04:26:17 - mmengine - INFO - Epoch(train) [152][500/586] lr: 5.000000e-03 eta: 6:03:32 time: 0.666453 data_time: 0.053114 memory: 12959 loss_kpt: 326.520560 acc_pose: 0.772839 loss: 326.520560 2022/10/13 04:26:26 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:26:51 - mmengine - INFO - Epoch(train) [152][550/586] lr: 5.000000e-03 eta: 6:03:00 time: 0.670325 data_time: 0.058794 memory: 12959 loss_kpt: 322.400925 acc_pose: 0.831951 loss: 322.400925 2022/10/13 04:27:14 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:27:49 - mmengine - INFO - Epoch(train) [153][50/586] lr: 5.000000e-03 eta: 6:01:57 time: 0.690873 data_time: 0.074396 memory: 12959 loss_kpt: 322.239710 acc_pose: 0.822405 loss: 322.239710 2022/10/13 04:28:23 - mmengine - INFO - Epoch(train) [153][100/586] lr: 5.000000e-03 eta: 6:01:26 time: 0.671417 data_time: 0.059354 memory: 12959 loss_kpt: 316.779483 acc_pose: 0.907161 loss: 316.779483 2022/10/13 04:28:56 - mmengine - INFO - Epoch(train) [153][150/586] lr: 5.000000e-03 eta: 6:00:54 time: 0.666989 data_time: 0.056252 memory: 12959 loss_kpt: 323.317364 acc_pose: 0.828017 loss: 323.317364 2022/10/13 04:29:30 - mmengine - INFO - Epoch(train) [153][200/586] lr: 5.000000e-03 eta: 6:00:23 time: 0.674641 data_time: 0.058300 memory: 12959 loss_kpt: 327.677984 acc_pose: 0.799388 loss: 327.677984 2022/10/13 04:30:04 - mmengine - INFO - Epoch(train) [153][250/586] lr: 5.000000e-03 eta: 5:59:52 time: 0.680102 data_time: 0.058790 memory: 12959 loss_kpt: 322.736142 acc_pose: 0.850679 loss: 322.736142 2022/10/13 04:30:38 - mmengine - INFO - Epoch(train) [153][300/586] lr: 5.000000e-03 eta: 5:59:21 time: 0.675034 data_time: 0.057832 memory: 12959 loss_kpt: 323.129110 acc_pose: 0.913718 loss: 323.129110 2022/10/13 04:31:11 - mmengine - INFO - Epoch(train) [153][350/586] lr: 5.000000e-03 eta: 5:58:49 time: 0.672513 data_time: 0.058626 memory: 12959 loss_kpt: 315.937556 acc_pose: 0.815181 loss: 315.937556 2022/10/13 04:31:44 - mmengine - INFO - Epoch(train) [153][400/586] lr: 5.000000e-03 eta: 5:58:18 time: 0.665734 data_time: 0.054347 memory: 12959 loss_kpt: 319.411625 acc_pose: 0.886192 loss: 319.411625 
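The Epoch(train) entries in this stretch are logged every 50 iterations, each carrying the iteration time, data-loading time, keypoint loss and pose accuracy for that window. When eyeballing the raw windows gets tedious, they can be condensed into per-epoch averages with a short script along the lines of the sketch below; the regex and helper are illustrative only (not mmengine/MMPose API), and `re.S` is used because some entries are wrapped across line breaks in this dump.

```python
# Minimal sketch: condense the Epoch(train) entries from a saved copy of this log
# into per-epoch averages of iteration time, loss_kpt and acc_pose. Illustrative
# only; the pattern simply mirrors the entry format visible above.
import re
from collections import defaultdict
from statistics import mean

TRAIN_ENTRY = re.compile(
    r"Epoch\(train\)\s+\[(\d+)\]\[\d+/\d+\].*?"        # epoch and iteration counter
    r"time:\s+([0-9.]+).*?"                             # seconds per iteration ("time:", which precedes "data_time:")
    r"loss_kpt:\s+([0-9.]+)\s+acc_pose:\s+([0-9.]+)",   # keypoint loss and pose accuracy
    re.S,  # entries may be wrapped across line breaks in this dump
)

def per_epoch_summary(log_text):
    """Group the logged windows by epoch and average time, loss_kpt and acc_pose."""
    buckets = defaultdict(list)
    for epoch, t, loss_kpt, acc in TRAIN_ENTRY.findall(log_text):
        buckets[int(epoch)].append((float(t), float(loss_kpt), float(acc)))
    return {
        epoch: {
            "time": round(mean(v[0] for v in vals), 3),
            "loss_kpt": round(mean(v[1] for v in vals), 2),
            "acc_pose": round(mean(v[2] for v in vals), 3),
        }
        for epoch, vals in sorted(buckets.items())
    }
```

For scale: if the `8xb32` in the experiment name denotes 8 GPUs with 32 samples each, the 0.66-0.71 s windows here correspond to roughly 360-390 images per second for the whole job; that reading of the name is an assumption, not something stated in the log.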
2022/10/13 04:32:18 - mmengine - INFO - Epoch(train) [153][450/586] lr: 5.000000e-03 eta: 5:57:46 time: 0.668663 data_time: 0.055518 memory: 12959 loss_kpt: 320.900143 acc_pose: 0.865938 loss: 320.900143 2022/10/13 04:32:51 - mmengine - INFO - Epoch(train) [153][500/586] lr: 5.000000e-03 eta: 5:57:15 time: 0.664462 data_time: 0.058792 memory: 12959 loss_kpt: 322.180247 acc_pose: 0.882078 loss: 322.180247 2022/10/13 04:33:25 - mmengine - INFO - Epoch(train) [153][550/586] lr: 5.000000e-03 eta: 5:56:43 time: 0.676347 data_time: 0.054510 memory: 12959 loss_kpt: 320.842147 acc_pose: 0.846109 loss: 320.842147 2022/10/13 04:33:48 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:34:24 - mmengine - INFO - Epoch(train) [154][50/586] lr: 5.000000e-03 eta: 5:55:41 time: 0.714922 data_time: 0.067085 memory: 12959 loss_kpt: 321.553107 acc_pose: 0.790099 loss: 321.553107 2022/10/13 04:34:59 - mmengine - INFO - Epoch(train) [154][100/586] lr: 5.000000e-03 eta: 5:55:10 time: 0.695991 data_time: 0.056068 memory: 12959 loss_kpt: 324.124457 acc_pose: 0.806372 loss: 324.124457 2022/10/13 04:35:35 - mmengine - INFO - Epoch(train) [154][150/586] lr: 5.000000e-03 eta: 5:54:39 time: 0.714060 data_time: 0.061734 memory: 12959 loss_kpt: 328.605611 acc_pose: 0.806484 loss: 328.605611 2022/10/13 04:36:10 - mmengine - INFO - Epoch(train) [154][200/586] lr: 5.000000e-03 eta: 5:54:09 time: 0.711183 data_time: 0.057364 memory: 12959 loss_kpt: 323.994501 acc_pose: 0.899619 loss: 323.994501 2022/10/13 04:36:46 - mmengine - INFO - Epoch(train) [154][250/586] lr: 5.000000e-03 eta: 5:53:38 time: 0.716343 data_time: 0.059281 memory: 12959 loss_kpt: 321.220217 acc_pose: 0.836333 loss: 321.220217 2022/10/13 04:37:21 - mmengine - INFO - Epoch(train) [154][300/586] lr: 5.000000e-03 eta: 5:53:07 time: 0.702254 data_time: 0.055125 memory: 12959 loss_kpt: 324.486707 acc_pose: 0.877397 loss: 324.486707 2022/10/13 04:37:51 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:37:57 - mmengine - INFO - Epoch(train) [154][350/586] lr: 5.000000e-03 eta: 5:52:37 time: 0.704281 data_time: 0.059088 memory: 12959 loss_kpt: 318.816320 acc_pose: 0.884590 loss: 318.816320 2022/10/13 04:38:32 - mmengine - INFO - Epoch(train) [154][400/586] lr: 5.000000e-03 eta: 5:52:06 time: 0.707864 data_time: 0.057316 memory: 12959 loss_kpt: 324.531886 acc_pose: 0.856511 loss: 324.531886 2022/10/13 04:39:07 - mmengine - INFO - Epoch(train) [154][450/586] lr: 5.000000e-03 eta: 5:51:35 time: 0.704105 data_time: 0.058950 memory: 12959 loss_kpt: 315.296535 acc_pose: 0.831162 loss: 315.296535 2022/10/13 04:39:42 - mmengine - INFO - Epoch(train) [154][500/586] lr: 5.000000e-03 eta: 5:51:04 time: 0.703863 data_time: 0.052038 memory: 12959 loss_kpt: 323.962858 acc_pose: 0.826523 loss: 323.962858 2022/10/13 04:40:18 - mmengine - INFO - Epoch(train) [154][550/586] lr: 5.000000e-03 eta: 5:50:33 time: 0.711659 data_time: 0.060186 memory: 12959 loss_kpt: 316.985714 acc_pose: 0.845218 loss: 316.985714 2022/10/13 04:40:43 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:41:17 - mmengine - INFO - Epoch(train) [155][50/586] lr: 5.000000e-03 eta: 5:49:31 time: 0.685936 data_time: 0.071388 memory: 12959 loss_kpt: 327.929578 acc_pose: 0.888171 loss: 327.929578 2022/10/13 04:41:50 - mmengine - INFO - Epoch(train) [155][100/586] lr: 5.000000e-03 eta: 5:48:59 time: 0.662108 data_time: 0.053381 memory: 12959 loss_kpt: 322.582554 acc_pose: 
0.882285 loss: 322.582554 2022/10/13 04:42:24 - mmengine - INFO - Epoch(train) [155][150/586] lr: 5.000000e-03 eta: 5:48:28 time: 0.665036 data_time: 0.060324 memory: 12959 loss_kpt: 318.820630 acc_pose: 0.817045 loss: 318.820630 2022/10/13 04:42:58 - mmengine - INFO - Epoch(train) [155][200/586] lr: 5.000000e-03 eta: 5:47:56 time: 0.687406 data_time: 0.055373 memory: 12959 loss_kpt: 325.543769 acc_pose: 0.817567 loss: 325.543769 2022/10/13 04:43:33 - mmengine - INFO - Epoch(train) [155][250/586] lr: 5.000000e-03 eta: 5:47:25 time: 0.691168 data_time: 0.058816 memory: 12959 loss_kpt: 326.219182 acc_pose: 0.746064 loss: 326.219182 2022/10/13 04:44:06 - mmengine - INFO - Epoch(train) [155][300/586] lr: 5.000000e-03 eta: 5:46:54 time: 0.675012 data_time: 0.055800 memory: 12959 loss_kpt: 318.096614 acc_pose: 0.794923 loss: 318.096614 2022/10/13 04:44:41 - mmengine - INFO - Epoch(train) [155][350/586] lr: 5.000000e-03 eta: 5:46:23 time: 0.690902 data_time: 0.055627 memory: 12959 loss_kpt: 323.461542 acc_pose: 0.810211 loss: 323.461542 2022/10/13 04:45:14 - mmengine - INFO - Epoch(train) [155][400/586] lr: 5.000000e-03 eta: 5:45:51 time: 0.672827 data_time: 0.054551 memory: 12959 loss_kpt: 323.194090 acc_pose: 0.864339 loss: 323.194090 2022/10/13 04:45:49 - mmengine - INFO - Epoch(train) [155][450/586] lr: 5.000000e-03 eta: 5:45:20 time: 0.686430 data_time: 0.057710 memory: 12959 loss_kpt: 325.997458 acc_pose: 0.779901 loss: 325.997458 2022/10/13 04:46:23 - mmengine - INFO - Epoch(train) [155][500/586] lr: 5.000000e-03 eta: 5:44:49 time: 0.673616 data_time: 0.056535 memory: 12959 loss_kpt: 321.275775 acc_pose: 0.754831 loss: 321.275775 2022/10/13 04:46:57 - mmengine - INFO - Epoch(train) [155][550/586] lr: 5.000000e-03 eta: 5:44:18 time: 0.697764 data_time: 0.055474 memory: 12959 loss_kpt: 325.400049 acc_pose: 0.813933 loss: 325.400049 2022/10/13 04:47:21 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:47:56 - mmengine - INFO - Epoch(train) [156][50/586] lr: 5.000000e-03 eta: 5:43:15 time: 0.687366 data_time: 0.068934 memory: 12959 loss_kpt: 321.206998 acc_pose: 0.779786 loss: 321.206998 2022/10/13 04:48:29 - mmengine - INFO - Epoch(train) [156][100/586] lr: 5.000000e-03 eta: 5:42:44 time: 0.674266 data_time: 0.057849 memory: 12959 loss_kpt: 327.765233 acc_pose: 0.813907 loss: 327.765233 2022/10/13 04:49:03 - mmengine - INFO - Epoch(train) [156][150/586] lr: 5.000000e-03 eta: 5:42:13 time: 0.679454 data_time: 0.055712 memory: 12959 loss_kpt: 327.945222 acc_pose: 0.884196 loss: 327.945222 2022/10/13 04:49:17 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:49:37 - mmengine - INFO - Epoch(train) [156][200/586] lr: 5.000000e-03 eta: 5:41:41 time: 0.675660 data_time: 0.055701 memory: 12959 loss_kpt: 320.450655 acc_pose: 0.858135 loss: 320.450655 2022/10/13 04:50:11 - mmengine - INFO - Epoch(train) [156][250/586] lr: 5.000000e-03 eta: 5:41:10 time: 0.667269 data_time: 0.054413 memory: 12959 loss_kpt: 323.180854 acc_pose: 0.862402 loss: 323.180854 2022/10/13 04:50:44 - mmengine - INFO - Epoch(train) [156][300/586] lr: 5.000000e-03 eta: 5:40:38 time: 0.662240 data_time: 0.059628 memory: 12959 loss_kpt: 322.915413 acc_pose: 0.925980 loss: 322.915413 2022/10/13 04:51:18 - mmengine - INFO - Epoch(train) [156][350/586] lr: 5.000000e-03 eta: 5:40:07 time: 0.683655 data_time: 0.055566 memory: 12959 loss_kpt: 320.861113 acc_pose: 0.866851 loss: 320.861113 2022/10/13 04:51:51 - mmengine - INFO - Epoch(train) 
[156][400/586] lr: 5.000000e-03 eta: 5:39:36 time: 0.667866 data_time: 0.058781 memory: 12959 loss_kpt: 319.309695 acc_pose: 0.789304 loss: 319.309695 2022/10/13 04:52:25 - mmengine - INFO - Epoch(train) [156][450/586] lr: 5.000000e-03 eta: 5:39:04 time: 0.680959 data_time: 0.058878 memory: 12959 loss_kpt: 324.716625 acc_pose: 0.735797 loss: 324.716625 2022/10/13 04:52:59 - mmengine - INFO - Epoch(train) [156][500/586] lr: 5.000000e-03 eta: 5:38:33 time: 0.667819 data_time: 0.061994 memory: 12959 loss_kpt: 322.275008 acc_pose: 0.859220 loss: 322.275008 2022/10/13 04:53:32 - mmengine - INFO - Epoch(train) [156][550/586] lr: 5.000000e-03 eta: 5:38:01 time: 0.672379 data_time: 0.058647 memory: 12959 loss_kpt: 323.429826 acc_pose: 0.822280 loss: 323.429826 2022/10/13 04:53:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 04:54:31 - mmengine - INFO - Epoch(train) [157][50/586] lr: 5.000000e-03 eta: 5:36:59 time: 0.697343 data_time: 0.068674 memory: 12959 loss_kpt: 318.196837 acc_pose: 0.794504 loss: 318.196837 2022/10/13 04:55:05 - mmengine - INFO - Epoch(train) [157][100/586] lr: 5.000000e-03 eta: 5:36:28 time: 0.693738 data_time: 0.055924 memory: 12959 loss_kpt: 323.155899 acc_pose: 0.905517 loss: 323.155899 2022/10/13 04:55:40 - mmengine - INFO - Epoch(train) [157][150/586] lr: 5.000000e-03 eta: 5:35:57 time: 0.681327 data_time: 0.061926 memory: 12959 loss_kpt: 325.422725 acc_pose: 0.881761 loss: 325.422725 2022/10/13 04:56:13 - mmengine - INFO - Epoch(train) [157][200/586] lr: 5.000000e-03 eta: 5:35:25 time: 0.667724 data_time: 0.055128 memory: 12959 loss_kpt: 323.788033 acc_pose: 0.836081 loss: 323.788033 2022/10/13 04:56:47 - mmengine - INFO - Epoch(train) [157][250/586] lr: 5.000000e-03 eta: 5:34:54 time: 0.683485 data_time: 0.059222 memory: 12959 loss_kpt: 320.409026 acc_pose: 0.863879 loss: 320.409026 2022/10/13 04:57:21 - mmengine - INFO - Epoch(train) [157][300/586] lr: 5.000000e-03 eta: 5:34:23 time: 0.680309 data_time: 0.056114 memory: 12959 loss_kpt: 315.323359 acc_pose: 0.839174 loss: 315.323359 2022/10/13 04:57:55 - mmengine - INFO - Epoch(train) [157][350/586] lr: 5.000000e-03 eta: 5:33:51 time: 0.674149 data_time: 0.057944 memory: 12959 loss_kpt: 322.000453 acc_pose: 0.878323 loss: 322.000453 2022/10/13 04:58:29 - mmengine - INFO - Epoch(train) [157][400/586] lr: 5.000000e-03 eta: 5:33:20 time: 0.674198 data_time: 0.051783 memory: 12959 loss_kpt: 324.595067 acc_pose: 0.801115 loss: 324.595067 2022/10/13 04:59:02 - mmengine - INFO - Epoch(train) [157][450/586] lr: 5.000000e-03 eta: 5:32:48 time: 0.673657 data_time: 0.061730 memory: 12959 loss_kpt: 326.299408 acc_pose: 0.920392 loss: 326.299408 2022/10/13 04:59:36 - mmengine - INFO - Epoch(train) [157][500/586] lr: 5.000000e-03 eta: 5:32:17 time: 0.672736 data_time: 0.052124 memory: 12959 loss_kpt: 322.720954 acc_pose: 0.826446 loss: 322.720954 2022/10/13 05:00:09 - mmengine - INFO - Epoch(train) [157][550/586] lr: 5.000000e-03 eta: 5:31:45 time: 0.664089 data_time: 0.058395 memory: 12959 loss_kpt: 328.203710 acc_pose: 0.839462 loss: 328.203710 2022/10/13 05:00:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:00:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:01:07 - mmengine - INFO - Epoch(train) [158][50/586] lr: 5.000000e-03 eta: 5:30:43 time: 0.682834 data_time: 0.072472 memory: 12959 loss_kpt: 320.699054 acc_pose: 0.857200 loss: 320.699054 2022/10/13 05:01:40 - mmengine - 
INFO - Epoch(train) [158][100/586] lr: 5.000000e-03 eta: 5:30:12 time: 0.661779 data_time: 0.056852 memory: 12959 loss_kpt: 324.664468 acc_pose: 0.851430 loss: 324.664468 2022/10/13 05:02:13 - mmengine - INFO - Epoch(train) [158][150/586] lr: 5.000000e-03 eta: 5:29:40 time: 0.668190 data_time: 0.052162 memory: 12959 loss_kpt: 324.398193 acc_pose: 0.767134 loss: 324.398193 2022/10/13 05:02:47 - mmengine - INFO - Epoch(train) [158][200/586] lr: 5.000000e-03 eta: 5:29:09 time: 0.668112 data_time: 0.056390 memory: 12959 loss_kpt: 321.459278 acc_pose: 0.869779 loss: 321.459278 2022/10/13 05:03:20 - mmengine - INFO - Epoch(train) [158][250/586] lr: 5.000000e-03 eta: 5:28:37 time: 0.671308 data_time: 0.060702 memory: 12959 loss_kpt: 330.452281 acc_pose: 0.845777 loss: 330.452281 2022/10/13 05:03:54 - mmengine - INFO - Epoch(train) [158][300/586] lr: 5.000000e-03 eta: 5:28:06 time: 0.668706 data_time: 0.055515 memory: 12959 loss_kpt: 330.007201 acc_pose: 0.859586 loss: 330.007201 2022/10/13 05:04:28 - mmengine - INFO - Epoch(train) [158][350/586] lr: 5.000000e-03 eta: 5:27:34 time: 0.677996 data_time: 0.055943 memory: 12959 loss_kpt: 317.260826 acc_pose: 0.849073 loss: 317.260826 2022/10/13 05:05:02 - mmengine - INFO - Epoch(train) [158][400/586] lr: 5.000000e-03 eta: 5:27:03 time: 0.690268 data_time: 0.057504 memory: 12959 loss_kpt: 321.687309 acc_pose: 0.865454 loss: 321.687309 2022/10/13 05:05:37 - mmengine - INFO - Epoch(train) [158][450/586] lr: 5.000000e-03 eta: 5:26:32 time: 0.689284 data_time: 0.060633 memory: 12959 loss_kpt: 325.294614 acc_pose: 0.888010 loss: 325.294614 2022/10/13 05:06:11 - mmengine - INFO - Epoch(train) [158][500/586] lr: 5.000000e-03 eta: 5:26:01 time: 0.694903 data_time: 0.059225 memory: 12959 loss_kpt: 323.985699 acc_pose: 0.830258 loss: 323.985699 2022/10/13 05:06:46 - mmengine - INFO - Epoch(train) [158][550/586] lr: 5.000000e-03 eta: 5:25:30 time: 0.699096 data_time: 0.057141 memory: 12959 loss_kpt: 320.481195 acc_pose: 0.848820 loss: 320.481195 2022/10/13 05:07:11 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:07:46 - mmengine - INFO - Epoch(train) [159][50/586] lr: 5.000000e-03 eta: 5:24:28 time: 0.692948 data_time: 0.068309 memory: 12959 loss_kpt: 317.175471 acc_pose: 0.827799 loss: 317.175471 2022/10/13 05:08:19 - mmengine - INFO - Epoch(train) [159][100/586] lr: 5.000000e-03 eta: 5:23:56 time: 0.665027 data_time: 0.057585 memory: 12959 loss_kpt: 321.551782 acc_pose: 0.886553 loss: 321.551782 2022/10/13 05:08:52 - mmengine - INFO - Epoch(train) [159][150/586] lr: 5.000000e-03 eta: 5:23:25 time: 0.668708 data_time: 0.052821 memory: 12959 loss_kpt: 322.978223 acc_pose: 0.881685 loss: 322.978223 2022/10/13 05:09:26 - mmengine - INFO - Epoch(train) [159][200/586] lr: 5.000000e-03 eta: 5:22:53 time: 0.663973 data_time: 0.053656 memory: 12959 loss_kpt: 317.972028 acc_pose: 0.792943 loss: 317.972028 2022/10/13 05:09:59 - mmengine - INFO - Epoch(train) [159][250/586] lr: 5.000000e-03 eta: 5:22:22 time: 0.660399 data_time: 0.057514 memory: 12959 loss_kpt: 314.048124 acc_pose: 0.826790 loss: 314.048124 2022/10/13 05:10:32 - mmengine - INFO - Epoch(train) [159][300/586] lr: 5.000000e-03 eta: 5:21:50 time: 0.659079 data_time: 0.052926 memory: 12959 loss_kpt: 322.477241 acc_pose: 0.784477 loss: 322.477241 2022/10/13 05:11:05 - mmengine - INFO - Epoch(train) [159][350/586] lr: 5.000000e-03 eta: 5:21:18 time: 0.658743 data_time: 0.056556 memory: 12959 loss_kpt: 319.691854 acc_pose: 0.757589 loss: 319.691854 2022/10/13 
05:11:37 - mmengine - INFO - Epoch(train) [159][400/586] lr: 5.000000e-03 eta: 5:20:46 time: 0.655612 data_time: 0.056915 memory: 12959 loss_kpt: 323.710667 acc_pose: 0.880743 loss: 323.710667 2022/10/13 05:11:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:12:10 - mmengine - INFO - Epoch(train) [159][450/586] lr: 5.000000e-03 eta: 5:20:15 time: 0.659182 data_time: 0.056669 memory: 12959 loss_kpt: 325.957503 acc_pose: 0.837083 loss: 325.957503 2022/10/13 05:12:43 - mmengine - INFO - Epoch(train) [159][500/586] lr: 5.000000e-03 eta: 5:19:43 time: 0.660147 data_time: 0.060411 memory: 12959 loss_kpt: 315.890570 acc_pose: 0.741249 loss: 315.890570 2022/10/13 05:13:17 - mmengine - INFO - Epoch(train) [159][550/586] lr: 5.000000e-03 eta: 5:19:12 time: 0.667086 data_time: 0.057374 memory: 12959 loss_kpt: 317.992307 acc_pose: 0.868305 loss: 317.992307 2022/10/13 05:13:40 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:14:15 - mmengine - INFO - Epoch(train) [160][50/586] lr: 5.000000e-03 eta: 5:18:10 time: 0.687183 data_time: 0.069810 memory: 12959 loss_kpt: 326.008870 acc_pose: 0.745580 loss: 326.008870 2022/10/13 05:14:49 - mmengine - INFO - Epoch(train) [160][100/586] lr: 5.000000e-03 eta: 5:17:38 time: 0.676549 data_time: 0.054994 memory: 12959 loss_kpt: 320.592133 acc_pose: 0.881847 loss: 320.592133 2022/10/13 05:15:23 - mmengine - INFO - Epoch(train) [160][150/586] lr: 5.000000e-03 eta: 5:17:07 time: 0.681969 data_time: 0.062628 memory: 12959 loss_kpt: 318.874907 acc_pose: 0.881042 loss: 318.874907 2022/10/13 05:15:57 - mmengine - INFO - Epoch(train) [160][200/586] lr: 5.000000e-03 eta: 5:16:36 time: 0.680132 data_time: 0.057534 memory: 12959 loss_kpt: 327.648942 acc_pose: 0.866080 loss: 327.648942 2022/10/13 05:16:32 - mmengine - INFO - Epoch(train) [160][250/586] lr: 5.000000e-03 eta: 5:16:05 time: 0.697987 data_time: 0.063441 memory: 12959 loss_kpt: 324.438433 acc_pose: 0.826947 loss: 324.438433 2022/10/13 05:17:06 - mmengine - INFO - Epoch(train) [160][300/586] lr: 5.000000e-03 eta: 5:15:33 time: 0.686912 data_time: 0.058418 memory: 12959 loss_kpt: 319.379947 acc_pose: 0.867208 loss: 319.379947 2022/10/13 05:17:41 - mmengine - INFO - Epoch(train) [160][350/586] lr: 5.000000e-03 eta: 5:15:02 time: 0.696556 data_time: 0.062541 memory: 12959 loss_kpt: 320.179523 acc_pose: 0.822275 loss: 320.179523 2022/10/13 05:18:15 - mmengine - INFO - Epoch(train) [160][400/586] lr: 5.000000e-03 eta: 5:14:31 time: 0.687585 data_time: 0.058732 memory: 12959 loss_kpt: 322.333163 acc_pose: 0.819161 loss: 322.333163 2022/10/13 05:18:50 - mmengine - INFO - Epoch(train) [160][450/586] lr: 5.000000e-03 eta: 5:14:00 time: 0.688987 data_time: 0.056289 memory: 12959 loss_kpt: 320.656078 acc_pose: 0.858606 loss: 320.656078 2022/10/13 05:19:24 - mmengine - INFO - Epoch(train) [160][500/586] lr: 5.000000e-03 eta: 5:13:29 time: 0.693192 data_time: 0.054115 memory: 12959 loss_kpt: 321.785298 acc_pose: 0.866365 loss: 321.785298 2022/10/13 05:19:59 - mmengine - INFO - Epoch(train) [160][550/586] lr: 5.000000e-03 eta: 5:12:58 time: 0.692786 data_time: 0.060218 memory: 12959 loss_kpt: 317.402688 acc_pose: 0.823611 loss: 317.402688 2022/10/13 05:20:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:20:24 - mmengine - INFO - Saving checkpoint at 160 epochs 2022/10/13 05:20:41 - mmengine - INFO - Epoch(val) [160][50/407] eta: 0:01:34 time: 0.265927 data_time: 0.012899 memory: 
12959 2022/10/13 05:20:54 - mmengine - INFO - Epoch(val) [160][100/407] eta: 0:01:19 time: 0.258407 data_time: 0.007366 memory: 2407 2022/10/13 05:21:07 - mmengine - INFO - Epoch(val) [160][150/407] eta: 0:01:07 time: 0.261025 data_time: 0.007749 memory: 2407 2022/10/13 05:21:20 - mmengine - INFO - Epoch(val) [160][200/407] eta: 0:00:53 time: 0.259769 data_time: 0.007654 memory: 2407 2022/10/13 05:21:33 - mmengine - INFO - Epoch(val) [160][250/407] eta: 0:00:40 time: 0.260993 data_time: 0.008173 memory: 2407 2022/10/13 05:21:47 - mmengine - INFO - Epoch(val) [160][300/407] eta: 0:00:28 time: 0.267202 data_time: 0.007452 memory: 2407 2022/10/13 05:22:00 - mmengine - INFO - Epoch(val) [160][350/407] eta: 0:00:15 time: 0.266293 data_time: 0.008007 memory: 2407 2022/10/13 05:22:13 - mmengine - INFO - Epoch(val) [160][400/407] eta: 0:00:01 time: 0.258818 data_time: 0.007650 memory: 2407 2022/10/13 05:22:27 - mmengine - INFO - Evaluating CocoMetric... 2022/10/13 05:22:43 - mmengine - INFO - Epoch(val) [160][407/407] coco/AP: 0.733696 coco/AP .5: 0.894266 coco/AP .75: 0.808501 coco/AP (M): 0.702216 coco/AP (L): 0.795350 coco/AR: 0.799418 coco/AR .5: 0.934981 coco/AR .75: 0.861146 coco/AR (M): 0.756132 coco/AR (L): 0.858937 2022/10/13 05:22:43 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_140.pth is removed 2022/10/13 05:22:45 - mmengine - INFO - The best checkpoint with 0.7337 coco/AP at 160 epoch is saved to best_coco/AP_epoch_160.pth. 2022/10/13 05:23:20 - mmengine - INFO - Epoch(train) [161][50/586] lr: 5.000000e-03 eta: 5:11:56 time: 0.700273 data_time: 0.067763 memory: 12959 loss_kpt: 318.067017 acc_pose: 0.852971 loss: 318.067017 2022/10/13 05:23:54 - mmengine - INFO - Epoch(train) [161][100/586] lr: 5.000000e-03 eta: 5:11:25 time: 0.681670 data_time: 0.056584 memory: 12959 loss_kpt: 321.408159 acc_pose: 0.854548 loss: 321.408159 2022/10/13 05:24:29 - mmengine - INFO - Epoch(train) [161][150/586] lr: 5.000000e-03 eta: 5:10:54 time: 0.695580 data_time: 0.061236 memory: 12959 loss_kpt: 317.812216 acc_pose: 0.739330 loss: 317.812216 2022/10/13 05:25:04 - mmengine - INFO - Epoch(train) [161][200/586] lr: 5.000000e-03 eta: 5:10:22 time: 0.689022 data_time: 0.060528 memory: 12959 loss_kpt: 313.121501 acc_pose: 0.887765 loss: 313.121501 2022/10/13 05:25:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:25:39 - mmengine - INFO - Epoch(train) [161][250/586] lr: 5.000000e-03 eta: 5:09:51 time: 0.701250 data_time: 0.058633 memory: 12959 loss_kpt: 318.666813 acc_pose: 0.857593 loss: 318.666813 2022/10/13 05:26:14 - mmengine - INFO - Epoch(train) [161][300/586] lr: 5.000000e-03 eta: 5:09:21 time: 0.712078 data_time: 0.054576 memory: 12959 loss_kpt: 318.551961 acc_pose: 0.878296 loss: 318.551961 2022/10/13 05:26:49 - mmengine - INFO - Epoch(train) [161][350/586] lr: 5.000000e-03 eta: 5:08:49 time: 0.690599 data_time: 0.059200 memory: 12959 loss_kpt: 322.736055 acc_pose: 0.878809 loss: 322.736055 2022/10/13 05:27:23 - mmengine - INFO - Epoch(train) [161][400/586] lr: 5.000000e-03 eta: 5:08:18 time: 0.691628 data_time: 0.054043 memory: 12959 loss_kpt: 321.594849 acc_pose: 0.761200 loss: 321.594849 2022/10/13 05:27:58 - mmengine - INFO - Epoch(train) [161][450/586] lr: 5.000000e-03 eta: 5:07:47 time: 0.682099 data_time: 0.057424 memory: 12959 loss_kpt: 326.291113 acc_pose: 0.928833 loss: 326.291113 2022/10/13 05:28:32 - mmengine - INFO - Epoch(train) 
[161][500/586] lr: 5.000000e-03 eta: 5:07:16 time: 0.689309 data_time: 0.059247 memory: 12959 loss_kpt: 319.655994 acc_pose: 0.864084 loss: 319.655994 2022/10/13 05:29:07 - mmengine - INFO - Epoch(train) [161][550/586] lr: 5.000000e-03 eta: 5:06:44 time: 0.694547 data_time: 0.060134 memory: 12959 loss_kpt: 319.747978 acc_pose: 0.870639 loss: 319.747978 2022/10/13 05:29:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:30:06 - mmengine - INFO - Epoch(train) [162][50/586] lr: 5.000000e-03 eta: 5:05:43 time: 0.679473 data_time: 0.067353 memory: 12959 loss_kpt: 318.624464 acc_pose: 0.859027 loss: 318.624464 2022/10/13 05:30:39 - mmengine - INFO - Epoch(train) [162][100/586] lr: 5.000000e-03 eta: 5:05:11 time: 0.667054 data_time: 0.052770 memory: 12959 loss_kpt: 315.655853 acc_pose: 0.781723 loss: 315.655853 2022/10/13 05:31:14 - mmengine - INFO - Epoch(train) [162][150/586] lr: 5.000000e-03 eta: 5:04:40 time: 0.692174 data_time: 0.054886 memory: 12959 loss_kpt: 328.299024 acc_pose: 0.771881 loss: 328.299024 2022/10/13 05:31:48 - mmengine - INFO - Epoch(train) [162][200/586] lr: 5.000000e-03 eta: 5:04:09 time: 0.697317 data_time: 0.057142 memory: 12959 loss_kpt: 316.851971 acc_pose: 0.879089 loss: 316.851971 2022/10/13 05:32:23 - mmengine - INFO - Epoch(train) [162][250/586] lr: 5.000000e-03 eta: 5:03:38 time: 0.682848 data_time: 0.058647 memory: 12959 loss_kpt: 321.719227 acc_pose: 0.856692 loss: 321.719227 2022/10/13 05:32:57 - mmengine - INFO - Epoch(train) [162][300/586] lr: 5.000000e-03 eta: 5:03:06 time: 0.678448 data_time: 0.057718 memory: 12959 loss_kpt: 325.652473 acc_pose: 0.821426 loss: 325.652473 2022/10/13 05:33:31 - mmengine - INFO - Epoch(train) [162][350/586] lr: 5.000000e-03 eta: 5:02:35 time: 0.688460 data_time: 0.063342 memory: 12959 loss_kpt: 317.571552 acc_pose: 0.857378 loss: 317.571552 2022/10/13 05:34:05 - mmengine - INFO - Epoch(train) [162][400/586] lr: 5.000000e-03 eta: 5:02:03 time: 0.675330 data_time: 0.055141 memory: 12959 loss_kpt: 314.002310 acc_pose: 0.795226 loss: 314.002310 2022/10/13 05:34:38 - mmengine - INFO - Epoch(train) [162][450/586] lr: 5.000000e-03 eta: 5:01:32 time: 0.668984 data_time: 0.056143 memory: 12959 loss_kpt: 320.897997 acc_pose: 0.833725 loss: 320.897997 2022/10/13 05:35:12 - mmengine - INFO - Epoch(train) [162][500/586] lr: 5.000000e-03 eta: 5:01:00 time: 0.665031 data_time: 0.053852 memory: 12959 loss_kpt: 320.607838 acc_pose: 0.833436 loss: 320.607838 2022/10/13 05:35:45 - mmengine - INFO - Epoch(train) [162][550/586] lr: 5.000000e-03 eta: 5:00:29 time: 0.675992 data_time: 0.055998 memory: 12959 loss_kpt: 323.320050 acc_pose: 0.821862 loss: 323.320050 2022/10/13 05:36:09 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:36:44 - mmengine - INFO - Epoch(train) [163][50/586] lr: 5.000000e-03 eta: 4:59:28 time: 0.701182 data_time: 0.069935 memory: 12959 loss_kpt: 320.812266 acc_pose: 0.829076 loss: 320.812266 2022/10/13 05:36:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:37:18 - mmengine - INFO - Epoch(train) [163][100/586] lr: 5.000000e-03 eta: 4:58:57 time: 0.692143 data_time: 0.054812 memory: 12959 loss_kpt: 322.857139 acc_pose: 0.862334 loss: 322.857139 2022/10/13 05:37:54 - mmengine - INFO - Epoch(train) [163][150/586] lr: 5.000000e-03 eta: 4:58:25 time: 0.702757 data_time: 0.059432 memory: 12959 loss_kpt: 317.619888 acc_pose: 0.869992 loss: 317.619888 2022/10/13 05:38:29 - mmengine - 
INFO - Epoch(train) [163][200/586] lr: 5.000000e-03 eta: 4:57:54 time: 0.699388 data_time: 0.051097 memory: 12959 loss_kpt: 323.199684 acc_pose: 0.843107 loss: 323.199684 2022/10/13 05:39:03 - mmengine - INFO - Epoch(train) [163][250/586] lr: 5.000000e-03 eta: 4:57:23 time: 0.688000 data_time: 0.057375 memory: 12959 loss_kpt: 321.405621 acc_pose: 0.892091 loss: 321.405621 2022/10/13 05:39:37 - mmengine - INFO - Epoch(train) [163][300/586] lr: 5.000000e-03 eta: 4:56:52 time: 0.688141 data_time: 0.052050 memory: 12959 loss_kpt: 319.679221 acc_pose: 0.792075 loss: 319.679221 2022/10/13 05:40:12 - mmengine - INFO - Epoch(train) [163][350/586] lr: 5.000000e-03 eta: 4:56:21 time: 0.696133 data_time: 0.056478 memory: 12959 loss_kpt: 321.032559 acc_pose: 0.843760 loss: 321.032559 2022/10/13 05:40:47 - mmengine - INFO - Epoch(train) [163][400/586] lr: 5.000000e-03 eta: 4:55:49 time: 0.689951 data_time: 0.051450 memory: 12959 loss_kpt: 319.095212 acc_pose: 0.844226 loss: 319.095212 2022/10/13 05:41:22 - mmengine - INFO - Epoch(train) [163][450/586] lr: 5.000000e-03 eta: 4:55:18 time: 0.698262 data_time: 0.059134 memory: 12959 loss_kpt: 318.784299 acc_pose: 0.891358 loss: 318.784299 2022/10/13 05:41:56 - mmengine - INFO - Epoch(train) [163][500/586] lr: 5.000000e-03 eta: 4:54:47 time: 0.692717 data_time: 0.053565 memory: 12959 loss_kpt: 320.631520 acc_pose: 0.813441 loss: 320.631520 2022/10/13 05:42:31 - mmengine - INFO - Epoch(train) [163][550/586] lr: 5.000000e-03 eta: 4:54:16 time: 0.702113 data_time: 0.058966 memory: 12959 loss_kpt: 317.385640 acc_pose: 0.849331 loss: 317.385640 2022/10/13 05:42:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:43:31 - mmengine - INFO - Epoch(train) [164][50/586] lr: 5.000000e-03 eta: 4:53:15 time: 0.691299 data_time: 0.061850 memory: 12959 loss_kpt: 309.428681 acc_pose: 0.790396 loss: 309.428681 2022/10/13 05:44:05 - mmengine - INFO - Epoch(train) [164][100/586] lr: 5.000000e-03 eta: 4:52:43 time: 0.676006 data_time: 0.056830 memory: 12959 loss_kpt: 320.746415 acc_pose: 0.823942 loss: 320.746415 2022/10/13 05:44:38 - mmengine - INFO - Epoch(train) [164][150/586] lr: 5.000000e-03 eta: 4:52:12 time: 0.672625 data_time: 0.054332 memory: 12959 loss_kpt: 316.647865 acc_pose: 0.816569 loss: 316.647865 2022/10/13 05:45:12 - mmengine - INFO - Epoch(train) [164][200/586] lr: 5.000000e-03 eta: 4:51:40 time: 0.670047 data_time: 0.056229 memory: 12959 loss_kpt: 317.541729 acc_pose: 0.864901 loss: 317.541729 2022/10/13 05:45:45 - mmengine - INFO - Epoch(train) [164][250/586] lr: 5.000000e-03 eta: 4:51:09 time: 0.666618 data_time: 0.054775 memory: 12959 loss_kpt: 319.236705 acc_pose: 0.807841 loss: 319.236705 2022/10/13 05:46:19 - mmengine - INFO - Epoch(train) [164][300/586] lr: 5.000000e-03 eta: 4:50:37 time: 0.680220 data_time: 0.054852 memory: 12959 loss_kpt: 323.306175 acc_pose: 0.831951 loss: 323.306175 2022/10/13 05:46:53 - mmengine - INFO - Epoch(train) [164][350/586] lr: 5.000000e-03 eta: 4:50:06 time: 0.680842 data_time: 0.053051 memory: 12959 loss_kpt: 322.492128 acc_pose: 0.828842 loss: 322.492128 2022/10/13 05:47:27 - mmengine - INFO - Epoch(train) [164][400/586] lr: 5.000000e-03 eta: 4:49:34 time: 0.677000 data_time: 0.055948 memory: 12959 loss_kpt: 318.175032 acc_pose: 0.862978 loss: 318.175032 2022/10/13 05:48:01 - mmengine - INFO - Epoch(train) [164][450/586] lr: 5.000000e-03 eta: 4:49:03 time: 0.679870 data_time: 0.058455 memory: 12959 loss_kpt: 321.480860 acc_pose: 0.888663 loss: 321.480860 2022/10/13 
05:48:23 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:48:35 - mmengine - INFO - Epoch(train) [164][500/586] lr: 5.000000e-03 eta: 4:48:31 time: 0.673925 data_time: 0.054598 memory: 12959 loss_kpt: 314.108480 acc_pose: 0.908720 loss: 314.108480 2022/10/13 05:49:09 - mmengine - INFO - Epoch(train) [164][550/586] lr: 5.000000e-03 eta: 4:48:00 time: 0.691641 data_time: 0.054731 memory: 12959 loss_kpt: 321.803989 acc_pose: 0.874162 loss: 321.803989 2022/10/13 05:49:33 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:50:08 - mmengine - INFO - Epoch(train) [165][50/586] lr: 5.000000e-03 eta: 4:46:59 time: 0.695099 data_time: 0.063495 memory: 12959 loss_kpt: 315.987881 acc_pose: 0.908026 loss: 315.987881 2022/10/13 05:50:42 - mmengine - INFO - Epoch(train) [165][100/586] lr: 5.000000e-03 eta: 4:46:28 time: 0.677226 data_time: 0.056779 memory: 12959 loss_kpt: 317.888401 acc_pose: 0.838529 loss: 317.888401 2022/10/13 05:51:15 - mmengine - INFO - Epoch(train) [165][150/586] lr: 5.000000e-03 eta: 4:45:56 time: 0.662863 data_time: 0.057982 memory: 12959 loss_kpt: 319.922780 acc_pose: 0.833802 loss: 319.922780 2022/10/13 05:51:49 - mmengine - INFO - Epoch(train) [165][200/586] lr: 5.000000e-03 eta: 4:45:25 time: 0.678956 data_time: 0.058362 memory: 12959 loss_kpt: 322.258003 acc_pose: 0.825893 loss: 322.258003 2022/10/13 05:52:23 - mmengine - INFO - Epoch(train) [165][250/586] lr: 5.000000e-03 eta: 4:44:53 time: 0.672644 data_time: 0.057350 memory: 12959 loss_kpt: 319.885997 acc_pose: 0.883695 loss: 319.885997 2022/10/13 05:52:56 - mmengine - INFO - Epoch(train) [165][300/586] lr: 5.000000e-03 eta: 4:44:21 time: 0.671647 data_time: 0.053651 memory: 12959 loss_kpt: 321.798923 acc_pose: 0.785100 loss: 321.798923 2022/10/13 05:53:30 - mmengine - INFO - Epoch(train) [165][350/586] lr: 5.000000e-03 eta: 4:43:50 time: 0.667433 data_time: 0.057870 memory: 12959 loss_kpt: 327.262250 acc_pose: 0.849153 loss: 327.262250 2022/10/13 05:54:03 - mmengine - INFO - Epoch(train) [165][400/586] lr: 5.000000e-03 eta: 4:43:18 time: 0.672875 data_time: 0.056403 memory: 12959 loss_kpt: 319.580916 acc_pose: 0.821805 loss: 319.580916 2022/10/13 05:54:37 - mmengine - INFO - Epoch(train) [165][450/586] lr: 5.000000e-03 eta: 4:42:47 time: 0.667798 data_time: 0.059287 memory: 12959 loss_kpt: 315.629323 acc_pose: 0.794286 loss: 315.629323 2022/10/13 05:55:10 - mmengine - INFO - Epoch(train) [165][500/586] lr: 5.000000e-03 eta: 4:42:15 time: 0.670103 data_time: 0.052229 memory: 12959 loss_kpt: 315.370594 acc_pose: 0.788858 loss: 315.370594 2022/10/13 05:55:44 - mmengine - INFO - Epoch(train) [165][550/586] lr: 5.000000e-03 eta: 4:41:44 time: 0.673544 data_time: 0.055732 memory: 12959 loss_kpt: 324.488010 acc_pose: 0.885598 loss: 324.488010 2022/10/13 05:56:08 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 05:56:43 - mmengine - INFO - Epoch(train) [166][50/586] lr: 5.000000e-03 eta: 4:40:43 time: 0.700427 data_time: 0.070229 memory: 12959 loss_kpt: 314.958569 acc_pose: 0.834179 loss: 314.958569 2022/10/13 05:57:18 - mmengine - INFO - Epoch(train) [166][100/586] lr: 5.000000e-03 eta: 4:40:12 time: 0.706901 data_time: 0.057315 memory: 12959 loss_kpt: 321.034410 acc_pose: 0.838769 loss: 321.034410 2022/10/13 05:57:53 - mmengine - INFO - Epoch(train) [166][150/586] lr: 5.000000e-03 eta: 4:39:41 time: 0.703295 data_time: 0.063877 memory: 12959 loss_kpt: 318.213967 acc_pose: 0.805995 loss: 
318.213967 2022/10/13 05:58:29 - mmengine - INFO - Epoch(train) [166][200/586] lr: 5.000000e-03 eta: 4:39:10 time: 0.711474 data_time: 0.055359 memory: 12959 loss_kpt: 320.661110 acc_pose: 0.803657 loss: 320.661110 2022/10/13 05:59:04 - mmengine - INFO - Epoch(train) [166][250/586] lr: 5.000000e-03 eta: 4:38:39 time: 0.709427 data_time: 0.056552 memory: 12959 loss_kpt: 318.570538 acc_pose: 0.824767 loss: 318.570538 2022/10/13 05:59:39 - mmengine - INFO - Epoch(train) [166][300/586] lr: 5.000000e-03 eta: 4:38:07 time: 0.695415 data_time: 0.057465 memory: 12959 loss_kpt: 320.063650 acc_pose: 0.838113 loss: 320.063650 2022/10/13 05:59:46 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:00:14 - mmengine - INFO - Epoch(train) [166][350/586] lr: 5.000000e-03 eta: 4:37:36 time: 0.698491 data_time: 0.057138 memory: 12959 loss_kpt: 323.890286 acc_pose: 0.902445 loss: 323.890286 2022/10/13 06:00:49 - mmengine - INFO - Epoch(train) [166][400/586] lr: 5.000000e-03 eta: 4:37:05 time: 0.702897 data_time: 0.056374 memory: 12959 loss_kpt: 313.941587 acc_pose: 0.886990 loss: 313.941587 2022/10/13 06:01:24 - mmengine - INFO - Epoch(train) [166][450/586] lr: 5.000000e-03 eta: 4:36:34 time: 0.695857 data_time: 0.058359 memory: 12959 loss_kpt: 321.284979 acc_pose: 0.834642 loss: 321.284979 2022/10/13 06:01:59 - mmengine - INFO - Epoch(train) [166][500/586] lr: 5.000000e-03 eta: 4:36:03 time: 0.700319 data_time: 0.060422 memory: 12959 loss_kpt: 323.119084 acc_pose: 0.875874 loss: 323.119084 2022/10/13 06:02:34 - mmengine - INFO - Epoch(train) [166][550/586] lr: 5.000000e-03 eta: 4:35:31 time: 0.698970 data_time: 0.058040 memory: 12959 loss_kpt: 312.180358 acc_pose: 0.823163 loss: 312.180358 2022/10/13 06:02:59 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:03:33 - mmengine - INFO - Epoch(train) [167][50/586] lr: 5.000000e-03 eta: 4:34:31 time: 0.688058 data_time: 0.063837 memory: 12959 loss_kpt: 314.947256 acc_pose: 0.780289 loss: 314.947256 2022/10/13 06:04:07 - mmengine - INFO - Epoch(train) [167][100/586] lr: 5.000000e-03 eta: 4:33:59 time: 0.672622 data_time: 0.053259 memory: 12959 loss_kpt: 319.945297 acc_pose: 0.807455 loss: 319.945297 2022/10/13 06:04:41 - mmengine - INFO - Epoch(train) [167][150/586] lr: 5.000000e-03 eta: 4:33:28 time: 0.686489 data_time: 0.059854 memory: 12959 loss_kpt: 314.051626 acc_pose: 0.864252 loss: 314.051626 2022/10/13 06:05:16 - mmengine - INFO - Epoch(train) [167][200/586] lr: 5.000000e-03 eta: 4:32:56 time: 0.685767 data_time: 0.057811 memory: 12959 loss_kpt: 321.684726 acc_pose: 0.803286 loss: 321.684726 2022/10/13 06:05:50 - mmengine - INFO - Epoch(train) [167][250/586] lr: 5.000000e-03 eta: 4:32:25 time: 0.676240 data_time: 0.055898 memory: 12959 loss_kpt: 314.814183 acc_pose: 0.871619 loss: 314.814183 2022/10/13 06:06:24 - mmengine - INFO - Epoch(train) [167][300/586] lr: 5.000000e-03 eta: 4:31:53 time: 0.680973 data_time: 0.052084 memory: 12959 loss_kpt: 318.151182 acc_pose: 0.808803 loss: 318.151182 2022/10/13 06:06:58 - mmengine - INFO - Epoch(train) [167][350/586] lr: 5.000000e-03 eta: 4:31:22 time: 0.679427 data_time: 0.058048 memory: 12959 loss_kpt: 321.448668 acc_pose: 0.825387 loss: 321.448668 2022/10/13 06:07:31 - mmengine - INFO - Epoch(train) [167][400/586] lr: 5.000000e-03 eta: 4:30:50 time: 0.670271 data_time: 0.055724 memory: 12959 loss_kpt: 319.685732 acc_pose: 0.856765 loss: 319.685732 2022/10/13 06:08:06 - mmengine - INFO - Epoch(train) [167][450/586] lr: 
5.000000e-03 eta: 4:30:19 time: 0.690871 data_time: 0.053396 memory: 12959 loss_kpt: 315.180212 acc_pose: 0.870427 loss: 315.180212 2022/10/13 06:08:40 - mmengine - INFO - Epoch(train) [167][500/586] lr: 5.000000e-03 eta: 4:29:47 time: 0.680770 data_time: 0.055134 memory: 12959 loss_kpt: 321.523879 acc_pose: 0.895574 loss: 321.523879 2022/10/13 06:09:15 - mmengine - INFO - Epoch(train) [167][550/586] lr: 5.000000e-03 eta: 4:29:16 time: 0.699519 data_time: 0.053523 memory: 12959 loss_kpt: 322.028718 acc_pose: 0.819816 loss: 322.028718 2022/10/13 06:09:39 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:10:13 - mmengine - INFO - Epoch(train) [168][50/586] lr: 5.000000e-03 eta: 4:28:16 time: 0.681723 data_time: 0.067942 memory: 12959 loss_kpt: 320.192284 acc_pose: 0.842119 loss: 320.192284 2022/10/13 06:10:47 - mmengine - INFO - Epoch(train) [168][100/586] lr: 5.000000e-03 eta: 4:27:44 time: 0.673118 data_time: 0.056418 memory: 12959 loss_kpt: 322.144225 acc_pose: 0.846803 loss: 322.144225 2022/10/13 06:11:13 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:11:21 - mmengine - INFO - Epoch(train) [168][150/586] lr: 5.000000e-03 eta: 4:27:13 time: 0.688334 data_time: 0.056023 memory: 12959 loss_kpt: 324.067618 acc_pose: 0.832672 loss: 324.067618 2022/10/13 06:11:55 - mmengine - INFO - Epoch(train) [168][200/586] lr: 5.000000e-03 eta: 4:26:41 time: 0.674673 data_time: 0.055306 memory: 12959 loss_kpt: 321.085770 acc_pose: 0.805878 loss: 321.085770 2022/10/13 06:12:28 - mmengine - INFO - Epoch(train) [168][250/586] lr: 5.000000e-03 eta: 4:26:10 time: 0.664651 data_time: 0.059504 memory: 12959 loss_kpt: 321.041583 acc_pose: 0.827640 loss: 321.041583 2022/10/13 06:13:02 - mmengine - INFO - Epoch(train) [168][300/586] lr: 5.000000e-03 eta: 4:25:38 time: 0.668727 data_time: 0.052472 memory: 12959 loss_kpt: 320.842731 acc_pose: 0.848770 loss: 320.842731 2022/10/13 06:13:36 - mmengine - INFO - Epoch(train) [168][350/586] lr: 5.000000e-03 eta: 4:25:06 time: 0.679603 data_time: 0.060258 memory: 12959 loss_kpt: 319.842116 acc_pose: 0.859586 loss: 319.842116 2022/10/13 06:14:08 - mmengine - INFO - Epoch(train) [168][400/586] lr: 5.000000e-03 eta: 4:24:35 time: 0.655860 data_time: 0.055454 memory: 12959 loss_kpt: 314.725865 acc_pose: 0.858748 loss: 314.725865 2022/10/13 06:14:42 - mmengine - INFO - Epoch(train) [168][450/586] lr: 5.000000e-03 eta: 4:24:03 time: 0.667612 data_time: 0.057860 memory: 12959 loss_kpt: 318.225233 acc_pose: 0.916437 loss: 318.225233 2022/10/13 06:15:16 - mmengine - INFO - Epoch(train) [168][500/586] lr: 5.000000e-03 eta: 4:23:31 time: 0.674568 data_time: 0.058003 memory: 12959 loss_kpt: 315.795721 acc_pose: 0.829803 loss: 315.795721 2022/10/13 06:15:49 - mmengine - INFO - Epoch(train) [168][550/586] lr: 5.000000e-03 eta: 4:23:00 time: 0.660652 data_time: 0.055966 memory: 12959 loss_kpt: 316.704511 acc_pose: 0.870716 loss: 316.704511 2022/10/13 06:16:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:16:47 - mmengine - INFO - Epoch(train) [169][50/586] lr: 5.000000e-03 eta: 4:21:59 time: 0.689878 data_time: 0.069662 memory: 12959 loss_kpt: 321.577533 acc_pose: 0.876685 loss: 321.577533 2022/10/13 06:17:21 - mmengine - INFO - Epoch(train) [169][100/586] lr: 5.000000e-03 eta: 4:21:28 time: 0.679458 data_time: 0.056408 memory: 12959 loss_kpt: 312.797706 acc_pose: 0.854055 loss: 312.797706 2022/10/13 06:17:56 - mmengine - INFO - Epoch(train) 
[169][150/586] lr: 5.000000e-03 eta: 4:20:56 time: 0.689878 data_time: 0.060597 memory: 12959 loss_kpt: 316.367488 acc_pose: 0.861365 loss: 316.367488 2022/10/13 06:18:29 - mmengine - INFO - Epoch(train) [169][200/586] lr: 5.000000e-03 eta: 4:20:25 time: 0.669694 data_time: 0.053013 memory: 12959 loss_kpt: 316.665291 acc_pose: 0.887244 loss: 316.665291 2022/10/13 06:19:04 - mmengine - INFO - Epoch(train) [169][250/586] lr: 5.000000e-03 eta: 4:19:53 time: 0.690022 data_time: 0.059955 memory: 12959 loss_kpt: 326.309597 acc_pose: 0.846801 loss: 326.309597 2022/10/13 06:19:38 - mmengine - INFO - Epoch(train) [169][300/586] lr: 5.000000e-03 eta: 4:19:22 time: 0.692224 data_time: 0.052432 memory: 12959 loss_kpt: 326.551390 acc_pose: 0.890884 loss: 326.551390 2022/10/13 06:20:13 - mmengine - INFO - Epoch(train) [169][350/586] lr: 5.000000e-03 eta: 4:18:51 time: 0.688043 data_time: 0.056971 memory: 12959 loss_kpt: 314.821085 acc_pose: 0.879644 loss: 314.821085 2022/10/13 06:20:47 - mmengine - INFO - Epoch(train) [169][400/586] lr: 5.000000e-03 eta: 4:18:19 time: 0.681349 data_time: 0.058835 memory: 12959 loss_kpt: 317.929860 acc_pose: 0.815883 loss: 317.929860 2022/10/13 06:21:21 - mmengine - INFO - Epoch(train) [169][450/586] lr: 5.000000e-03 eta: 4:17:48 time: 0.693889 data_time: 0.065512 memory: 12959 loss_kpt: 311.571113 acc_pose: 0.848016 loss: 311.571113 2022/10/13 06:21:56 - mmengine - INFO - Epoch(train) [169][500/586] lr: 5.000000e-03 eta: 4:17:16 time: 0.688358 data_time: 0.056587 memory: 12959 loss_kpt: 317.020081 acc_pose: 0.855785 loss: 317.020081 2022/10/13 06:22:31 - mmengine - INFO - Epoch(train) [169][550/586] lr: 5.000000e-03 eta: 4:16:45 time: 0.699678 data_time: 0.064819 memory: 12959 loss_kpt: 318.120405 acc_pose: 0.914238 loss: 318.120405 2022/10/13 06:22:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:22:55 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:23:30 - mmengine - INFO - Epoch(train) [170][50/586] lr: 5.000000e-03 eta: 4:15:45 time: 0.691079 data_time: 0.069878 memory: 12959 loss_kpt: 316.731889 acc_pose: 0.862603 loss: 316.731889 2022/10/13 06:24:03 - mmengine - INFO - Epoch(train) [170][100/586] lr: 5.000000e-03 eta: 4:15:14 time: 0.675942 data_time: 0.051280 memory: 12959 loss_kpt: 321.996385 acc_pose: 0.836983 loss: 321.996385 2022/10/13 06:24:38 - mmengine - INFO - Epoch(train) [170][150/586] lr: 5.000000e-03 eta: 4:14:42 time: 0.683640 data_time: 0.055062 memory: 12959 loss_kpt: 316.602924 acc_pose: 0.810331 loss: 316.602924 2022/10/13 06:25:11 - mmengine - INFO - Epoch(train) [170][200/586] lr: 5.000000e-03 eta: 4:14:10 time: 0.675165 data_time: 0.059366 memory: 12959 loss_kpt: 316.555124 acc_pose: 0.877377 loss: 316.555124 2022/10/13 06:25:46 - mmengine - INFO - Epoch(train) [170][250/586] lr: 5.000000e-03 eta: 4:13:39 time: 0.690405 data_time: 0.056168 memory: 12959 loss_kpt: 323.253528 acc_pose: 0.836067 loss: 323.253528 2022/10/13 06:26:20 - mmengine - INFO - Epoch(train) [170][300/586] lr: 5.000000e-03 eta: 4:13:07 time: 0.674987 data_time: 0.055835 memory: 12959 loss_kpt: 316.813046 acc_pose: 0.824191 loss: 316.813046 2022/10/13 06:26:53 - mmengine - INFO - Epoch(train) [170][350/586] lr: 5.000000e-03 eta: 4:12:36 time: 0.671880 data_time: 0.055253 memory: 12959 loss_kpt: 320.271378 acc_pose: 0.863442 loss: 320.271378 2022/10/13 06:27:26 - mmengine - INFO - Epoch(train) [170][400/586] lr: 5.000000e-03 eta: 4:12:04 time: 0.661730 data_time: 0.052810 
memory: 12959 loss_kpt: 316.040023 acc_pose: 0.878493 loss: 316.040023 2022/10/13 06:28:00 - mmengine - INFO - Epoch(train) [170][450/586] lr: 5.000000e-03 eta: 4:11:33 time: 0.679289 data_time: 0.057280 memory: 12959 loss_kpt: 319.519780 acc_pose: 0.802124 loss: 319.519780 2022/10/13 06:28:34 - mmengine - INFO - Epoch(train) [170][500/586] lr: 5.000000e-03 eta: 4:11:01 time: 0.673244 data_time: 0.055545 memory: 12959 loss_kpt: 315.728708 acc_pose: 0.825259 loss: 315.728708 2022/10/13 06:29:07 - mmengine - INFO - Epoch(train) [170][550/586] lr: 5.000000e-03 eta: 4:10:29 time: 0.666195 data_time: 0.053458 memory: 12959 loss_kpt: 317.433325 acc_pose: 0.872360 loss: 317.433325 2022/10/13 06:29:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:29:31 - mmengine - INFO - Saving checkpoint at 170 epochs 2022/10/13 06:29:49 - mmengine - INFO - Epoch(val) [170][50/407] eta: 0:01:38 time: 0.275255 data_time: 0.012957 memory: 12959 2022/10/13 06:30:02 - mmengine - INFO - Epoch(val) [170][100/407] eta: 0:01:20 time: 0.262180 data_time: 0.007391 memory: 2407 2022/10/13 06:30:15 - mmengine - INFO - Epoch(val) [170][150/407] eta: 0:01:07 time: 0.264192 data_time: 0.007725 memory: 2407 2022/10/13 06:30:28 - mmengine - INFO - Epoch(val) [170][200/407] eta: 0:00:53 time: 0.260713 data_time: 0.007092 memory: 2407 2022/10/13 06:30:42 - mmengine - INFO - Epoch(val) [170][250/407] eta: 0:00:41 time: 0.262486 data_time: 0.007924 memory: 2407 2022/10/13 06:30:54 - mmengine - INFO - Epoch(val) [170][300/407] eta: 0:00:27 time: 0.258160 data_time: 0.007235 memory: 2407 2022/10/13 06:31:07 - mmengine - INFO - Epoch(val) [170][350/407] eta: 0:00:14 time: 0.256448 data_time: 0.007806 memory: 2407 2022/10/13 06:31:20 - mmengine - INFO - Epoch(val) [170][400/407] eta: 0:00:01 time: 0.257744 data_time: 0.007377 memory: 2407 2022/10/13 06:31:34 - mmengine - INFO - Evaluating CocoMetric... 2022/10/13 06:31:50 - mmengine - INFO - Epoch(val) [170][407/407] coco/AP: 0.734808 coco/AP .5: 0.894113 coco/AP .75: 0.806226 coco/AP (M): 0.702469 coco/AP (L): 0.797682 coco/AR: 0.799543 coco/AR .5: 0.934824 coco/AR .75: 0.859257 coco/AR (M): 0.756378 coco/AR (L): 0.859086 2022/10/13 06:31:50 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_160.pth is removed 2022/10/13 06:31:53 - mmengine - INFO - The best checkpoint with 0.7348 coco/AP at 170 epoch is saved to best_coco/AP_epoch_170.pth. 
2022/10/13 06:32:27 - mmengine - INFO - Epoch(train) [171][50/586] lr: 5.000000e-04 eta: 4:09:29 time: 0.692894 data_time: 0.064387 memory: 12959 loss_kpt: 317.164978 acc_pose: 0.752829 loss: 317.164978 2022/10/13 06:33:02 - mmengine - INFO - Epoch(train) [171][100/586] lr: 5.000000e-04 eta: 4:08:58 time: 0.690522 data_time: 0.058940 memory: 12959 loss_kpt: 314.851350 acc_pose: 0.876147 loss: 314.851350 2022/10/13 06:33:37 - mmengine - INFO - Epoch(train) [171][150/586] lr: 5.000000e-04 eta: 4:08:27 time: 0.708824 data_time: 0.059114 memory: 12959 loss_kpt: 310.808997 acc_pose: 0.870607 loss: 310.808997 2022/10/13 06:34:13 - mmengine - INFO - Epoch(train) [171][200/586] lr: 5.000000e-04 eta: 4:07:56 time: 0.708324 data_time: 0.055607 memory: 12959 loss_kpt: 307.446023 acc_pose: 0.813388 loss: 307.446023 2022/10/13 06:34:48 - mmengine - INFO - Epoch(train) [171][250/586] lr: 5.000000e-04 eta: 4:07:24 time: 0.711463 data_time: 0.061853 memory: 12959 loss_kpt: 310.671384 acc_pose: 0.845165 loss: 310.671384 2022/10/13 06:35:23 - mmengine - INFO - Epoch(train) [171][300/586] lr: 5.000000e-04 eta: 4:06:53 time: 0.702727 data_time: 0.052988 memory: 12959 loss_kpt: 312.776411 acc_pose: 0.841784 loss: 312.776411 2022/10/13 06:36:00 - mmengine - INFO - Epoch(train) [171][350/586] lr: 5.000000e-04 eta: 4:06:22 time: 0.731314 data_time: 0.057991 memory: 12959 loss_kpt: 316.203765 acc_pose: 0.841607 loss: 316.203765 2022/10/13 06:36:21 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:36:36 - mmengine - INFO - Epoch(train) [171][400/586] lr: 5.000000e-04 eta: 4:05:51 time: 0.713299 data_time: 0.058625 memory: 12959 loss_kpt: 311.552654 acc_pose: 0.767613 loss: 311.552654 2022/10/13 06:37:12 - mmengine - INFO - Epoch(train) [171][450/586] lr: 5.000000e-04 eta: 4:05:20 time: 0.723607 data_time: 0.058645 memory: 12959 loss_kpt: 313.535988 acc_pose: 0.870908 loss: 313.535988 2022/10/13 06:37:48 - mmengine - INFO - Epoch(train) [171][500/586] lr: 5.000000e-04 eta: 4:04:49 time: 0.722252 data_time: 0.052653 memory: 12959 loss_kpt: 308.613696 acc_pose: 0.843442 loss: 308.613696 2022/10/13 06:38:25 - mmengine - INFO - Epoch(train) [171][550/586] lr: 5.000000e-04 eta: 4:04:18 time: 0.736577 data_time: 0.062301 memory: 12959 loss_kpt: 306.581794 acc_pose: 0.878019 loss: 306.581794 2022/10/13 06:38:50 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:39:25 - mmengine - INFO - Epoch(train) [172][50/586] lr: 5.000000e-04 eta: 4:03:18 time: 0.687227 data_time: 0.067159 memory: 12959 loss_kpt: 308.383013 acc_pose: 0.884363 loss: 308.383013 2022/10/13 06:39:58 - mmengine - INFO - Epoch(train) [172][100/586] lr: 5.000000e-04 eta: 4:02:46 time: 0.668780 data_time: 0.058557 memory: 12959 loss_kpt: 307.749107 acc_pose: 0.823556 loss: 307.749107 2022/10/13 06:40:33 - mmengine - INFO - Epoch(train) [172][150/586] lr: 5.000000e-04 eta: 4:02:15 time: 0.686647 data_time: 0.059962 memory: 12959 loss_kpt: 310.505277 acc_pose: 0.852134 loss: 310.505277 2022/10/13 06:41:06 - mmengine - INFO - Epoch(train) [172][200/586] lr: 5.000000e-04 eta: 4:01:43 time: 0.676754 data_time: 0.064357 memory: 12959 loss_kpt: 306.437177 acc_pose: 0.809763 loss: 306.437177 2022/10/13 06:41:40 - mmengine - INFO - Epoch(train) [172][250/586] lr: 5.000000e-04 eta: 4:01:12 time: 0.674632 data_time: 0.058600 memory: 12959 loss_kpt: 300.204063 acc_pose: 0.856466 loss: 300.204063 2022/10/13 06:42:14 - mmengine - INFO - Epoch(train) [172][300/586] lr: 5.000000e-04 
eta: 4:00:40 time: 0.679241 data_time: 0.056412 memory: 12959 loss_kpt: 307.731950 acc_pose: 0.872956 loss: 307.731950 2022/10/13 06:42:48 - mmengine - INFO - Epoch(train) [172][350/586] lr: 5.000000e-04 eta: 4:00:09 time: 0.678012 data_time: 0.061106 memory: 12959 loss_kpt: 314.447297 acc_pose: 0.885467 loss: 314.447297 2022/10/13 06:43:22 - mmengine - INFO - Epoch(train) [172][400/586] lr: 5.000000e-04 eta: 3:59:37 time: 0.672001 data_time: 0.057732 memory: 12959 loss_kpt: 309.897325 acc_pose: 0.887643 loss: 309.897325 2022/10/13 06:43:56 - mmengine - INFO - Epoch(train) [172][450/586] lr: 5.000000e-04 eta: 3:59:05 time: 0.681197 data_time: 0.060912 memory: 12959 loss_kpt: 308.448236 acc_pose: 0.834581 loss: 308.448236 2022/10/13 06:44:29 - mmengine - INFO - Epoch(train) [172][500/586] lr: 5.000000e-04 eta: 3:58:34 time: 0.668090 data_time: 0.050909 memory: 12959 loss_kpt: 307.839451 acc_pose: 0.831459 loss: 307.839451 2022/10/13 06:45:03 - mmengine - INFO - Epoch(train) [172][550/586] lr: 5.000000e-04 eta: 3:58:02 time: 0.677130 data_time: 0.059073 memory: 12959 loss_kpt: 305.701676 acc_pose: 0.901667 loss: 305.701676 2022/10/13 06:45:27 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:46:02 - mmengine - INFO - Epoch(train) [173][50/586] lr: 5.000000e-04 eta: 3:57:03 time: 0.707746 data_time: 0.074522 memory: 12959 loss_kpt: 304.410323 acc_pose: 0.889820 loss: 304.410323 2022/10/13 06:46:37 - mmengine - INFO - Epoch(train) [173][100/586] lr: 5.000000e-04 eta: 3:56:31 time: 0.687728 data_time: 0.052413 memory: 12959 loss_kpt: 310.286700 acc_pose: 0.857252 loss: 310.286700 2022/10/13 06:47:12 - mmengine - INFO - Epoch(train) [173][150/586] lr: 5.000000e-04 eta: 3:56:00 time: 0.699016 data_time: 0.058995 memory: 12959 loss_kpt: 309.071548 acc_pose: 0.895850 loss: 309.071548 2022/10/13 06:47:46 - mmengine - INFO - Epoch(train) [173][200/586] lr: 5.000000e-04 eta: 3:55:28 time: 0.688479 data_time: 0.051160 memory: 12959 loss_kpt: 312.351451 acc_pose: 0.853785 loss: 312.351451 2022/10/13 06:47:52 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:48:21 - mmengine - INFO - Epoch(train) [173][250/586] lr: 5.000000e-04 eta: 3:54:57 time: 0.693231 data_time: 0.061533 memory: 12959 loss_kpt: 307.830651 acc_pose: 0.858206 loss: 307.830651 2022/10/13 06:48:55 - mmengine - INFO - Epoch(train) [173][300/586] lr: 5.000000e-04 eta: 3:54:25 time: 0.682734 data_time: 0.052498 memory: 12959 loss_kpt: 306.232393 acc_pose: 0.894119 loss: 306.232393 2022/10/13 06:49:30 - mmengine - INFO - Epoch(train) [173][350/586] lr: 5.000000e-04 eta: 3:53:54 time: 0.693361 data_time: 0.058807 memory: 12959 loss_kpt: 310.689111 acc_pose: 0.903463 loss: 310.689111 2022/10/13 06:50:04 - mmengine - INFO - Epoch(train) [173][400/586] lr: 5.000000e-04 eta: 3:53:22 time: 0.689797 data_time: 0.055515 memory: 12959 loss_kpt: 309.266525 acc_pose: 0.884646 loss: 309.266525 2022/10/13 06:50:39 - mmengine - INFO - Epoch(train) [173][450/586] lr: 5.000000e-04 eta: 3:52:51 time: 0.693396 data_time: 0.055532 memory: 12959 loss_kpt: 315.883929 acc_pose: 0.884013 loss: 315.883929 2022/10/13 06:51:13 - mmengine - INFO - Epoch(train) [173][500/586] lr: 5.000000e-04 eta: 3:52:19 time: 0.688172 data_time: 0.056462 memory: 12959 loss_kpt: 310.489800 acc_pose: 0.843150 loss: 310.489800 2022/10/13 06:51:48 - mmengine - INFO - Epoch(train) [173][550/586] lr: 5.000000e-04 eta: 3:51:48 time: 0.690223 data_time: 0.059414 memory: 12959 loss_kpt: 304.454114 
acc_pose: 0.769496 loss: 304.454114 2022/10/13 06:52:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:52:47 - mmengine - INFO - Epoch(train) [174][50/586] lr: 5.000000e-04 eta: 3:50:48 time: 0.686766 data_time: 0.070696 memory: 12959 loss_kpt: 311.850984 acc_pose: 0.858992 loss: 311.850984 2022/10/13 06:53:20 - mmengine - INFO - Epoch(train) [174][100/586] lr: 5.000000e-04 eta: 3:50:17 time: 0.669515 data_time: 0.059492 memory: 12959 loss_kpt: 305.809899 acc_pose: 0.829136 loss: 305.809899 2022/10/13 06:53:54 - mmengine - INFO - Epoch(train) [174][150/586] lr: 5.000000e-04 eta: 3:49:45 time: 0.677433 data_time: 0.061688 memory: 12959 loss_kpt: 304.471102 acc_pose: 0.855523 loss: 304.471102 2022/10/13 06:54:27 - mmengine - INFO - Epoch(train) [174][200/586] lr: 5.000000e-04 eta: 3:49:13 time: 0.668661 data_time: 0.055391 memory: 12959 loss_kpt: 312.092726 acc_pose: 0.878315 loss: 312.092726 2022/10/13 06:55:01 - mmengine - INFO - Epoch(train) [174][250/586] lr: 5.000000e-04 eta: 3:48:42 time: 0.663364 data_time: 0.057399 memory: 12959 loss_kpt: 308.488437 acc_pose: 0.840118 loss: 308.488437 2022/10/13 06:55:34 - mmengine - INFO - Epoch(train) [174][300/586] lr: 5.000000e-04 eta: 3:48:10 time: 0.661964 data_time: 0.052650 memory: 12959 loss_kpt: 311.564982 acc_pose: 0.878784 loss: 311.564982 2022/10/13 06:56:07 - mmengine - INFO - Epoch(train) [174][350/586] lr: 5.000000e-04 eta: 3:47:38 time: 0.670816 data_time: 0.052098 memory: 12959 loss_kpt: 314.810583 acc_pose: 0.914398 loss: 314.810583 2022/10/13 06:56:40 - mmengine - INFO - Epoch(train) [174][400/586] lr: 5.000000e-04 eta: 3:47:06 time: 0.660368 data_time: 0.057841 memory: 12959 loss_kpt: 313.730020 acc_pose: 0.861507 loss: 313.730020 2022/10/13 06:57:14 - mmengine - INFO - Epoch(train) [174][450/586] lr: 5.000000e-04 eta: 3:46:35 time: 0.672186 data_time: 0.053845 memory: 12959 loss_kpt: 316.784103 acc_pose: 0.843628 loss: 316.784103 2022/10/13 06:57:47 - mmengine - INFO - Epoch(train) [174][500/586] lr: 5.000000e-04 eta: 3:46:03 time: 0.665557 data_time: 0.056158 memory: 12959 loss_kpt: 310.676631 acc_pose: 0.839887 loss: 310.676631 2022/10/13 06:58:21 - mmengine - INFO - Epoch(train) [174][550/586] lr: 5.000000e-04 eta: 3:45:31 time: 0.670692 data_time: 0.054448 memory: 12959 loss_kpt: 309.822131 acc_pose: 0.797905 loss: 309.822131 2022/10/13 06:58:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:59:10 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 06:59:19 - mmengine - INFO - Epoch(train) [175][50/586] lr: 5.000000e-04 eta: 3:44:32 time: 0.687390 data_time: 0.075375 memory: 12959 loss_kpt: 300.010033 acc_pose: 0.899645 loss: 300.010033 2022/10/13 06:59:53 - mmengine - INFO - Epoch(train) [175][100/586] lr: 5.000000e-04 eta: 3:44:00 time: 0.681570 data_time: 0.056069 memory: 12959 loss_kpt: 310.779748 acc_pose: 0.882006 loss: 310.779748 2022/10/13 07:00:28 - mmengine - INFO - Epoch(train) [175][150/586] lr: 5.000000e-04 eta: 3:43:29 time: 0.691919 data_time: 0.065751 memory: 12959 loss_kpt: 309.639389 acc_pose: 0.838796 loss: 309.639389 2022/10/13 07:01:01 - mmengine - INFO - Epoch(train) [175][200/586] lr: 5.000000e-04 eta: 3:42:57 time: 0.668893 data_time: 0.061183 memory: 12959 loss_kpt: 305.107441 acc_pose: 0.856978 loss: 305.107441 2022/10/13 07:01:35 - mmengine - INFO - Epoch(train) [175][250/586] lr: 5.000000e-04 eta: 3:42:26 time: 0.680920 data_time: 0.060393 memory: 12959 
loss_kpt: 310.899922 acc_pose: 0.853984 loss: 310.899922 2022/10/13 07:02:09 - mmengine - INFO - Epoch(train) [175][300/586] lr: 5.000000e-04 eta: 3:41:54 time: 0.665295 data_time: 0.057483 memory: 12959 loss_kpt: 305.734523 acc_pose: 0.843607 loss: 305.734523 2022/10/13 07:02:43 - mmengine - INFO - Epoch(train) [175][350/586] lr: 5.000000e-04 eta: 3:41:22 time: 0.676215 data_time: 0.055532 memory: 12959 loss_kpt: 309.702146 acc_pose: 0.840861 loss: 309.702146 2022/10/13 07:03:16 - mmengine - INFO - Epoch(train) [175][400/586] lr: 5.000000e-04 eta: 3:40:50 time: 0.669009 data_time: 0.056912 memory: 12959 loss_kpt: 313.990912 acc_pose: 0.859430 loss: 313.990912 2022/10/13 07:03:50 - mmengine - INFO - Epoch(train) [175][450/586] lr: 5.000000e-04 eta: 3:40:19 time: 0.683540 data_time: 0.063103 memory: 12959 loss_kpt: 311.644206 acc_pose: 0.836810 loss: 311.644206 2022/10/13 07:04:24 - mmengine - INFO - Epoch(train) [175][500/586] lr: 5.000000e-04 eta: 3:39:47 time: 0.673581 data_time: 0.054309 memory: 12959 loss_kpt: 313.222918 acc_pose: 0.878934 loss: 313.222918 2022/10/13 07:04:58 - mmengine - INFO - Epoch(train) [175][550/586] lr: 5.000000e-04 eta: 3:39:16 time: 0.680380 data_time: 0.058310 memory: 12959 loss_kpt: 306.103903 acc_pose: 0.815559 loss: 306.103903 2022/10/13 07:05:22 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:05:57 - mmengine - INFO - Epoch(train) [176][50/586] lr: 5.000000e-04 eta: 3:38:16 time: 0.686322 data_time: 0.064344 memory: 12959 loss_kpt: 308.310662 acc_pose: 0.878231 loss: 308.310662 2022/10/13 07:06:30 - mmengine - INFO - Epoch(train) [176][100/586] lr: 5.000000e-04 eta: 3:37:45 time: 0.659916 data_time: 0.056052 memory: 12959 loss_kpt: 306.946299 acc_pose: 0.866344 loss: 306.946299 2022/10/13 07:07:03 - mmengine - INFO - Epoch(train) [176][150/586] lr: 5.000000e-04 eta: 3:37:13 time: 0.664381 data_time: 0.056302 memory: 12959 loss_kpt: 311.847340 acc_pose: 0.773418 loss: 311.847340 2022/10/13 07:07:37 - mmengine - INFO - Epoch(train) [176][200/586] lr: 5.000000e-04 eta: 3:36:41 time: 0.676803 data_time: 0.055609 memory: 12959 loss_kpt: 316.994635 acc_pose: 0.884588 loss: 316.994635 2022/10/13 07:08:10 - mmengine - INFO - Epoch(train) [176][250/586] lr: 5.000000e-04 eta: 3:36:09 time: 0.668952 data_time: 0.057823 memory: 12959 loss_kpt: 311.399081 acc_pose: 0.891188 loss: 311.399081 2022/10/13 07:08:44 - mmengine - INFO - Epoch(train) [176][300/586] lr: 5.000000e-04 eta: 3:35:38 time: 0.667004 data_time: 0.052249 memory: 12959 loss_kpt: 309.057559 acc_pose: 0.854786 loss: 309.057559 2022/10/13 07:09:17 - mmengine - INFO - Epoch(train) [176][350/586] lr: 5.000000e-04 eta: 3:35:06 time: 0.676405 data_time: 0.058994 memory: 12959 loss_kpt: 311.483530 acc_pose: 0.823273 loss: 311.483530 2022/10/13 07:09:51 - mmengine - INFO - Epoch(train) [176][400/586] lr: 5.000000e-04 eta: 3:34:34 time: 0.670185 data_time: 0.055093 memory: 12959 loss_kpt: 311.580430 acc_pose: 0.835407 loss: 311.580430 2022/10/13 07:10:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:10:24 - mmengine - INFO - Epoch(train) [176][450/586] lr: 5.000000e-04 eta: 3:34:03 time: 0.668654 data_time: 0.058279 memory: 12959 loss_kpt: 304.140237 acc_pose: 0.859334 loss: 304.140237 2022/10/13 07:10:58 - mmengine - INFO - Epoch(train) [176][500/586] lr: 5.000000e-04 eta: 3:33:31 time: 0.671848 data_time: 0.056417 memory: 12959 loss_kpt: 310.073776 acc_pose: 0.845990 loss: 310.073776 2022/10/13 07:11:32 - 
mmengine - INFO - Epoch(train) [176][550/586] lr: 5.000000e-04 eta: 3:32:59 time: 0.684015 data_time: 0.056455 memory: 12959 loss_kpt: 307.408500 acc_pose: 0.749346 loss: 307.408500 2022/10/13 07:11:57 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:12:32 - mmengine - INFO - Epoch(train) [177][50/586] lr: 5.000000e-04 eta: 3:32:00 time: 0.687949 data_time: 0.066481 memory: 12959 loss_kpt: 312.598288 acc_pose: 0.850516 loss: 312.598288 2022/10/13 07:13:05 - mmengine - INFO - Epoch(train) [177][100/586] lr: 5.000000e-04 eta: 3:31:29 time: 0.677506 data_time: 0.052717 memory: 12959 loss_kpt: 313.371671 acc_pose: 0.861718 loss: 313.371671 2022/10/13 07:13:40 - mmengine - INFO - Epoch(train) [177][150/586] lr: 5.000000e-04 eta: 3:30:57 time: 0.683408 data_time: 0.056021 memory: 12959 loss_kpt: 305.102560 acc_pose: 0.832197 loss: 305.102560 2022/10/13 07:14:14 - mmengine - INFO - Epoch(train) [177][200/586] lr: 5.000000e-04 eta: 3:30:25 time: 0.683548 data_time: 0.056307 memory: 12959 loss_kpt: 310.564105 acc_pose: 0.789398 loss: 310.564105 2022/10/13 07:14:48 - mmengine - INFO - Epoch(train) [177][250/586] lr: 5.000000e-04 eta: 3:29:54 time: 0.690862 data_time: 0.058223 memory: 12959 loss_kpt: 312.386888 acc_pose: 0.805990 loss: 312.386888 2022/10/13 07:15:23 - mmengine - INFO - Epoch(train) [177][300/586] lr: 5.000000e-04 eta: 3:29:22 time: 0.689443 data_time: 0.056763 memory: 12959 loss_kpt: 302.269088 acc_pose: 0.829220 loss: 302.269088 2022/10/13 07:15:57 - mmengine - INFO - Epoch(train) [177][350/586] lr: 5.000000e-04 eta: 3:28:51 time: 0.678894 data_time: 0.056450 memory: 12959 loss_kpt: 307.396506 acc_pose: 0.874752 loss: 307.396506 2022/10/13 07:16:31 - mmengine - INFO - Epoch(train) [177][400/586] lr: 5.000000e-04 eta: 3:28:19 time: 0.672616 data_time: 0.052834 memory: 12959 loss_kpt: 305.075182 acc_pose: 0.833155 loss: 305.075182 2022/10/13 07:17:05 - mmengine - INFO - Epoch(train) [177][450/586] lr: 5.000000e-04 eta: 3:27:47 time: 0.682235 data_time: 0.061348 memory: 12959 loss_kpt: 306.351806 acc_pose: 0.885337 loss: 306.351806 2022/10/13 07:17:38 - mmengine - INFO - Epoch(train) [177][500/586] lr: 5.000000e-04 eta: 3:27:16 time: 0.675099 data_time: 0.057008 memory: 12959 loss_kpt: 304.806883 acc_pose: 0.878236 loss: 304.806883 2022/10/13 07:18:12 - mmengine - INFO - Epoch(train) [177][550/586] lr: 5.000000e-04 eta: 3:26:44 time: 0.675603 data_time: 0.059639 memory: 12959 loss_kpt: 309.717937 acc_pose: 0.880987 loss: 309.717937 2022/10/13 07:18:36 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:19:11 - mmengine - INFO - Epoch(train) [178][50/586] lr: 5.000000e-04 eta: 3:25:45 time: 0.685743 data_time: 0.069391 memory: 12959 loss_kpt: 315.226144 acc_pose: 0.779549 loss: 315.226144 2022/10/13 07:19:44 - mmengine - INFO - Epoch(train) [178][100/586] lr: 5.000000e-04 eta: 3:25:13 time: 0.675492 data_time: 0.056320 memory: 12959 loss_kpt: 305.828787 acc_pose: 0.880298 loss: 305.828787 2022/10/13 07:20:18 - mmengine - INFO - Epoch(train) [178][150/586] lr: 5.000000e-04 eta: 3:24:42 time: 0.664243 data_time: 0.060454 memory: 12959 loss_kpt: 307.535541 acc_pose: 0.864234 loss: 307.535541 2022/10/13 07:20:52 - mmengine - INFO - Epoch(train) [178][200/586] lr: 5.000000e-04 eta: 3:24:10 time: 0.679831 data_time: 0.055126 memory: 12959 loss_kpt: 314.663130 acc_pose: 0.843885 loss: 314.663130 2022/10/13 07:21:26 - mmengine - INFO - Epoch(train) [178][250/586] lr: 5.000000e-04 eta: 3:23:38 time: 
0.679689 data_time: 0.059799 memory: 12959 loss_kpt: 315.257405 acc_pose: 0.850920 loss: 315.257405 2022/10/13 07:21:44 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:21:59 - mmengine - INFO - Epoch(train) [178][300/586] lr: 5.000000e-04 eta: 3:23:07 time: 0.668311 data_time: 0.055779 memory: 12959 loss_kpt: 307.346918 acc_pose: 0.813004 loss: 307.346918 2022/10/13 07:22:32 - mmengine - INFO - Epoch(train) [178][350/586] lr: 5.000000e-04 eta: 3:22:35 time: 0.664809 data_time: 0.058633 memory: 12959 loss_kpt: 307.757377 acc_pose: 0.867203 loss: 307.757377 2022/10/13 07:23:06 - mmengine - INFO - Epoch(train) [178][400/586] lr: 5.000000e-04 eta: 3:22:03 time: 0.667532 data_time: 0.056502 memory: 12959 loss_kpt: 306.119230 acc_pose: 0.862926 loss: 306.119230 2022/10/13 07:23:40 - mmengine - INFO - Epoch(train) [178][450/586] lr: 5.000000e-04 eta: 3:21:31 time: 0.674828 data_time: 0.059419 memory: 12959 loss_kpt: 306.128341 acc_pose: 0.857383 loss: 306.128341 2022/10/13 07:24:13 - mmengine - INFO - Epoch(train) [178][500/586] lr: 5.000000e-04 eta: 3:21:00 time: 0.672924 data_time: 0.052438 memory: 12959 loss_kpt: 307.782343 acc_pose: 0.846537 loss: 307.782343 2022/10/13 07:24:49 - mmengine - INFO - Epoch(train) [178][550/586] lr: 5.000000e-04 eta: 3:20:28 time: 0.711320 data_time: 0.064156 memory: 12959 loss_kpt: 305.753039 acc_pose: 0.864000 loss: 305.753039 2022/10/13 07:25:14 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:25:48 - mmengine - INFO - Epoch(train) [179][50/586] lr: 5.000000e-04 eta: 3:19:29 time: 0.678107 data_time: 0.068016 memory: 12959 loss_kpt: 310.097694 acc_pose: 0.892499 loss: 310.097694 2022/10/13 07:26:20 - mmengine - INFO - Epoch(train) [179][100/586] lr: 5.000000e-04 eta: 3:18:58 time: 0.655680 data_time: 0.056599 memory: 12959 loss_kpt: 310.888880 acc_pose: 0.823856 loss: 310.888880 2022/10/13 07:26:54 - mmengine - INFO - Epoch(train) [179][150/586] lr: 5.000000e-04 eta: 3:18:26 time: 0.663825 data_time: 0.059741 memory: 12959 loss_kpt: 306.094885 acc_pose: 0.926451 loss: 306.094885 2022/10/13 07:27:28 - mmengine - INFO - Epoch(train) [179][200/586] lr: 5.000000e-04 eta: 3:17:54 time: 0.688511 data_time: 0.058878 memory: 12959 loss_kpt: 307.292095 acc_pose: 0.934838 loss: 307.292095 2022/10/13 07:28:03 - mmengine - INFO - Epoch(train) [179][250/586] lr: 5.000000e-04 eta: 3:17:23 time: 0.691125 data_time: 0.055698 memory: 12959 loss_kpt: 309.232500 acc_pose: 0.881728 loss: 309.232500 2022/10/13 07:28:37 - mmengine - INFO - Epoch(train) [179][300/586] lr: 5.000000e-04 eta: 3:16:51 time: 0.682640 data_time: 0.052622 memory: 12959 loss_kpt: 307.616414 acc_pose: 0.849400 loss: 307.616414 2022/10/13 07:29:11 - mmengine - INFO - Epoch(train) [179][350/586] lr: 5.000000e-04 eta: 3:16:19 time: 0.680119 data_time: 0.060432 memory: 12959 loss_kpt: 307.185966 acc_pose: 0.884966 loss: 307.185966 2022/10/13 07:29:45 - mmengine - INFO - Epoch(train) [179][400/586] lr: 5.000000e-04 eta: 3:15:48 time: 0.689011 data_time: 0.052348 memory: 12959 loss_kpt: 307.611074 acc_pose: 0.879663 loss: 307.611074 2022/10/13 07:30:20 - mmengine - INFO - Epoch(train) [179][450/586] lr: 5.000000e-04 eta: 3:15:16 time: 0.686229 data_time: 0.057751 memory: 12959 loss_kpt: 309.082642 acc_pose: 0.875010 loss: 309.082642 2022/10/13 07:30:54 - mmengine - INFO - Epoch(train) [179][500/586] lr: 5.000000e-04 eta: 3:14:44 time: 0.678552 data_time: 0.054980 memory: 12959 loss_kpt: 303.925033 acc_pose: 0.871990 
loss: 303.925033 2022/10/13 07:31:28 - mmengine - INFO - Epoch(train) [179][550/586] lr: 5.000000e-04 eta: 3:14:13 time: 0.684888 data_time: 0.055953 memory: 12959 loss_kpt: 307.895679 acc_pose: 0.881522 loss: 307.895679 2022/10/13 07:31:52 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:32:27 - mmengine - INFO - Epoch(train) [180][50/586] lr: 5.000000e-04 eta: 3:13:14 time: 0.686423 data_time: 0.068188 memory: 12959 loss_kpt: 308.628683 acc_pose: 0.875609 loss: 308.628683 2022/10/13 07:33:00 - mmengine - INFO - Epoch(train) [180][100/586] lr: 5.000000e-04 eta: 3:12:42 time: 0.654645 data_time: 0.058815 memory: 12959 loss_kpt: 312.608101 acc_pose: 0.872670 loss: 312.608101 2022/10/13 07:33:04 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:33:33 - mmengine - INFO - Epoch(train) [180][150/586] lr: 5.000000e-04 eta: 3:12:11 time: 0.663227 data_time: 0.065048 memory: 12959 loss_kpt: 311.016538 acc_pose: 0.864541 loss: 311.016538 2022/10/13 07:34:06 - mmengine - INFO - Epoch(train) [180][200/586] lr: 5.000000e-04 eta: 3:11:39 time: 0.661661 data_time: 0.058782 memory: 12959 loss_kpt: 308.751677 acc_pose: 0.866106 loss: 308.751677 2022/10/13 07:34:39 - mmengine - INFO - Epoch(train) [180][250/586] lr: 5.000000e-04 eta: 3:11:07 time: 0.657197 data_time: 0.056803 memory: 12959 loss_kpt: 306.116886 acc_pose: 0.830792 loss: 306.116886 2022/10/13 07:35:12 - mmengine - INFO - Epoch(train) [180][300/586] lr: 5.000000e-04 eta: 3:10:35 time: 0.663122 data_time: 0.058956 memory: 12959 loss_kpt: 307.565627 acc_pose: 0.914939 loss: 307.565627 2022/10/13 07:35:45 - mmengine - INFO - Epoch(train) [180][350/586] lr: 5.000000e-04 eta: 3:10:03 time: 0.657865 data_time: 0.063837 memory: 12959 loss_kpt: 304.850753 acc_pose: 0.846947 loss: 304.850753 2022/10/13 07:36:18 - mmengine - INFO - Epoch(train) [180][400/586] lr: 5.000000e-04 eta: 3:09:31 time: 0.665935 data_time: 0.058742 memory: 12959 loss_kpt: 306.225941 acc_pose: 0.882901 loss: 306.225941 2022/10/13 07:36:51 - mmengine - INFO - Epoch(train) [180][450/586] lr: 5.000000e-04 eta: 3:09:00 time: 0.658558 data_time: 0.059676 memory: 12959 loss_kpt: 300.682081 acc_pose: 0.886247 loss: 300.682081 2022/10/13 07:37:26 - mmengine - INFO - Epoch(train) [180][500/586] lr: 5.000000e-04 eta: 3:08:28 time: 0.692567 data_time: 0.057187 memory: 12959 loss_kpt: 307.637195 acc_pose: 0.894685 loss: 307.637195 2022/10/13 07:38:00 - mmengine - INFO - Epoch(train) [180][550/586] lr: 5.000000e-04 eta: 3:07:56 time: 0.690218 data_time: 0.063738 memory: 12959 loss_kpt: 311.384005 acc_pose: 0.867625 loss: 311.384005 2022/10/13 07:38:25 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:38:25 - mmengine - INFO - Saving checkpoint at 180 epochs 2022/10/13 07:38:42 - mmengine - INFO - Epoch(val) [180][50/407] eta: 0:01:35 time: 0.268179 data_time: 0.012577 memory: 12959 2022/10/13 07:38:55 - mmengine - INFO - Epoch(val) [180][100/407] eta: 0:01:20 time: 0.261402 data_time: 0.007605 memory: 2407 2022/10/13 07:39:08 - mmengine - INFO - Epoch(val) [180][150/407] eta: 0:01:07 time: 0.260707 data_time: 0.007962 memory: 2407 2022/10/13 07:39:21 - mmengine - INFO - Epoch(val) [180][200/407] eta: 0:00:53 time: 0.259022 data_time: 0.007248 memory: 2407 2022/10/13 07:39:34 - mmengine - INFO - Epoch(val) [180][250/407] eta: 0:00:40 time: 0.260033 data_time: 0.008003 memory: 2407 2022/10/13 07:39:47 - mmengine - INFO - Epoch(val) [180][300/407] eta: 
0:00:27 time: 0.258133 data_time: 0.007209 memory: 2407 2022/10/13 07:40:00 - mmengine - INFO - Epoch(val) [180][350/407] eta: 0:00:15 time: 0.263652 data_time: 0.008072 memory: 2407 2022/10/13 07:40:13 - mmengine - INFO - Epoch(val) [180][400/407] eta: 0:00:01 time: 0.260836 data_time: 0.007386 memory: 2407 2022/10/13 07:40:27 - mmengine - INFO - Evaluating CocoMetric... 2022/10/13 07:40:43 - mmengine - INFO - Epoch(val) [180][407/407] coco/AP: 0.746587 coco/AP .5: 0.897501 coco/AP .75: 0.821484 coco/AP (M): 0.713099 coco/AP (L): 0.810718 coco/AR: 0.810831 coco/AR .5: 0.938445 coco/AR .75: 0.873741 coco/AR (M): 0.766785 coco/AR (L): 0.871646 2022/10/13 07:40:43 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_170.pth is removed 2022/10/13 07:40:46 - mmengine - INFO - The best checkpoint with 0.7466 coco/AP at 180 epoch is saved to best_coco/AP_epoch_180.pth. 2022/10/13 07:41:20 - mmengine - INFO - Epoch(train) [181][50/586] lr: 5.000000e-04 eta: 3:06:58 time: 0.681047 data_time: 0.063350 memory: 12959 loss_kpt: 305.990676 acc_pose: 0.852748 loss: 305.990676 2022/10/13 07:41:54 - mmengine - INFO - Epoch(train) [181][100/586] lr: 5.000000e-04 eta: 3:06:26 time: 0.677090 data_time: 0.058824 memory: 12959 loss_kpt: 310.896512 acc_pose: 0.867848 loss: 310.896512 2022/10/13 07:42:28 - mmengine - INFO - Epoch(train) [181][150/586] lr: 5.000000e-04 eta: 3:05:54 time: 0.673085 data_time: 0.059488 memory: 12959 loss_kpt: 305.566172 acc_pose: 0.853761 loss: 305.566172 2022/10/13 07:43:01 - mmengine - INFO - Epoch(train) [181][200/586] lr: 5.000000e-04 eta: 3:05:23 time: 0.673060 data_time: 0.059231 memory: 12959 loss_kpt: 304.923765 acc_pose: 0.864186 loss: 304.923765 2022/10/13 07:43:35 - mmengine - INFO - Epoch(train) [181][250/586] lr: 5.000000e-04 eta: 3:04:51 time: 0.674878 data_time: 0.058837 memory: 12959 loss_kpt: 307.713121 acc_pose: 0.839948 loss: 307.713121 2022/10/13 07:44:09 - mmengine - INFO - Epoch(train) [181][300/586] lr: 5.000000e-04 eta: 3:04:19 time: 0.674822 data_time: 0.062756 memory: 12959 loss_kpt: 303.705948 acc_pose: 0.865765 loss: 303.705948 2022/10/13 07:44:43 - mmengine - INFO - Epoch(train) [181][350/586] lr: 5.000000e-04 eta: 3:03:48 time: 0.680107 data_time: 0.059756 memory: 12959 loss_kpt: 307.833074 acc_pose: 0.915855 loss: 307.833074 2022/10/13 07:45:16 - mmengine - INFO - Epoch(train) [181][400/586] lr: 5.000000e-04 eta: 3:03:16 time: 0.668103 data_time: 0.058891 memory: 12959 loss_kpt: 303.933044 acc_pose: 0.850959 loss: 303.933044 2022/10/13 07:45:50 - mmengine - INFO - Epoch(train) [181][450/586] lr: 5.000000e-04 eta: 3:02:44 time: 0.676520 data_time: 0.058454 memory: 12959 loss_kpt: 313.771448 acc_pose: 0.853444 loss: 313.771448 2022/10/13 07:46:24 - mmengine - INFO - Epoch(train) [181][500/586] lr: 5.000000e-04 eta: 3:02:12 time: 0.680035 data_time: 0.059959 memory: 12959 loss_kpt: 304.144057 acc_pose: 0.817096 loss: 304.144057 2022/10/13 07:46:37 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:46:58 - mmengine - INFO - Epoch(train) [181][550/586] lr: 5.000000e-04 eta: 3:01:41 time: 0.675103 data_time: 0.058235 memory: 12959 loss_kpt: 301.114238 acc_pose: 0.830675 loss: 301.114238 2022/10/13 07:47:22 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:47:56 - mmengine - INFO - Epoch(train) [182][50/586] lr: 5.000000e-04 eta: 3:00:42 time: 0.682544 data_time: 0.072655 
memory: 12959 loss_kpt: 305.352512 acc_pose: 0.899678 loss: 305.352512 2022/10/13 07:48:30 - mmengine - INFO - Epoch(train) [182][100/586] lr: 5.000000e-04 eta: 3:00:11 time: 0.677002 data_time: 0.057462 memory: 12959 loss_kpt: 307.894037 acc_pose: 0.856974 loss: 307.894037 2022/10/13 07:49:04 - mmengine - INFO - Epoch(train) [182][150/586] lr: 5.000000e-04 eta: 2:59:39 time: 0.673229 data_time: 0.054549 memory: 12959 loss_kpt: 307.387522 acc_pose: 0.846160 loss: 307.387522 2022/10/13 07:49:38 - mmengine - INFO - Epoch(train) [182][200/586] lr: 5.000000e-04 eta: 2:59:07 time: 0.673872 data_time: 0.056900 memory: 12959 loss_kpt: 304.855455 acc_pose: 0.852532 loss: 304.855455 2022/10/13 07:50:12 - mmengine - INFO - Epoch(train) [182][250/586] lr: 5.000000e-04 eta: 2:58:36 time: 0.688100 data_time: 0.058767 memory: 12959 loss_kpt: 305.391979 acc_pose: 0.794685 loss: 305.391979 2022/10/13 07:50:46 - mmengine - INFO - Epoch(train) [182][300/586] lr: 5.000000e-04 eta: 2:58:04 time: 0.676680 data_time: 0.058688 memory: 12959 loss_kpt: 311.277665 acc_pose: 0.801191 loss: 311.277665 2022/10/13 07:51:20 - mmengine - INFO - Epoch(train) [182][350/586] lr: 5.000000e-04 eta: 2:57:32 time: 0.686251 data_time: 0.062587 memory: 12959 loss_kpt: 302.253726 acc_pose: 0.854938 loss: 302.253726 2022/10/13 07:51:54 - mmengine - INFO - Epoch(train) [182][400/586] lr: 5.000000e-04 eta: 2:57:01 time: 0.676923 data_time: 0.053286 memory: 12959 loss_kpt: 303.300248 acc_pose: 0.819839 loss: 303.300248 2022/10/13 07:52:29 - mmengine - INFO - Epoch(train) [182][450/586] lr: 5.000000e-04 eta: 2:56:29 time: 0.692258 data_time: 0.055552 memory: 12959 loss_kpt: 305.965353 acc_pose: 0.869354 loss: 305.965353 2022/10/13 07:53:03 - mmengine - INFO - Epoch(train) [182][500/586] lr: 5.000000e-04 eta: 2:55:57 time: 0.682252 data_time: 0.053834 memory: 12959 loss_kpt: 301.058903 acc_pose: 0.926598 loss: 301.058903 2022/10/13 07:53:38 - mmengine - INFO - Epoch(train) [182][550/586] lr: 5.000000e-04 eta: 2:55:26 time: 0.697277 data_time: 0.055717 memory: 12959 loss_kpt: 307.088681 acc_pose: 0.885384 loss: 307.088681 2022/10/13 07:54:03 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 07:54:37 - mmengine - INFO - Epoch(train) [183][50/586] lr: 5.000000e-04 eta: 2:54:27 time: 0.690402 data_time: 0.073196 memory: 12959 loss_kpt: 303.955978 acc_pose: 0.855318 loss: 303.955978 2022/10/13 07:55:12 - mmengine - INFO - Epoch(train) [183][100/586] lr: 5.000000e-04 eta: 2:53:56 time: 0.693429 data_time: 0.061194 memory: 12959 loss_kpt: 305.744132 acc_pose: 0.898055 loss: 305.744132 2022/10/13 07:55:46 - mmengine - INFO - Epoch(train) [183][150/586] lr: 5.000000e-04 eta: 2:53:24 time: 0.686718 data_time: 0.063724 memory: 12959 loss_kpt: 313.553118 acc_pose: 0.853803 loss: 313.553118 2022/10/13 07:56:20 - mmengine - INFO - Epoch(train) [183][200/586] lr: 5.000000e-04 eta: 2:52:53 time: 0.684486 data_time: 0.053047 memory: 12959 loss_kpt: 307.892850 acc_pose: 0.890129 loss: 307.892850 2022/10/13 07:56:55 - mmengine - INFO - Epoch(train) [183][250/586] lr: 5.000000e-04 eta: 2:52:21 time: 0.687372 data_time: 0.062056 memory: 12959 loss_kpt: 306.507002 acc_pose: 0.848886 loss: 306.507002 2022/10/13 07:57:29 - mmengine - INFO - Epoch(train) [183][300/586] lr: 5.000000e-04 eta: 2:51:49 time: 0.688551 data_time: 0.060660 memory: 12959 loss_kpt: 306.164081 acc_pose: 0.843132 loss: 306.164081 2022/10/13 07:58:02 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 
07:58:04 - mmengine - INFO - Epoch(train) [183][350/586] lr: 5.000000e-04 eta: 2:51:18 time: 0.687673 data_time: 0.061271 memory: 12959 loss_kpt: 311.314175 acc_pose: 0.896256 loss: 311.314175 2022/10/13 07:58:38 - mmengine - INFO - Epoch(train) [183][400/586] lr: 5.000000e-04 eta: 2:50:46 time: 0.680943 data_time: 0.062217 memory: 12959 loss_kpt: 301.482525 acc_pose: 0.857655 loss: 301.482525 2022/10/13 07:59:13 - mmengine - INFO - Epoch(train) [183][450/586] lr: 5.000000e-04 eta: 2:50:14 time: 0.694371 data_time: 0.069386 memory: 12959 loss_kpt: 310.712294 acc_pose: 0.893560 loss: 310.712294 2022/10/13 07:59:46 - mmengine - INFO - Epoch(train) [183][500/586] lr: 5.000000e-04 eta: 2:49:43 time: 0.672590 data_time: 0.058183 memory: 12959 loss_kpt: 301.619352 acc_pose: 0.856018 loss: 301.619352 2022/10/13 08:00:21 - mmengine - INFO - Epoch(train) [183][550/586] lr: 5.000000e-04 eta: 2:49:11 time: 0.690038 data_time: 0.064235 memory: 12959 loss_kpt: 311.242492 acc_pose: 0.866095 loss: 311.242492 2022/10/13 08:00:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:01:20 - mmengine - INFO - Epoch(train) [184][50/586] lr: 5.000000e-04 eta: 2:48:13 time: 0.695666 data_time: 0.081026 memory: 12959 loss_kpt: 309.601083 acc_pose: 0.803912 loss: 309.601083 2022/10/13 08:01:54 - mmengine - INFO - Epoch(train) [184][100/586] lr: 5.000000e-04 eta: 2:47:41 time: 0.687144 data_time: 0.057823 memory: 12959 loss_kpt: 302.984194 acc_pose: 0.898106 loss: 302.984194 2022/10/13 08:02:28 - mmengine - INFO - Epoch(train) [184][150/586] lr: 5.000000e-04 eta: 2:47:10 time: 0.687085 data_time: 0.067023 memory: 12959 loss_kpt: 303.214514 acc_pose: 0.886879 loss: 303.214514 2022/10/13 08:03:03 - mmengine - INFO - Epoch(train) [184][200/586] lr: 5.000000e-04 eta: 2:46:38 time: 0.684276 data_time: 0.058210 memory: 12959 loss_kpt: 319.194084 acc_pose: 0.849001 loss: 319.194084 2022/10/13 08:03:36 - mmengine - INFO - Epoch(train) [184][250/586] lr: 5.000000e-04 eta: 2:46:06 time: 0.677337 data_time: 0.065900 memory: 12959 loss_kpt: 306.348188 acc_pose: 0.883793 loss: 306.348188 2022/10/13 08:04:11 - mmengine - INFO - Epoch(train) [184][300/586] lr: 5.000000e-04 eta: 2:45:35 time: 0.682681 data_time: 0.058386 memory: 12959 loss_kpt: 300.998062 acc_pose: 0.843902 loss: 300.998062 2022/10/13 08:04:45 - mmengine - INFO - Epoch(train) [184][350/586] lr: 5.000000e-04 eta: 2:45:03 time: 0.691715 data_time: 0.065845 memory: 12959 loss_kpt: 303.274342 acc_pose: 0.849906 loss: 303.274342 2022/10/13 08:05:19 - mmengine - INFO - Epoch(train) [184][400/586] lr: 5.000000e-04 eta: 2:44:31 time: 0.679739 data_time: 0.057976 memory: 12959 loss_kpt: 309.499855 acc_pose: 0.874864 loss: 309.499855 2022/10/13 08:05:53 - mmengine - INFO - Epoch(train) [184][450/586] lr: 5.000000e-04 eta: 2:43:59 time: 0.681579 data_time: 0.065530 memory: 12959 loss_kpt: 311.548371 acc_pose: 0.832093 loss: 311.548371 2022/10/13 08:06:28 - mmengine - INFO - Epoch(train) [184][500/586] lr: 5.000000e-04 eta: 2:43:28 time: 0.698648 data_time: 0.059116 memory: 12959 loss_kpt: 306.704687 acc_pose: 0.867435 loss: 306.704687 2022/10/13 08:07:04 - mmengine - INFO - Epoch(train) [184][550/586] lr: 5.000000e-04 eta: 2:42:56 time: 0.707692 data_time: 0.065264 memory: 12959 loss_kpt: 304.494795 acc_pose: 0.822744 loss: 304.494795 2022/10/13 08:07:29 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:08:03 - mmengine - INFO - Epoch(train) [185][50/586] lr: 5.000000e-04 eta: 
2:41:58 time: 0.690895 data_time: 0.069037 memory: 12959 loss_kpt: 312.918801 acc_pose: 0.883281 loss: 312.918801 2022/10/13 08:08:37 - mmengine - INFO - Epoch(train) [185][100/586] lr: 5.000000e-04 eta: 2:41:27 time: 0.668113 data_time: 0.064087 memory: 12959 loss_kpt: 307.818181 acc_pose: 0.861772 loss: 307.818181 2022/10/13 08:09:11 - mmengine - INFO - Epoch(train) [185][150/586] lr: 5.000000e-04 eta: 2:40:55 time: 0.677594 data_time: 0.064052 memory: 12959 loss_kpt: 303.399161 acc_pose: 0.880995 loss: 303.399161 2022/10/13 08:09:28 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:09:44 - mmengine - INFO - Epoch(train) [185][200/586] lr: 5.000000e-04 eta: 2:40:23 time: 0.661860 data_time: 0.061185 memory: 12959 loss_kpt: 305.380845 acc_pose: 0.831413 loss: 305.380845 2022/10/13 08:10:17 - mmengine - INFO - Epoch(train) [185][250/586] lr: 5.000000e-04 eta: 2:39:51 time: 0.666309 data_time: 0.062306 memory: 12959 loss_kpt: 310.351597 acc_pose: 0.830946 loss: 310.351597 2022/10/13 08:10:50 - mmengine - INFO - Epoch(train) [185][300/586] lr: 5.000000e-04 eta: 2:39:19 time: 0.663629 data_time: 0.067326 memory: 12959 loss_kpt: 301.573860 acc_pose: 0.893603 loss: 301.573860 2022/10/13 08:11:23 - mmengine - INFO - Epoch(train) [185][350/586] lr: 5.000000e-04 eta: 2:38:47 time: 0.660197 data_time: 0.063260 memory: 12959 loss_kpt: 305.962952 acc_pose: 0.827361 loss: 305.962952 2022/10/13 08:11:57 - mmengine - INFO - Epoch(train) [185][400/586] lr: 5.000000e-04 eta: 2:38:16 time: 0.666298 data_time: 0.063896 memory: 12959 loss_kpt: 300.868457 acc_pose: 0.868095 loss: 300.868457 2022/10/13 08:12:30 - mmengine - INFO - Epoch(train) [185][450/586] lr: 5.000000e-04 eta: 2:37:44 time: 0.668385 data_time: 0.062895 memory: 12959 loss_kpt: 304.815084 acc_pose: 0.909115 loss: 304.815084 2022/10/13 08:13:03 - mmengine - INFO - Epoch(train) [185][500/586] lr: 5.000000e-04 eta: 2:37:12 time: 0.663459 data_time: 0.066470 memory: 12959 loss_kpt: 311.755905 acc_pose: 0.800863 loss: 311.755905 2022/10/13 08:13:36 - mmengine - INFO - Epoch(train) [185][550/586] lr: 5.000000e-04 eta: 2:36:40 time: 0.662177 data_time: 0.063694 memory: 12959 loss_kpt: 301.483781 acc_pose: 0.836116 loss: 301.483781 2022/10/13 08:14:00 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:14:34 - mmengine - INFO - Epoch(train) [186][50/586] lr: 5.000000e-04 eta: 2:35:42 time: 0.680452 data_time: 0.076241 memory: 12959 loss_kpt: 306.680565 acc_pose: 0.837447 loss: 306.680565 2022/10/13 08:15:08 - mmengine - INFO - Epoch(train) [186][100/586] lr: 5.000000e-04 eta: 2:35:10 time: 0.663108 data_time: 0.058427 memory: 12959 loss_kpt: 312.445608 acc_pose: 0.793998 loss: 312.445608 2022/10/13 08:15:41 - mmengine - INFO - Epoch(train) [186][150/586] lr: 5.000000e-04 eta: 2:34:39 time: 0.674182 data_time: 0.067539 memory: 12959 loss_kpt: 301.446859 acc_pose: 0.776065 loss: 301.446859 2022/10/13 08:16:14 - mmengine - INFO - Epoch(train) [186][200/586] lr: 5.000000e-04 eta: 2:34:07 time: 0.664486 data_time: 0.060516 memory: 12959 loss_kpt: 302.501827 acc_pose: 0.874796 loss: 302.501827 2022/10/13 08:16:48 - mmengine - INFO - Epoch(train) [186][250/586] lr: 5.000000e-04 eta: 2:33:35 time: 0.673304 data_time: 0.062790 memory: 12959 loss_kpt: 304.145863 acc_pose: 0.872873 loss: 304.145863 2022/10/13 08:17:23 - mmengine - INFO - Epoch(train) [186][300/586] lr: 5.000000e-04 eta: 2:33:03 time: 0.688674 data_time: 0.065712 memory: 12959 loss_kpt: 305.661344 
acc_pose: 0.852130 loss: 305.661344 2022/10/13 08:17:56 - mmengine - INFO - Epoch(train) [186][350/586] lr: 5.000000e-04 eta: 2:32:32 time: 0.665802 data_time: 0.064536 memory: 12959 loss_kpt: 306.837513 acc_pose: 0.863858 loss: 306.837513 2022/10/13 08:18:29 - mmengine - INFO - Epoch(train) [186][400/586] lr: 5.000000e-04 eta: 2:32:00 time: 0.653712 data_time: 0.062762 memory: 12959 loss_kpt: 303.652911 acc_pose: 0.875570 loss: 303.652911 2022/10/13 08:19:02 - mmengine - INFO - Epoch(train) [186][450/586] lr: 5.000000e-04 eta: 2:31:28 time: 0.660910 data_time: 0.071061 memory: 12959 loss_kpt: 308.023995 acc_pose: 0.883777 loss: 308.023995 2022/10/13 08:19:35 - mmengine - INFO - Epoch(train) [186][500/586] lr: 5.000000e-04 eta: 2:30:56 time: 0.656523 data_time: 0.063575 memory: 12959 loss_kpt: 308.522717 acc_pose: 0.871309 loss: 308.522717 2022/10/13 08:20:08 - mmengine - INFO - Epoch(train) [186][550/586] lr: 5.000000e-04 eta: 2:30:24 time: 0.664311 data_time: 0.065221 memory: 12959 loss_kpt: 306.359285 acc_pose: 0.850149 loss: 306.359285 2022/10/13 08:20:31 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:20:34 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:21:05 - mmengine - INFO - Epoch(train) [187][50/586] lr: 5.000000e-04 eta: 2:29:26 time: 0.677017 data_time: 0.071793 memory: 12959 loss_kpt: 304.753261 acc_pose: 0.882939 loss: 304.753261 2022/10/13 08:21:38 - mmengine - INFO - Epoch(train) [187][100/586] lr: 5.000000e-04 eta: 2:28:54 time: 0.669071 data_time: 0.062489 memory: 12959 loss_kpt: 306.788403 acc_pose: 0.897228 loss: 306.788403 2022/10/13 08:22:12 - mmengine - INFO - Epoch(train) [187][150/586] lr: 5.000000e-04 eta: 2:28:23 time: 0.669768 data_time: 0.059293 memory: 12959 loss_kpt: 300.123885 acc_pose: 0.840889 loss: 300.123885 2022/10/13 08:22:45 - mmengine - INFO - Epoch(train) [187][200/586] lr: 5.000000e-04 eta: 2:27:51 time: 0.654388 data_time: 0.056374 memory: 12959 loss_kpt: 305.721483 acc_pose: 0.831477 loss: 305.721483 2022/10/13 08:23:18 - mmengine - INFO - Epoch(train) [187][250/586] lr: 5.000000e-04 eta: 2:27:19 time: 0.668042 data_time: 0.060653 memory: 12959 loss_kpt: 308.905572 acc_pose: 0.775998 loss: 308.905572 2022/10/13 08:23:52 - mmengine - INFO - Epoch(train) [187][300/586] lr: 5.000000e-04 eta: 2:26:47 time: 0.669309 data_time: 0.062015 memory: 12959 loss_kpt: 305.196253 acc_pose: 0.923708 loss: 305.196253 2022/10/13 08:24:25 - mmengine - INFO - Epoch(train) [187][350/586] lr: 5.000000e-04 eta: 2:26:15 time: 0.676754 data_time: 0.061588 memory: 12959 loss_kpt: 308.043466 acc_pose: 0.903683 loss: 308.043466 2022/10/13 08:24:58 - mmengine - INFO - Epoch(train) [187][400/586] lr: 5.000000e-04 eta: 2:25:43 time: 0.657338 data_time: 0.055593 memory: 12959 loss_kpt: 305.500068 acc_pose: 0.826539 loss: 305.500068 2022/10/13 08:25:31 - mmengine - INFO - Epoch(train) [187][450/586] lr: 5.000000e-04 eta: 2:25:12 time: 0.661641 data_time: 0.061652 memory: 12959 loss_kpt: 307.394616 acc_pose: 0.852272 loss: 307.394616 2022/10/13 08:26:05 - mmengine - INFO - Epoch(train) [187][500/586] lr: 5.000000e-04 eta: 2:24:40 time: 0.663546 data_time: 0.057012 memory: 12959 loss_kpt: 304.664823 acc_pose: 0.826639 loss: 304.664823 2022/10/13 08:26:37 - mmengine - INFO - Epoch(train) [187][550/586] lr: 5.000000e-04 eta: 2:24:08 time: 0.657881 data_time: 0.063295 memory: 12959 loss_kpt: 311.309932 acc_pose: 0.793137 loss: 311.309932 2022/10/13 08:27:01 - mmengine - INFO - Exp 
name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:27:35 - mmengine - INFO - Epoch(train) [188][50/586] lr: 5.000000e-04 eta: 2:23:10 time: 0.675398 data_time: 0.073155 memory: 12959 loss_kpt: 306.739794 acc_pose: 0.813206 loss: 306.739794 2022/10/13 08:28:07 - mmengine - INFO - Epoch(train) [188][100/586] lr: 5.000000e-04 eta: 2:22:38 time: 0.649835 data_time: 0.059505 memory: 12959 loss_kpt: 303.928423 acc_pose: 0.880038 loss: 303.928423 2022/10/13 08:28:41 - mmengine - INFO - Epoch(train) [188][150/586] lr: 5.000000e-04 eta: 2:22:06 time: 0.664201 data_time: 0.063670 memory: 12959 loss_kpt: 301.689369 acc_pose: 0.843222 loss: 301.689369 2022/10/13 08:29:13 - mmengine - INFO - Epoch(train) [188][200/586] lr: 5.000000e-04 eta: 2:21:35 time: 0.646567 data_time: 0.058770 memory: 12959 loss_kpt: 306.114798 acc_pose: 0.903217 loss: 306.114798 2022/10/13 08:29:46 - mmengine - INFO - Epoch(train) [188][250/586] lr: 5.000000e-04 eta: 2:21:03 time: 0.661549 data_time: 0.064423 memory: 12959 loss_kpt: 303.589286 acc_pose: 0.861386 loss: 303.589286 2022/10/13 08:30:18 - mmengine - INFO - Epoch(train) [188][300/586] lr: 5.000000e-04 eta: 2:20:31 time: 0.648407 data_time: 0.060290 memory: 12959 loss_kpt: 302.764137 acc_pose: 0.885811 loss: 302.764137 2022/10/13 08:30:52 - mmengine - INFO - Epoch(train) [188][350/586] lr: 5.000000e-04 eta: 2:19:59 time: 0.677841 data_time: 0.060324 memory: 12959 loss_kpt: 305.992555 acc_pose: 0.805185 loss: 305.992555 2022/10/13 08:31:26 - mmengine - INFO - Epoch(train) [188][400/586] lr: 5.000000e-04 eta: 2:19:27 time: 0.677924 data_time: 0.066445 memory: 12959 loss_kpt: 308.006791 acc_pose: 0.838317 loss: 308.006791 2022/10/13 08:31:38 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:32:00 - mmengine - INFO - Epoch(train) [188][450/586] lr: 5.000000e-04 eta: 2:18:55 time: 0.673533 data_time: 0.059902 memory: 12959 loss_kpt: 302.256008 acc_pose: 0.827693 loss: 302.256008 2022/10/13 08:32:34 - mmengine - INFO - Epoch(train) [188][500/586] lr: 5.000000e-04 eta: 2:18:24 time: 0.671685 data_time: 0.060337 memory: 12959 loss_kpt: 297.673626 acc_pose: 0.885516 loss: 297.673626 2022/10/13 08:33:08 - mmengine - INFO - Epoch(train) [188][550/586] lr: 5.000000e-04 eta: 2:17:52 time: 0.682369 data_time: 0.063377 memory: 12959 loss_kpt: 301.734683 acc_pose: 0.871738 loss: 301.734683 2022/10/13 08:33:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:34:06 - mmengine - INFO - Epoch(train) [189][50/586] lr: 5.000000e-04 eta: 2:16:54 time: 0.683073 data_time: 0.070406 memory: 12959 loss_kpt: 308.447927 acc_pose: 0.878731 loss: 308.447927 2022/10/13 08:34:40 - mmengine - INFO - Epoch(train) [189][100/586] lr: 5.000000e-04 eta: 2:16:23 time: 0.674311 data_time: 0.062626 memory: 12959 loss_kpt: 302.091811 acc_pose: 0.873894 loss: 302.091811 2022/10/13 08:35:14 - mmengine - INFO - Epoch(train) [189][150/586] lr: 5.000000e-04 eta: 2:15:51 time: 0.679316 data_time: 0.060152 memory: 12959 loss_kpt: 303.343335 acc_pose: 0.884063 loss: 303.343335 2022/10/13 08:35:47 - mmengine - INFO - Epoch(train) [189][200/586] lr: 5.000000e-04 eta: 2:15:19 time: 0.666989 data_time: 0.058446 memory: 12959 loss_kpt: 302.364223 acc_pose: 0.839138 loss: 302.364223 2022/10/13 08:36:22 - mmengine - INFO - Epoch(train) [189][250/586] lr: 5.000000e-04 eta: 2:14:47 time: 0.685745 data_time: 0.061702 memory: 12959 loss_kpt: 307.387939 acc_pose: 0.866841 loss: 307.387939 2022/10/13 08:36:57 - 
mmengine - INFO - Epoch(train) [189][300/586] lr: 5.000000e-04 eta: 2:14:16 time: 0.699705 data_time: 0.058946 memory: 12959 loss_kpt: 304.657134 acc_pose: 0.888307 loss: 304.657134 2022/10/13 08:37:31 - mmengine - INFO - Epoch(train) [189][350/586] lr: 5.000000e-04 eta: 2:13:44 time: 0.689216 data_time: 0.062161 memory: 12959 loss_kpt: 306.634130 acc_pose: 0.889314 loss: 306.634130 2022/10/13 08:38:06 - mmengine - INFO - Epoch(train) [189][400/586] lr: 5.000000e-04 eta: 2:13:12 time: 0.691733 data_time: 0.057100 memory: 12959 loss_kpt: 306.742842 acc_pose: 0.853443 loss: 306.742842 2022/10/13 08:38:40 - mmengine - INFO - Epoch(train) [189][450/586] lr: 5.000000e-04 eta: 2:12:41 time: 0.688723 data_time: 0.063539 memory: 12959 loss_kpt: 307.091827 acc_pose: 0.881122 loss: 307.091827 2022/10/13 08:39:14 - mmengine - INFO - Epoch(train) [189][500/586] lr: 5.000000e-04 eta: 2:12:09 time: 0.681859 data_time: 0.060876 memory: 12959 loss_kpt: 306.841518 acc_pose: 0.858396 loss: 306.841518 2022/10/13 08:39:48 - mmengine - INFO - Epoch(train) [189][550/586] lr: 5.000000e-04 eta: 2:11:37 time: 0.679289 data_time: 0.062728 memory: 12959 loss_kpt: 302.958786 acc_pose: 0.910303 loss: 302.958786 2022/10/13 08:40:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:40:46 - mmengine - INFO - Epoch(train) [190][50/586] lr: 5.000000e-04 eta: 2:10:40 time: 0.689963 data_time: 0.071693 memory: 12959 loss_kpt: 305.700438 acc_pose: 0.865519 loss: 305.700438 2022/10/13 08:41:20 - mmengine - INFO - Epoch(train) [190][100/586] lr: 5.000000e-04 eta: 2:10:08 time: 0.664838 data_time: 0.058934 memory: 12959 loss_kpt: 307.057598 acc_pose: 0.845909 loss: 307.057598 2022/10/13 08:41:53 - mmengine - INFO - Epoch(train) [190][150/586] lr: 5.000000e-04 eta: 2:09:36 time: 0.675823 data_time: 0.061059 memory: 12959 loss_kpt: 304.777295 acc_pose: 0.902794 loss: 304.777295 2022/10/13 08:42:27 - mmengine - INFO - Epoch(train) [190][200/586] lr: 5.000000e-04 eta: 2:09:04 time: 0.671052 data_time: 0.064826 memory: 12959 loss_kpt: 304.222696 acc_pose: 0.863207 loss: 304.222696 2022/10/13 08:42:58 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:43:00 - mmengine - INFO - Epoch(train) [190][250/586] lr: 5.000000e-04 eta: 2:08:32 time: 0.669573 data_time: 0.061418 memory: 12959 loss_kpt: 304.711509 acc_pose: 0.825542 loss: 304.711509 2022/10/13 08:43:34 - mmengine - INFO - Epoch(train) [190][300/586] lr: 5.000000e-04 eta: 2:08:01 time: 0.664737 data_time: 0.061967 memory: 12959 loss_kpt: 301.836219 acc_pose: 0.765624 loss: 301.836219 2022/10/13 08:44:07 - mmengine - INFO - Epoch(train) [190][350/586] lr: 5.000000e-04 eta: 2:07:29 time: 0.665644 data_time: 0.064018 memory: 12959 loss_kpt: 301.739969 acc_pose: 0.868575 loss: 301.739969 2022/10/13 08:44:41 - mmengine - INFO - Epoch(train) [190][400/586] lr: 5.000000e-04 eta: 2:06:57 time: 0.672301 data_time: 0.058406 memory: 12959 loss_kpt: 306.675255 acc_pose: 0.844748 loss: 306.675255 2022/10/13 08:45:14 - mmengine - INFO - Epoch(train) [190][450/586] lr: 5.000000e-04 eta: 2:06:25 time: 0.671947 data_time: 0.063941 memory: 12959 loss_kpt: 305.621638 acc_pose: 0.891695 loss: 305.621638 2022/10/13 08:45:48 - mmengine - INFO - Epoch(train) [190][500/586] lr: 5.000000e-04 eta: 2:05:53 time: 0.678832 data_time: 0.059753 memory: 12959 loss_kpt: 309.717208 acc_pose: 0.882510 loss: 309.717208 2022/10/13 08:46:22 - mmengine - INFO - Epoch(train) [190][550/586] lr: 5.000000e-04 eta: 2:05:21 time: 
0.671707 data_time: 0.060850 memory: 12959 loss_kpt: 305.732500 acc_pose: 0.903554 loss: 305.732500 2022/10/13 08:46:46 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:46:46 - mmengine - INFO - Saving checkpoint at 190 epochs 2022/10/13 08:47:04 - mmengine - INFO - Epoch(val) [190][50/407] eta: 0:01:36 time: 0.269556 data_time: 0.013257 memory: 12959 2022/10/13 08:47:17 - mmengine - INFO - Epoch(val) [190][100/407] eta: 0:01:20 time: 0.261892 data_time: 0.008502 memory: 2407 2022/10/13 08:47:30 - mmengine - INFO - Epoch(val) [190][150/407] eta: 0:01:07 time: 0.261078 data_time: 0.008766 memory: 2407 2022/10/13 08:47:43 - mmengine - INFO - Epoch(val) [190][200/407] eta: 0:00:53 time: 0.259950 data_time: 0.008338 memory: 2407 2022/10/13 08:47:56 - mmengine - INFO - Epoch(val) [190][250/407] eta: 0:00:40 time: 0.260739 data_time: 0.008233 memory: 2407 2022/10/13 08:48:09 - mmengine - INFO - Epoch(val) [190][300/407] eta: 0:00:28 time: 0.261740 data_time: 0.007604 memory: 2407 2022/10/13 08:48:22 - mmengine - INFO - Epoch(val) [190][350/407] eta: 0:00:14 time: 0.262638 data_time: 0.008094 memory: 2407 2022/10/13 08:48:35 - mmengine - INFO - Epoch(val) [190][400/407] eta: 0:00:01 time: 0.263251 data_time: 0.007518 memory: 2407 2022/10/13 08:48:50 - mmengine - INFO - Evaluating CocoMetric... 2022/10/13 08:49:06 - mmengine - INFO - Epoch(val) [190][407/407] coco/AP: 0.749739 coco/AP .5: 0.900523 coco/AP .75: 0.823554 coco/AP (M): 0.716341 coco/AP (L): 0.813704 coco/AR: 0.812138 coco/AR .5: 0.941278 coco/AR .75: 0.873741 coco/AR (M): 0.768888 coco/AR (L): 0.872241 2022/10/13 08:49:06 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_180.pth is removed 2022/10/13 08:49:09 - mmengine - INFO - The best checkpoint with 0.7497 coco/AP at 190 epoch is saved to best_coco/AP_epoch_190.pth. 
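
The two CheckpointHook messages above show the best-model bookkeeping in this run: the epoch-180 best (coco/AP 0.7466) is removed once epoch 190 evaluates higher (0.7497). Below is a minimal sketch, not part of the original run, of how one might recover the same per-epoch coco/AP curve from a saved copy of this log; the file name is hypothetical.

    import re

    # Hypothetical local copy of this training log (one entry per line).
    LOG_PATH = "td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904.log"

    # Matches the final validation summary of each interval, e.g.
    # "Epoch(val) [190][407/407]  coco/AP: 0.749739 ..."
    pattern = re.compile(r"Epoch\(val\) \[(\d+)\]\[\d+/\d+\]\s+coco/AP: ([0-9.]+)")

    ap_per_epoch = {}
    with open(LOG_PATH, encoding="utf-8") as f:
        for line in f:
            match = pattern.search(line)
            if match:
                ap_per_epoch[int(match.group(1))] = float(match.group(2))

    if ap_per_epoch:
        best_epoch = max(ap_per_epoch, key=ap_per_epoch.get)
        # For the excerpt above this prints: best coco/AP 0.7497 at epoch 190
        print(f"best coco/AP {ap_per_epoch[best_epoch]:.4f} at epoch {best_epoch}")
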
2022/10/13 08:49:43 - mmengine - INFO - Epoch(train) [191][50/586] lr: 5.000000e-04 eta: 2:04:24 time: 0.684606 data_time: 0.075381 memory: 12959 loss_kpt: 306.865014 acc_pose: 0.819537 loss: 306.865014 2022/10/13 08:50:16 - mmengine - INFO - Epoch(train) [191][100/586] lr: 5.000000e-04 eta: 2:03:52 time: 0.668195 data_time: 0.064401 memory: 12959 loss_kpt: 304.367986 acc_pose: 0.849228 loss: 304.367986 2022/10/13 08:50:50 - mmengine - INFO - Epoch(train) [191][150/586] lr: 5.000000e-04 eta: 2:03:21 time: 0.670650 data_time: 0.068481 memory: 12959 loss_kpt: 310.123842 acc_pose: 0.863362 loss: 310.123842 2022/10/13 08:51:23 - mmengine - INFO - Epoch(train) [191][200/586] lr: 5.000000e-04 eta: 2:02:49 time: 0.662265 data_time: 0.062724 memory: 12959 loss_kpt: 302.662842 acc_pose: 0.868345 loss: 302.662842 2022/10/13 08:51:56 - mmengine - INFO - Epoch(train) [191][250/586] lr: 5.000000e-04 eta: 2:02:17 time: 0.664905 data_time: 0.064720 memory: 12959 loss_kpt: 308.150042 acc_pose: 0.766455 loss: 308.150042 2022/10/13 08:52:30 - mmengine - INFO - Epoch(train) [191][300/586] lr: 5.000000e-04 eta: 2:01:45 time: 0.664963 data_time: 0.062830 memory: 12959 loss_kpt: 305.172354 acc_pose: 0.830907 loss: 305.172354 2022/10/13 08:53:03 - mmengine - INFO - Epoch(train) [191][350/586] lr: 5.000000e-04 eta: 2:01:13 time: 0.663103 data_time: 0.065963 memory: 12959 loss_kpt: 308.584039 acc_pose: 0.836298 loss: 308.584039 2022/10/13 08:53:36 - mmengine - INFO - Epoch(train) [191][400/586] lr: 5.000000e-04 eta: 2:00:41 time: 0.673784 data_time: 0.060334 memory: 12959 loss_kpt: 305.255074 acc_pose: 0.848171 loss: 305.255074 2022/10/13 08:54:10 - mmengine - INFO - Epoch(train) [191][450/586] lr: 5.000000e-04 eta: 2:00:10 time: 0.671828 data_time: 0.065232 memory: 12959 loss_kpt: 307.499768 acc_pose: 0.894006 loss: 307.499768 2022/10/13 08:54:43 - mmengine - INFO - Epoch(train) [191][500/586] lr: 5.000000e-04 eta: 1:59:38 time: 0.667039 data_time: 0.061856 memory: 12959 loss_kpt: 307.133591 acc_pose: 0.861791 loss: 307.133591 2022/10/13 08:55:17 - mmengine - INFO - Epoch(train) [191][550/586] lr: 5.000000e-04 eta: 1:59:06 time: 0.676054 data_time: 0.066768 memory: 12959 loss_kpt: 308.621526 acc_pose: 0.859546 loss: 308.621526 2022/10/13 08:55:42 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:56:16 - mmengine - INFO - Epoch(train) [192][50/586] lr: 5.000000e-04 eta: 1:58:09 time: 0.674628 data_time: 0.073264 memory: 12959 loss_kpt: 308.346884 acc_pose: 0.910401 loss: 308.346884 2022/10/13 08:56:32 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 08:56:49 - mmengine - INFO - Epoch(train) [192][100/586] lr: 5.000000e-04 eta: 1:57:37 time: 0.665771 data_time: 0.061381 memory: 12959 loss_kpt: 312.177750 acc_pose: 0.907114 loss: 312.177750 2022/10/13 08:57:22 - mmengine - INFO - Epoch(train) [192][150/586] lr: 5.000000e-04 eta: 1:57:05 time: 0.666912 data_time: 0.064604 memory: 12959 loss_kpt: 307.114114 acc_pose: 0.855254 loss: 307.114114 2022/10/13 08:57:55 - mmengine - INFO - Epoch(train) [192][200/586] lr: 5.000000e-04 eta: 1:56:33 time: 0.662333 data_time: 0.058901 memory: 12959 loss_kpt: 310.380555 acc_pose: 0.888140 loss: 310.380555 2022/10/13 08:58:30 - mmengine - INFO - Epoch(train) [192][250/586] lr: 5.000000e-04 eta: 1:56:01 time: 0.682284 data_time: 0.065653 memory: 12959 loss_kpt: 307.471455 acc_pose: 0.877379 loss: 307.471455 2022/10/13 08:59:03 - mmengine - INFO - Epoch(train) [192][300/586] lr: 5.000000e-04 
eta: 1:55:30 time: 0.677855 data_time: 0.060689 memory: 12959 loss_kpt: 302.950573 acc_pose: 0.866326 loss: 302.950573 2022/10/13 08:59:38 - mmengine - INFO - Epoch(train) [192][350/586] lr: 5.000000e-04 eta: 1:54:58 time: 0.681679 data_time: 0.067650 memory: 12959 loss_kpt: 309.570135 acc_pose: 0.877402 loss: 309.570135 2022/10/13 09:00:12 - mmengine - INFO - Epoch(train) [192][400/586] lr: 5.000000e-04 eta: 1:54:26 time: 0.685105 data_time: 0.058113 memory: 12959 loss_kpt: 306.304783 acc_pose: 0.880410 loss: 306.304783 2022/10/13 09:00:46 - mmengine - INFO - Epoch(train) [192][450/586] lr: 5.000000e-04 eta: 1:53:54 time: 0.679540 data_time: 0.062907 memory: 12959 loss_kpt: 306.640947 acc_pose: 0.935799 loss: 306.640947 2022/10/13 09:01:20 - mmengine - INFO - Epoch(train) [192][500/586] lr: 5.000000e-04 eta: 1:53:22 time: 0.672399 data_time: 0.063096 memory: 12959 loss_kpt: 305.988819 acc_pose: 0.874667 loss: 305.988819 2022/10/13 09:01:54 - mmengine - INFO - Epoch(train) [192][550/586] lr: 5.000000e-04 eta: 1:52:51 time: 0.691278 data_time: 0.067979 memory: 12959 loss_kpt: 305.469135 acc_pose: 0.818850 loss: 305.469135 2022/10/13 09:02:18 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:02:53 - mmengine - INFO - Epoch(train) [193][50/586] lr: 5.000000e-04 eta: 1:51:54 time: 0.677973 data_time: 0.078757 memory: 12959 loss_kpt: 309.318131 acc_pose: 0.861405 loss: 309.318131 2022/10/13 09:03:26 - mmengine - INFO - Epoch(train) [193][100/586] lr: 5.000000e-04 eta: 1:51:22 time: 0.667535 data_time: 0.059224 memory: 12959 loss_kpt: 308.170598 acc_pose: 0.864532 loss: 308.170598 2022/10/13 09:04:00 - mmengine - INFO - Epoch(train) [193][150/586] lr: 5.000000e-04 eta: 1:50:50 time: 0.676408 data_time: 0.066149 memory: 12959 loss_kpt: 309.914960 acc_pose: 0.815395 loss: 309.914960 2022/10/13 09:04:33 - mmengine - INFO - Epoch(train) [193][200/586] lr: 5.000000e-04 eta: 1:50:18 time: 0.659611 data_time: 0.059557 memory: 12959 loss_kpt: 306.429986 acc_pose: 0.887329 loss: 306.429986 2022/10/13 09:05:07 - mmengine - INFO - Epoch(train) [193][250/586] lr: 5.000000e-04 eta: 1:49:46 time: 0.678143 data_time: 0.065883 memory: 12959 loss_kpt: 304.025384 acc_pose: 0.898913 loss: 304.025384 2022/10/13 09:05:40 - mmengine - INFO - Epoch(train) [193][300/586] lr: 5.000000e-04 eta: 1:49:15 time: 0.669434 data_time: 0.059350 memory: 12959 loss_kpt: 304.586510 acc_pose: 0.884579 loss: 304.586510 2022/10/13 09:06:14 - mmengine - INFO - Epoch(train) [193][350/586] lr: 5.000000e-04 eta: 1:48:43 time: 0.671539 data_time: 0.066317 memory: 12959 loss_kpt: 304.972918 acc_pose: 0.822376 loss: 304.972918 2022/10/13 09:06:47 - mmengine - INFO - Epoch(train) [193][400/586] lr: 5.000000e-04 eta: 1:48:11 time: 0.668545 data_time: 0.059624 memory: 12959 loss_kpt: 304.104507 acc_pose: 0.895370 loss: 304.104507 2022/10/13 09:07:20 - mmengine - INFO - Epoch(train) [193][450/586] lr: 5.000000e-04 eta: 1:47:39 time: 0.662223 data_time: 0.061994 memory: 12959 loss_kpt: 303.175675 acc_pose: 0.877369 loss: 303.175675 2022/10/13 09:07:46 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:07:54 - mmengine - INFO - Epoch(train) [193][500/586] lr: 5.000000e-04 eta: 1:47:07 time: 0.673689 data_time: 0.055254 memory: 12959 loss_kpt: 303.266890 acc_pose: 0.870140 loss: 303.266890 2022/10/13 09:08:27 - mmengine - INFO - Epoch(train) [193][550/586] lr: 5.000000e-04 eta: 1:46:35 time: 0.664561 data_time: 0.064451 memory: 12959 loss_kpt: 307.474445 
acc_pose: 0.899704 loss: 307.474445 2022/10/13 09:08:51 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:09:26 - mmengine - INFO - Epoch(train) [194][50/586] lr: 5.000000e-04 eta: 1:45:38 time: 0.700798 data_time: 0.076595 memory: 12959 loss_kpt: 303.097472 acc_pose: 0.848970 loss: 303.097472 2022/10/13 09:10:00 - mmengine - INFO - Epoch(train) [194][100/586] lr: 5.000000e-04 eta: 1:45:07 time: 0.681399 data_time: 0.060541 memory: 12959 loss_kpt: 304.899604 acc_pose: 0.937309 loss: 304.899604 2022/10/13 09:10:36 - mmengine - INFO - Epoch(train) [194][150/586] lr: 5.000000e-04 eta: 1:44:35 time: 0.708841 data_time: 0.067653 memory: 12959 loss_kpt: 305.158918 acc_pose: 0.830776 loss: 305.158918 2022/10/13 09:11:10 - mmengine - INFO - Epoch(train) [194][200/586] lr: 5.000000e-04 eta: 1:44:03 time: 0.694828 data_time: 0.058778 memory: 12959 loss_kpt: 302.946509 acc_pose: 0.814040 loss: 302.946509 2022/10/13 09:11:45 - mmengine - INFO - Epoch(train) [194][250/586] lr: 5.000000e-04 eta: 1:43:32 time: 0.700018 data_time: 0.063244 memory: 12959 loss_kpt: 304.364856 acc_pose: 0.908383 loss: 304.364856 2022/10/13 09:12:20 - mmengine - INFO - Epoch(train) [194][300/586] lr: 5.000000e-04 eta: 1:43:00 time: 0.698085 data_time: 0.070622 memory: 12959 loss_kpt: 305.290077 acc_pose: 0.844408 loss: 305.290077 2022/10/13 09:12:55 - mmengine - INFO - Epoch(train) [194][350/586] lr: 5.000000e-04 eta: 1:42:28 time: 0.700116 data_time: 0.065768 memory: 12959 loss_kpt: 301.068274 acc_pose: 0.885939 loss: 301.068274 2022/10/13 09:13:30 - mmengine - INFO - Epoch(train) [194][400/586] lr: 5.000000e-04 eta: 1:41:56 time: 0.698747 data_time: 0.063314 memory: 12959 loss_kpt: 307.916657 acc_pose: 0.840446 loss: 307.916657 2022/10/13 09:14:06 - mmengine - INFO - Epoch(train) [194][450/586] lr: 5.000000e-04 eta: 1:41:25 time: 0.711945 data_time: 0.071326 memory: 12959 loss_kpt: 312.181496 acc_pose: 0.845434 loss: 312.181496 2022/10/13 09:14:41 - mmengine - INFO - Epoch(train) [194][500/586] lr: 5.000000e-04 eta: 1:40:53 time: 0.701265 data_time: 0.061176 memory: 12959 loss_kpt: 299.495569 acc_pose: 0.831476 loss: 299.495569 2022/10/13 09:15:17 - mmengine - INFO - Epoch(train) [194][550/586] lr: 5.000000e-04 eta: 1:40:21 time: 0.711186 data_time: 0.064832 memory: 12959 loss_kpt: 308.341680 acc_pose: 0.881438 loss: 308.341680 2022/10/13 09:15:42 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:16:16 - mmengine - INFO - Epoch(train) [195][50/586] lr: 5.000000e-04 eta: 1:39:25 time: 0.683916 data_time: 0.073837 memory: 12959 loss_kpt: 306.881943 acc_pose: 0.812499 loss: 306.881943 2022/10/13 09:16:49 - mmengine - INFO - Epoch(train) [195][100/586] lr: 5.000000e-04 eta: 1:38:53 time: 0.669488 data_time: 0.063311 memory: 12959 loss_kpt: 311.030522 acc_pose: 0.901800 loss: 311.030522 2022/10/13 09:17:24 - mmengine - INFO - Epoch(train) [195][150/586] lr: 5.000000e-04 eta: 1:38:21 time: 0.683591 data_time: 0.068337 memory: 12959 loss_kpt: 305.718440 acc_pose: 0.843986 loss: 305.718440 2022/10/13 09:17:58 - mmengine - INFO - Epoch(train) [195][200/586] lr: 5.000000e-04 eta: 1:37:49 time: 0.686539 data_time: 0.062839 memory: 12959 loss_kpt: 302.351610 acc_pose: 0.860731 loss: 302.351610 2022/10/13 09:18:32 - mmengine - INFO - Epoch(train) [195][250/586] lr: 5.000000e-04 eta: 1:37:17 time: 0.679926 data_time: 0.068379 memory: 12959 loss_kpt: 304.007387 acc_pose: 0.804185 loss: 304.007387 2022/10/13 09:19:06 - mmengine - INFO - 
Epoch(train) [195][300/586] lr: 5.000000e-04 eta: 1:36:45 time: 0.674657 data_time: 0.072307 memory: 12959 loss_kpt: 303.527142 acc_pose: 0.837991 loss: 303.527142 2022/10/13 09:19:17 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:19:40 - mmengine - INFO - Epoch(train) [195][350/586] lr: 5.000000e-04 eta: 1:36:14 time: 0.687933 data_time: 0.066115 memory: 12959 loss_kpt: 302.738486 acc_pose: 0.885702 loss: 302.738486 2022/10/13 09:20:14 - mmengine - INFO - Epoch(train) [195][400/586] lr: 5.000000e-04 eta: 1:35:42 time: 0.685042 data_time: 0.063849 memory: 12959 loss_kpt: 308.457686 acc_pose: 0.856499 loss: 308.457686 2022/10/13 09:20:48 - mmengine - INFO - Epoch(train) [195][450/586] lr: 5.000000e-04 eta: 1:35:10 time: 0.678305 data_time: 0.061245 memory: 12959 loss_kpt: 308.354001 acc_pose: 0.872672 loss: 308.354001 2022/10/13 09:21:22 - mmengine - INFO - Epoch(train) [195][500/586] lr: 5.000000e-04 eta: 1:34:38 time: 0.682406 data_time: 0.063916 memory: 12959 loss_kpt: 308.417968 acc_pose: 0.872559 loss: 308.417968 2022/10/13 09:21:57 - mmengine - INFO - Epoch(train) [195][550/586] lr: 5.000000e-04 eta: 1:34:06 time: 0.683312 data_time: 0.064986 memory: 12959 loss_kpt: 305.649493 acc_pose: 0.918901 loss: 305.649493 2022/10/13 09:22:21 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:22:56 - mmengine - INFO - Epoch(train) [196][50/586] lr: 5.000000e-04 eta: 1:33:10 time: 0.704030 data_time: 0.073859 memory: 12959 loss_kpt: 305.442964 acc_pose: 0.860773 loss: 305.442964 2022/10/13 09:23:30 - mmengine - INFO - Epoch(train) [196][100/586] lr: 5.000000e-04 eta: 1:32:38 time: 0.679400 data_time: 0.064714 memory: 12959 loss_kpt: 305.719833 acc_pose: 0.865518 loss: 305.719833 2022/10/13 09:24:05 - mmengine - INFO - Epoch(train) [196][150/586] lr: 5.000000e-04 eta: 1:32:06 time: 0.694489 data_time: 0.067499 memory: 12959 loss_kpt: 303.547570 acc_pose: 0.844872 loss: 303.547570 2022/10/13 09:24:39 - mmengine - INFO - Epoch(train) [196][200/586] lr: 5.000000e-04 eta: 1:31:34 time: 0.685389 data_time: 0.063777 memory: 12959 loss_kpt: 310.527019 acc_pose: 0.837681 loss: 310.527019 2022/10/13 09:25:14 - mmengine - INFO - Epoch(train) [196][250/586] lr: 5.000000e-04 eta: 1:31:03 time: 0.696650 data_time: 0.062046 memory: 12959 loss_kpt: 307.685452 acc_pose: 0.879696 loss: 307.685452 2022/10/13 09:25:48 - mmengine - INFO - Epoch(train) [196][300/586] lr: 5.000000e-04 eta: 1:30:31 time: 0.692750 data_time: 0.062172 memory: 12959 loss_kpt: 301.206207 acc_pose: 0.918939 loss: 301.206207 2022/10/13 09:26:23 - mmengine - INFO - Epoch(train) [196][350/586] lr: 5.000000e-04 eta: 1:29:59 time: 0.694807 data_time: 0.064280 memory: 12959 loss_kpt: 304.815758 acc_pose: 0.805608 loss: 304.815758 2022/10/13 09:26:58 - mmengine - INFO - Epoch(train) [196][400/586] lr: 5.000000e-04 eta: 1:29:27 time: 0.689372 data_time: 0.065027 memory: 12959 loss_kpt: 301.225068 acc_pose: 0.842955 loss: 301.225068 2022/10/13 09:27:32 - mmengine - INFO - Epoch(train) [196][450/586] lr: 5.000000e-04 eta: 1:28:55 time: 0.684443 data_time: 0.060849 memory: 12959 loss_kpt: 308.391254 acc_pose: 0.865624 loss: 308.391254 2022/10/13 09:28:06 - mmengine - INFO - Epoch(train) [196][500/586] lr: 5.000000e-04 eta: 1:28:24 time: 0.687183 data_time: 0.066399 memory: 12959 loss_kpt: 309.714525 acc_pose: 0.834744 loss: 309.714525 2022/10/13 09:28:41 - mmengine - INFO - Epoch(train) [196][550/586] lr: 5.000000e-04 eta: 1:27:52 time: 0.687418 data_time: 
0.063491 memory: 12959 loss_kpt: 304.397298 acc_pose: 0.897178 loss: 304.397298 2022/10/13 09:29:06 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:29:40 - mmengine - INFO - Epoch(train) [197][50/586] lr: 5.000000e-04 eta: 1:26:55 time: 0.689926 data_time: 0.072990 memory: 12959 loss_kpt: 302.970696 acc_pose: 0.906887 loss: 302.970696 2022/10/13 09:30:15 - mmengine - INFO - Epoch(train) [197][100/586] lr: 5.000000e-04 eta: 1:26:23 time: 0.689259 data_time: 0.059551 memory: 12959 loss_kpt: 305.297095 acc_pose: 0.778867 loss: 305.297095 2022/10/13 09:30:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:30:49 - mmengine - INFO - Epoch(train) [197][150/586] lr: 5.000000e-04 eta: 1:25:52 time: 0.679588 data_time: 0.060750 memory: 12959 loss_kpt: 305.118795 acc_pose: 0.889335 loss: 305.118795 2022/10/13 09:31:22 - mmengine - INFO - Epoch(train) [197][200/586] lr: 5.000000e-04 eta: 1:25:20 time: 0.669863 data_time: 0.061978 memory: 12959 loss_kpt: 303.076135 acc_pose: 0.807976 loss: 303.076135 2022/10/13 09:31:56 - mmengine - INFO - Epoch(train) [197][250/586] lr: 5.000000e-04 eta: 1:24:48 time: 0.677911 data_time: 0.065296 memory: 12959 loss_kpt: 308.261268 acc_pose: 0.833515 loss: 308.261268 2022/10/13 09:32:30 - mmengine - INFO - Epoch(train) [197][300/586] lr: 5.000000e-04 eta: 1:24:16 time: 0.673988 data_time: 0.065456 memory: 12959 loss_kpt: 307.735584 acc_pose: 0.738761 loss: 307.735584 2022/10/13 09:33:04 - mmengine - INFO - Epoch(train) [197][350/586] lr: 5.000000e-04 eta: 1:23:44 time: 0.673326 data_time: 0.060655 memory: 12959 loss_kpt: 303.433116 acc_pose: 0.850082 loss: 303.433116 2022/10/13 09:33:37 - mmengine - INFO - Epoch(train) [197][400/586] lr: 5.000000e-04 eta: 1:23:12 time: 0.664746 data_time: 0.061993 memory: 12959 loss_kpt: 308.086743 acc_pose: 0.910463 loss: 308.086743 2022/10/13 09:34:10 - mmengine - INFO - Epoch(train) [197][450/586] lr: 5.000000e-04 eta: 1:22:40 time: 0.663251 data_time: 0.070498 memory: 12959 loss_kpt: 306.281683 acc_pose: 0.778975 loss: 306.281683 2022/10/13 09:34:43 - mmengine - INFO - Epoch(train) [197][500/586] lr: 5.000000e-04 eta: 1:22:08 time: 0.663977 data_time: 0.069227 memory: 12959 loss_kpt: 302.520743 acc_pose: 0.842552 loss: 302.520743 2022/10/13 09:35:17 - mmengine - INFO - Epoch(train) [197][550/586] lr: 5.000000e-04 eta: 1:21:37 time: 0.666579 data_time: 0.062620 memory: 12959 loss_kpt: 300.658328 acc_pose: 0.895710 loss: 300.658328 2022/10/13 09:35:40 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:36:14 - mmengine - INFO - Epoch(train) [198][50/586] lr: 5.000000e-04 eta: 1:20:40 time: 0.675314 data_time: 0.071304 memory: 12959 loss_kpt: 305.766669 acc_pose: 0.875362 loss: 305.766669 2022/10/13 09:36:47 - mmengine - INFO - Epoch(train) [198][100/586] lr: 5.000000e-04 eta: 1:20:08 time: 0.662517 data_time: 0.061779 memory: 12959 loss_kpt: 303.656854 acc_pose: 0.861483 loss: 303.656854 2022/10/13 09:37:21 - mmengine - INFO - Epoch(train) [198][150/586] lr: 5.000000e-04 eta: 1:19:36 time: 0.674729 data_time: 0.065421 memory: 12959 loss_kpt: 301.592064 acc_pose: 0.823421 loss: 301.592064 2022/10/13 09:37:54 - mmengine - INFO - Epoch(train) [198][200/586] lr: 5.000000e-04 eta: 1:19:04 time: 0.662384 data_time: 0.059306 memory: 12959 loss_kpt: 304.225999 acc_pose: 0.861994 loss: 304.225999 2022/10/13 09:38:28 - mmengine - INFO - Epoch(train) [198][250/586] lr: 5.000000e-04 eta: 1:18:33 time: 
0.667292 data_time: 0.060336 memory: 12959 loss_kpt: 302.011708 acc_pose: 0.899439 loss: 302.011708 2022/10/13 09:39:01 - mmengine - INFO - Epoch(train) [198][300/586] lr: 5.000000e-04 eta: 1:18:01 time: 0.664852 data_time: 0.062645 memory: 12959 loss_kpt: 306.230693 acc_pose: 0.838721 loss: 306.230693 2022/10/13 09:39:34 - mmengine - INFO - Epoch(train) [198][350/586] lr: 5.000000e-04 eta: 1:17:29 time: 0.669443 data_time: 0.061005 memory: 12959 loss_kpt: 302.948457 acc_pose: 0.859183 loss: 302.948457 2022/10/13 09:40:08 - mmengine - INFO - Epoch(train) [198][400/586] lr: 5.000000e-04 eta: 1:16:57 time: 0.669228 data_time: 0.064641 memory: 12959 loss_kpt: 298.345102 acc_pose: 0.864903 loss: 298.345102 2022/10/13 09:40:42 - mmengine - INFO - Epoch(train) [198][450/586] lr: 5.000000e-04 eta: 1:16:25 time: 0.674477 data_time: 0.064506 memory: 12959 loss_kpt: 304.014617 acc_pose: 0.837409 loss: 304.014617 2022/10/13 09:41:15 - mmengine - INFO - Epoch(train) [198][500/586] lr: 5.000000e-04 eta: 1:15:53 time: 0.670095 data_time: 0.061835 memory: 12959 loss_kpt: 303.724818 acc_pose: 0.878019 loss: 303.724818 2022/10/13 09:41:49 - mmengine - INFO - Epoch(train) [198][550/586] lr: 5.000000e-04 eta: 1:15:21 time: 0.674077 data_time: 0.058112 memory: 12959 loss_kpt: 307.545790 acc_pose: 0.866632 loss: 307.545790 2022/10/13 09:41:54 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:42:13 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:42:48 - mmengine - INFO - Epoch(train) [199][50/586] lr: 5.000000e-04 eta: 1:14:25 time: 0.699848 data_time: 0.070903 memory: 12959 loss_kpt: 296.707998 acc_pose: 0.834281 loss: 296.707998 2022/10/13 09:43:22 - mmengine - INFO - Epoch(train) [199][100/586] lr: 5.000000e-04 eta: 1:13:53 time: 0.671616 data_time: 0.057182 memory: 12959 loss_kpt: 301.168879 acc_pose: 0.842551 loss: 301.168879 2022/10/13 09:43:56 - mmengine - INFO - Epoch(train) [199][150/586] lr: 5.000000e-04 eta: 1:13:21 time: 0.677613 data_time: 0.063398 memory: 12959 loss_kpt: 302.171783 acc_pose: 0.826495 loss: 302.171783 2022/10/13 09:44:30 - mmengine - INFO - Epoch(train) [199][200/586] lr: 5.000000e-04 eta: 1:12:49 time: 0.676751 data_time: 0.057023 memory: 12959 loss_kpt: 300.987606 acc_pose: 0.928188 loss: 300.987606 2022/10/13 09:45:05 - mmengine - INFO - Epoch(train) [199][250/586] lr: 5.000000e-04 eta: 1:12:18 time: 0.700203 data_time: 0.065780 memory: 12959 loss_kpt: 304.260821 acc_pose: 0.858423 loss: 304.260821 2022/10/13 09:45:39 - mmengine - INFO - Epoch(train) [199][300/586] lr: 5.000000e-04 eta: 1:11:46 time: 0.681643 data_time: 0.058745 memory: 12959 loss_kpt: 304.229046 acc_pose: 0.853781 loss: 304.229046 2022/10/13 09:46:14 - mmengine - INFO - Epoch(train) [199][350/586] lr: 5.000000e-04 eta: 1:11:14 time: 0.696812 data_time: 0.065264 memory: 12959 loss_kpt: 307.134659 acc_pose: 0.894582 loss: 307.134659 2022/10/13 09:46:48 - mmengine - INFO - Epoch(train) [199][400/586] lr: 5.000000e-04 eta: 1:10:42 time: 0.685639 data_time: 0.060329 memory: 12959 loss_kpt: 301.152953 acc_pose: 0.874491 loss: 301.152953 2022/10/13 09:47:22 - mmengine - INFO - Epoch(train) [199][450/586] lr: 5.000000e-04 eta: 1:10:10 time: 0.686733 data_time: 0.061515 memory: 12959 loss_kpt: 304.464399 acc_pose: 0.861182 loss: 304.464399 2022/10/13 09:47:57 - mmengine - INFO - Epoch(train) [199][500/586] lr: 5.000000e-04 eta: 1:09:38 time: 0.700905 data_time: 0.061086 memory: 12959 loss_kpt: 304.475496 acc_pose: 0.873710 
loss: 304.475496 2022/10/13 09:48:32 - mmengine - INFO - Epoch(train) [199][550/586] lr: 5.000000e-04 eta: 1:09:06 time: 0.688410 data_time: 0.063920 memory: 12959 loss_kpt: 309.097416 acc_pose: 0.879207 loss: 309.097416 2022/10/13 09:48:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:49:31 - mmengine - INFO - Epoch(train) [200][50/586] lr: 5.000000e-04 eta: 1:08:10 time: 0.701520 data_time: 0.076817 memory: 12959 loss_kpt: 304.432295 acc_pose: 0.834361 loss: 304.432295 2022/10/13 09:50:05 - mmengine - INFO - Epoch(train) [200][100/586] lr: 5.000000e-04 eta: 1:07:38 time: 0.674179 data_time: 0.065448 memory: 12959 loss_kpt: 306.859744 acc_pose: 0.859270 loss: 306.859744 2022/10/13 09:50:39 - mmengine - INFO - Epoch(train) [200][150/586] lr: 5.000000e-04 eta: 1:07:07 time: 0.690439 data_time: 0.063445 memory: 12959 loss_kpt: 295.461683 acc_pose: 0.875105 loss: 295.461683 2022/10/13 09:51:14 - mmengine - INFO - Epoch(train) [200][200/586] lr: 5.000000e-04 eta: 1:06:35 time: 0.685343 data_time: 0.061973 memory: 12959 loss_kpt: 302.035812 acc_pose: 0.842945 loss: 302.035812 2022/10/13 09:51:48 - mmengine - INFO - Epoch(train) [200][250/586] lr: 5.000000e-04 eta: 1:06:03 time: 0.683244 data_time: 0.061747 memory: 12959 loss_kpt: 301.984260 acc_pose: 0.879334 loss: 301.984260 2022/10/13 09:52:22 - mmengine - INFO - Epoch(train) [200][300/586] lr: 5.000000e-04 eta: 1:05:31 time: 0.674618 data_time: 0.063221 memory: 12959 loss_kpt: 307.066382 acc_pose: 0.853395 loss: 307.066382 2022/10/13 09:52:55 - mmengine - INFO - Epoch(train) [200][350/586] lr: 5.000000e-04 eta: 1:04:59 time: 0.675152 data_time: 0.059467 memory: 12959 loss_kpt: 300.957424 acc_pose: 0.877426 loss: 300.957424 2022/10/13 09:53:20 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:53:29 - mmengine - INFO - Epoch(train) [200][400/586] lr: 5.000000e-04 eta: 1:04:27 time: 0.674933 data_time: 0.063221 memory: 12959 loss_kpt: 308.098988 acc_pose: 0.823863 loss: 308.098988 2022/10/13 09:54:03 - mmengine - INFO - Epoch(train) [200][450/586] lr: 5.000000e-04 eta: 1:03:55 time: 0.683088 data_time: 0.059783 memory: 12959 loss_kpt: 306.892614 acc_pose: 0.879488 loss: 306.892614 2022/10/13 09:54:37 - mmengine - INFO - Epoch(train) [200][500/586] lr: 5.000000e-04 eta: 1:03:23 time: 0.670071 data_time: 0.062808 memory: 12959 loss_kpt: 300.253678 acc_pose: 0.899792 loss: 300.253678 2022/10/13 09:55:11 - mmengine - INFO - Epoch(train) [200][550/586] lr: 5.000000e-04 eta: 1:02:52 time: 0.680434 data_time: 0.061425 memory: 12959 loss_kpt: 300.052621 acc_pose: 0.914191 loss: 300.052621 2022/10/13 09:55:35 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 09:55:35 - mmengine - INFO - Saving checkpoint at 200 epochs 2022/10/13 09:55:52 - mmengine - INFO - Epoch(val) [200][50/407] eta: 0:01:35 time: 0.267150 data_time: 0.012681 memory: 12959 2022/10/13 09:56:05 - mmengine - INFO - Epoch(val) [200][100/407] eta: 0:01:20 time: 0.260978 data_time: 0.007601 memory: 2407 2022/10/13 09:56:18 - mmengine - INFO - Epoch(val) [200][150/407] eta: 0:01:07 time: 0.262525 data_time: 0.008016 memory: 2407 2022/10/13 09:56:32 - mmengine - INFO - Epoch(val) [200][200/407] eta: 0:00:55 time: 0.268775 data_time: 0.007992 memory: 2407 2022/10/13 09:56:45 - mmengine - INFO - Epoch(val) [200][250/407] eta: 0:00:40 time: 0.260332 data_time: 0.007857 memory: 2407 2022/10/13 09:56:58 - mmengine - INFO - Epoch(val) [200][300/407] eta: 
0:00:27 time: 0.260997 data_time: 0.007679 memory: 2407 2022/10/13 09:57:11 - mmengine - INFO - Epoch(val) [200][350/407] eta: 0:00:14 time: 0.261277 data_time: 0.007777 memory: 2407 2022/10/13 09:57:24 - mmengine - INFO - Epoch(val) [200][400/407] eta: 0:00:01 time: 0.257046 data_time: 0.007671 memory: 2407 2022/10/13 09:57:38 - mmengine - INFO - Evaluating CocoMetric... 2022/10/13 09:57:55 - mmengine - INFO - Epoch(val) [200][407/407] coco/AP: 0.749539 coco/AP .5: 0.901006 coco/AP .75: 0.823400 coco/AP (M): 0.715617 coco/AP (L): 0.813783 coco/AR: 0.812783 coco/AR .5: 0.940963 coco/AR .75: 0.873898 coco/AR (M): 0.769626 coco/AR (L): 0.872835 2022/10/13 09:58:29 - mmengine - INFO - Epoch(train) [201][50/586] lr: 5.000000e-05 eta: 1:01:56 time: 0.688060 data_time: 0.077260 memory: 12959 loss_kpt: 299.883496 acc_pose: 0.857859 loss: 299.883496 2022/10/13 09:59:03 - mmengine - INFO - Epoch(train) [201][100/586] lr: 5.000000e-05 eta: 1:01:24 time: 0.675461 data_time: 0.057234 memory: 12959 loss_kpt: 306.111257 acc_pose: 0.882397 loss: 306.111257 2022/10/13 09:59:37 - mmengine - INFO - Epoch(train) [201][150/586] lr: 5.000000e-05 eta: 1:00:52 time: 0.682987 data_time: 0.065213 memory: 12959 loss_kpt: 304.954188 acc_pose: 0.882233 loss: 304.954188 2022/10/13 10:00:12 - mmengine - INFO - Epoch(train) [201][200/586] lr: 5.000000e-05 eta: 1:00:20 time: 0.684049 data_time: 0.066478 memory: 12959 loss_kpt: 301.174654 acc_pose: 0.856180 loss: 301.174654 2022/10/13 10:00:46 - mmengine - INFO - Epoch(train) [201][250/586] lr: 5.000000e-05 eta: 0:59:48 time: 0.687660 data_time: 0.066319 memory: 12959 loss_kpt: 304.654613 acc_pose: 0.924280 loss: 304.654613 2022/10/13 10:01:20 - mmengine - INFO - Epoch(train) [201][300/586] lr: 5.000000e-05 eta: 0:59:16 time: 0.679577 data_time: 0.066588 memory: 12959 loss_kpt: 299.323049 acc_pose: 0.865000 loss: 299.323049 2022/10/13 10:01:54 - mmengine - INFO - Epoch(train) [201][350/586] lr: 5.000000e-05 eta: 0:58:44 time: 0.677800 data_time: 0.064825 memory: 12959 loss_kpt: 302.439096 acc_pose: 0.805536 loss: 302.439096 2022/10/13 10:02:28 - mmengine - INFO - Epoch(train) [201][400/586] lr: 5.000000e-05 eta: 0:58:12 time: 0.678200 data_time: 0.064956 memory: 12959 loss_kpt: 304.890499 acc_pose: 0.874071 loss: 304.890499 2022/10/13 10:03:02 - mmengine - INFO - Epoch(train) [201][450/586] lr: 5.000000e-05 eta: 0:57:40 time: 0.681382 data_time: 0.062198 memory: 12959 loss_kpt: 303.986455 acc_pose: 0.783964 loss: 303.986455 2022/10/13 10:03:36 - mmengine - INFO - Epoch(train) [201][500/586] lr: 5.000000e-05 eta: 0:57:09 time: 0.687789 data_time: 0.062944 memory: 12959 loss_kpt: 300.971675 acc_pose: 0.829020 loss: 300.971675 2022/10/13 10:04:11 - mmengine - INFO - Epoch(train) [201][550/586] lr: 5.000000e-05 eta: 0:56:37 time: 0.691618 data_time: 0.064583 memory: 12959 loss_kpt: 308.232993 acc_pose: 0.835643 loss: 308.232993 2022/10/13 10:04:35 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 10:05:09 - mmengine - INFO - Epoch(train) [202][50/586] lr: 5.000000e-05 eta: 0:55:41 time: 0.677691 data_time: 0.076033 memory: 12959 loss_kpt: 301.023592 acc_pose: 0.880039 loss: 301.023592 2022/10/13 10:05:43 - mmengine - INFO - Epoch(train) [202][100/586] lr: 5.000000e-05 eta: 0:55:09 time: 0.666325 data_time: 0.055177 memory: 12959 loss_kpt: 302.062465 acc_pose: 0.832363 loss: 302.062465 2022/10/13 10:06:17 - mmengine - INFO - Epoch(train) [202][150/586] lr: 5.000000e-05 eta: 0:54:37 time: 0.683707 data_time: 0.060822 memory: 12959 
loss_kpt: 303.185557 acc_pose: 0.843843 loss: 303.185557 2022/10/13 10:06:50 - mmengine - INFO - Epoch(train) [202][200/586] lr: 5.000000e-05 eta: 0:54:05 time: 0.655258 data_time: 0.060203 memory: 12959 loss_kpt: 305.669575 acc_pose: 0.867823 loss: 305.669575 2022/10/13 10:06:59 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 10:07:23 - mmengine - INFO - Epoch(train) [202][250/586] lr: 5.000000e-05 eta: 0:53:33 time: 0.659174 data_time: 0.060771 memory: 12959 loss_kpt: 303.641999 acc_pose: 0.852981 loss: 303.641999 2022/10/13 10:07:56 - mmengine - INFO - Epoch(train) [202][300/586] lr: 5.000000e-05 eta: 0:53:01 time: 0.659336 data_time: 0.058375 memory: 12959 loss_kpt: 304.260229 acc_pose: 0.902661 loss: 304.260229 2022/10/13 10:08:28 - mmengine - INFO - Epoch(train) [202][350/586] lr: 5.000000e-05 eta: 0:52:29 time: 0.655602 data_time: 0.062162 memory: 12959 loss_kpt: 305.145995 acc_pose: 0.793764 loss: 305.145995 2022/10/13 10:09:01 - mmengine - INFO - Epoch(train) [202][400/586] lr: 5.000000e-05 eta: 0:51:57 time: 0.657482 data_time: 0.058736 memory: 12959 loss_kpt: 300.145652 acc_pose: 0.883076 loss: 300.145652 2022/10/13 10:09:34 - mmengine - INFO - Epoch(train) [202][450/586] lr: 5.000000e-05 eta: 0:51:25 time: 0.660866 data_time: 0.060166 memory: 12959 loss_kpt: 303.236132 acc_pose: 0.893033 loss: 303.236132 2022/10/13 10:10:07 - mmengine - INFO - Epoch(train) [202][500/586] lr: 5.000000e-05 eta: 0:50:53 time: 0.662011 data_time: 0.060448 memory: 12959 loss_kpt: 303.797453 acc_pose: 0.845437 loss: 303.797453 2022/10/13 10:10:41 - mmengine - INFO - Epoch(train) [202][550/586] lr: 5.000000e-05 eta: 0:50:21 time: 0.666692 data_time: 0.063383 memory: 12959 loss_kpt: 305.278418 acc_pose: 0.929531 loss: 305.278418 2022/10/13 10:11:04 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 10:11:39 - mmengine - INFO - Epoch(train) [203][50/586] lr: 5.000000e-05 eta: 0:49:26 time: 0.689830 data_time: 0.072046 memory: 12959 loss_kpt: 301.296102 acc_pose: 0.920037 loss: 301.296102 2022/10/13 10:12:13 - mmengine - INFO - Epoch(train) [203][100/586] lr: 5.000000e-05 eta: 0:48:54 time: 0.680032 data_time: 0.060280 memory: 12959 loss_kpt: 313.529573 acc_pose: 0.840899 loss: 313.529573 2022/10/13 10:12:47 - mmengine - INFO - Epoch(train) [203][150/586] lr: 5.000000e-05 eta: 0:48:22 time: 0.671378 data_time: 0.062172 memory: 12959 loss_kpt: 306.354210 acc_pose: 0.878693 loss: 306.354210 2022/10/13 10:13:20 - mmengine - INFO - Epoch(train) [203][200/586] lr: 5.000000e-05 eta: 0:47:50 time: 0.663969 data_time: 0.056310 memory: 12959 loss_kpt: 302.081154 acc_pose: 0.877364 loss: 302.081154 2022/10/13 10:13:55 - mmengine - INFO - Epoch(train) [203][250/586] lr: 5.000000e-05 eta: 0:47:18 time: 0.697661 data_time: 0.069044 memory: 12959 loss_kpt: 300.695688 acc_pose: 0.921284 loss: 300.695688 2022/10/13 10:14:28 - mmengine - INFO - Epoch(train) [203][300/586] lr: 5.000000e-05 eta: 0:46:46 time: 0.666244 data_time: 0.056594 memory: 12959 loss_kpt: 308.722074 acc_pose: 0.830723 loss: 308.722074 2022/10/13 10:15:02 - mmengine - INFO - Epoch(train) [203][350/586] lr: 5.000000e-05 eta: 0:46:14 time: 0.673655 data_time: 0.065837 memory: 12959 loss_kpt: 306.828128 acc_pose: 0.807213 loss: 306.828128 2022/10/13 10:15:35 - mmengine - INFO - Epoch(train) [203][400/586] lr: 5.000000e-05 eta: 0:45:42 time: 0.663227 data_time: 0.059744 memory: 12959 loss_kpt: 302.754692 acc_pose: 0.870864 loss: 302.754692 2022/10/13 10:16:09 - 
mmengine - INFO - Epoch(train) [203][450/586] lr: 5.000000e-05 eta: 0:45:10 time: 0.676936 data_time: 0.060865 memory: 12959 loss_kpt: 308.524142 acc_pose: 0.888714 loss: 308.524142 2022/10/13 10:16:42 - mmengine - INFO - Epoch(train) [203][500/586] lr: 5.000000e-05 eta: 0:44:38 time: 0.658757 data_time: 0.058975 memory: 12959 loss_kpt: 305.448968 acc_pose: 0.797244 loss: 305.448968 2022/10/13 10:17:16 - mmengine - INFO - Epoch(train) [203][550/586] lr: 5.000000e-05 eta: 0:44:06 time: 0.679456 data_time: 0.067072 memory: 12959 loss_kpt: 301.666044 acc_pose: 0.815106 loss: 301.666044 2022/10/13 10:17:40 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 10:18:09 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 10:18:14 - mmengine - INFO - Epoch(train) [204][50/586] lr: 5.000000e-05 eta: 0:43:11 time: 0.683685 data_time: 0.075394 memory: 12959 loss_kpt: 304.056863 acc_pose: 0.793359 loss: 304.056863 2022/10/13 10:18:48 - mmengine - INFO - Epoch(train) [204][100/586] lr: 5.000000e-05 eta: 0:42:39 time: 0.670082 data_time: 0.065758 memory: 12959 loss_kpt: 303.300061 acc_pose: 0.857251 loss: 303.300061 2022/10/13 10:19:22 - mmengine - INFO - Epoch(train) [204][150/586] lr: 5.000000e-05 eta: 0:42:07 time: 0.682870 data_time: 0.061039 memory: 12959 loss_kpt: 302.027169 acc_pose: 0.894498 loss: 302.027169 2022/10/13 10:19:58 - mmengine - INFO - Epoch(train) [204][200/586] lr: 5.000000e-05 eta: 0:41:35 time: 0.719542 data_time: 0.064773 memory: 12959 loss_kpt: 297.949820 acc_pose: 0.827881 loss: 297.949820 2022/10/13 10:20:33 - mmengine - INFO - Epoch(train) [204][250/586] lr: 5.000000e-05 eta: 0:41:03 time: 0.699148 data_time: 0.059178 memory: 12959 loss_kpt: 309.783555 acc_pose: 0.902182 loss: 309.783555 2022/10/13 10:21:07 - mmengine - INFO - Epoch(train) [204][300/586] lr: 5.000000e-05 eta: 0:40:31 time: 0.676571 data_time: 0.064274 memory: 12959 loss_kpt: 301.671881 acc_pose: 0.838689 loss: 301.671881 2022/10/13 10:21:41 - mmengine - INFO - Epoch(train) [204][350/586] lr: 5.000000e-05 eta: 0:39:59 time: 0.687711 data_time: 0.061412 memory: 12959 loss_kpt: 302.525670 acc_pose: 0.843773 loss: 302.525670 2022/10/13 10:22:15 - mmengine - INFO - Epoch(train) [204][400/586] lr: 5.000000e-05 eta: 0:39:27 time: 0.685505 data_time: 0.065930 memory: 12959 loss_kpt: 302.505536 acc_pose: 0.886682 loss: 302.505536 2022/10/13 10:22:50 - mmengine - INFO - Epoch(train) [204][450/586] lr: 5.000000e-05 eta: 0:38:56 time: 0.689856 data_time: 0.058319 memory: 12959 loss_kpt: 303.409490 acc_pose: 0.878785 loss: 303.409490 2022/10/13 10:23:24 - mmengine - INFO - Epoch(train) [204][500/586] lr: 5.000000e-05 eta: 0:38:24 time: 0.688864 data_time: 0.066236 memory: 12959 loss_kpt: 308.888754 acc_pose: 0.819929 loss: 308.888754 2022/10/13 10:23:59 - mmengine - INFO - Epoch(train) [204][550/586] lr: 5.000000e-05 eta: 0:37:52 time: 0.693535 data_time: 0.060576 memory: 12959 loss_kpt: 307.294594 acc_pose: 0.904830 loss: 307.294594 2022/10/13 10:24:24 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904 2022/10/13 10:24:59 - mmengine - INFO - Epoch(train) [205][50/586] lr: 5.000000e-05 eta: 0:36:56 time: 0.697611 data_time: 0.075806 memory: 12959 loss_kpt: 300.556119 acc_pose: 0.873285 loss: 300.556119 2022/10/13 10:25:33 - mmengine - INFO - Epoch(train) [205][100/586] lr: 5.000000e-05 eta: 0:36:24 time: 0.683133 data_time: 0.061258 memory: 12959 loss_kpt: 306.148753 acc_pose: 0.872434 loss: 306.148753 
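
The lr field above prints 5.000000e-04 through epoch 200 and 5.000000e-05 from epoch 201 onward, i.e. a tenfold step decay after epoch 200. A minimal sketch of that schedule follows; the base value, factor, and milestone are read off the printed lr values in this excerpt, not taken from any config.

    def stepped_lr(epoch: int, lr_before: float = 5.0e-04,
                   gamma: float = 0.1, milestone: int = 200) -> float:
        """Learning rate implied by the printed lr values: constant up to and
        including the milestone epoch, multiplied by gamma afterwards."""
        return lr_before * gamma if epoch > milestone else lr_before

    # Consistent with the log: epochs up to 200 print 5.000000e-04,
    # epochs 201 and later print 5.000000e-05.
    assert abs(stepped_lr(200) - 5.0e-04) < 1e-12
    assert abs(stepped_lr(201) - 5.0e-05) < 1e-12
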
2022/10/13 10:26:07 - mmengine - INFO - Epoch(train) [205][150/586] lr: 5.000000e-05 eta: 0:35:52 time: 0.684791 data_time: 0.066222 memory: 12959 loss_kpt: 306.247414 acc_pose: 0.841118 loss: 306.247414
2022/10/13 10:26:41 - mmengine - INFO - Epoch(train) [205][200/586] lr: 5.000000e-05 eta: 0:35:20 time: 0.677263 data_time: 0.060151 memory: 12959 loss_kpt: 302.021031 acc_pose: 0.888675 loss: 302.021031
2022/10/13 10:27:15 - mmengine - INFO - Epoch(train) [205][250/586] lr: 5.000000e-05 eta: 0:34:48 time: 0.689101 data_time: 0.065581 memory: 12959 loss_kpt: 304.584499 acc_pose: 0.809417 loss: 304.584499
2022/10/13 10:27:50 - mmengine - INFO - Epoch(train) [205][300/586] lr: 5.000000e-05 eta: 0:34:17 time: 0.687648 data_time: 0.062506 memory: 12959 loss_kpt: 298.813064 acc_pose: 0.830211 loss: 298.813064
2022/10/13 10:28:24 - mmengine - INFO - Epoch(train) [205][350/586] lr: 5.000000e-05 eta: 0:33:45 time: 0.679761 data_time: 0.063557 memory: 12959 loss_kpt: 307.043978 acc_pose: 0.859913 loss: 307.043978
2022/10/13 10:28:58 - mmengine - INFO - Epoch(train) [205][400/586] lr: 5.000000e-05 eta: 0:33:13 time: 0.687023 data_time: 0.061405 memory: 12959 loss_kpt: 305.056430 acc_pose: 0.866088 loss: 305.056430
2022/10/13 10:29:33 - mmengine - INFO - Epoch(train) [205][450/586] lr: 5.000000e-05 eta: 0:32:41 time: 0.687265 data_time: 0.065655 memory: 12959 loss_kpt: 303.881144 acc_pose: 0.802081 loss: 303.881144
2022/10/13 10:29:36 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 10:30:07 - mmengine - INFO - Epoch(train) [205][500/586] lr: 5.000000e-05 eta: 0:32:09 time: 0.686032 data_time: 0.057926 memory: 12959 loss_kpt: 297.644987 acc_pose: 0.843025 loss: 297.644987
2022/10/13 10:30:41 - mmengine - INFO - Epoch(train) [205][550/586] lr: 5.000000e-05 eta: 0:31:37 time: 0.678880 data_time: 0.064541 memory: 12959 loss_kpt: 297.579378 acc_pose: 0.870054 loss: 297.579378
2022/10/13 10:31:05 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 10:31:40 - mmengine - INFO - Epoch(train) [206][50/586] lr: 5.000000e-05 eta: 0:30:41 time: 0.696102 data_time: 0.074463 memory: 12959 loss_kpt: 304.834312 acc_pose: 0.863857 loss: 304.834312
2022/10/13 10:32:14 - mmengine - INFO - Epoch(train) [206][100/586] lr: 5.000000e-05 eta: 0:30:09 time: 0.676161 data_time: 0.060437 memory: 12959 loss_kpt: 301.984068 acc_pose: 0.845815 loss: 301.984068
2022/10/13 10:32:47 - mmengine - INFO - Epoch(train) [206][150/586] lr: 5.000000e-05 eta: 0:29:37 time: 0.666479 data_time: 0.060572 memory: 12959 loss_kpt: 299.358965 acc_pose: 0.897589 loss: 299.358965
2022/10/13 10:33:20 - mmengine - INFO - Epoch(train) [206][200/586] lr: 5.000000e-05 eta: 0:29:06 time: 0.660132 data_time: 0.060315 memory: 12959 loss_kpt: 309.248212 acc_pose: 0.854054 loss: 309.248212
2022/10/13 10:33:54 - mmengine - INFO - Epoch(train) [206][250/586] lr: 5.000000e-05 eta: 0:28:34 time: 0.683914 data_time: 0.062033 memory: 12959 loss_kpt: 303.436352 acc_pose: 0.887607 loss: 303.436352
2022/10/13 10:34:28 - mmengine - INFO - Epoch(train) [206][300/586] lr: 5.000000e-05 eta: 0:28:02 time: 0.677491 data_time: 0.060909 memory: 12959 loss_kpt: 301.034258 acc_pose: 0.874581 loss: 301.034258
2022/10/13 10:35:02 - mmengine - INFO - Epoch(train) [206][350/586] lr: 5.000000e-05 eta: 0:27:30 time: 0.673844 data_time: 0.061844 memory: 12959 loss_kpt: 309.333480 acc_pose: 0.879616 loss: 309.333480
2022/10/13 10:35:36 - mmengine - INFO - Epoch(train) [206][400/586] lr: 5.000000e-05 eta: 0:26:58 time: 0.680151 data_time: 0.063386 memory: 12959 loss_kpt: 300.118690 acc_pose: 0.903334 loss: 300.118690
2022/10/13 10:36:11 - mmengine - INFO - Epoch(train) [206][450/586] lr: 5.000000e-05 eta: 0:26:26 time: 0.701780 data_time: 0.062767 memory: 12959 loss_kpt: 308.698728 acc_pose: 0.867305 loss: 308.698728
2022/10/13 10:36:46 - mmengine - INFO - Epoch(train) [206][500/586] lr: 5.000000e-05 eta: 0:25:54 time: 0.696143 data_time: 0.061175 memory: 12959 loss_kpt: 300.225494 acc_pose: 0.884213 loss: 300.225494
2022/10/13 10:37:21 - mmengine - INFO - Epoch(train) [206][550/586] lr: 5.000000e-05 eta: 0:25:22 time: 0.696572 data_time: 0.062864 memory: 12959 loss_kpt: 308.329518 acc_pose: 0.898350 loss: 308.329518
2022/10/13 10:37:45 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 10:38:20 - mmengine - INFO - Epoch(train) [207][50/586] lr: 5.000000e-05 eta: 0:24:27 time: 0.693984 data_time: 0.072351 memory: 12959 loss_kpt: 303.333476 acc_pose: 0.886876 loss: 303.333476
2022/10/13 10:38:54 - mmengine - INFO - Epoch(train) [207][100/586] lr: 5.000000e-05 eta: 0:23:55 time: 0.686416 data_time: 0.065846 memory: 12959 loss_kpt: 305.296701 acc_pose: 0.917058 loss: 305.296701
2022/10/13 10:39:28 - mmengine - INFO - Epoch(train) [207][150/586] lr: 5.000000e-05 eta: 0:23:23 time: 0.676381 data_time: 0.062633 memory: 12959 loss_kpt: 307.392805 acc_pose: 0.877993 loss: 307.392805
2022/10/13 10:40:02 - mmengine - INFO - Epoch(train) [207][200/586] lr: 5.000000e-05 eta: 0:22:51 time: 0.676878 data_time: 0.062605 memory: 12959 loss_kpt: 303.701005 acc_pose: 0.863862 loss: 303.701005
2022/10/13 10:40:36 - mmengine - INFO - Epoch(train) [207][250/586] lr: 5.000000e-05 eta: 0:22:19 time: 0.685224 data_time: 0.063539 memory: 12959 loss_kpt: 304.646990 acc_pose: 0.924858 loss: 304.646990
2022/10/13 10:40:59 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 10:41:10 - mmengine - INFO - Epoch(train) [207][300/586] lr: 5.000000e-05 eta: 0:21:47 time: 0.682263 data_time: 0.066559 memory: 12959 loss_kpt: 301.421624 acc_pose: 0.920289 loss: 301.421624
2022/10/13 10:41:45 - mmengine - INFO - Epoch(train) [207][350/586] lr: 5.000000e-05 eta: 0:21:15 time: 0.692178 data_time: 0.065900 memory: 12959 loss_kpt: 304.952201 acc_pose: 0.764776 loss: 304.952201
2022/10/13 10:42:19 - mmengine - INFO - Epoch(train) [207][400/586] lr: 5.000000e-05 eta: 0:20:43 time: 0.683246 data_time: 0.057816 memory: 12959 loss_kpt: 307.644788 acc_pose: 0.845455 loss: 307.644788
2022/10/13 10:42:53 - mmengine - INFO - Epoch(train) [207][450/586] lr: 5.000000e-05 eta: 0:20:11 time: 0.685069 data_time: 0.061238 memory: 12959 loss_kpt: 301.170512 acc_pose: 0.879531 loss: 301.170512
2022/10/13 10:43:28 - mmengine - INFO - Epoch(train) [207][500/586] lr: 5.000000e-05 eta: 0:19:39 time: 0.683395 data_time: 0.063952 memory: 12959 loss_kpt: 305.040393 acc_pose: 0.779370 loss: 305.040393
2022/10/13 10:44:02 - mmengine - INFO - Epoch(train) [207][550/586] lr: 5.000000e-05 eta: 0:19:07 time: 0.693442 data_time: 0.060516 memory: 12959 loss_kpt: 302.758387 acc_pose: 0.854373 loss: 302.758387
2022/10/13 10:44:27 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 10:45:02 - mmengine - INFO - Epoch(train) [208][50/586] lr: 5.000000e-05 eta: 0:18:12 time: 0.696589 data_time: 0.071462 memory: 12959 loss_kpt: 301.755603 acc_pose: 0.838735 loss: 301.755603
2022/10/13 10:45:35 - mmengine - INFO - Epoch(train) [208][100/586] lr: 5.000000e-05 eta: 0:17:40 time: 0.679385 data_time: 0.059828 memory: 12959 loss_kpt: 299.938117 acc_pose: 0.852759 loss: 299.938117
2022/10/13 10:46:11 - mmengine - INFO - Epoch(train) [208][150/586] lr: 5.000000e-05 eta: 0:17:08 time: 0.701231 data_time: 0.061667 memory: 12959 loss_kpt: 300.662165 acc_pose: 0.897408 loss: 300.662165
2022/10/13 10:46:45 - mmengine - INFO - Epoch(train) [208][200/586] lr: 5.000000e-05 eta: 0:16:36 time: 0.690884 data_time: 0.060350 memory: 12959 loss_kpt: 298.739991 acc_pose: 0.910241 loss: 298.739991
2022/10/13 10:47:20 - mmengine - INFO - Epoch(train) [208][250/586] lr: 5.000000e-05 eta: 0:16:04 time: 0.693481 data_time: 0.059888 memory: 12959 loss_kpt: 301.624797 acc_pose: 0.840695 loss: 301.624797
2022/10/13 10:47:55 - mmengine - INFO - Epoch(train) [208][300/586] lr: 5.000000e-05 eta: 0:15:32 time: 0.695145 data_time: 0.063677 memory: 12959 loss_kpt: 304.678273 acc_pose: 0.853079 loss: 304.678273
2022/10/13 10:48:29 - mmengine - INFO - Epoch(train) [208][350/586] lr: 5.000000e-05 eta: 0:15:00 time: 0.682628 data_time: 0.057894 memory: 12959 loss_kpt: 305.332277 acc_pose: 0.839317 loss: 305.332277
2022/10/13 10:49:03 - mmengine - INFO - Epoch(train) [208][400/586] lr: 5.000000e-05 eta: 0:14:28 time: 0.688337 data_time: 0.058838 memory: 12959 loss_kpt: 300.836299 acc_pose: 0.883344 loss: 300.836299
2022/10/13 10:49:38 - mmengine - INFO - Epoch(train) [208][450/586] lr: 5.000000e-05 eta: 0:13:56 time: 0.691781 data_time: 0.066034 memory: 12959 loss_kpt: 307.020492 acc_pose: 0.863181 loss: 307.020492
2022/10/13 10:50:13 - mmengine - INFO - Epoch(train) [208][500/586] lr: 5.000000e-05 eta: 0:13:24 time: 0.695643 data_time: 0.063076 memory: 12959 loss_kpt: 299.227019 acc_pose: 0.867775 loss: 299.227019
2022/10/13 10:50:47 - mmengine - INFO - Epoch(train) [208][550/586] lr: 5.000000e-05 eta: 0:12:52 time: 0.687830 data_time: 0.061545 memory: 12959 loss_kpt: 299.982476 acc_pose: 0.878113 loss: 299.982476
2022/10/13 10:51:12 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 10:51:46 - mmengine - INFO - Epoch(train) [209][50/586] lr: 5.000000e-05 eta: 0:11:57 time: 0.696405 data_time: 0.071330 memory: 12959 loss_kpt: 301.973080 acc_pose: 0.890962 loss: 301.973080
2022/10/13 10:52:21 - mmengine - INFO - Epoch(train) [209][100/586] lr: 5.000000e-05 eta: 0:11:25 time: 0.700681 data_time: 0.062891 memory: 12959 loss_kpt: 299.965426 acc_pose: 0.841173 loss: 299.965426
2022/10/13 10:52:30 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 10:52:56 - mmengine - INFO - Epoch(train) [209][150/586] lr: 5.000000e-05 eta: 0:10:53 time: 0.694364 data_time: 0.060876 memory: 12959 loss_kpt: 306.583852 acc_pose: 0.840599 loss: 306.583852
2022/10/13 10:53:31 - mmengine - INFO - Epoch(train) [209][200/586] lr: 5.000000e-05 eta: 0:10:21 time: 0.691236 data_time: 0.062922 memory: 12959 loss_kpt: 303.068654 acc_pose: 0.897648 loss: 303.068654
2022/10/13 10:54:05 - mmengine - INFO - Epoch(train) [209][250/586] lr: 5.000000e-05 eta: 0:09:49 time: 0.688485 data_time: 0.066405 memory: 12959 loss_kpt: 297.911617 acc_pose: 0.868893 loss: 297.911617
2022/10/13 10:54:40 - mmengine - INFO - Epoch(train) [209][300/586] lr: 5.000000e-05 eta: 0:09:17 time: 0.688570 data_time: 0.061560 memory: 12959 loss_kpt: 299.745303 acc_pose: 0.862673 loss: 299.745303
2022/10/13 10:55:14 - mmengine - INFO - Epoch(train) [209][350/586] lr: 5.000000e-05 eta: 0:08:45 time: 0.689186 data_time: 0.058966 memory: 12959 loss_kpt: 303.611906 acc_pose: 0.903173 loss: 303.611906
2022/10/13 10:55:49 - mmengine - INFO - Epoch(train) [209][400/586] lr: 5.000000e-05 eta: 0:08:13 time: 0.688011 data_time: 0.057220 memory: 12959 loss_kpt: 309.930721 acc_pose: 0.829589 loss: 309.930721
2022/10/13 10:56:23 - mmengine - INFO - Epoch(train) [209][450/586] lr: 5.000000e-05 eta: 0:07:41 time: 0.694367 data_time: 0.065257 memory: 12959 loss_kpt: 303.298781 acc_pose: 0.851827 loss: 303.298781
2022/10/13 10:56:58 - mmengine - INFO - Epoch(train) [209][500/586] lr: 5.000000e-05 eta: 0:07:09 time: 0.686070 data_time: 0.062647 memory: 12959 loss_kpt: 302.185421 acc_pose: 0.854214 loss: 302.185421
2022/10/13 10:57:32 - mmengine - INFO - Epoch(train) [209][550/586] lr: 5.000000e-05 eta: 0:06:37 time: 0.689543 data_time: 0.063551 memory: 12959 loss_kpt: 304.339225 acc_pose: 0.908034 loss: 304.339225
2022/10/13 10:57:56 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 10:58:30 - mmengine - INFO - Epoch(train) [210][50/586] lr: 5.000000e-05 eta: 0:05:42 time: 0.683406 data_time: 0.071735 memory: 12959 loss_kpt: 305.323396 acc_pose: 0.870216 loss: 305.323396
2022/10/13 10:59:04 - mmengine - INFO - Epoch(train) [210][100/586] lr: 5.000000e-05 eta: 0:05:10 time: 0.669885 data_time: 0.058504 memory: 12959 loss_kpt: 300.844235 acc_pose: 0.894013 loss: 300.844235
2022/10/13 10:59:37 - mmengine - INFO - Epoch(train) [210][150/586] lr: 5.000000e-05 eta: 0:04:38 time: 0.672628 data_time: 0.061165 memory: 12959 loss_kpt: 306.733368 acc_pose: 0.902150 loss: 306.733368
2022/10/13 11:00:11 - mmengine - INFO - Epoch(train) [210][200/586] lr: 5.000000e-05 eta: 0:04:06 time: 0.676112 data_time: 0.062474 memory: 12959 loss_kpt: 304.585903 acc_pose: 0.807698 loss: 304.585903
2022/10/13 11:00:45 - mmengine - INFO - Epoch(train) [210][250/586] lr: 5.000000e-05 eta: 0:03:34 time: 0.683629 data_time: 0.063473 memory: 12959 loss_kpt: 302.330140 acc_pose: 0.878506 loss: 302.330140
2022/10/13 11:01:19 - mmengine - INFO - Epoch(train) [210][300/586] lr: 5.000000e-05 eta: 0:03:02 time: 0.661202 data_time: 0.059056 memory: 12959 loss_kpt: 303.201580 acc_pose: 0.868888 loss: 303.201580
2022/10/13 11:01:52 - mmengine - INFO - Epoch(train) [210][350/586] lr: 5.000000e-05 eta: 0:02:30 time: 0.675527 data_time: 0.063213 memory: 12959 loss_kpt: 301.484423 acc_pose: 0.818625 loss: 301.484423
2022/10/13 11:02:26 - mmengine - INFO - Epoch(train) [210][400/586] lr: 5.000000e-05 eta: 0:01:58 time: 0.679272 data_time: 0.060403 memory: 12959 loss_kpt: 309.634430 acc_pose: 0.775565 loss: 309.634430
2022/10/13 11:03:01 - mmengine - INFO - Epoch(train) [210][450/586] lr: 5.000000e-05 eta: 0:01:27 time: 0.690583 data_time: 0.064862 memory: 12959 loss_kpt: 301.462302 acc_pose: 0.873326 loss: 301.462302
2022/10/13 11:03:35 - mmengine - INFO - Epoch(train) [210][500/586] lr: 5.000000e-05 eta: 0:00:55 time: 0.680261 data_time: 0.060682 memory: 12959 loss_kpt: 298.935472 acc_pose: 0.867287 loss: 298.935472
2022/10/13 11:03:53 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 11:04:09 - mmengine - INFO - Epoch(train) [210][550/586] lr: 5.000000e-05 eta: 0:00:23 time: 0.687680 data_time: 0.060836 memory: 12959 loss_kpt: 305.745441 acc_pose: 0.894121 loss: 305.745441
2022/10/13 11:04:34 - mmengine - INFO - Exp name: td-hm_3xrsn50_8xb32-210e_coco-256x192_20221012_105904
2022/10/13 11:04:34 - mmengine - INFO - Saving checkpoint at 210 epochs
2022/10/13 11:04:52 - mmengine - INFO - Epoch(val) [210][50/407] eta: 0:01:37 time: 0.274045 data_time: 0.015351 memory: 12959
2022/10/13 11:05:05 - mmengine - INFO - Epoch(val) [210][100/407] eta: 0:01:21 time: 0.263957 data_time: 0.008351 memory: 2407
2022/10/13 11:05:18 - mmengine - INFO - Epoch(val) [210][150/407] eta: 0:01:06 time: 0.260551 data_time: 0.007717 memory: 2407
2022/10/13 11:05:31 - mmengine - INFO - Epoch(val) [210][200/407] eta: 0:00:53 time: 0.259510 data_time: 0.007275 memory: 2407
2022/10/13 11:05:44 - mmengine - INFO - Epoch(val) [210][250/407] eta: 0:00:40 time: 0.259470 data_time: 0.007691 memory: 2407
2022/10/13 11:05:57 - mmengine - INFO - Epoch(val) [210][300/407] eta: 0:00:27 time: 0.261106 data_time: 0.007517 memory: 2407
2022/10/13 11:06:10 - mmengine - INFO - Epoch(val) [210][350/407] eta: 0:00:14 time: 0.262703 data_time: 0.008347 memory: 2407
2022/10/13 11:06:23 - mmengine - INFO - Epoch(val) [210][400/407] eta: 0:00:01 time: 0.263334 data_time: 0.011821 memory: 2407
2022/10/13 11:06:37 - mmengine - INFO - Evaluating CocoMetric...
2022/10/13 11:06:54 - mmengine - INFO - Epoch(val) [210][407/407] coco/AP: 0.750484 coco/AP .5: 0.900662 coco/AP .75: 0.824263 coco/AP (M): 0.716404 coco/AP (L): 0.815446 coco/AR: 0.813759 coco/AR .5: 0.940963 coco/AR .75: 0.875315 coco/AR (M): 0.770336 coco/AR (L): 0.873987
2022/10/13 11:06:54 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/liqikai/openmmlab/pt112cu113py38/mmpose/work_dirs/20221012/rsn3x/best_coco/AP_epoch_190.pth is removed
2022/10/13 11:06:57 - mmengine - INFO - The best checkpoint with 0.7505 coco/AP at 210 epoch is saved to best_coco/AP_epoch_210.pth.
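Note: the training records above follow a fixed text format ("Epoch(train) [epoch][iter/total] ... loss_kpt: ... acc_pose: ..."), and the validation summary reports coco/AP on its own line. The snippet below is a minimal, illustrative sketch (not part of MMPose or MMEngine) for pulling loss/accuracy curves and the reported coco/AP values out of such a log with the standard library only; the file name train.log, the regular expressions, and the helper parse_log are all hypothetical and assume the log is stored one record per line.

```python
# Illustrative log-parsing sketch for an mmengine-style training log.
# Assumptions: records are one per line in "train.log"; the field layout
# matches the log shown above. Nothing here is an official MMPose API.
import re

# Capture epoch, iteration, loss_kpt and acc_pose from training records.
TRAIN_RE = re.compile(
    r"Epoch\(train\)\s+\[(\d+)\]\[(\d+)/\d+\].*?"
    r"loss_kpt:\s+([\d.]+)\s+acc_pose:\s+([\d.]+)"
)
# Capture the overall coco/AP from the validation summary record.
VAL_RE = re.compile(r"coco/AP:\s+([\d.]+)")


def parse_log(path="train.log"):
    """Return (train_points, ap_values) extracted from the log file."""
    train_points, ap_values = [], []
    with open(path) as f:
        for line in f:
            m = TRAIN_RE.search(line)
            if m:
                epoch, it, loss_kpt, acc = m.groups()
                train_points.append(
                    (int(epoch), int(it), float(loss_kpt), float(acc)))
                continue
            m = VAL_RE.search(line)
            if m:
                ap_values.append(float(m.group(1)))
    return train_points, ap_values


if __name__ == "__main__":
    points, aps = parse_log()
    if points:
        last = points[-1][0]
        losses = [p[2] for p in points if p[0] == last]
        print(f"epoch {last}: mean loss_kpt over logged iters = "
              f"{sum(losses) / len(losses):.3f}")
    if aps:
        print(f"latest coco/AP: {aps[-1]:.4f}")
```

For the run above, such a script would report the final validation score of roughly 0.7505 coco/AP, matching the best-checkpoint message at the end of the log.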