2022/12/25 16:45:21 - mmengine - INFO -
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 1097942080
    GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /mnt/petrelfs/share/cuda-11.3
    NVCC: Cuda compilation tools, release 11.3, V11.3.109
    GCC: gcc (GCC) 5.4.0
    PyTorch: 1.11.0
    PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF

    TorchVision: 0.12.0
    OpenCV: 4.6.0
    MMEngine: 0.3.2

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    diff_rank_seed: False
    deterministic: False
    Distributed launcher: pytorch
    Distributed training: True
    GPU number: 8
------------------------------------------------------------

2022/12/25 16:45:21 - mmengine - INFO - Config:
default_scope = 'mmaction'
default_hooks = dict(
    runtime_info=dict(type='RuntimeInfoHook'),
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=100, ignore_last=False),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', interval=1, save_best='auto'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    sync_buffers=dict(type='SyncBuffersHook'))
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
log_processor = dict(type='LogProcessor', window_size=20, by_epoch=True)
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='ActionVisualizer', vis_backends=[dict(type='LocalVisBackend')])
log_level = 'INFO'
load_from = None
resume = False
model = dict(
    type='RecognizerGCN',
    backbone=dict(
        type='STGCN',
        gcn_adaptive='init',
        gcn_with_res=True,
        tcn_type='mstcn',
        graph_cfg=dict(layout='coco', mode='spatial')),
    cls_head=dict(type='GCNHead', num_classes=60, in_channels=256))
dataset_type = 'PoseDataset'
ann_file = 
'data/skeleton/ntu60_2d.pkl'
train_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['b']),
    dict(type='UniformSampleFrames', clip_len=100),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['b']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=1,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['b']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=10,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
train_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type='PoseDataset',
            ann_file='data/skeleton/ntu60_2d.pkl',
            pipeline=[
                dict(type='PreNormalize2D'),
                dict(type='GenSkeFeat', dataset='coco', feats=['b']),
                dict(type='UniformSampleFrames', clip_len=100),
                dict(type='PoseDecode'),
                dict(type='FormatGCNInput', num_person=2),
                dict(type='PackActionInputs')
            ],
            split='xsub_train')))
val_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_2d.pkl',
        pipeline=[
            dict(type='PreNormalize2D'),
            dict(type='GenSkeFeat', dataset='coco', feats=['b']),
            dict(
                type='UniformSampleFrames', clip_len=100, num_clips=1,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
test_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_2d.pkl',
        pipeline=[
            dict(type='PreNormalize2D'),
            dict(type='GenSkeFeat', dataset='coco', feats=['b']),
            dict(
                type='UniformSampleFrames', clip_len=100, num_clips=10,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
val_evaluator = [dict(type='AccMetric')]
test_evaluator = [dict(type='AccMetric')]
train_cfg = dict(
    type='EpochBasedTrainLoop', max_epochs=16, val_begin=1, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
param_scheduler = [
    dict(
        type='CosineAnnealingLR',
        eta_min=0,
        T_max=16,
        by_epoch=True,
        convert_to_iter_based=True)
]
optim_wrapper = dict(
    optimizer=dict(
        type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0005,
        nesterov=True))
auto_scale_lr = dict(enable=False, base_batch_size=128)
launcher = 'pytorch'
work_dir = './work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d'
randomness = dict(seed=None, diff_rank_seed=False, deterministic=False)

2022/12/25 16:45:21 - mmengine - INFO - Result has been saved to /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/modules_statistic_results.json
2022/12/25 16:45:21 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
 --------------------
before_train:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DistSamplerSeedHook
 --------------------
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
 --------------------
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
after_train_epoch:
(NORMAL      ) IterTimerHook
(NORMAL      ) 
SyncBuffersHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_val_epoch:
(NORMAL      ) IterTimerHook
 --------------------
before_val_iter:
(NORMAL      ) IterTimerHook
 --------------------
after_val_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_test_epoch:
(NORMAL      ) IterTimerHook
 --------------------
before_test_iter:
(NORMAL      ) IterTimerHook
 --------------------
after_test_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_run:
(BELOW_NORMAL) LoggerHook
 --------------------
Name of parameter - Initialization information

backbone.data_bn.weight - torch.Size([51]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.data_bn.bias - torch.Size([51]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.conv.weight - torch.Size([192, 3, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.down.0.weight - torch.Size([64, 3, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.down.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.down.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.down.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.0.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.0.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.0.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.0.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.0.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.0.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.0.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.0.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.1.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.1.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.1.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.1.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.1.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.1.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.1.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.1.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.1.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.2.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.2.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.2.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.2.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.2.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.2.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.2.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.2.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.2.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.3.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.3.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.3.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.3.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.3.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.3.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.3.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.3.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.3.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.conv.weight - torch.Size([384, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.down.0.weight - torch.Size([128, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.down.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.down.1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.gcn.down.1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.0.0.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.0.1.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.4.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.4.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.4.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.4.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.2.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.4.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.4.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.4.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.4.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.4.tcn.branches.5.bias - torch.Size([21]): The value is the same before and after calling 
`init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.residual.conv.weight - torch.Size([128, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.residual.conv.bias - torch.Size([128]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.residual.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.residual.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.conv.weight - torch.Size([384, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.conv.bias - torch.Size([384]): The value is the same before 
and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.0.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.1.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and 
after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling 
`init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.5.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.conv.weight - torch.Size([384, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.0.bias - torch.Size([23]): The value 
is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.1.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.1.bias - torch.Size([21]): The value is the same 
before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.5.bias - torch.Size([21]): The value is the same before 
and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.conv.weight - torch.Size([768, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.conv.bias - torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.0.weight - torch.Size([256, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.1.bias - torch.Size([256]): The value is the 
same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.0.bias - torch.Size([42]): The value is the same 
before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and 
after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.residual.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.residual.conv.bias - torch.Size([256]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.residual.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.residual.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.bn.bias - torch.Size([256]): The 
value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.conv.weight - torch.Size([768, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.conv.bias - torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, 
mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.0.bias - torch.Size([42]): The value is the same 
before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.conv.weight - torch.Size([768, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.conv.bias - 
torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.9.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.9.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.9.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.9.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.2.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.9.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.9.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.9.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0
backbone.gcn.9.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.4.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.9.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN
cls_head.fc.weight - torch.Size([60, 256]): NormalInit: mean=0, std=0.01, bias=0
cls_head.fc.bias - torch.Size([60]): NormalInit: mean=0, std=0.01, bias=0
2022/12/25 16:45:54 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d.
2022/12/25 16:46:04 - mmengine - INFO - Epoch(train) [1][ 100/1567] lr: 9.9996e-02 eta: 0:41:23 time: 0.0830 data_time: 0.0061 memory: 1827 loss: 2.5436 top1_acc: 0.3750 top5_acc: 0.5625 loss_cls: 2.5436
2022/12/25 16:46:13 - mmengine - INFO - Epoch(train) [1][ 200/1567] lr: 9.9984e-02 eta: 0:38:35 time: 0.0830 data_time: 0.0062 memory: 1827 loss: 1.4988 top1_acc: 0.5000 top5_acc: 0.9375 loss_cls: 1.4988
2022/12/25 16:46:22 - mmengine - INFO - Epoch(train) [1][ 300/1567] lr: 9.9965e-02 eta: 0:37:14 time: 0.0853 data_time: 0.0071 memory: 1827 loss: 1.2563 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 1.2563
2022/12/25 16:46:30 - mmengine - INFO - Epoch(train) [1][ 400/1567] lr: 9.9938e-02 eta: 0:36:26 time: 0.0837 data_time: 0.0061 memory: 1827 loss: 0.9826 top1_acc: 0.6250 top5_acc: 0.9375 loss_cls: 0.9826
2022/12/25 16:46:38 - mmengine - INFO - Epoch(train) [1][ 500/1567] lr: 9.9902e-02 eta: 0:35:54 time: 0.0833 data_time: 0.0062 memory: 1827 loss: 0.8857 top1_acc: 0.5000 top5_acc: 1.0000 loss_cls: 0.8857
2022/12/25 16:46:47 - mmengine - INFO - Epoch(train) [1][ 600/1567] lr: 9.9859e-02 eta: 0:35:27 time: 0.0823 data_time: 0.0062 memory: 1827 loss: 0.7783 top1_acc: 0.6250 top5_acc: 1.0000 loss_cls: 0.7783
2022/12/25 16:46:55 - mmengine - INFO - Epoch(train) [1][ 700/1567] lr: 9.9808e-02 eta: 0:35:06 time: 0.0826 data_time: 0.0068 memory: 1827 loss: 0.7272 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.7272
2022/12/25 16:47:03 - mmengine - INFO - Epoch(train) [1][ 800/1567] lr: 9.9750e-02 eta: 0:34:48 time: 0.0836 data_time: 0.0061 memory: 1827 loss: 0.6298 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.6298
2022/12/25 16:47:12 - mmengine - INFO - Epoch(train) [1][ 900/1567] lr: 9.9683e-02 eta: 0:34:34 time: 0.0846 data_time: 0.0061 memory: 1827 loss: 0.5878 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.5878
2022/12/25 16:47:20 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:47:20 - mmengine - INFO - Epoch(train) [1][1000/1567] lr: 9.9609e-02 eta: 0:34:19 time: 0.0833 data_time: 0.0064 memory: 1827 loss: 0.5936 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5936
2022/12/25 16:47:28 - mmengine - INFO - Epoch(train) [1][1100/1567] lr: 9.9527e-02 eta: 0:34:07 time: 0.0824 data_time: 0.0062 memory: 1827 loss: 0.4792 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.4792
2022/12/25 16:47:37 - mmengine - INFO - Epoch(train) [1][1200/1567] lr: 9.9437e-02 eta: 0:33:55 time: 0.0874 data_time: 0.0063 memory: 1827 loss: 0.4517 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4517
2022/12/25 16:47:45 - mmengine - INFO - Epoch(train) [1][1300/1567] lr: 9.9339e-02 eta: 0:33:47 time: 0.0855 data_time: 0.0061 memory: 1827 loss: 0.5148 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.5148
2022/12/25 16:47:54 - mmengine - INFO - Epoch(train) [1][1400/1567] lr: 9.9234e-02 eta: 0:33:35 time: 0.0832 data_time: 0.0062 memory: 1827 loss: 0.4639 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4639
2022/12/25 16:48:02 - mmengine - INFO - Epoch(train) [1][1500/1567] lr: 9.9121e-02 eta: 0:33:24 time: 0.0833 data_time: 0.0061 memory: 1827 loss: 0.4828 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.4828
2022/12/25 16:48:08 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:48:08 - mmengine - INFO - Epoch(train) [1][1567/1567] lr: 9.9040e-02 eta: 0:33:17 time: 0.0850 data_time: 0.0059 memory: 1827 loss: 0.5411 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.5411
2022/12/25 16:48:08 - mmengine - INFO - Saving checkpoint at 1 epochs
2022/12/25 16:48:11 - mmengine - INFO - Epoch(val) [1][100/129] eta: 0:00:00 time: 0.0256 data_time: 0.0060 memory: 263
2022/12/25 16:48:12 - mmengine - INFO - Epoch(val) [1][129/129] acc/top1: 0.6526 acc/top5: 0.9561 acc/mean1: 0.6523
2022/12/25 16:48:12 - mmengine - INFO - The best checkpoint with 0.6526 acc/top1 at 1 epoch is saved to best_acc/top1_epoch_1.pth.
2022/12/25 16:48:21 - mmengine - INFO - Epoch(train) [2][ 100/1567] lr: 9.8914e-02 eta: 0:33:07 time: 0.0834 data_time: 0.0064 memory: 1827 loss: 0.4741 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4741
2022/12/25 16:48:29 - mmengine - INFO - Epoch(train) [2][ 200/1567] lr: 9.8781e-02 eta: 0:32:58 time: 0.0860 data_time: 0.0062 memory: 1827 loss: 0.4859 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4859
2022/12/25 16:48:38 - mmengine - INFO - Epoch(train) [2][ 300/1567] lr: 9.8639e-02 eta: 0:32:48 time: 0.0828 data_time: 0.0062 memory: 1827 loss: 0.3743 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3743
2022/12/25 16:48:46 - mmengine - INFO - Epoch(train) [2][ 400/1567] lr: 9.8491e-02 eta: 0:32:40 time: 0.0860 data_time: 0.0063 memory: 1827 loss: 0.4187 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4187
2022/12/25 16:48:49 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:48:55 - mmengine - INFO - Epoch(train) [2][ 500/1567] lr: 9.8334e-02 eta: 0:32:31 time: 0.0824 data_time: 0.0064 memory: 1827 loss: 0.3619 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3619
2022/12/25 16:49:03 - mmengine - INFO - Epoch(train) [2][ 600/1567] lr: 9.8170e-02 eta: 0:32:21 time: 0.0828 data_time: 0.0063 memory: 1827 loss: 0.3405 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3405
2022/12/25 16:49:11 - mmengine - INFO - Epoch(train) [2][ 700/1567] lr: 9.7998e-02 eta: 0:32:12 time: 0.0830 data_time: 0.0063 memory: 1827 loss: 0.3550 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3550
2022/12/25 16:49:20 - mmengine - INFO - Epoch(train) [2][ 800/1567] lr: 9.7819e-02 eta: 0:32:03 time: 0.0862 data_time: 0.0063 memory: 1827 loss: 0.3812 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.3812
2022/12/25 16:49:29 - mmengine - INFO - Epoch(train) [2][ 900/1567] lr: 9.7632e-02 eta: 0:32:03 time: 0.0942 data_time: 0.0062 memory: 1827 loss: 0.3587 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.3587
2022/12/25 16:49:38 - mmengine - INFO - Epoch(train) [2][1000/1567] lr: 9.7438e-02 eta: 0:32:01 time: 0.0891 data_time: 0.0062 memory: 1827 loss: 0.3018 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3018
2022/12/25 16:49:47 - mmengine - INFO - Epoch(train) [2][1100/1567] lr: 9.7236e-02 eta: 0:31:56 time: 0.0899 data_time: 0.0065 memory: 1827 loss: 0.3677 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3677
2022/12/25 16:49:56 - mmengine - INFO - Epoch(train) [2][1200/1567] lr: 9.7027e-02 eta: 0:31:51 time: 0.0884 data_time: 0.0071 memory: 1827 loss: 0.3540 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3540
2022/12/25 16:50:05 - mmengine - INFO - Epoch(train) [2][1300/1567] lr: 9.6810e-02 eta: 0:31:45 time: 0.0890 data_time: 0.0064 memory: 1827 loss: 0.3079 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3079
2022/12/25 16:50:14 - mmengine - INFO - Epoch(train) [2][1400/1567] lr: 9.6587e-02 eta: 0:31:35 time: 0.0826 data_time: 0.0063 memory: 1827 loss: 0.3174 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3174
2022/12/25 16:50:16 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:50:22 - mmengine - INFO - Epoch(train) [2][1500/1567] lr: 9.6355e-02 eta: 0:31:25 time: 0.0837 data_time: 0.0063 memory: 1827 loss: 0.3002 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3002
2022/12/25 16:50:27 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:50:27 - mmengine - INFO - Epoch(train) [2][1567/1567] lr: 9.6196e-02 eta: 0:31:17 time: 0.0824 data_time: 0.0068 memory: 1827 loss: 0.3428 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3428
2022/12/25 16:50:27 - mmengine - INFO - Saving checkpoint at 2 epochs
2022/12/25 16:50:31 - mmengine - INFO - Epoch(val) [2][100/129] eta: 0:00:00 time: 0.0249 data_time: 0.0058 memory: 263
2022/12/25 16:50:32 - mmengine - INFO - Epoch(val) [2][129/129] acc/top1: 0.7512 acc/top5: 0.9661 acc/mean1: 0.7510
2022/12/25 16:50:32 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_1.pth is removed
2022/12/25 16:50:32 - mmengine - INFO - The best checkpoint with 0.7512 acc/top1 at 2 epoch is saved to best_acc/top1_epoch_2.pth.
2022/12/25 16:50:40 - mmengine - INFO - Epoch(train) [3][ 100/1567] lr: 9.5953e-02 eta: 0:31:08 time: 0.0838 data_time: 0.0069 memory: 1827 loss: 0.2696 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2696
2022/12/25 16:50:49 - mmengine - INFO - Epoch(train) [3][ 200/1567] lr: 9.5703e-02 eta: 0:30:58 time: 0.0834 data_time: 0.0063 memory: 1827 loss: 0.3005 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3005
2022/12/25 16:50:57 - mmengine - INFO - Epoch(train) [3][ 300/1567] lr: 9.5445e-02 eta: 0:30:48 time: 0.0820 data_time: 0.0068 memory: 1827 loss: 0.3690 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3690
2022/12/25 16:51:05 - mmengine - INFO - Epoch(train) [3][ 400/1567] lr: 9.5180e-02 eta: 0:30:38 time: 0.0836 data_time: 0.0064 memory: 1827 loss: 0.2925 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2925
2022/12/25 16:51:14 - mmengine - INFO - Epoch(train) [3][ 500/1567] lr: 9.4908e-02 eta: 0:30:28 time: 0.0843 data_time: 0.0062 memory: 1827 loss: 0.3671 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3671
2022/12/25 16:51:22 - mmengine - INFO - Epoch(train) [3][ 600/1567] lr: 9.4629e-02 eta: 0:30:19 time: 0.0837 data_time: 0.0065 memory: 1827 loss: 0.3756 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3756
2022/12/25 16:51:30 - mmengine - INFO - Epoch(train) [3][ 700/1567] lr: 9.4343e-02 eta: 0:30:10 time: 0.0834 data_time: 0.0063 memory: 1827 loss: 0.2834 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.2834
2022/12/25 16:51:39 - mmengine - INFO - Epoch(train) [3][ 800/1567] lr: 9.4050e-02 eta: 0:30:00 time: 0.0825 data_time: 0.0063 memory: 1827 loss: 0.2883 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2883
2022/12/25 16:51:44 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:51:47 - mmengine - INFO - Epoch(train) [3][ 900/1567] lr: 9.3750e-02 eta: 0:29:51 time: 0.0848 data_time: 0.0064 memory: 1827 loss: 0.3952 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.3952
2022/12/25 16:51:55 - mmengine - INFO - Epoch(train) [3][1000/1567] lr: 9.3444e-02 eta: 0:29:41 time: 0.0830 data_time: 0.0064 memory: 1827 loss: 0.3457 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.3457
2022/12/25 16:52:04 - mmengine - INFO - Epoch(train) [3][1100/1567] lr: 9.3130e-02 eta: 0:29:32 time: 0.0838 data_time: 0.0064 memory: 1827 loss: 0.3162 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3162
2022/12/25 16:52:12 - mmengine - INFO - Epoch(train) [3][1200/1567] lr: 9.2810e-02 eta: 0:29:24 time: 0.0861 data_time: 0.0063 memory: 1827 loss: 0.3198 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3198
2022/12/25 16:52:21 - mmengine - INFO - Epoch(train) [3][1300/1567] lr: 9.2483e-02 eta: 0:29:16 time: 0.0843 data_time: 0.0067 memory: 1827 loss: 0.2661 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2661
2022/12/25 16:52:29 - mmengine - INFO - Epoch(train) [3][1400/1567] lr: 9.2149e-02 eta: 0:29:06 time: 0.0832 data_time: 0.0064 memory: 1827 loss: 0.2777 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2777
2022/12/25 16:52:38 - mmengine - INFO - Epoch(train) [3][1500/1567] lr: 9.1809e-02 eta: 0:28:58 time: 0.0836 data_time: 0.0063 memory: 1827 loss: 0.3511 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3511
2022/12/25 16:52:43 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:52:43 - mmengine - INFO - Epoch(train) [3][1567/1567] lr: 9.1577e-02 eta: 0:28:51 time: 0.0831 data_time: 0.0063 memory: 1827 loss: 0.3711 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3711
2022/12/25 16:52:43 - mmengine - INFO - Saving checkpoint at 3 epochs
2022/12/25 16:52:46 - mmengine - INFO - Epoch(val) [3][100/129] eta: 0:00:00 time: 0.0270 data_time: 0.0061 memory: 263
2022/12/25 16:52:47 - mmengine - INFO - Epoch(val) [3][129/129] acc/top1: 0.7974 acc/top5: 0.9709 acc/mean1: 0.7974
2022/12/25 16:52:47 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_2.pth is removed
2022/12/25 16:52:48 - mmengine - INFO - The best checkpoint with 0.7974 acc/top1 at 3 epoch is saved to best_acc/top1_epoch_3.pth.
2022/12/25 16:52:56 - mmengine - INFO - Epoch(train) [4][ 100/1567] lr: 9.1226e-02 eta: 0:28:43 time: 0.0834 data_time: 0.0062 memory: 1827 loss: 0.2467 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2467
2022/12/25 16:53:05 - mmengine - INFO - Epoch(train) [4][ 200/1567] lr: 9.0868e-02 eta: 0:28:34 time: 0.0848 data_time: 0.0067 memory: 1827 loss: 0.3327 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3327
2022/12/25 16:53:13 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:53:13 - mmengine - INFO - Epoch(train) [4][ 300/1567] lr: 9.0504e-02 eta: 0:28:26 time: 0.0883 data_time: 0.0064 memory: 1827 loss: 0.3123 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.3123
2022/12/25 16:53:22 - mmengine - INFO - Epoch(train) [4][ 400/1567] lr: 9.0133e-02 eta: 0:28:19 time: 0.0835 data_time: 0.0067 memory: 1827 loss: 0.2293 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2293
2022/12/25 16:53:30 - mmengine - INFO - Epoch(train) [4][ 500/1567] lr: 8.9756e-02 eta: 0:28:10 time: 0.0825 data_time: 0.0064 memory: 1827 loss: 0.3437 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3437
2022/12/25 16:53:39 - mmengine - INFO - Epoch(train) [4][ 600/1567] lr: 8.9373e-02 eta: 0:28:01 time: 0.0832 data_time: 0.0064 memory: 1827 loss: 0.2724 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2724
2022/12/25 16:53:47 - mmengine - INFO - Epoch(train) [4][ 700/1567] lr: 8.8984e-02 eta: 0:27:52 time: 0.0840 data_time: 0.0064 memory: 1827 loss: 0.2653 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2653
2022/12/25 16:53:56 - mmengine - INFO - Epoch(train) [4][ 800/1567] lr: 8.8589e-02 eta: 0:27:43 time: 0.0850 data_time: 0.0065 memory: 1827 loss: 0.2456 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2456
2022/12/25 16:54:04 - mmengine - INFO - Epoch(train) [4][ 900/1567] lr: 8.8187e-02 eta: 0:27:35 time: 0.0855 data_time: 0.0068 memory: 1827 loss: 0.2449 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2449
2022/12/25 16:54:13 - mmengine - INFO - Epoch(train) [4][1000/1567] lr: 8.7780e-02 eta: 0:27:26 time: 0.0853 data_time: 0.0067 memory: 1827 loss: 0.2356 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2356
2022/12/25 16:54:21 - mmengine - INFO - Epoch(train) [4][1100/1567] lr: 8.7367e-02 eta: 0:27:18 time: 0.0857 data_time: 0.0069 memory: 1827 loss: 0.2546 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2546
2022/12/25 16:54:30 - mmengine - INFO - Epoch(train) [4][1200/1567] lr: 8.6947e-02 eta: 0:27:09 time: 0.0835 data_time: 0.0066 memory: 1827 loss: 0.2664 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2664
2022/12/25 16:54:38 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:54:38 - mmengine - INFO - Epoch(train) [4][1300/1567] lr: 8.6522e-02 eta: 0:27:00 time: 0.0827 data_time: 0.0069 memory: 1827 loss: 0.2327 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2327
2022/12/25 16:54:47 - mmengine - INFO - Epoch(train) [4][1400/1567] lr: 8.6092e-02 eta: 0:26:52 time: 0.0871 data_time: 0.0074 memory: 1827 loss: 0.2387 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2387
2022/12/25 16:54:55 - mmengine - INFO - Epoch(train) [4][1500/1567] lr: 8.5655e-02 eta: 0:26:44 time: 0.0862 data_time: 0.0065 memory: 1827 loss: 0.3192 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3192
2022/12/25 16:55:01 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:55:01 - mmengine - INFO - Epoch(train) [4][1567/1567] lr: 8.5360e-02 eta: 0:26:39 time: 0.0841 data_time: 0.0064 memory: 1827 loss: 0.3550 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3550
2022/12/25 16:55:01 - mmengine - INFO - Saving checkpoint at 4 epochs
2022/12/25 16:55:04 - mmengine - INFO - Epoch(val) [4][100/129] eta: 0:00:00 time: 0.0258 data_time: 0.0058 memory: 263
2022/12/25 16:55:05 - mmengine - INFO - Epoch(val) [4][129/129] acc/top1: 0.7675 acc/top5: 0.9738 acc/mean1: 0.7671
2022/12/25 16:55:14 - mmengine - INFO - Epoch(train) [5][ 100/1567] lr: 8.4914e-02 eta: 0:26:30 time: 0.0842 data_time: 0.0067 memory: 1827 loss: 0.2972 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2972
2022/12/25 16:55:22 - mmengine - INFO - Epoch(train) [5][ 200/1567] lr: 8.4463e-02 eta: 0:26:21 time: 0.0844 data_time: 0.0069 memory: 1827 loss: 0.2509 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2509
2022/12/25 16:55:31 - mmengine - INFO - Epoch(train) [5][ 300/1567] lr: 8.4006e-02 eta: 0:26:13 time: 0.0841 data_time: 0.0065 memory: 1827 loss: 0.2479 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2479
2022/12/25 16:55:39 - mmengine - INFO - Epoch(train) [5][ 400/1567] lr: 8.3544e-02 eta: 0:26:04 time: 0.0831 data_time: 0.0064 memory: 1827 loss: 0.2450 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2450
2022/12/25 16:55:47 - mmengine - INFO - Epoch(train) [5][ 500/1567] lr: 8.3077e-02 eta: 0:25:55 time: 0.0834 data_time: 0.0071 memory: 1827 loss: 0.1864 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1864
2022/12/25 16:55:56 - mmengine - INFO - Epoch(train) [5][ 600/1567] lr: 8.2605e-02 eta: 0:25:47 time: 0.0871 data_time: 0.0067 memory: 1827 loss: 0.2734 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2734
2022/12/25 16:56:05 - mmengine - INFO - Epoch(train) [5][ 700/1567] lr: 8.2127e-02 eta: 0:25:39 time: 0.0873 data_time: 0.0064 memory: 1827 loss: 0.2199 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2199
2022/12/25 16:56:08 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:56:13 - mmengine - INFO - Epoch(train) [5][ 800/1567] lr: 8.1645e-02 eta: 0:25:31 time: 0.0853 data_time: 0.0066 memory: 1827 loss: 0.2570 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2570
2022/12/25 16:56:22 - mmengine - INFO - Epoch(train) [5][ 900/1567] lr: 8.1157e-02 eta: 0:25:22 time: 0.0853 data_time: 0.0069 memory: 1827 loss: 0.2598 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2598
2022/12/25 16:56:30 - mmengine - INFO - Epoch(train) [5][1000/1567] lr: 8.0665e-02 eta: 0:25:13 time: 0.0835 data_time: 0.0066 memory: 1827 loss: 0.2251 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2251
2022/12/25 16:56:39 - mmengine - INFO - Epoch(train) [5][1100/1567] lr: 8.0167e-02 eta: 0:25:05 time: 0.0832 data_time: 0.0066 memory: 1827 loss: 0.1867 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1867
2022/12/25 16:56:47 - mmengine - INFO - Epoch(train) [5][1200/1567] lr: 7.9665e-02 eta: 0:24:56 time: 0.0833 data_time: 0.0066 memory: 1827 loss: 0.2412 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2412
2022/12/25 16:56:56 - mmengine - INFO - Epoch(train) [5][1300/1567] lr: 7.9159e-02 eta: 0:24:48 time: 0.0863 data_time: 0.0066 memory: 1827 loss: 0.2166 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2166
2022/12/25 16:57:04 - mmengine - INFO - Epoch(train) [5][1400/1567] lr: 7.8647e-02 eta: 0:24:40 time: 0.0863 data_time: 0.0063 memory: 1827 loss: 0.1796 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1796
2022/12/25 16:57:13 - mmengine - INFO - Epoch(train) [5][1500/1567] lr: 7.8132e-02 eta: 0:24:31 time: 0.0843 data_time: 0.0066 memory: 1827 loss: 0.2577 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2577
2022/12/25 16:57:19 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:57:19 - mmengine - INFO - Epoch(train) [5][1567/1567] lr: 7.7784e-02 eta: 0:24:25 time: 0.0835 data_time: 0.0064 memory: 1827 loss: 0.3593 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.3593
2022/12/25 16:57:19 - mmengine - INFO - Saving checkpoint at 5 epochs
2022/12/25 16:57:22 - mmengine - INFO - Epoch(val) [5][100/129] eta: 0:00:00 time: 0.0266 data_time: 0.0061 memory: 263
2022/12/25 16:57:23 - mmengine - INFO - Epoch(val) [5][129/129] acc/top1: 0.8118 acc/top5: 0.9754 acc/mean1: 0.8117
2022/12/25 16:57:23 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_3.pth is removed
2022/12/25 16:57:23 - mmengine - INFO - The best checkpoint with 0.8118 acc/top1 at 5 epoch is saved to best_acc/top1_epoch_5.pth.
2022/12/25 16:57:32 - mmengine - INFO - Epoch(train) [6][ 100/1567] lr: 7.7261e-02 eta: 0:24:17 time: 0.0879 data_time: 0.0067 memory: 1827 loss: 0.2459 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2459
2022/12/25 16:57:37 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:57:41 - mmengine - INFO - Epoch(train) [6][ 200/1567] lr: 7.6733e-02 eta: 0:24:09 time: 0.0879 data_time: 0.0067 memory: 1827 loss: 0.2104 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2104
2022/12/25 16:57:49 - mmengine - INFO - Epoch(train) [6][ 300/1567] lr: 7.6202e-02 eta: 0:24:01 time: 0.0837 data_time: 0.0069 memory: 1827 loss: 0.2304 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2304
2022/12/25 16:57:58 - mmengine - INFO - Epoch(train) [6][ 400/1567] lr: 7.5666e-02 eta: 0:23:53 time: 0.0865 data_time: 0.0064 memory: 1827 loss: 0.1681 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1681
2022/12/25 16:58:06 - mmengine - INFO - Epoch(train) [6][ 500/1567] lr: 7.5126e-02 eta: 0:23:44 time: 0.0858 data_time: 0.0068 memory: 1827 loss: 0.2676 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2676
2022/12/25 16:58:15 - mmengine - INFO - Epoch(train) [6][ 600/1567] lr: 7.4583e-02 eta: 0:23:36 time: 0.0842 data_time: 0.0065 memory: 1827 loss: 0.2093 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2093
2022/12/25 16:58:24 - mmengine - INFO - Epoch(train) [6][ 700/1567] lr: 7.4035e-02 eta: 0:23:27 time: 0.0845 data_time: 0.0073 memory: 1827 loss: 0.2063 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2063
2022/12/25 16:58:32 - mmengine - INFO - Epoch(train) [6][ 800/1567] lr: 7.3484e-02 eta: 0:23:19 time: 0.0834 data_time: 0.0063 memory: 1827 loss: 0.1398 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1398
2022/12/25 16:58:40 - mmengine - INFO - Epoch(train) [6][ 900/1567] lr: 7.2929e-02 eta: 0:23:10 time: 0.0833 data_time: 0.0065 memory: 1827 loss: 0.1831 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1831
2022/12/25 16:58:49 - mmengine - INFO - Epoch(train) [6][1000/1567] lr: 7.2371e-02 eta: 0:23:01 time: 0.0832 data_time: 0.0065 memory: 1827 loss: 0.2280 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2280
2022/12/25 16:58:57 - mmengine - INFO - Epoch(train) [6][1100/1567] lr: 7.1809e-02 eta: 0:22:53 time: 0.0840 data_time: 0.0065 memory: 1827 loss: 0.1859 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.1859
2022/12/25 16:59:03 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:59:06 - mmengine - INFO - Epoch(train) [6][1200/1567] lr: 7.1243e-02 eta: 0:22:44 time: 0.0863 data_time: 0.0065 memory: 1827 loss: 0.2003 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2003
2022/12/25 16:59:15 - mmengine - INFO - Epoch(train) [6][1300/1567] lr: 7.0674e-02 eta: 0:22:36 time: 0.0880 data_time: 0.0066 memory: 1827 loss: 0.2063 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2063
2022/12/25 16:59:23 - mmengine - INFO - Epoch(train) [6][1400/1567] lr: 7.0102e-02 eta: 0:22:28 time: 0.0856 data_time: 0.0067 memory: 1827 loss: 0.1635 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1635
2022/12/25 16:59:32 - mmengine - INFO - Epoch(train) [6][1500/1567] lr: 6.9527e-02 eta: 0:22:19 time: 0.0841 data_time: 0.0065 memory: 1827 loss: 0.2406 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2406
2022/12/25 16:59:37 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 16:59:37 - mmengine - INFO - Epoch(train) [6][1567/1567] lr: 6.9140e-02 eta: 0:22:14 time: 0.0847 data_time: 0.0065 memory: 1827 loss: 0.3511 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.3511
2022/12/25 16:59:37 - mmengine - INFO - Saving checkpoint at 6 epochs
2022/12/25 16:59:40 - mmengine - INFO - Epoch(val) [6][100/129] eta: 0:00:00 time: 0.0263 data_time: 0.0060 memory: 263
2022/12/25 16:59:41 - mmengine - INFO - Epoch(val) [6][129/129] acc/top1: 0.8661 acc/top5: 0.9846 acc/mean1: 0.8660
2022/12/25 16:59:41 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_5.pth is removed
2022/12/25 16:59:42 - mmengine - INFO - The best checkpoint with 0.8661 acc/top1 at 6 epoch is saved to best_acc/top1_epoch_6.pth.
2022/12/25 16:59:50 - mmengine - INFO - Epoch(train) [7][ 100/1567] lr: 6.8560e-02 eta: 0:22:05 time: 0.0847 data_time: 0.0067 memory: 1827 loss: 0.1997 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1997
2022/12/25 16:59:59 - mmengine - INFO - Epoch(train) [7][ 200/1567] lr: 6.7976e-02 eta: 0:21:56 time: 0.0845 data_time: 0.0068 memory: 1827 loss: 0.1640 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1640
2022/12/25 17:00:07 - mmengine - INFO - Epoch(train) [7][ 300/1567] lr: 6.7390e-02 eta: 0:21:48 time: 0.0880 data_time: 0.0068 memory: 1827 loss: 0.2185 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2185
2022/12/25 17:00:16 - mmengine - INFO - Epoch(train) [7][ 400/1567] lr: 6.6802e-02 eta: 0:21:40 time: 0.0878 data_time: 0.0068 memory: 1827 loss: 0.2032 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2032
2022/12/25 17:00:24 - mmengine - INFO - Epoch(train) [7][ 500/1567] lr: 6.6210e-02 eta: 0:21:31 time: 0.0829 data_time: 0.0067 memory: 1827 loss: 0.2146 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2146
2022/12/25 17:00:33 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:00:33 - mmengine - INFO - Epoch(train) [7][ 600/1567] lr: 6.5616e-02 eta: 0:21:23 time: 0.0837 data_time: 0.0064 memory: 1827 loss: 0.1407 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1407
2022/12/25 17:00:42 - mmengine - INFO - Epoch(train) [7][ 700/1567] lr: 6.5020e-02 eta: 0:21:14 time: 0.0868 data_time: 0.0067 memory: 1827 loss: 0.1524 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1524
2022/12/25 17:00:50 - mmengine - INFO - Epoch(train) [7][ 800/1567] lr: 6.4421e-02 eta: 0:21:06 time: 0.0881 data_time: 0.0066 memory: 1827 loss: 0.1989 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1989
2022/12/25 17:00:59 - mmengine - INFO - Epoch(train) [7][ 900/1567] lr: 6.3820e-02 eta: 0:20:58 time: 0.0849 data_time: 0.0069 memory: 1827 loss: 0.2222 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2222
2022/12/25 17:01:07 - mmengine - INFO - Epoch(train) [7][1000/1567] lr: 6.3217e-02 eta: 0:20:49 time: 0.0884 data_time: 0.0071 memory: 1827 loss: 0.2222 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2222
2022/12/25 17:01:16 - mmengine - INFO - Epoch(train) [7][1100/1567] lr: 6.2612e-02 eta: 0:20:40 time: 0.0837 data_time: 0.0068 memory: 1827 loss: 0.1502 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1502
2022/12/25 17:01:24 - mmengine - INFO - Epoch(train) [7][1200/1567] lr: 6.2005e-02 eta: 0:20:32 time: 0.0859 data_time: 0.0068 memory: 1827 loss: 0.2204 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2204
2022/12/25 17:01:33 - mmengine - INFO - Epoch(train) [7][1300/1567] lr: 6.1396e-02 eta: 0:20:24 time: 0.0863 data_time: 0.0068 memory: 1827 loss: 0.1938 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1938
2022/12/25 17:01:42 - mmengine - INFO - Epoch(train) [7][1400/1567] lr: 6.0785e-02 eta: 0:20:15 time: 0.0840 data_time: 0.0066 memory: 1827 loss: 0.1980 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1980
2022/12/25 17:01:50 - mmengine - INFO - Epoch(train) [7][1500/1567] lr: 6.0172e-02 eta: 0:20:07 time: 0.0859 data_time: 0.0066 memory: 1827 loss: 0.1917 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1917
2022/12/25 17:01:56 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:01:56 - mmengine - INFO - Epoch(train) [7][1567/1567] lr: 5.9761e-02 eta: 0:20:01 time: 0.0834 data_time: 0.0065 memory: 1827 loss: 0.3150 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3150
2022/12/25 17:01:56 - mmengine - INFO - Saving checkpoint at 7 epochs
2022/12/25 17:01:59 - mmengine - INFO - Epoch(val) [7][100/129] eta: 0:00:00 time: 0.0255 data_time: 0.0058 memory: 263
2022/12/25 17:02:00 - mmengine - INFO - Epoch(val) [7][129/129] acc/top1: 0.8544 acc/top5: 0.9865 acc/mean1: 0.8543
2022/12/25 17:02:03 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:02:09 - mmengine - INFO - Epoch(train) [8][ 100/1567] lr: 5.9145e-02 eta: 0:19:53 time: 0.0867 data_time: 0.0063 memory: 1827 loss: 0.1675 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1675
2022/12/25 17:02:17 - mmengine - INFO - Epoch(train) [8][ 200/1567] lr: 5.8529e-02 eta: 0:19:44 time: 0.0868 data_time: 0.0069 memory: 1827 loss: 0.1382 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1382
2022/12/25 17:02:26 - mmengine - INFO - Epoch(train) [8][ 300/1567] lr: 5.7911e-02 eta: 0:19:36 time: 0.0864 data_time: 0.0073 memory: 1827 loss: 0.1694 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1694
2022/12/25 17:02:35 - mmengine - INFO - Epoch(train) [8][ 400/1567] lr: 5.7292e-02 eta: 0:19:28 time: 0.0862 data_time: 0.0064 memory: 1827 loss: 0.1265 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1265
2022/12/25 17:02:43 - mmengine - INFO - Epoch(train) [8][ 500/1567] lr: 5.6671e-02 eta: 0:19:19 time: 0.0834 data_time: 0.0064 memory: 1827 loss: 0.1743 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1743
2022/12/25 17:02:52 - mmengine - INFO - Epoch(train) [8][ 600/1567] lr: 5.6050e-02 eta: 0:19:10 time: 0.0860 data_time: 0.0069 memory: 1827 loss: 0.1350 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1350
2022/12/25 17:03:00 - mmengine - INFO - Epoch(train) [8][ 700/1567] lr: 5.5427e-02 eta: 0:19:02 time: 0.0844 data_time: 0.0067 memory: 1827 loss: 0.1617 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1617
2022/12/25 17:03:08 - mmengine - INFO - Epoch(train) [8][ 800/1567] lr: 5.4804e-02 eta: 0:18:53 time: 0.0835 data_time: 0.0066 memory: 1827 loss: 0.1869 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1869
2022/12/25 17:03:17 - mmengine - INFO - Epoch(train) [8][ 900/1567] lr: 5.4180e-02 eta: 0:18:45 time: 0.0884 data_time: 0.0067 memory: 1827 loss: 0.1486 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1486
2022/12/25 17:03:26 - mmengine - INFO - Epoch(train) [8][1000/1567] lr: 5.3556e-02 eta: 0:18:37 time: 0.0883 data_time: 0.0078 memory: 1827 loss: 0.1414 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1414
2022/12/25 17:03:29 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:03:35 - mmengine - INFO - Epoch(train) [8][1100/1567] lr: 5.2930e-02 eta: 0:18:28 time: 0.0881 data_time: 0.0074 memory: 1827 loss: 0.1354 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1354
2022/12/25 17:03:44 - mmengine - INFO - Epoch(train) [8][1200/1567] lr: 5.2305e-02 eta: 0:18:20 time: 0.0879 data_time: 0.0069 memory: 1827 loss: 0.1810 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1810
2022/12/25 17:03:52 - mmengine - INFO - Epoch(train) [8][1300/1567] lr: 5.1679e-02 eta: 0:18:12 time: 0.0841 data_time: 0.0066 memory: 1827 loss: 0.1173 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1173
2022/12/25 17:04:01 - mmengine - INFO - Epoch(train) [8][1400/1567] lr: 5.1052e-02 eta: 0:18:03 time: 0.0845 data_time: 0.0066 memory: 1827 loss: 0.1769 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1769
2022/12/25 17:04:09 - mmengine - INFO - Epoch(train) [8][1500/1567] lr: 5.0426e-02 eta: 0:17:55 time: 0.0837 data_time: 0.0066 memory: 1827 loss: 0.2068 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2068
2022/12/25 17:04:15 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:04:15 - mmengine - INFO - Epoch(train) [8][1567/1567] lr: 5.0006e-02 eta: 0:17:49 time: 0.0836 data_time: 0.0065 memory: 1827 loss: 0.3133 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3133
2022/12/25 17:04:15 - mmengine - INFO - Saving checkpoint at 8 epochs
2022/12/25 17:04:18 - mmengine - INFO - Epoch(val) [8][100/129] eta: 0:00:00 time: 0.0264 data_time: 0.0059 memory: 263
2022/12/25 17:04:19 - mmengine - INFO - Epoch(val) [8][129/129] acc/top1: 0.8767 acc/top5: 0.9865 acc/mean1: 0.8766
2022/12/25 17:04:19 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_6.pth is removed
2022/12/25 17:04:19 - mmengine - INFO - The best checkpoint with 0.8767 acc/top1 at 8 epoch is saved to best_acc/top1_epoch_8.pth.
2022/12/25 17:04:28 - mmengine - INFO - Epoch(train) [9][ 100/1567] lr: 4.9380e-02 eta: 0:17:40 time: 0.0845 data_time: 0.0067 memory: 1827 loss: 0.1952 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1952
2022/12/25 17:04:36 - mmengine - INFO - Epoch(train) [9][ 200/1567] lr: 4.8753e-02 eta: 0:17:32 time: 0.0848 data_time: 0.0069 memory: 1827 loss: 0.1613 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1613
2022/12/25 17:04:45 - mmengine - INFO - Epoch(train) [9][ 300/1567] lr: 4.8127e-02 eta: 0:17:23 time: 0.0840 data_time: 0.0066 memory: 1827 loss: 0.1066 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1066
2022/12/25 17:04:53 - mmengine - INFO - Epoch(train) [9][ 400/1567] lr: 4.7501e-02 eta: 0:17:14 time: 0.0849 data_time: 0.0065 memory: 1827 loss: 0.1459 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1459
2022/12/25 17:04:58 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:05:01 - mmengine - INFO - Epoch(train) [9][ 500/1567] lr: 4.6876e-02 eta: 0:17:06 time: 0.0837 data_time: 0.0062 memory: 1827 loss: 0.1234 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1234
2022/12/25 17:05:10 - mmengine - INFO - Epoch(train) [9][ 600/1567] lr: 4.6251e-02 eta: 0:16:57 time: 0.0843 data_time: 0.0071 memory: 1827 loss: 0.1564 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1564
2022/12/25 17:05:18 - mmengine - INFO - Epoch(train) [9][ 700/1567] lr: 4.5626e-02 eta: 0:16:48 time: 0.0838 data_time: 0.0074 memory: 1827 loss: 0.1116 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1116
2022/12/25 17:05:27 - mmengine - INFO - Epoch(train) [9][ 800/1567] lr: 4.5003e-02 eta: 0:16:40 time: 0.0840 data_time: 0.0068 memory: 1827 loss: 0.0872 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.0872
2022/12/25 17:05:35 - mmengine - INFO - Epoch(train) [9][ 900/1567] lr: 4.4380e-02 eta: 0:16:31 time: 0.0850 data_time: 0.0065 memory: 1827 loss: 0.0971 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0971
2022/12/25 17:05:44 - mmengine - INFO - Epoch(train) [9][1000/1567] lr: 4.3757e-02 eta: 0:16:23 time: 0.0840 data_time: 0.0068 memory: 1827 loss: 0.1205 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1205
2022/12/25 17:05:52 - mmengine - INFO - Epoch(train) [9][1100/1567] lr: 4.3136e-02 eta: 0:16:14 time: 0.0854 data_time: 0.0064 memory: 1827 loss: 0.1358 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1358
2022/12/25 17:06:01 - mmengine - INFO - Epoch(train) [9][1200/1567] lr: 4.2516e-02 eta: 0:16:06 time: 0.0845 data_time: 0.0063 memory: 1827 loss: 0.1119 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1119
2022/12/25 17:06:09 - mmengine - INFO - Epoch(train) [9][1300/1567] lr: 4.1897e-02 eta: 0:15:57 time: 0.0846 data_time: 0.0065 memory: 1827 loss: 0.0975 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0975
2022/12/25 17:06:18 - mmengine - INFO - Epoch(train) [9][1400/1567] lr: 4.1280e-02 eta: 0:15:49 time: 0.0852 data_time: 0.0066 memory: 1827 loss: 0.1045 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1045
2022/12/25 17:06:23 - mmengine - INFO - Exp name:
stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514 2022/12/25 17:06:26 - mmengine - INFO - Epoch(train) [9][1500/1567] lr: 4.0664e-02 eta: 0:15:40 time: 0.0834 data_time: 0.0063 memory: 1827 loss: 0.1027 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1027 2022/12/25 17:06:32 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514 2022/12/25 17:06:32 - mmengine - INFO - Epoch(train) [9][1567/1567] lr: 4.0252e-02 eta: 0:15:34 time: 0.0824 data_time: 0.0068 memory: 1827 loss: 0.2599 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.2599 2022/12/25 17:06:32 - mmengine - INFO - Saving checkpoint at 9 epochs 2022/12/25 17:06:35 - mmengine - INFO - Epoch(val) [9][100/129] eta: 0:00:00 time: 0.0276 data_time: 0.0066 memory: 263 2022/12/25 17:06:36 - mmengine - INFO - Epoch(val) [9][129/129] acc/top1: 0.8633 acc/top5: 0.9872 acc/mean1: 0.8632 2022/12/25 17:06:44 - mmengine - INFO - Epoch(train) [10][ 100/1567] lr: 3.9638e-02 eta: 0:15:26 time: 0.0835 data_time: 0.0063 memory: 1827 loss: 0.1210 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1210 2022/12/25 17:06:53 - mmengine - INFO - Epoch(train) [10][ 200/1567] lr: 3.9026e-02 eta: 0:15:17 time: 0.0844 data_time: 0.0066 memory: 1827 loss: 0.1336 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1336 2022/12/25 17:07:01 - mmengine - INFO - Epoch(train) [10][ 300/1567] lr: 3.8415e-02 eta: 0:15:09 time: 0.0843 data_time: 0.0063 memory: 1827 loss: 0.0981 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0981 2022/12/25 17:07:10 - mmengine - INFO - Epoch(train) [10][ 400/1567] lr: 3.7807e-02 eta: 0:15:00 time: 0.0836 data_time: 0.0075 memory: 1827 loss: 0.1079 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1079 2022/12/25 17:07:18 - mmengine - INFO - Epoch(train) [10][ 500/1567] lr: 3.7200e-02 eta: 0:14:51 time: 0.0833 data_time: 0.0068 memory: 1827 loss: 0.0978 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0978 2022/12/25 17:07:27 - mmengine - INFO - Epoch(train) [10][ 600/1567] lr: 
3.6596e-02 eta: 0:14:43 time: 0.0842 data_time: 0.0067 memory: 1827 loss: 0.0952 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.0952 2022/12/25 17:07:35 - mmengine - INFO - Epoch(train) [10][ 700/1567] lr: 3.5993e-02 eta: 0:14:34 time: 0.0855 data_time: 0.0065 memory: 1827 loss: 0.0995 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0995 2022/12/25 17:07:44 - mmengine - INFO - Epoch(train) [10][ 800/1567] lr: 3.5393e-02 eta: 0:14:26 time: 0.0846 data_time: 0.0070 memory: 1827 loss: 0.0770 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0770 2022/12/25 17:07:53 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514 2022/12/25 17:07:53 - mmengine - INFO - Epoch(train) [10][ 900/1567] lr: 3.4795e-02 eta: 0:14:18 time: 0.0871 data_time: 0.0075 memory: 1827 loss: 0.0786 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0786 2022/12/25 17:08:01 - mmengine - INFO - Epoch(train) [10][1000/1567] lr: 3.4199e-02 eta: 0:14:09 time: 0.0829 data_time: 0.0065 memory: 1827 loss: 0.1254 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1254 2022/12/25 17:08:10 - mmengine - INFO - Epoch(train) [10][1100/1567] lr: 3.3606e-02 eta: 0:14:00 time: 0.0849 data_time: 0.0067 memory: 1827 loss: 0.0928 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0928 2022/12/25 17:08:18 - mmengine - INFO - Epoch(train) [10][1200/1567] lr: 3.3015e-02 eta: 0:13:52 time: 0.0852 data_time: 0.0067 memory: 1827 loss: 0.0987 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0987 2022/12/25 17:08:27 - mmengine - INFO - Epoch(train) [10][1300/1567] lr: 3.2428e-02 eta: 0:13:43 time: 0.0844 data_time: 0.0066 memory: 1827 loss: 0.1030 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1030 2022/12/25 17:08:35 - mmengine - INFO - Epoch(train) [10][1400/1567] lr: 3.1842e-02 eta: 0:13:35 time: 0.0857 data_time: 0.0069 memory: 1827 loss: 0.0710 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0710 2022/12/25 17:08:44 - mmengine - INFO - Epoch(train) [10][1500/1567] lr: 3.1260e-02 eta: 0:13:26 time: 0.0844 
data_time: 0.0066 memory: 1827 loss: 0.0837 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0837 2022/12/25 17:08:50 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514 2022/12/25 17:08:50 - mmengine - INFO - Epoch(train) [10][1567/1567] lr: 3.0872e-02 eta: 0:13:21 time: 0.0851 data_time: 0.0064 memory: 1827 loss: 0.2153 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.2153 2022/12/25 17:08:50 - mmengine - INFO - Saving checkpoint at 10 epochs 2022/12/25 17:08:53 - mmengine - INFO - Epoch(val) [10][100/129] eta: 0:00:00 time: 0.0267 data_time: 0.0060 memory: 263 2022/12/25 17:08:54 - mmengine - INFO - Epoch(val) [10][129/129] acc/top1: 0.8907 acc/top5: 0.9913 acc/mean1: 0.8907 2022/12/25 17:08:54 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_8.pth is removed 2022/12/25 17:08:54 - mmengine - INFO - The best checkpoint with 0.8907 acc/top1 at 10 epoch is saved to best_acc/top1_epoch_10.pth. 
2022/12/25 17:09:03 - mmengine - INFO - Epoch(train) [11][ 100/1567] lr: 3.0294e-02 eta: 0:13:12 time: 0.0839 data_time: 0.0065 memory: 1827 loss: 0.0808 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0808
2022/12/25 17:09:11 - mmengine - INFO - Epoch(train) [11][ 200/1567] lr: 2.9720e-02 eta: 0:13:04 time: 0.0842 data_time: 0.0066 memory: 1827 loss: 0.0822 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0822
2022/12/25 17:09:20 - mmengine - INFO - Epoch(train) [11][ 300/1567] lr: 2.9149e-02 eta: 0:12:55 time: 0.0863 data_time: 0.0065 memory: 1827 loss: 0.0955 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0955
2022/12/25 17:09:22 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:09:28 - mmengine - INFO - Epoch(train) [11][ 400/1567] lr: 2.8581e-02 eta: 0:12:47 time: 0.0857 data_time: 0.0067 memory: 1827 loss: 0.0714 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0714
2022/12/25 17:09:37 - mmengine - INFO - Epoch(train) [11][ 500/1567] lr: 2.8017e-02 eta: 0:12:38 time: 0.0851 data_time: 0.0075 memory: 1827 loss: 0.0395 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0395
2022/12/25 17:09:46 - mmengine - INFO - Epoch(train) [11][ 600/1567] lr: 2.7456e-02 eta: 0:12:30 time: 0.0860 data_time: 0.0069 memory: 1827 loss: 0.0707 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0707
2022/12/25 17:09:54 - mmengine - INFO - Epoch(train) [11][ 700/1567] lr: 2.6898e-02 eta: 0:12:21 time: 0.0841 data_time: 0.0072 memory: 1827 loss: 0.0933 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0933
2022/12/25 17:10:03 - mmengine - INFO - Epoch(train) [11][ 800/1567] lr: 2.6345e-02 eta: 0:12:13 time: 0.0875 data_time: 0.0066 memory: 1827 loss: 0.0691 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0691
2022/12/25 17:10:12 - mmengine - INFO - Epoch(train) [11][ 900/1567] lr: 2.5794e-02 eta: 0:12:04 time: 0.0862 data_time: 0.0064 memory: 1827 loss: 0.1038 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1038
2022/12/25 17:10:20 - mmengine - INFO - Epoch(train) [11][1000/1567] lr: 2.5248e-02 eta: 0:11:56 time: 0.0839 data_time: 0.0067 memory: 1827 loss: 0.0729 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0729
2022/12/25 17:10:28 - mmengine - INFO - Epoch(train) [11][1100/1567] lr: 2.4706e-02 eta: 0:11:47 time: 0.0835 data_time: 0.0068 memory: 1827 loss: 0.0432 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0432
2022/12/25 17:10:37 - mmengine - INFO - Epoch(train) [11][1200/1567] lr: 2.4167e-02 eta: 0:11:39 time: 0.0884 data_time: 0.0068 memory: 1827 loss: 0.0602 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0602
2022/12/25 17:10:46 - mmengine - INFO - Epoch(train) [11][1300/1567] lr: 2.3633e-02 eta: 0:11:30 time: 0.0861 data_time: 0.0065 memory: 1827 loss: 0.0501 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0501
2022/12/25 17:10:48 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:10:54 - mmengine - INFO - Epoch(train) [11][1400/1567] lr: 2.3103e-02 eta: 0:11:22 time: 0.0838 data_time: 0.0068 memory: 1827 loss: 0.0812 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0812
2022/12/25 17:11:03 - mmengine - INFO - Epoch(train) [11][1500/1567] lr: 2.2577e-02 eta: 0:11:13 time: 0.0881 data_time: 0.0069 memory: 1827 loss: 0.0624 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0624
2022/12/25 17:11:09 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:11:09 - mmengine - INFO - Epoch(train) [11][1567/1567] lr: 2.2227e-02 eta: 0:11:08 time: 0.0870 data_time: 0.0066 memory: 1827 loss: 0.2085 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2085
2022/12/25 17:11:09 - mmengine - INFO - Saving checkpoint at 11 epochs
2022/12/25 17:11:12 - mmengine - INFO - Epoch(val) [11][100/129] eta: 0:00:00 time: 0.0272 data_time: 0.0064 memory: 263
2022/12/25 17:11:13 - mmengine - INFO - Epoch(val) [11][129/129] acc/top1: 0.8904 acc/top5: 0.9899 acc/mean1: 0.8903
2022/12/25 17:11:21 - mmengine - INFO - Epoch(train) [12][ 100/1567] lr: 2.1708e-02 eta: 0:10:59 time: 0.0887 data_time: 0.0068 memory: 1827 loss: 0.0444 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0444
2022/12/25 17:11:30 - mmengine - INFO - Epoch(train) [12][ 200/1567] lr: 2.1194e-02 eta: 0:10:51 time: 0.0889 data_time: 0.0067 memory: 1827 loss: 0.0538 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0538
2022/12/25 17:11:39 - mmengine - INFO - Epoch(train) [12][ 300/1567] lr: 2.0684e-02 eta: 0:10:42 time: 0.0839 data_time: 0.0067 memory: 1827 loss: 0.0351 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0351
2022/12/25 17:11:47 - mmengine - INFO - Epoch(train) [12][ 400/1567] lr: 2.0179e-02 eta: 0:10:34 time: 0.0893 data_time: 0.0075 memory: 1827 loss: 0.0539 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0539
2022/12/25 17:11:56 - mmengine - INFO - Epoch(train) [12][ 500/1567] lr: 1.9678e-02 eta: 0:10:25 time: 0.0879 data_time: 0.0065 memory: 1827 loss: 0.0417 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0417
2022/12/25 17:12:05 - mmengine - INFO - Epoch(train) [12][ 600/1567] lr: 1.9182e-02 eta: 0:10:17 time: 0.0886 data_time: 0.0069 memory: 1827 loss: 0.0487 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0487
2022/12/25 17:12:14 - mmengine - INFO - Epoch(train) [12][ 700/1567] lr: 1.8691e-02 eta: 0:10:08 time: 0.0837 data_time: 0.0076 memory: 1827 loss: 0.0400 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0400
2022/12/25 17:12:19 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:12:22 - mmengine - INFO - Epoch(train) [12][ 800/1567] lr: 1.8205e-02 eta: 0:10:00 time: 0.0876 data_time: 0.0068 memory: 1827 loss: 0.0278 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0278
2022/12/25 17:12:31 - mmengine - INFO - Epoch(train) [12][ 900/1567] lr: 1.7724e-02 eta: 0:09:51 time: 0.0873 data_time: 0.0074 memory: 1827 loss: 0.0386 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0386
2022/12/25 17:12:40 - mmengine - INFO - Epoch(train) [12][1000/1567] lr: 1.7248e-02 eta: 0:09:43 time: 0.0856 data_time: 0.0066 memory: 1827 loss: 0.0295 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0295
2022/12/25 17:12:48 - mmengine - INFO - Epoch(train) [12][1100/1567] lr: 1.6778e-02 eta: 0:09:34 time: 0.0875 data_time: 0.0068 memory: 1827 loss: 0.0308 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0308
2022/12/25 17:12:57 - mmengine - INFO - Epoch(train) [12][1200/1567] lr: 1.6312e-02 eta: 0:09:26 time: 0.0884 data_time: 0.0067 memory: 1827 loss: 0.0254 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0254
2022/12/25 17:13:06 - mmengine - INFO - Epoch(train) [12][1300/1567] lr: 1.5852e-02 eta: 0:09:18 time: 0.0908 data_time: 0.0069 memory: 1827 loss: 0.0238 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0238
2022/12/25 17:13:15 - mmengine - INFO - Epoch(train) [12][1400/1567] lr: 1.5397e-02 eta: 0:09:09 time: 0.0877 data_time: 0.0066 memory: 1827 loss: 0.0369 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0369
2022/12/25 17:13:24 - mmengine - INFO - Epoch(train) [12][1500/1567] lr: 1.4947e-02 eta: 0:09:01 time: 0.0870 data_time: 0.0067 memory: 1827 loss: 0.0207 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0207
2022/12/25 17:13:30 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:13:30 - mmengine - INFO - Epoch(train) [12][1567/1567] lr: 1.4649e-02 eta: 0:08:55 time: 0.0861 data_time: 0.0067 memory: 1827 loss: 0.2010 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2010
2022/12/25 17:13:30 - mmengine - INFO - Saving checkpoint at 12 epochs
2022/12/25 17:13:33 - mmengine - INFO - Epoch(val) [12][100/129] eta: 0:00:00 time: 0.0267 data_time: 0.0060 memory: 263
2022/12/25 17:13:34 - mmengine - INFO - Epoch(val) [12][129/129] acc/top1: 0.9064 acc/top5: 0.9918 acc/mean1: 0.9063
2022/12/25 17:13:34 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_10.pth is removed
2022/12/25 17:13:34 - mmengine - INFO - The best checkpoint with 0.9064 acc/top1 at 12 epoch is saved to best_acc/top1_epoch_12.pth.
2022/12/25 17:13:42 - mmengine - INFO - Epoch(train) [13][ 100/1567] lr: 1.4209e-02 eta: 0:08:46 time: 0.0847 data_time: 0.0069 memory: 1827 loss: 0.0232 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0232
2022/12/25 17:13:51 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:13:51 - mmengine - INFO - Epoch(train) [13][ 200/1567] lr: 1.3774e-02 eta: 0:08:38 time: 0.0869 data_time: 0.0070 memory: 1827 loss: 0.0215 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0215
2022/12/25 17:14:00 - mmengine - INFO - Epoch(train) [13][ 300/1567] lr: 1.3345e-02 eta: 0:08:29 time: 0.0856 data_time: 0.0068 memory: 1827 loss: 0.0227 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0227
2022/12/25 17:14:08 - mmengine - INFO - Epoch(train) [13][ 400/1567] lr: 1.2922e-02 eta: 0:08:21 time: 0.0876 data_time: 0.0068 memory: 1827 loss: 0.0167 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0167
2022/12/25 17:14:17 - mmengine - INFO - Epoch(train) [13][ 500/1567] lr: 1.2505e-02 eta: 0:08:12 time: 0.0882 data_time: 0.0069 memory: 1827 loss: 0.0131 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0131
2022/12/25 17:14:26 - mmengine - INFO - Epoch(train) [13][ 600/1567] lr: 1.2093e-02 eta: 0:08:04 time: 0.0847 data_time: 0.0068 memory: 1827 loss: 0.0154 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0154
2022/12/25 17:14:34 - mmengine - INFO - Epoch(train) [13][ 700/1567] lr: 1.1687e-02 eta: 0:07:55 time: 0.0847 data_time: 0.0067 memory: 1827 loss: 0.0193 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0193
2022/12/25 17:14:43 - mmengine - INFO - Epoch(train) [13][ 800/1567] lr: 1.1288e-02 eta: 0:07:47 time: 0.0854 data_time: 0.0068 memory: 1827 loss: 0.0175 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0175
2022/12/25 17:14:52 - mmengine - INFO - Epoch(train) [13][ 900/1567] lr: 1.0894e-02 eta: 0:07:38 time: 0.0877 data_time: 0.0070 memory: 1827 loss: 0.0121 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0121
2022/12/25 17:15:00 - mmengine - INFO - Epoch(train) [13][1000/1567] lr: 1.0507e-02 eta: 0:07:30 time: 0.0880 data_time: 0.0071 memory: 1827 loss: 0.0241 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0241
2022/12/25 17:15:09 - mmengine - INFO - Epoch(train) [13][1100/1567] lr: 1.0126e-02 eta: 0:07:21 time: 0.0880 data_time: 0.0076 memory: 1827 loss: 0.0244 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0244
2022/12/25 17:15:18 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:15:18 - mmengine - INFO - Epoch(train) [13][1200/1567] lr: 9.7512e-03 eta: 0:07:13 time: 0.0887 data_time: 0.0071 memory: 1827 loss: 0.0135 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0135
2022/12/25 17:15:27 - mmengine - INFO - Epoch(train) [13][1300/1567] lr: 9.3826e-03 eta: 0:07:04 time: 0.0875 data_time: 0.0067 memory: 1827 loss: 0.0183 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0183
2022/12/25 17:15:36 - mmengine - INFO - Epoch(train) [13][1400/1567] lr: 9.0204e-03 eta: 0:06:56 time: 0.0891 data_time: 0.0071 memory: 1827 loss: 0.0128 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0128
2022/12/25 17:15:44 - mmengine - INFO - Epoch(train) [13][1500/1567] lr: 8.6647e-03 eta: 0:06:47 time: 0.0840 data_time: 0.0068 memory: 1827 loss: 0.0109 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0109
2022/12/25 17:15:50 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:15:50 - mmengine - INFO - Epoch(train) [13][1567/1567] lr: 8.4300e-03 eta: 0:06:42 time: 0.0879 data_time: 0.0066 memory: 1827 loss: 0.2232 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2232
2022/12/25 17:15:50 - mmengine - INFO - Saving checkpoint at 13 epochs
2022/12/25 17:15:53 - mmengine - INFO - Epoch(val) [13][100/129] eta: 0:00:00 time: 0.0271 data_time: 0.0060 memory: 263
2022/12/25 17:15:54 - mmengine - INFO - Epoch(val) [13][129/129] acc/top1: 0.9140 acc/top5: 0.9945 acc/mean1: 0.9139
2022/12/25 17:15:54 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_12.pth is removed
2022/12/25 17:15:54 - mmengine - INFO - The best checkpoint with 0.9140 acc/top1 at 13 epoch is saved to best_acc/top1_epoch_13.pth.
2022/12/25 17:16:03 - mmengine - INFO - Epoch(train) [14][ 100/1567] lr: 8.0851e-03 eta: 0:06:33 time: 0.0872 data_time: 0.0072 memory: 1827 loss: 0.0141 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0141
2022/12/25 17:16:12 - mmengine - INFO - Epoch(train) [14][ 200/1567] lr: 7.7469e-03 eta: 0:06:25 time: 0.0872 data_time: 0.0071 memory: 1827 loss: 0.0162 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0162
2022/12/25 17:16:21 - mmengine - INFO - Epoch(train) [14][ 300/1567] lr: 7.4152e-03 eta: 0:06:16 time: 0.0943 data_time: 0.0071 memory: 1827 loss: 0.0120 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0120
2022/12/25 17:16:30 - mmengine - INFO - Epoch(train) [14][ 400/1567] lr: 7.0902e-03 eta: 0:06:08 time: 0.0863 data_time: 0.0071 memory: 1827 loss: 0.0106 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0106
2022/12/25 17:16:38 - mmengine - INFO - Epoch(train) [14][ 500/1567] lr: 6.7720e-03 eta: 0:05:59 time: 0.0878 data_time: 0.0079 memory: 1827 loss: 0.0146 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0146
2022/12/25 17:16:47 - mmengine - INFO - Epoch(train) [14][ 600/1567] lr: 6.4606e-03 eta: 0:05:51 time: 0.0889 data_time: 0.0068 memory: 1827 loss: 0.0089 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0089
2022/12/25 17:16:50 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:16:56 - mmengine - INFO - Epoch(train) [14][ 700/1567] lr: 6.1560e-03 eta: 0:05:42 time: 0.0857 data_time: 0.0075 memory: 1827 loss: 0.0120 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0120
2022/12/25 17:17:04 - mmengine - INFO - Epoch(train) [14][ 800/1567] lr: 5.8582e-03 eta: 0:05:33 time: 0.0847 data_time: 0.0075 memory: 1827 loss: 0.0116 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0116
2022/12/25 17:17:13 - mmengine - INFO - Epoch(train) [14][ 900/1567] lr: 5.5675e-03 eta: 0:05:25 time: 0.0841 data_time: 0.0070 memory: 1827 loss: 0.0098 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0098
2022/12/25 17:17:22 - mmengine - INFO - Epoch(train) [14][1000/1567] lr: 5.2836e-03 eta: 0:05:16 time: 0.0880 data_time: 0.0070 memory: 1827 loss: 0.0081 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0081
2022/12/25 17:17:31 - mmengine - INFO - Epoch(train) [14][1100/1567] lr: 5.0068e-03 eta: 0:05:08 time: 0.0897 data_time: 0.0071 memory: 1827 loss: 0.0069 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0069
2022/12/25 17:17:40 - mmengine - INFO - Epoch(train) [14][1200/1567] lr: 4.7371e-03 eta: 0:04:59 time: 0.0880 data_time: 0.0069 memory: 1827 loss: 0.0079 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0079
2022/12/25 17:17:48 - mmengine - INFO - Epoch(train) [14][1300/1567] lr: 4.4745e-03 eta: 0:04:51 time: 0.0866 data_time: 0.0074 memory: 1827 loss: 0.0070 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0070
2022/12/25 17:17:57 - mmengine - INFO - Epoch(train) [14][1400/1567] lr: 4.2190e-03 eta: 0:04:42 time: 0.0852 data_time: 0.0069 memory: 1827 loss: 0.0062 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0062
2022/12/25 17:18:06 - mmengine - INFO - Epoch(train) [14][1500/1567] lr: 3.9707e-03 eta: 0:04:34 time: 0.0869 data_time: 0.0068 memory: 1827 loss: 0.0073 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0073
2022/12/25 17:18:11 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:18:11 - mmengine - INFO - Epoch(train) [14][1567/1567] lr: 3.8084e-03 eta: 0:04:28 time: 0.0841 data_time: 0.0065 memory: 1827 loss: 0.1841 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.1841
2022/12/25 17:18:11 - mmengine - INFO - Saving checkpoint at 14 epochs
2022/12/25 17:18:15 - mmengine - INFO - Epoch(val) [14][100/129] eta: 0:00:00 time: 0.0262 data_time: 0.0059 memory: 263
2022/12/25 17:18:15 - mmengine - INFO - Epoch(val) [14][129/129] acc/top1: 0.9199 acc/top5: 0.9942 acc/mean1: 0.9198
2022/12/25 17:18:15 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_13.pth is removed
2022/12/25 17:18:16 - mmengine - INFO - The best checkpoint with 0.9199 acc/top1 at 14 epoch is saved to best_acc/top1_epoch_14.pth.
2022/12/25 17:18:21 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:18:24 - mmengine - INFO - Epoch(train) [15][ 100/1567] lr: 3.5722e-03 eta: 0:04:19 time: 0.0865 data_time: 0.0077 memory: 1827 loss: 0.0143 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0143
2022/12/25 17:18:33 - mmengine - INFO - Epoch(train) [15][ 200/1567] lr: 3.3433e-03 eta: 0:04:11 time: 0.0870 data_time: 0.0067 memory: 1827 loss: 0.0078 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0078
2022/12/25 17:18:42 - mmengine - INFO - Epoch(train) [15][ 300/1567] lr: 3.1217e-03 eta: 0:04:02 time: 0.0848 data_time: 0.0070 memory: 1827 loss: 0.0067 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0067
2022/12/25 17:18:50 - mmengine - INFO - Epoch(train) [15][ 400/1567] lr: 2.9075e-03 eta: 0:03:54 time: 0.0866 data_time: 0.0071 memory: 1827 loss: 0.0103 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0103
2022/12/25 17:18:59 - mmengine - INFO - Epoch(train) [15][ 500/1567] lr: 2.7007e-03 eta: 0:03:45 time: 0.0883 data_time: 0.0068 memory: 1827 loss: 0.0062 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0062
2022/12/25 17:19:08 - mmengine - INFO - Epoch(train) [15][ 600/1567] lr: 2.5013e-03 eta: 0:03:37 time: 0.0858 data_time: 0.0069 memory: 1827 loss: 0.0118 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0118
2022/12/25 17:19:17 - mmengine - INFO - Epoch(train) [15][ 700/1567] lr: 2.3093e-03 eta: 0:03:28 time: 0.0871 data_time: 0.0084 memory: 1827 loss: 0.0067 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0067
2022/12/25 17:19:25 - mmengine - INFO - Epoch(train) [15][ 800/1567] lr: 2.1249e-03 eta: 0:03:20 time: 0.0859 data_time: 0.0071 memory: 1827 loss: 0.0080 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0080
2022/12/25 17:19:34 - mmengine - INFO - Epoch(train) [15][ 900/1567] lr: 1.9479e-03 eta: 0:03:11 time: 0.0871 data_time: 0.0076 memory: 1827 loss: 0.0084 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0084
2022/12/25 17:19:43 - mmengine - INFO - Epoch(train) [15][1000/1567] lr: 1.7785e-03 eta: 0:03:02 time: 0.0873 data_time: 0.0068 memory: 1827 loss: 0.0064 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0064
2022/12/25 17:19:48 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:19:51 - mmengine - INFO - Epoch(train) [15][1100/1567] lr: 1.6167e-03 eta: 0:02:54 time: 0.0894 data_time: 0.0073 memory: 1827 loss: 0.0062 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0062
2022/12/25 17:20:00 - mmengine - INFO - Epoch(train) [15][1200/1567] lr: 1.4625e-03 eta: 0:02:45 time: 0.0856 data_time: 0.0068 memory: 1827 loss: 0.0051 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0051
2022/12/25 17:20:09 - mmengine - INFO - Epoch(train) [15][1300/1567] lr: 1.3159e-03 eta: 0:02:37 time: 0.0867 data_time: 0.0067 memory: 1827 loss: 0.0063 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0063
2022/12/25 17:20:18 - mmengine - INFO - Epoch(train) [15][1400/1567] lr: 1.1769e-03 eta: 0:02:28 time: 0.0892 data_time: 0.0069 memory: 1827 loss: 0.0086 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0086
2022/12/25 17:20:27 - mmengine - INFO - Epoch(train) [15][1500/1567] lr: 1.0456e-03 eta: 0:02:20 time: 0.0890 data_time: 0.0066 memory: 1827 loss: 0.0079 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0079
2022/12/25 17:20:33 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:20:33 - mmengine - INFO - Epoch(train) [15][1567/1567] lr: 9.6196e-04 eta: 0:02:14 time: 0.0878 data_time: 0.0065 memory: 1827 loss: 0.1837 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1837
2022/12/25 17:20:33 - mmengine - INFO - Saving checkpoint at 15 epochs
2022/12/25 17:20:36 - mmengine - INFO - Epoch(val) [15][100/129] eta: 0:00:00 time: 0.0275 data_time: 0.0063 memory: 263
2022/12/25 17:20:37 - mmengine - INFO - Epoch(val) [15][129/129] acc/top1: 0.9210 acc/top5: 0.9948 acc/mean1: 0.9209
2022/12/25 17:20:37 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_14.pth is removed
2022/12/25 17:20:37 - mmengine - INFO - The best checkpoint with 0.9210 acc/top1 at 15 epoch is saved to best_acc/top1_epoch_15.pth.
2022/12/25 17:20:46 - mmengine - INFO - Epoch(train) [16][ 100/1567] lr: 8.4351e-04 eta: 0:02:05 time: 0.0850 data_time: 0.0068 memory: 1827 loss: 0.0085 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0085
2022/12/25 17:20:55 - mmengine - INFO - Epoch(train) [16][ 200/1567] lr: 7.3277e-04 eta: 0:01:57 time: 0.0881 data_time: 0.0067 memory: 1827 loss: 0.0072 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0072
2022/12/25 17:21:03 - mmengine - INFO - Epoch(train) [16][ 300/1567] lr: 6.2978e-04 eta: 0:01:48 time: 0.0866 data_time: 0.0067 memory: 1827 loss: 0.0107 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0107
2022/12/25 17:21:12 - mmengine - INFO - Epoch(train) [16][ 400/1567] lr: 5.3453e-04 eta: 0:01:40 time: 0.0864 data_time: 0.0067 memory: 1827 loss: 0.0065 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0065
2022/12/25 17:21:20 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:21:21 - mmengine - INFO - Epoch(train) [16][ 500/1567] lr: 4.4705e-04 eta: 0:01:31 time: 0.0890 data_time: 0.0069 memory: 1827 loss: 0.0086 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0086
2022/12/25 17:21:30 - mmengine - INFO - Epoch(train) [16][ 600/1567] lr: 3.6735e-04 eta: 0:01:22 time: 0.0892 data_time: 0.0067 memory: 1827 loss: 0.0070 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0070
2022/12/25 17:21:38 - mmengine - INFO - Epoch(train) [16][ 700/1567] lr: 2.9544e-04 eta: 0:01:14 time: 0.0863 data_time: 0.0071 memory: 1827 loss: 0.0064 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0064
2022/12/25 17:21:47 - mmengine - INFO - Epoch(train) [16][ 800/1567] lr: 2.3134e-04 eta: 0:01:05 time: 0.0861 data_time: 0.0067 memory: 1827 loss: 0.0092 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0092
2022/12/25 17:21:56 - mmengine - INFO - Epoch(train) [16][ 900/1567] lr: 1.7505e-04 eta: 0:00:57 time: 0.0846 data_time: 0.0067 memory: 1827 loss: 0.0087 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0087
2022/12/25 17:22:04 - mmengine - INFO - Epoch(train) [16][1000/1567] lr: 1.2658e-04 eta: 0:00:48 time: 0.0896 data_time: 0.0066 memory: 1827 loss: 0.0057 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0057
2022/12/25 17:22:13 - mmengine - INFO - Epoch(train) [16][1100/1567] lr: 8.5947e-05 eta: 0:00:40 time: 0.0866 data_time: 0.0068 memory: 1827 loss: 0.0078 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0078
2022/12/25 17:22:22 - mmengine - INFO - Epoch(train) [16][1200/1567] lr: 5.3147e-05 eta: 0:00:31 time: 0.0878 data_time: 0.0067 memory: 1827 loss: 0.0074 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0074
2022/12/25 17:22:30 - mmengine - INFO - Epoch(train) [16][1300/1567] lr: 2.8190e-05 eta: 0:00:22 time: 0.0852 data_time: 0.0068 memory: 1827 loss: 0.0084 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0084
2022/12/25 17:22:39 - mmengine - INFO - Epoch(train) [16][1400/1567] lr: 1.1078e-05 eta: 0:00:14 time: 0.0860 data_time: 0.0073 memory: 1827 loss: 0.0073 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0073
2022/12/25 17:22:47 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:22:48 - mmengine - INFO - Epoch(train) [16][1500/1567] lr: 1.8150e-06 eta: 0:00:05 time: 0.0852 data_time: 0.0066 memory: 1827 loss: 0.0095 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0095
2022/12/25 17:22:54 - mmengine - INFO - Exp name: stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d_20221225_164514
2022/12/25 17:22:54 - mmengine - INFO - Epoch(train) [16][1567/1567] lr: 3.9252e-10 eta: 0:00:00 time: 0.0854 data_time: 0.0065 memory: 1827 loss: 0.1625 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1625
2022/12/25 17:22:54 - mmengine - INFO - Saving checkpoint at 16 epochs
2022/12/25 17:22:57 - mmengine - INFO - Epoch(val) [16][100/129] eta: 0:00:00 time: 0.0263 data_time: 0.0059 memory: 263
2022/12/25 17:22:58 - mmengine - INFO - Epoch(val) [16][129/129] acc/top1: 0.9224 acc/top5: 0.9945 acc/mean1: 0.9223
2022/12/25 17:22:58 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-bone-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_15.pth is removed
2022/12/25 17:22:58 - mmengine - INFO - The best checkpoint with 0.9224 acc/top1 at 16 epoch is saved to best_acc/top1_epoch_16.pth.