2022/12/25 15:27:43 - mmengine - INFO - 
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 80876953
    GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /mnt/petrelfs/share/cuda-11.3
    NVCC: Cuda compilation tools, release 11.3, V11.3.109
    GCC: gcc (GCC) 5.4.0
    PyTorch: 1.11.0
    PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
    TorchVision: 0.12.0
    OpenCV: 4.6.0
    MMEngine: 0.3.2

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    diff_rank_seed: False
    deterministic: False
    Distributed launcher: pytorch
    Distributed training: True
    GPU number: 8
------------------------------------------------------------

2022/12/25 15:27:43 - mmengine - INFO - Config:
default_scope = 'mmaction'
default_hooks = dict(
    runtime_info=dict(type='RuntimeInfoHook'),
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=100, ignore_last=False),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', interval=1, save_best='auto'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    sync_buffers=dict(type='SyncBuffersHook'))
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
log_processor = dict(type='LogProcessor', window_size=20, by_epoch=True)
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='ActionVisualizer', vis_backends=[dict(type='LocalVisBackend')])
log_level = 'INFO'
load_from = None
resume = False
model = dict(
    type='RecognizerGCN',
    backbone=dict(
        type='STGCN',
        gcn_adaptive='init',
        gcn_with_res=True,
        tcn_type='mstcn',
        graph_cfg=dict(layout='coco', mode='spatial')),
    cls_head=dict(type='GCNHead', num_classes=60, in_channels=256))
dataset_type = 'PoseDataset'
ann_file = 'data/skeleton/ntu60_2d.pkl'
train_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['j']),
    dict(type='UniformSampleFrames', clip_len=100),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['j']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=1,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['j']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=10,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
train_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type='PoseDataset',
            ann_file='data/skeleton/ntu60_2d.pkl',
            pipeline=[
                dict(type='PreNormalize2D'),
                dict(type='GenSkeFeat', dataset='coco', feats=['j']),
                dict(type='UniformSampleFrames', clip_len=100),
                dict(type='PoseDecode'),
                dict(type='FormatGCNInput', num_person=2),
                dict(type='PackActionInputs')
            ],
            split='xsub_train')))
val_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_2d.pkl',
        pipeline=[
            dict(type='PreNormalize2D'),
            dict(type='GenSkeFeat', dataset='coco', feats=['j']),
            dict(
                type='UniformSampleFrames', clip_len=100, num_clips=1,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
test_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_2d.pkl',
        pipeline=[
            dict(type='PreNormalize2D'),
            dict(type='GenSkeFeat', dataset='coco', feats=['j']),
            dict(
                type='UniformSampleFrames', clip_len=100, num_clips=10,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
val_evaluator = [dict(type='AccMetric')]
test_evaluator = [dict(type='AccMetric')]
train_cfg = dict(
    type='EpochBasedTrainLoop', max_epochs=16, val_begin=1, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
param_scheduler = [
    dict(
        type='CosineAnnealingLR',
        eta_min=0,
        T_max=16,
        by_epoch=True,
        convert_to_iter_based=True)
]
optim_wrapper = dict(
    optimizer=dict(
        type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0005, nesterov=True))
auto_scale_lr = dict(enable=False, base_batch_size=128)
launcher = 'pytorch'
work_dir = './work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d'
randomness = dict(seed=None, diff_rank_seed=False, deterministic=False)

2022/12/25 15:27:43 - mmengine - INFO - Result has been saved to /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/modules_statistic_results.json
2022/12/25 15:27:43 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH  ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH  ) RuntimeInfoHook
(NORMAL     ) IterTimerHook
(VERY_LOW   ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH  ) RuntimeInfoHook
(NORMAL     ) IterTimerHook
(NORMAL     ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH  ) RuntimeInfoHook
(NORMAL     ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH  ) RuntimeInfoHook
(NORMAL     ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW        ) ParamSchedulerHook
(VERY_LOW   ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL     ) IterTimerHook
(NORMAL     ) SyncBuffersHook
(LOW        ) ParamSchedulerHook
(VERY_LOW   ) CheckpointHook
--------------------
before_val_epoch:
(NORMAL     ) IterTimerHook
--------------------
before_val_iter:
(NORMAL     ) IterTimerHook
--------------------
after_val_iter:
(NORMAL     ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH  ) RuntimeInfoHook
(NORMAL     ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(VERY_LOW   ) CheckpointHook
--------------------
before_test_epoch:
(NORMAL     ) IterTimerHook
--------------------
before_test_iter:
(NORMAL     ) IterTimerHook
--------------------
after_test_iter:
(NORMAL     ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH  ) RuntimeInfoHook
(NORMAL     ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
Name of parameter - Initialization information
backbone.data_bn.weight - torch.Size([51]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.data_bn.bias - torch.Size([51]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.conv.weight - torch.Size([192, 3, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.down.0.weight - torch.Size([64, 3, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.down.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.down.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.gcn.down.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn.0.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and
after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and 
after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.2.0.weight - torch.Size([10, 64, 
1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.gcn.2.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.gcn.2.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after 
calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.gcn.3.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.conv.weight - torch.Size([384, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.down.0.weight - torch.Size([128, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.down.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.down.1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.down.1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.0.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.1.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and 
after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.2.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.5.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.residual.conv.weight - torch.Size([128, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.residual.conv.bias - torch.Size([128]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.gcn.4.residual.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.residual.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.conv.weight - torch.Size([384, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.0.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.1.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.5.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.conv.weight - torch.Size([384, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.0.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.1.bias 
- torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.gcn.6.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.5.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.conv.weight - torch.Size([768, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.conv.bias - torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.0.weight - torch.Size([256, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.gcn.7.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.gcn.7.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.residual.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.residual.conv.bias - torch.Size([256]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.residual.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.residual.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.conv.weight - torch.Size([768, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.conv.bias - torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after 
calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.A - torch.Size([3, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.conv.weight - torch.Size([768, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.conv.bias - 
torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.2.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.gcn.9.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.4.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN cls_head.fc.weight - torch.Size([60, 256]): NormalInit: mean=0, std=0.01, bias=0 cls_head.fc.bias - torch.Size([60]): NormalInit: mean=0, std=0.01, bias=0 2022/12/25 15:28:16 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d. 
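The initialization summary above maps onto standard torch.nn.init calls. Below is a minimal illustrative sketch of what the logged "KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0" and "NormalInit: mean=0, std=0.01, bias=0" entries mean; kaiming_init_ and normal_init_ are hypothetical helper names, and this is not the mmengine weight-initialization code itself.

import torch.nn as nn

def kaiming_init_(conv: nn.Conv2d) -> None:
    # Mirrors "KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0"
    nn.init.kaiming_normal_(conv.weight, a=0, mode='fan_out', nonlinearity='relu')
    if conv.bias is not None:
        nn.init.constant_(conv.bias, 0.0)

def normal_init_(fc: nn.Linear) -> None:
    # Mirrors "NormalInit: mean=0, std=0.01, bias=0" (applied to cls_head.fc above)
    nn.init.normal_(fc.weight, mean=0.0, std=0.01)
    nn.init.constant_(fc.bias, 0.0)

# Parameters reported as "The value is the same before and after calling
# `init_weights`" (BatchNorm affine parameters, the adaptive adjacency `A`,
# the 1x1 branch convs, etc.) simply keep their constructed values.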
2022/12/25 15:28:26 - mmengine - INFO - Epoch(train) [1][ 100/1567] lr: 9.9996e-02 eta: 0:44:03 time: 0.0840 data_time: 0.0062 memory: 1827 loss: 3.1850 top1_acc: 0.1875 top5_acc: 0.3750 loss_cls: 3.1850 2022/12/25 15:28:35 - mmengine - INFO - Epoch(train) [1][ 200/1567] lr: 9.9984e-02 eta: 0:39:30 time: 0.0833 data_time: 0.0065 memory: 1827 loss: 2.5077 top1_acc: 0.3750 top5_acc: 0.6875 loss_cls: 2.5077 2022/12/25 15:28:43 - mmengine - INFO - Epoch(train) [1][ 300/1567] lr: 9.9965e-02 eta: 0:37:42 time: 0.0843 data_time: 0.0064 memory: 1827 loss: 1.7551 top1_acc: 0.3125 top5_acc: 0.6875 loss_cls: 1.7551 2022/12/25 15:28:52 - mmengine - INFO - Epoch(train) [1][ 400/1567] lr: 9.9938e-02 eta: 0:36:46 time: 0.0865 data_time: 0.0066 memory: 1827 loss: 1.2511 top1_acc: 0.5625 top5_acc: 0.9375 loss_cls: 1.2511 2022/12/25 15:29:00 - mmengine - INFO - Epoch(train) [1][ 500/1567] lr: 9.9902e-02 eta: 0:36:24 time: 0.0868 data_time: 0.0063 memory: 1827 loss: 1.0547 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 1.0547 2022/12/25 15:29:09 - mmengine - INFO - Epoch(train) [1][ 600/1567] lr: 9.9859e-02 eta: 0:36:04 time: 0.0862 data_time: 0.0062 memory: 1827 loss: 0.8760 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.8760 2022/12/25 15:29:17 - mmengine - INFO - Epoch(train) [1][ 700/1567] lr: 9.9808e-02 eta: 0:35:34 time: 0.0835 data_time: 0.0064 memory: 1827 loss: 0.8752 top1_acc: 0.6250 top5_acc: 1.0000 loss_cls: 0.8752 2022/12/25 15:29:26 - mmengine - INFO - Epoch(train) [1][ 800/1567] lr: 9.9750e-02 eta: 0:35:20 time: 0.0857 data_time: 0.0063 memory: 1827 loss: 0.7290 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.7290 2022/12/25 15:29:34 - mmengine - INFO - Epoch(train) [1][ 900/1567] lr: 9.9683e-02 eta: 0:35:07 time: 0.0859 data_time: 0.0063 memory: 1827 loss: 0.5619 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.5619 2022/12/25 15:29:43 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:29:43 - mmengine - INFO - Epoch(train) [1][1000/1567] lr: 9.9609e-02 eta: 0:34:54 time: 0.0823 data_time: 0.0062 memory: 1827 loss: 0.6308 top1_acc: 0.6250 top5_acc: 1.0000 loss_cls: 0.6308 2022/12/25 15:29:51 - mmengine - INFO - Epoch(train) [1][1100/1567] lr: 9.9527e-02 eta: 0:34:40 time: 0.0857 data_time: 0.0067 memory: 1827 loss: 0.6589 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.6589 2022/12/25 15:30:00 - mmengine - INFO - Epoch(train) [1][1200/1567] lr: 9.9437e-02 eta: 0:34:28 time: 0.0846 data_time: 0.0063 memory: 1827 loss: 0.5647 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5647 2022/12/25 15:30:08 - mmengine - INFO - Epoch(train) [1][1300/1567] lr: 9.9339e-02 eta: 0:34:15 time: 0.0838 data_time: 0.0061 memory: 1827 loss: 0.6107 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.6107 2022/12/25 15:30:17 - mmengine - INFO - Epoch(train) [1][1400/1567] lr: 9.9234e-02 eta: 0:34:03 time: 0.0837 data_time: 0.0062 memory: 1827 loss: 0.5330 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.5330 2022/12/25 15:30:25 - mmengine - INFO - Epoch(train) [1][1500/1567] lr: 9.9121e-02 eta: 0:33:51 time: 0.0847 data_time: 0.0062 memory: 1827 loss: 0.5230 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5230 2022/12/25 15:30:31 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:30:31 - mmengine - INFO - Epoch(train) [1][1567/1567] lr: 9.9040e-02 eta: 0:33:46 time: 0.0870 data_time: 0.0060 memory: 1827 loss: 0.7510 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.7510 2022/12/25 15:30:31 - mmengine - INFO - Saving 
checkpoint at 1 epochs 2022/12/25 15:30:34 - mmengine - INFO - Epoch(val) [1][100/129] eta: 0:00:00 time: 0.0263 data_time: 0.0060 memory: 263 2022/12/25 15:30:35 - mmengine - INFO - Epoch(val) [1][129/129] acc/top1: 0.6393 acc/top5: 0.9456 acc/mean1: 0.6396 2022/12/25 15:30:36 - mmengine - INFO - The best checkpoint with 0.6393 acc/top1 at 1 epoch is saved to best_acc/top1_epoch_1.pth. 2022/12/25 15:30:44 - mmengine - INFO - Epoch(train) [2][ 100/1567] lr: 9.8914e-02 eta: 0:33:39 time: 0.0862 data_time: 0.0062 memory: 1827 loss: 0.4939 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4939 2022/12/25 15:30:53 - mmengine - INFO - Epoch(train) [2][ 200/1567] lr: 9.8781e-02 eta: 0:33:32 time: 0.0866 data_time: 0.0062 memory: 1827 loss: 0.4324 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4324 2022/12/25 15:31:02 - mmengine - INFO - Epoch(train) [2][ 300/1567] lr: 9.8639e-02 eta: 0:33:24 time: 0.0864 data_time: 0.0062 memory: 1827 loss: 0.4956 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.4956 2022/12/25 15:31:10 - mmengine - INFO - Epoch(train) [2][ 400/1567] lr: 9.8491e-02 eta: 0:33:14 time: 0.0839 data_time: 0.0062 memory: 1827 loss: 0.4981 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.4981 2022/12/25 15:31:13 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:31:19 - mmengine - INFO - Epoch(train) [2][ 500/1567] lr: 9.8334e-02 eta: 0:33:03 time: 0.0822 data_time: 0.0064 memory: 1827 loss: 0.4625 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4625 2022/12/25 15:31:27 - mmengine - INFO - Epoch(train) [2][ 600/1567] lr: 9.8170e-02 eta: 0:32:51 time: 0.0830 data_time: 0.0063 memory: 1827 loss: 0.4994 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.4994 2022/12/25 15:31:35 - mmengine - INFO - Epoch(train) [2][ 700/1567] lr: 9.7998e-02 eta: 0:32:39 time: 0.0840 data_time: 0.0064 memory: 1827 loss: 0.4118 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4118 2022/12/25 15:31:44 - mmengine - INFO - Epoch(train) [2][ 800/1567] lr: 9.7819e-02 eta: 0:32:28 time: 0.0835 data_time: 0.0063 memory: 1827 loss: 0.4782 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4782 2022/12/25 15:31:52 - mmengine - INFO - Epoch(train) [2][ 900/1567] lr: 9.7632e-02 eta: 0:32:18 time: 0.0857 data_time: 0.0064 memory: 1827 loss: 0.4107 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4107 2022/12/25 15:32:00 - mmengine - INFO - Epoch(train) [2][1000/1567] lr: 9.7438e-02 eta: 0:32:08 time: 0.0828 data_time: 0.0061 memory: 1827 loss: 0.4425 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4425 2022/12/25 15:32:09 - mmengine - INFO - Epoch(train) [2][1100/1567] lr: 9.7236e-02 eta: 0:31:57 time: 0.0837 data_time: 0.0063 memory: 1827 loss: 0.3940 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3940 2022/12/25 15:32:17 - mmengine - INFO - Epoch(train) [2][1200/1567] lr: 9.7027e-02 eta: 0:31:47 time: 0.0837 data_time: 0.0063 memory: 1827 loss: 0.3736 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3736 2022/12/25 15:32:26 - mmengine - INFO - Epoch(train) [2][1300/1567] lr: 9.6810e-02 eta: 0:31:38 time: 0.0837 data_time: 0.0064 memory: 1827 loss: 0.4140 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4140 2022/12/25 15:32:34 - mmengine - INFO - Epoch(train) [2][1400/1567] lr: 9.6587e-02 eta: 0:31:28 time: 0.0838 data_time: 0.0063 memory: 1827 loss: 0.4197 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4197 2022/12/25 15:32:37 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:32:43 - mmengine - INFO - Epoch(train) [2][1500/1567] lr: 
9.6355e-02 eta: 0:31:19 time: 0.0846 data_time: 0.0063 memory: 1827 loss: 0.3661 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3661 2022/12/25 15:32:48 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:32:48 - mmengine - INFO - Epoch(train) [2][1567/1567] lr: 9.6196e-02 eta: 0:31:13 time: 0.0839 data_time: 0.0062 memory: 1827 loss: 0.5501 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.5501 2022/12/25 15:32:48 - mmengine - INFO - Saving checkpoint at 2 epochs 2022/12/25 15:32:51 - mmengine - INFO - Epoch(val) [2][100/129] eta: 0:00:00 time: 0.0252 data_time: 0.0059 memory: 263 2022/12/25 15:32:52 - mmengine - INFO - Epoch(val) [2][129/129] acc/top1: 0.7194 acc/top5: 0.9597 acc/mean1: 0.7193 2022/12/25 15:32:52 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_1.pth is removed 2022/12/25 15:32:53 - mmengine - INFO - The best checkpoint with 0.7194 acc/top1 at 2 epoch is saved to best_acc/top1_epoch_2.pth. 2022/12/25 15:33:01 - mmengine - INFO - Epoch(train) [3][ 100/1567] lr: 9.5953e-02 eta: 0:31:05 time: 0.0848 data_time: 0.0062 memory: 1827 loss: 0.3538 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3538 2022/12/25 15:33:10 - mmengine - INFO - Epoch(train) [3][ 200/1567] lr: 9.5703e-02 eta: 0:30:56 time: 0.0853 data_time: 0.0063 memory: 1827 loss: 0.3554 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3554 2022/12/25 15:33:18 - mmengine - INFO - Epoch(train) [3][ 300/1567] lr: 9.5445e-02 eta: 0:30:48 time: 0.0859 data_time: 0.0070 memory: 1827 loss: 0.4224 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4224 2022/12/25 15:33:27 - mmengine - INFO - Epoch(train) [3][ 400/1567] lr: 9.5180e-02 eta: 0:30:38 time: 0.0824 data_time: 0.0063 memory: 1827 loss: 0.4156 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4156 2022/12/25 15:33:35 - mmengine - INFO - Epoch(train) [3][ 500/1567] lr: 9.4908e-02 eta: 0:30:28 time: 0.0831 data_time: 0.0063 memory: 1827 loss: 0.3554 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3554 2022/12/25 15:33:43 - mmengine - INFO - Epoch(train) [3][ 600/1567] lr: 9.4629e-02 eta: 0:30:19 time: 0.0835 data_time: 0.0066 memory: 1827 loss: 0.3053 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3053 2022/12/25 15:33:52 - mmengine - INFO - Epoch(train) [3][ 700/1567] lr: 9.4343e-02 eta: 0:30:10 time: 0.0839 data_time: 0.0064 memory: 1827 loss: 0.3515 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3515 2022/12/25 15:34:00 - mmengine - INFO - Epoch(train) [3][ 800/1567] lr: 9.4050e-02 eta: 0:30:02 time: 0.0848 data_time: 0.0078 memory: 1827 loss: 0.3730 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.3730 2022/12/25 15:34:06 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:34:09 - mmengine - INFO - Epoch(train) [3][ 900/1567] lr: 9.3750e-02 eta: 0:29:53 time: 0.0859 data_time: 0.0065 memory: 1827 loss: 0.2809 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2809 2022/12/25 15:34:17 - mmengine - INFO - Epoch(train) [3][1000/1567] lr: 9.3444e-02 eta: 0:29:43 time: 0.0841 data_time: 0.0070 memory: 1827 loss: 0.3276 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.3276 2022/12/25 15:34:26 - mmengine - INFO - Epoch(train) [3][1100/1567] lr: 9.3130e-02 eta: 0:29:35 time: 0.0839 data_time: 0.0067 memory: 1827 loss: 0.3109 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.3109 2022/12/25 15:34:34 - mmengine - INFO - Epoch(train) [3][1200/1567] lr: 9.2810e-02 eta: 
0:29:27 time: 0.0846 data_time: 0.0066 memory: 1827 loss: 0.3606 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3606 2022/12/25 15:34:43 - mmengine - INFO - Epoch(train) [3][1300/1567] lr: 9.2483e-02 eta: 0:29:21 time: 0.0862 data_time: 0.0071 memory: 1827 loss: 0.3967 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3967 2022/12/25 15:34:52 - mmengine - INFO - Epoch(train) [3][1400/1567] lr: 9.2149e-02 eta: 0:29:13 time: 0.0854 data_time: 0.0068 memory: 1827 loss: 0.3302 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3302 2022/12/25 15:35:01 - mmengine - INFO - Epoch(train) [3][1500/1567] lr: 9.1809e-02 eta: 0:29:05 time: 0.0854 data_time: 0.0068 memory: 1827 loss: 0.3277 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3277 2022/12/25 15:35:06 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:35:06 - mmengine - INFO - Epoch(train) [3][1567/1567] lr: 9.1577e-02 eta: 0:28:59 time: 0.0862 data_time: 0.0069 memory: 1827 loss: 0.5439 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.5439 2022/12/25 15:35:06 - mmengine - INFO - Saving checkpoint at 3 epochs 2022/12/25 15:35:09 - mmengine - INFO - Epoch(val) [3][100/129] eta: 0:00:00 time: 0.0264 data_time: 0.0060 memory: 263 2022/12/25 15:35:11 - mmengine - INFO - Epoch(val) [3][129/129] acc/top1: 0.7317 acc/top5: 0.9655 acc/mean1: 0.7316 2022/12/25 15:35:11 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_2.pth is removed 2022/12/25 15:35:11 - mmengine - INFO - The best checkpoint with 0.7317 acc/top1 at 3 epoch is saved to best_acc/top1_epoch_3.pth. 2022/12/25 15:35:19 - mmengine - INFO - Epoch(train) [4][ 100/1567] lr: 9.1226e-02 eta: 0:28:50 time: 0.0836 data_time: 0.0066 memory: 1827 loss: 0.2695 top1_acc: 0.6250 top5_acc: 1.0000 loss_cls: 0.2695 2022/12/25 15:35:28 - mmengine - INFO - Epoch(train) [4][ 200/1567] lr: 9.0868e-02 eta: 0:28:41 time: 0.0832 data_time: 0.0066 memory: 1827 loss: 0.3396 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.3396 2022/12/25 15:35:36 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:35:36 - mmengine - INFO - Epoch(train) [4][ 300/1567] lr: 9.0504e-02 eta: 0:28:32 time: 0.0847 data_time: 0.0072 memory: 1827 loss: 0.3522 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3522 2022/12/25 15:35:45 - mmengine - INFO - Epoch(train) [4][ 400/1567] lr: 9.0133e-02 eta: 0:28:23 time: 0.0849 data_time: 0.0065 memory: 1827 loss: 0.3558 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3558 2022/12/25 15:35:53 - mmengine - INFO - Epoch(train) [4][ 500/1567] lr: 8.9756e-02 eta: 0:28:15 time: 0.0854 data_time: 0.0070 memory: 1827 loss: 0.3341 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3341 2022/12/25 15:36:02 - mmengine - INFO - Epoch(train) [4][ 600/1567] lr: 8.9373e-02 eta: 0:28:06 time: 0.0850 data_time: 0.0065 memory: 1827 loss: 0.3097 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3097 2022/12/25 15:36:10 - mmengine - INFO - Epoch(train) [4][ 700/1567] lr: 8.8984e-02 eta: 0:27:57 time: 0.0849 data_time: 0.0066 memory: 1827 loss: 0.3954 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3954 2022/12/25 15:36:19 - mmengine - INFO - Epoch(train) [4][ 800/1567] lr: 8.8589e-02 eta: 0:27:49 time: 0.0850 data_time: 0.0065 memory: 1827 loss: 0.3563 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3563 2022/12/25 15:36:27 - mmengine - INFO - Epoch(train) [4][ 900/1567] lr: 8.8187e-02 eta: 0:27:40 time: 0.0851 
data_time: 0.0065 memory: 1827 loss: 0.3743 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.3743 2022/12/25 15:36:36 - mmengine - INFO - Epoch(train) [4][1000/1567] lr: 8.7780e-02 eta: 0:27:32 time: 0.0871 data_time: 0.0068 memory: 1827 loss: 0.2522 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2522 2022/12/25 15:36:44 - mmengine - INFO - Epoch(train) [4][1100/1567] lr: 8.7367e-02 eta: 0:27:23 time: 0.0845 data_time: 0.0064 memory: 1827 loss: 0.2683 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2683 2022/12/25 15:36:53 - mmengine - INFO - Epoch(train) [4][1200/1567] lr: 8.6947e-02 eta: 0:27:14 time: 0.0824 data_time: 0.0065 memory: 1827 loss: 0.3282 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3282 2022/12/25 15:37:01 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:37:01 - mmengine - INFO - Epoch(train) [4][1300/1567] lr: 8.6522e-02 eta: 0:27:05 time: 0.0828 data_time: 0.0064 memory: 1827 loss: 0.3352 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3352 2022/12/25 15:37:10 - mmengine - INFO - Epoch(train) [4][1400/1567] lr: 8.6092e-02 eta: 0:26:56 time: 0.0856 data_time: 0.0065 memory: 1827 loss: 0.3585 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3585 2022/12/25 15:37:18 - mmengine - INFO - Epoch(train) [4][1500/1567] lr: 8.5655e-02 eta: 0:26:48 time: 0.0858 data_time: 0.0065 memory: 1827 loss: 0.2669 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.2669 2022/12/25 15:37:24 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:37:24 - mmengine - INFO - Epoch(train) [4][1567/1567] lr: 8.5360e-02 eta: 0:26:42 time: 0.0830 data_time: 0.0063 memory: 1827 loss: 0.5162 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.5162 2022/12/25 15:37:24 - mmengine - INFO - Saving checkpoint at 4 epochs 2022/12/25 15:37:27 - mmengine - INFO - Epoch(val) [4][100/129] eta: 0:00:00 time: 0.0278 data_time: 0.0062 memory: 263 2022/12/25 15:37:28 - mmengine - INFO - Epoch(val) [4][129/129] acc/top1: 0.6455 acc/top5: 0.9224 acc/mean1: 0.6455 2022/12/25 15:37:37 - mmengine - INFO - Epoch(train) [5][ 100/1567] lr: 8.4914e-02 eta: 0:26:34 time: 0.0860 data_time: 0.0066 memory: 1827 loss: 0.2989 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2989 2022/12/25 15:37:45 - mmengine - INFO - Epoch(train) [5][ 200/1567] lr: 8.4463e-02 eta: 0:26:25 time: 0.0852 data_time: 0.0066 memory: 1827 loss: 0.2364 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2364 2022/12/25 15:37:54 - mmengine - INFO - Epoch(train) [5][ 300/1567] lr: 8.4006e-02 eta: 0:26:17 time: 0.0866 data_time: 0.0065 memory: 1827 loss: 0.2561 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2561 2022/12/25 15:38:02 - mmengine - INFO - Epoch(train) [5][ 400/1567] lr: 8.3544e-02 eta: 0:26:09 time: 0.0859 data_time: 0.0065 memory: 1827 loss: 0.2954 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2954 2022/12/25 15:38:11 - mmengine - INFO - Epoch(train) [5][ 500/1567] lr: 8.3077e-02 eta: 0:26:00 time: 0.0840 data_time: 0.0065 memory: 1827 loss: 0.2900 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2900 2022/12/25 15:38:19 - mmengine - INFO - Epoch(train) [5][ 600/1567] lr: 8.2605e-02 eta: 0:25:51 time: 0.0824 data_time: 0.0065 memory: 1827 loss: 0.3034 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3034 2022/12/25 15:38:28 - mmengine - INFO - Epoch(train) [5][ 700/1567] lr: 8.2127e-02 eta: 0:25:42 time: 0.0834 data_time: 0.0066 memory: 1827 loss: 0.2808 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2808 2022/12/25 15:38:30 - mmengine - INFO - Exp name: 
stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:38:36 - mmengine - INFO - Epoch(train) [5][ 800/1567] lr: 8.1645e-02 eta: 0:25:33 time: 0.0876 data_time: 0.0067 memory: 1827 loss: 0.3018 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3018 2022/12/25 15:38:44 - mmengine - INFO - Epoch(train) [5][ 900/1567] lr: 8.1157e-02 eta: 0:25:24 time: 0.0834 data_time: 0.0063 memory: 1827 loss: 0.3102 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3102 2022/12/25 15:38:53 - mmengine - INFO - Epoch(train) [5][1000/1567] lr: 8.0665e-02 eta: 0:25:16 time: 0.0843 data_time: 0.0067 memory: 1827 loss: 0.3344 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3344 2022/12/25 15:39:01 - mmengine - INFO - Epoch(train) [5][1100/1567] lr: 8.0167e-02 eta: 0:25:07 time: 0.0851 data_time: 0.0072 memory: 1827 loss: 0.2680 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2680 2022/12/25 15:39:10 - mmengine - INFO - Epoch(train) [5][1200/1567] lr: 7.9665e-02 eta: 0:24:59 time: 0.0850 data_time: 0.0067 memory: 1827 loss: 0.3388 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3388 2022/12/25 15:39:18 - mmengine - INFO - Epoch(train) [5][1300/1567] lr: 7.9159e-02 eta: 0:24:50 time: 0.0824 data_time: 0.0065 memory: 1827 loss: 0.2616 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2616 2022/12/25 15:39:27 - mmengine - INFO - Epoch(train) [5][1400/1567] lr: 7.8647e-02 eta: 0:24:41 time: 0.0833 data_time: 0.0065 memory: 1827 loss: 0.3080 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3080 2022/12/25 15:39:35 - mmengine - INFO - Epoch(train) [5][1500/1567] lr: 7.8132e-02 eta: 0:24:32 time: 0.0839 data_time: 0.0064 memory: 1827 loss: 0.2976 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2976 2022/12/25 15:39:41 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:39:41 - mmengine - INFO - Epoch(train) [5][1567/1567] lr: 7.7784e-02 eta: 0:24:27 time: 0.0859 data_time: 0.0066 memory: 1827 loss: 0.4517 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.4517 2022/12/25 15:39:41 - mmengine - INFO - Saving checkpoint at 5 epochs 2022/12/25 15:39:44 - mmengine - INFO - Epoch(val) [5][100/129] eta: 0:00:00 time: 0.0266 data_time: 0.0059 memory: 263 2022/12/25 15:39:45 - mmengine - INFO - Epoch(val) [5][129/129] acc/top1: 0.6927 acc/top5: 0.9548 acc/mean1: 0.6927 2022/12/25 15:39:53 - mmengine - INFO - Epoch(train) [6][ 100/1567] lr: 7.7261e-02 eta: 0:24:18 time: 0.0830 data_time: 0.0065 memory: 1827 loss: 0.2656 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2656 2022/12/25 15:39:59 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:40:01 - mmengine - INFO - Epoch(train) [6][ 200/1567] lr: 7.6733e-02 eta: 0:24:09 time: 0.0830 data_time: 0.0065 memory: 1827 loss: 0.2231 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2231 2022/12/25 15:40:10 - mmengine - INFO - Epoch(train) [6][ 300/1567] lr: 7.6202e-02 eta: 0:24:00 time: 0.0839 data_time: 0.0066 memory: 1827 loss: 0.2575 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2575 2022/12/25 15:40:18 - mmengine - INFO - Epoch(train) [6][ 400/1567] lr: 7.5666e-02 eta: 0:23:51 time: 0.0830 data_time: 0.0065 memory: 1827 loss: 0.2238 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2238 2022/12/25 15:40:27 - mmengine - INFO - Epoch(train) [6][ 500/1567] lr: 7.5126e-02 eta: 0:23:42 time: 0.0828 data_time: 0.0066 memory: 1827 loss: 0.2694 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.2694 2022/12/25 15:40:35 - mmengine - INFO - Epoch(train) [6][ 600/1567] lr: 7.4583e-02 eta: 
0:23:34 time: 0.0862 data_time: 0.0072 memory: 1827 loss: 0.2547 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2547 2022/12/25 15:40:43 - mmengine - INFO - Epoch(train) [6][ 700/1567] lr: 7.4035e-02 eta: 0:23:25 time: 0.0835 data_time: 0.0064 memory: 1827 loss: 0.2442 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2442 2022/12/25 15:40:52 - mmengine - INFO - Epoch(train) [6][ 800/1567] lr: 7.3484e-02 eta: 0:23:17 time: 0.0843 data_time: 0.0067 memory: 1827 loss: 0.1888 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1888 2022/12/25 15:41:00 - mmengine - INFO - Epoch(train) [6][ 900/1567] lr: 7.2929e-02 eta: 0:23:08 time: 0.0834 data_time: 0.0065 memory: 1827 loss: 0.2316 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2316 2022/12/25 15:41:09 - mmengine - INFO - Epoch(train) [6][1000/1567] lr: 7.2371e-02 eta: 0:22:59 time: 0.0831 data_time: 0.0064 memory: 1827 loss: 0.2324 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.2324 2022/12/25 15:41:17 - mmengine - INFO - Epoch(train) [6][1100/1567] lr: 7.1809e-02 eta: 0:22:51 time: 0.0840 data_time: 0.0066 memory: 1827 loss: 0.2107 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2107 2022/12/25 15:41:23 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:41:26 - mmengine - INFO - Epoch(train) [6][1200/1567] lr: 7.1243e-02 eta: 0:22:42 time: 0.0864 data_time: 0.0065 memory: 1827 loss: 0.2355 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2355 2022/12/25 15:41:35 - mmengine - INFO - Epoch(train) [6][1300/1567] lr: 7.0674e-02 eta: 0:22:34 time: 0.0863 data_time: 0.0066 memory: 1827 loss: 0.2302 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2302 2022/12/25 15:41:43 - mmengine - INFO - Epoch(train) [6][1400/1567] lr: 7.0102e-02 eta: 0:22:26 time: 0.0877 data_time: 0.0111 memory: 1827 loss: 0.2281 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2281 2022/12/25 15:41:51 - mmengine - INFO - Epoch(train) [6][1500/1567] lr: 6.9527e-02 eta: 0:22:17 time: 0.0849 data_time: 0.0066 memory: 1827 loss: 0.2189 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2189 2022/12/25 15:41:57 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:41:57 - mmengine - INFO - Epoch(train) [6][1567/1567] lr: 6.9140e-02 eta: 0:22:11 time: 0.0830 data_time: 0.0065 memory: 1827 loss: 0.4053 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.4053 2022/12/25 15:41:57 - mmengine - INFO - Saving checkpoint at 6 epochs 2022/12/25 15:42:00 - mmengine - INFO - Epoch(val) [6][100/129] eta: 0:00:00 time: 0.0262 data_time: 0.0058 memory: 263 2022/12/25 15:42:01 - mmengine - INFO - Epoch(val) [6][129/129] acc/top1: 0.8256 acc/top5: 0.9814 acc/mean1: 0.8255 2022/12/25 15:42:01 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_3.pth is removed 2022/12/25 15:42:01 - mmengine - INFO - The best checkpoint with 0.8256 acc/top1 at 6 epoch is saved to best_acc/top1_epoch_6.pth. 
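The validation lines report acc/top1, acc/top5 and acc/mean1 (mean per-class accuracy) accumulated over the 129 validation iterations. A minimal sketch of how such numbers can be computed from stacked class scores and labels; top_k_accuracy and mean_class_accuracy below are hypothetical, simplified stand-ins for the configured AccMetric, not its actual source.

import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    # scores: (N, num_classes) class scores; labels: (N,) ground-truth class indices.
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k highest scores per sample
    return float(np.mean([label in row for row, label in zip(topk, labels)]))

def mean_class_accuracy(scores: np.ndarray, labels: np.ndarray) -> float:
    # Average of per-class recall, as reported under acc/mean1.
    pred = scores.argmax(axis=1)
    per_class = [np.mean(pred[labels == c] == c) for c in np.unique(labels)]
    return float(np.mean(per_class))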
2022/12/25 15:42:10 - mmengine - INFO - Epoch(train) [7][ 100/1567] lr: 6.8560e-02 eta: 0:22:02 time: 0.0831 data_time: 0.0066 memory: 1827 loss: 0.2589 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2589 2022/12/25 15:42:18 - mmengine - INFO - Epoch(train) [7][ 200/1567] lr: 6.7976e-02 eta: 0:21:54 time: 0.0827 data_time: 0.0075 memory: 1827 loss: 0.2119 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2119 2022/12/25 15:42:26 - mmengine - INFO - Epoch(train) [7][ 300/1567] lr: 6.7390e-02 eta: 0:21:45 time: 0.0867 data_time: 0.0086 memory: 1827 loss: 0.2363 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2363 2022/12/25 15:42:35 - mmengine - INFO - Epoch(train) [7][ 400/1567] lr: 6.6802e-02 eta: 0:21:37 time: 0.0841 data_time: 0.0065 memory: 1827 loss: 0.1969 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1969 2022/12/25 15:42:44 - mmengine - INFO - Epoch(train) [7][ 500/1567] lr: 6.6210e-02 eta: 0:21:28 time: 0.0853 data_time: 0.0080 memory: 1827 loss: 0.2299 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2299 2022/12/25 15:42:52 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:42:52 - mmengine - INFO - Epoch(train) [7][ 600/1567] lr: 6.5616e-02 eta: 0:21:20 time: 0.0842 data_time: 0.0068 memory: 1827 loss: 0.2243 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2243 2022/12/25 15:43:00 - mmengine - INFO - Epoch(train) [7][ 700/1567] lr: 6.5020e-02 eta: 0:21:11 time: 0.0839 data_time: 0.0065 memory: 1827 loss: 0.1771 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1771 2022/12/25 15:43:09 - mmengine - INFO - Epoch(train) [7][ 800/1567] lr: 6.4421e-02 eta: 0:21:02 time: 0.0834 data_time: 0.0066 memory: 1827 loss: 0.2264 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2264 2022/12/25 15:43:17 - mmengine - INFO - Epoch(train) [7][ 900/1567] lr: 6.3820e-02 eta: 0:20:54 time: 0.0840 data_time: 0.0068 memory: 1827 loss: 0.2017 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2017 2022/12/25 15:43:26 - mmengine - INFO - Epoch(train) [7][1000/1567] lr: 6.3217e-02 eta: 0:20:45 time: 0.0843 data_time: 0.0070 memory: 1827 loss: 0.1887 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1887 2022/12/25 15:43:34 - mmengine - INFO - Epoch(train) [7][1100/1567] lr: 6.2612e-02 eta: 0:20:37 time: 0.0842 data_time: 0.0066 memory: 1827 loss: 0.1911 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1911 2022/12/25 15:43:43 - mmengine - INFO - Epoch(train) [7][1200/1567] lr: 6.2005e-02 eta: 0:20:28 time: 0.0850 data_time: 0.0074 memory: 1827 loss: 0.1602 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1602 2022/12/25 15:43:51 - mmengine - INFO - Epoch(train) [7][1300/1567] lr: 6.1396e-02 eta: 0:20:20 time: 0.0862 data_time: 0.0068 memory: 1827 loss: 0.1477 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1477 2022/12/25 15:44:00 - mmengine - INFO - Epoch(train) [7][1400/1567] lr: 6.0785e-02 eta: 0:20:11 time: 0.0853 data_time: 0.0067 memory: 1827 loss: 0.1967 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1967 2022/12/25 15:44:08 - mmengine - INFO - Epoch(train) [7][1500/1567] lr: 6.0172e-02 eta: 0:20:03 time: 0.0842 data_time: 0.0072 memory: 1827 loss: 0.2061 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.2061 2022/12/25 15:44:14 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:44:14 - mmengine - INFO - Epoch(train) [7][1567/1567] lr: 5.9761e-02 eta: 0:19:57 time: 0.0837 data_time: 0.0073 memory: 1827 loss: 0.4263 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.4263 2022/12/25 15:44:14 - mmengine - INFO - Saving 
checkpoint at 7 epochs 2022/12/25 15:44:17 - mmengine - INFO - Epoch(val) [7][100/129] eta: 0:00:00 time: 0.0268 data_time: 0.0062 memory: 263 2022/12/25 15:44:18 - mmengine - INFO - Epoch(val) [7][129/129] acc/top1: 0.8198 acc/top5: 0.9811 acc/mean1: 0.8198 2022/12/25 15:44:21 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:44:27 - mmengine - INFO - Epoch(train) [8][ 100/1567] lr: 5.9145e-02 eta: 0:19:49 time: 0.0874 data_time: 0.0068 memory: 1827 loss: 0.1862 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1862 2022/12/25 15:44:35 - mmengine - INFO - Epoch(train) [8][ 200/1567] lr: 5.8529e-02 eta: 0:19:41 time: 0.0849 data_time: 0.0066 memory: 1827 loss: 0.1958 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1958 2022/12/25 15:44:44 - mmengine - INFO - Epoch(train) [8][ 300/1567] lr: 5.7911e-02 eta: 0:19:32 time: 0.0833 data_time: 0.0066 memory: 1827 loss: 0.2196 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2196 2022/12/25 15:44:52 - mmengine - INFO - Epoch(train) [8][ 400/1567] lr: 5.7292e-02 eta: 0:19:23 time: 0.0845 data_time: 0.0065 memory: 1827 loss: 0.1857 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1857 2022/12/25 15:45:00 - mmengine - INFO - Epoch(train) [8][ 500/1567] lr: 5.6671e-02 eta: 0:19:15 time: 0.0837 data_time: 0.0065 memory: 1827 loss: 0.1804 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1804 2022/12/25 15:45:09 - mmengine - INFO - Epoch(train) [8][ 600/1567] lr: 5.6050e-02 eta: 0:19:06 time: 0.0851 data_time: 0.0069 memory: 1827 loss: 0.1570 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1570 2022/12/25 15:45:17 - mmengine - INFO - Epoch(train) [8][ 700/1567] lr: 5.5427e-02 eta: 0:18:58 time: 0.0842 data_time: 0.0064 memory: 1827 loss: 0.1448 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1448 2022/12/25 15:45:26 - mmengine - INFO - Epoch(train) [8][ 800/1567] lr: 5.4804e-02 eta: 0:18:49 time: 0.0841 data_time: 0.0071 memory: 1827 loss: 0.1786 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1786 2022/12/25 15:45:34 - mmengine - INFO - Epoch(train) [8][ 900/1567] lr: 5.4180e-02 eta: 0:18:41 time: 0.0861 data_time: 0.0063 memory: 1827 loss: 0.2403 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.2403 2022/12/25 15:45:43 - mmengine - INFO - Epoch(train) [8][1000/1567] lr: 5.3556e-02 eta: 0:18:32 time: 0.0825 data_time: 0.0064 memory: 1827 loss: 0.1665 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1665 2022/12/25 15:45:45 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:45:51 - mmengine - INFO - Epoch(train) [8][1100/1567] lr: 5.2930e-02 eta: 0:18:23 time: 0.0850 data_time: 0.0068 memory: 1827 loss: 0.1505 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1505 2022/12/25 15:46:00 - mmengine - INFO - Epoch(train) [8][1200/1567] lr: 5.2305e-02 eta: 0:18:15 time: 0.0846 data_time: 0.0070 memory: 1827 loss: 0.1816 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1816 2022/12/25 15:46:08 - mmengine - INFO - Epoch(train) [8][1300/1567] lr: 5.1679e-02 eta: 0:18:06 time: 0.0840 data_time: 0.0069 memory: 1827 loss: 0.2475 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2475 2022/12/25 15:46:16 - mmengine - INFO - Epoch(train) [8][1400/1567] lr: 5.1052e-02 eta: 0:17:58 time: 0.0858 data_time: 0.0068 memory: 1827 loss: 0.1294 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1294 2022/12/25 15:46:25 - mmengine - INFO - Epoch(train) [8][1500/1567] lr: 5.0426e-02 eta: 0:17:49 time: 0.0836 data_time: 0.0063 memory: 1827 loss: 0.1914 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1914 
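The lr column follows the configured per-iteration cosine annealing (base lr 0.1, eta_min 0, 16 epochs of 1567 iterations each). A small worked sketch under that assumption reproduces the logged values; cosine_lr is a hypothetical helper, and mmengine's own scheduler may differ slightly in bookkeeping (e.g. 5.0006e-02 in the log vs. the ideal 0.05 at the halfway point).

import math

BASE_LR, ETA_MIN = 0.1, 0.0
ITERS_PER_EPOCH, MAX_EPOCHS = 1567, 16
TOTAL_ITERS = ITERS_PER_EPOCH * MAX_EPOCHS  # 25072 iterations over the full run

def cosine_lr(cur_iter: int) -> float:
    # Standard cosine annealing from BASE_LR down to ETA_MIN over TOTAL_ITERS.
    return ETA_MIN + 0.5 * (BASE_LR - ETA_MIN) * (1 + math.cos(math.pi * cur_iter / TOTAL_ITERS))

print(round(cosine_lr(100), 6))                  # ~0.099996, cf. "[1][ 100/1567] lr: 9.9996e-02"
print(round(cosine_lr(8 * ITERS_PER_EPOCH), 6))  # 0.05, cf. "[8][1567/1567] lr: 5.0006e-02"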
2022/12/25 15:46:30 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:46:30 - mmengine - INFO - Epoch(train) [8][1567/1567] lr: 5.0006e-02 eta: 0:17:43 time: 0.0827 data_time: 0.0063 memory: 1827 loss: 0.3492 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3492 2022/12/25 15:46:30 - mmengine - INFO - Saving checkpoint at 8 epochs 2022/12/25 15:46:34 - mmengine - INFO - Epoch(val) [8][100/129] eta: 0:00:00 time: 0.0259 data_time: 0.0058 memory: 263 2022/12/25 15:46:35 - mmengine - INFO - Epoch(val) [8][129/129] acc/top1: 0.6392 acc/top5: 0.8933 acc/mean1: 0.6392 2022/12/25 15:46:43 - mmengine - INFO - Epoch(train) [9][ 100/1567] lr: 4.9380e-02 eta: 0:17:35 time: 0.0824 data_time: 0.0065 memory: 1827 loss: 0.1562 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1562 2022/12/25 15:46:51 - mmengine - INFO - Epoch(train) [9][ 200/1567] lr: 4.8753e-02 eta: 0:17:26 time: 0.0857 data_time: 0.0066 memory: 1827 loss: 0.1331 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1331 2022/12/25 15:47:00 - mmengine - INFO - Epoch(train) [9][ 300/1567] lr: 4.8127e-02 eta: 0:17:18 time: 0.0828 data_time: 0.0064 memory: 1827 loss: 0.1283 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1283 2022/12/25 15:47:08 - mmengine - INFO - Epoch(train) [9][ 400/1567] lr: 4.7501e-02 eta: 0:17:09 time: 0.0865 data_time: 0.0064 memory: 1827 loss: 0.1662 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1662 2022/12/25 15:47:14 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:47:17 - mmengine - INFO - Epoch(train) [9][ 500/1567] lr: 4.6876e-02 eta: 0:17:00 time: 0.0839 data_time: 0.0065 memory: 1827 loss: 0.1605 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.1605 2022/12/25 15:47:25 - mmengine - INFO - Epoch(train) [9][ 600/1567] lr: 4.6251e-02 eta: 0:16:52 time: 0.0838 data_time: 0.0065 memory: 1827 loss: 0.1318 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1318 2022/12/25 15:47:33 - mmengine - INFO - Epoch(train) [9][ 700/1567] lr: 4.5626e-02 eta: 0:16:43 time: 0.0829 data_time: 0.0064 memory: 1827 loss: 0.1410 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1410 2022/12/25 15:47:42 - mmengine - INFO - Epoch(train) [9][ 800/1567] lr: 4.5003e-02 eta: 0:16:35 time: 0.0835 data_time: 0.0065 memory: 1827 loss: 0.1704 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1704 2022/12/25 15:47:50 - mmengine - INFO - Epoch(train) [9][ 900/1567] lr: 4.4380e-02 eta: 0:16:26 time: 0.0848 data_time: 0.0065 memory: 1827 loss: 0.1551 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.1551 2022/12/25 15:47:59 - mmengine - INFO - Epoch(train) [9][1000/1567] lr: 4.3757e-02 eta: 0:16:18 time: 0.0830 data_time: 0.0063 memory: 1827 loss: 0.1301 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1301 2022/12/25 15:48:07 - mmengine - INFO - Epoch(train) [9][1100/1567] lr: 4.3136e-02 eta: 0:16:09 time: 0.0838 data_time: 0.0065 memory: 1827 loss: 0.1240 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1240 2022/12/25 15:48:15 - mmengine - INFO - Epoch(train) [9][1200/1567] lr: 4.2516e-02 eta: 0:16:00 time: 0.0843 data_time: 0.0073 memory: 1827 loss: 0.1320 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1320 2022/12/25 15:48:24 - mmengine - INFO - Epoch(train) [9][1300/1567] lr: 4.1897e-02 eta: 0:15:52 time: 0.0863 data_time: 0.0065 memory: 1827 loss: 0.1369 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1369 2022/12/25 15:48:33 - mmengine - INFO - Epoch(train) [9][1400/1567] lr: 4.1280e-02 eta: 0:15:44 time: 0.0862 data_time: 0.0066 memory: 1827 loss: 0.1293 
top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1293 2022/12/25 15:48:38 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:48:41 - mmengine - INFO - Epoch(train) [9][1500/1567] lr: 4.0664e-02 eta: 0:15:35 time: 0.0835 data_time: 0.0065 memory: 1827 loss: 0.1795 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.1795 2022/12/25 15:48:47 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:48:47 - mmengine - INFO - Epoch(train) [9][1567/1567] lr: 4.0252e-02 eta: 0:15:30 time: 0.0837 data_time: 0.0067 memory: 1827 loss: 0.3360 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3360 2022/12/25 15:48:47 - mmengine - INFO - Saving checkpoint at 9 epochs 2022/12/25 15:48:50 - mmengine - INFO - Epoch(val) [9][100/129] eta: 0:00:00 time: 0.0262 data_time: 0.0059 memory: 263 2022/12/25 15:48:51 - mmengine - INFO - Epoch(val) [9][129/129] acc/top1: 0.7534 acc/top5: 0.9669 acc/mean1: 0.7532 2022/12/25 15:48:59 - mmengine - INFO - Epoch(train) [10][ 100/1567] lr: 3.9638e-02 eta: 0:15:21 time: 0.0859 data_time: 0.0066 memory: 1827 loss: 0.1146 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1146 2022/12/25 15:49:08 - mmengine - INFO - Epoch(train) [10][ 200/1567] lr: 3.9026e-02 eta: 0:15:13 time: 0.0836 data_time: 0.0066 memory: 1827 loss: 0.0992 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0992 2022/12/25 15:49:16 - mmengine - INFO - Epoch(train) [10][ 300/1567] lr: 3.8415e-02 eta: 0:15:04 time: 0.0831 data_time: 0.0066 memory: 1827 loss: 0.1269 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1269 2022/12/25 15:49:25 - mmengine - INFO - Epoch(train) [10][ 400/1567] lr: 3.7807e-02 eta: 0:14:56 time: 0.0845 data_time: 0.0064 memory: 1827 loss: 0.0800 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0800 2022/12/25 15:49:33 - mmengine - INFO - Epoch(train) [10][ 500/1567] lr: 3.7200e-02 eta: 0:14:47 time: 0.0845 data_time: 0.0065 memory: 1827 loss: 0.1201 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1201 2022/12/25 15:49:42 - mmengine - INFO - Epoch(train) [10][ 600/1567] lr: 3.6596e-02 eta: 0:14:39 time: 0.0845 data_time: 0.0065 memory: 1827 loss: 0.1154 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1154 2022/12/25 15:49:50 - mmengine - INFO - Epoch(train) [10][ 700/1567] lr: 3.5993e-02 eta: 0:14:30 time: 0.0863 data_time: 0.0065 memory: 1827 loss: 0.1169 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1169 2022/12/25 15:49:59 - mmengine - INFO - Epoch(train) [10][ 800/1567] lr: 3.5393e-02 eta: 0:14:22 time: 0.0839 data_time: 0.0066 memory: 1827 loss: 0.1201 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1201 2022/12/25 15:50:07 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:50:07 - mmengine - INFO - Epoch(train) [10][ 900/1567] lr: 3.4795e-02 eta: 0:14:13 time: 0.0855 data_time: 0.0065 memory: 1827 loss: 0.1067 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1067 2022/12/25 15:50:15 - mmengine - INFO - Epoch(train) [10][1000/1567] lr: 3.4199e-02 eta: 0:14:05 time: 0.0844 data_time: 0.0066 memory: 1827 loss: 0.0871 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0871 2022/12/25 15:50:24 - mmengine - INFO - Epoch(train) [10][1100/1567] lr: 3.3606e-02 eta: 0:13:56 time: 0.0844 data_time: 0.0066 memory: 1827 loss: 0.0913 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.0913 2022/12/25 15:50:32 - mmengine - INFO - Epoch(train) [10][1200/1567] lr: 3.3015e-02 eta: 0:13:48 time: 0.0838 data_time: 0.0066 memory: 1827 loss: 0.1058 top1_acc: 1.0000 top5_acc: 
1.0000 loss_cls: 0.1058 2022/12/25 15:50:41 - mmengine - INFO - Epoch(train) [10][1300/1567] lr: 3.2428e-02 eta: 0:13:39 time: 0.0834 data_time: 0.0064 memory: 1827 loss: 0.1022 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1022 2022/12/25 15:50:49 - mmengine - INFO - Epoch(train) [10][1400/1567] lr: 3.1842e-02 eta: 0:13:31 time: 0.0841 data_time: 0.0065 memory: 1827 loss: 0.0802 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0802 2022/12/25 15:50:57 - mmengine - INFO - Epoch(train) [10][1500/1567] lr: 3.1260e-02 eta: 0:13:22 time: 0.0834 data_time: 0.0066 memory: 1827 loss: 0.1340 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1340 2022/12/25 15:51:03 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:51:03 - mmengine - INFO - Epoch(train) [10][1567/1567] lr: 3.0872e-02 eta: 0:13:16 time: 0.0830 data_time: 0.0062 memory: 1827 loss: 0.2866 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.2866 2022/12/25 15:51:03 - mmengine - INFO - Saving checkpoint at 10 epochs 2022/12/25 15:51:06 - mmengine - INFO - Epoch(val) [10][100/129] eta: 0:00:00 time: 0.0299 data_time: 0.0066 memory: 263 2022/12/25 15:51:07 - mmengine - INFO - Epoch(val) [10][129/129] acc/top1: 0.8522 acc/top5: 0.9853 acc/mean1: 0.8521 2022/12/25 15:51:07 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_6.pth is removed 2022/12/25 15:51:08 - mmengine - INFO - The best checkpoint with 0.8522 acc/top1 at 10 epoch is saved to best_acc/top1_epoch_10.pth. 2022/12/25 15:51:16 - mmengine - INFO - Epoch(train) [11][ 100/1567] lr: 3.0294e-02 eta: 0:13:08 time: 0.0860 data_time: 0.0064 memory: 1827 loss: 0.0727 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0727 2022/12/25 15:51:25 - mmengine - INFO - Epoch(train) [11][ 200/1567] lr: 2.9720e-02 eta: 0:12:59 time: 0.0855 data_time: 0.0067 memory: 1827 loss: 0.1105 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1105 2022/12/25 15:51:33 - mmengine - INFO - Epoch(train) [11][ 300/1567] lr: 2.9149e-02 eta: 0:12:51 time: 0.0853 data_time: 0.0074 memory: 1827 loss: 0.0706 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0706 2022/12/25 15:51:36 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:51:42 - mmengine - INFO - Epoch(train) [11][ 400/1567] lr: 2.8581e-02 eta: 0:12:43 time: 0.0874 data_time: 0.0065 memory: 1827 loss: 0.1127 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1127 2022/12/25 15:51:51 - mmengine - INFO - Epoch(train) [11][ 500/1567] lr: 2.8017e-02 eta: 0:12:34 time: 0.0844 data_time: 0.0064 memory: 1827 loss: 0.0906 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0906 2022/12/25 15:51:59 - mmengine - INFO - Epoch(train) [11][ 600/1567] lr: 2.7456e-02 eta: 0:12:26 time: 0.0844 data_time: 0.0066 memory: 1827 loss: 0.0958 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0958 2022/12/25 15:52:08 - mmengine - INFO - Epoch(train) [11][ 700/1567] lr: 2.6898e-02 eta: 0:12:17 time: 0.0860 data_time: 0.0065 memory: 1827 loss: 0.0736 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0736 2022/12/25 15:52:16 - mmengine - INFO - Epoch(train) [11][ 800/1567] lr: 2.6345e-02 eta: 0:12:09 time: 0.0836 data_time: 0.0064 memory: 1827 loss: 0.0619 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0619 2022/12/25 15:52:25 - mmengine - INFO - Epoch(train) [11][ 900/1567] lr: 2.5794e-02 eta: 0:12:01 time: 0.0857 data_time: 0.0068 memory: 1827 loss: 0.0649 top1_acc: 0.9375 top5_acc: 
1.0000 loss_cls: 0.0649 2022/12/25 15:52:33 - mmengine - INFO - Epoch(train) [11][1000/1567] lr: 2.5248e-02 eta: 0:11:52 time: 0.0862 data_time: 0.0065 memory: 1827 loss: 0.0560 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0560 2022/12/25 15:52:42 - mmengine - INFO - Epoch(train) [11][1100/1567] lr: 2.4706e-02 eta: 0:11:44 time: 0.0837 data_time: 0.0065 memory: 1827 loss: 0.0628 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0628 2022/12/25 15:52:50 - mmengine - INFO - Epoch(train) [11][1200/1567] lr: 2.4167e-02 eta: 0:11:35 time: 0.0842 data_time: 0.0066 memory: 1827 loss: 0.0593 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0593 2022/12/25 15:52:59 - mmengine - INFO - Epoch(train) [11][1300/1567] lr: 2.3633e-02 eta: 0:11:27 time: 0.0849 data_time: 0.0064 memory: 1827 loss: 0.0498 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0498 2022/12/25 15:53:01 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:53:07 - mmengine - INFO - Epoch(train) [11][1400/1567] lr: 2.3103e-02 eta: 0:11:18 time: 0.0874 data_time: 0.0065 memory: 1827 loss: 0.0540 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0540 2022/12/25 15:53:16 - mmengine - INFO - Epoch(train) [11][1500/1567] lr: 2.2577e-02 eta: 0:11:10 time: 0.0860 data_time: 0.0065 memory: 1827 loss: 0.0613 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0613 2022/12/25 15:53:22 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736 2022/12/25 15:53:22 - mmengine - INFO - Epoch(train) [11][1567/1567] lr: 2.2227e-02 eta: 0:11:04 time: 0.0872 data_time: 0.0062 memory: 1827 loss: 0.2126 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.2126 2022/12/25 15:53:22 - mmengine - INFO - Saving checkpoint at 11 epochs 2022/12/25 15:53:25 - mmengine - INFO - Epoch(val) [11][100/129] eta: 0:00:00 time: 0.0258 data_time: 0.0058 memory: 263 2022/12/25 15:53:26 - mmengine - INFO - Epoch(val) [11][129/129] acc/top1: 0.8615 acc/top5: 0.9864 acc/mean1: 0.8614 2022/12/25 15:53:26 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_10.pth is removed 2022/12/25 15:53:26 - mmengine - INFO - The best checkpoint with 0.8615 acc/top1 at 11 epoch is saved to best_acc/top1_epoch_11.pth. 
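The pattern above, where the previous best checkpoint is removed and a new best_acc/top1_epoch_N.pth is written whenever acc/top1 improves, is the effect of save_best='auto' keeping only the single best checkpoint. A rough sketch of that bookkeeping follows; BestCheckpointKeeper and save_fn are hypothetical, simplified stand-ins, not the mmengine CheckpointHook.

import os

class BestCheckpointKeeper:
    def __init__(self, work_dir: str):
        self.work_dir = work_dir
        self.best_score = float('-inf')
        self.best_path = None

    def update(self, epoch: int, top1: float, save_fn) -> None:
        # save_fn(path) is assumed to write the current model weights to `path`.
        if top1 <= self.best_score:
            return                                # e.g. epochs 4, 5, 7, 8 and 9 above
        if self.best_path and os.path.exists(self.best_path):
            os.remove(self.best_path)             # "The previous best checkpoint ... is removed"
        self.best_score = top1
        self.best_path = os.path.join(self.work_dir, f'best_acc/top1_epoch_{epoch}.pth')
        save_fn(self.best_path)                   # "The best checkpoint with ... is saved to ..."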
2022/12/25 15:53:35 - mmengine - INFO - Epoch(train) [12][ 100/1567] lr: 2.1708e-02 eta: 0:10:56 time: 0.0888 data_time: 0.0064 memory: 1827 loss: 0.0433 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0433
2022/12/25 15:53:44 - mmengine - INFO - Epoch(train) [12][ 200/1567] lr: 2.1194e-02 eta: 0:10:47 time: 0.0874 data_time: 0.0073 memory: 1827 loss: 0.0369 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0369
2022/12/25 15:53:52 - mmengine - INFO - Epoch(train) [12][ 300/1567] lr: 2.0684e-02 eta: 0:10:39 time: 0.0840 data_time: 0.0064 memory: 1827 loss: 0.0477 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0477
2022/12/25 15:54:01 - mmengine - INFO - Epoch(train) [12][ 400/1567] lr: 2.0179e-02 eta: 0:10:30 time: 0.0836 data_time: 0.0065 memory: 1827 loss: 0.0521 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0521
2022/12/25 15:54:09 - mmengine - INFO - Epoch(train) [12][ 500/1567] lr: 1.9678e-02 eta: 0:10:22 time: 0.0845 data_time: 0.0065 memory: 1827 loss: 0.0350 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0350
2022/12/25 15:54:18 - mmengine - INFO - Epoch(train) [12][ 600/1567] lr: 1.9182e-02 eta: 0:10:13 time: 0.0848 data_time: 0.0065 memory: 1827 loss: 0.0627 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0627
2022/12/25 15:54:26 - mmengine - INFO - Epoch(train) [12][ 700/1567] lr: 1.8691e-02 eta: 0:10:05 time: 0.0858 data_time: 0.0065 memory: 1827 loss: 0.0348 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0348
2022/12/25 15:54:31 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 15:54:35 - mmengine - INFO - Epoch(train) [12][ 800/1567] lr: 1.8205e-02 eta: 0:09:56 time: 0.0841 data_time: 0.0065 memory: 1827 loss: 0.0587 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0587
2022/12/25 15:54:43 - mmengine - INFO - Epoch(train) [12][ 900/1567] lr: 1.7724e-02 eta: 0:09:48 time: 0.0835 data_time: 0.0066 memory: 1827 loss: 0.0462 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0462
2022/12/25 15:54:51 - mmengine - INFO - Epoch(train) [12][1000/1567] lr: 1.7248e-02 eta: 0:09:39 time: 0.0852 data_time: 0.0064 memory: 1827 loss: 0.0269 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0269
2022/12/25 15:55:00 - mmengine - INFO - Epoch(train) [12][1100/1567] lr: 1.6778e-02 eta: 0:09:31 time: 0.0838 data_time: 0.0065 memory: 1827 loss: 0.0163 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0163
2022/12/25 15:55:08 - mmengine - INFO - Epoch(train) [12][1200/1567] lr: 1.6312e-02 eta: 0:09:22 time: 0.0833 data_time: 0.0069 memory: 1827 loss: 0.0232 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0232
2022/12/25 15:55:17 - mmengine - INFO - Epoch(train) [12][1300/1567] lr: 1.5852e-02 eta: 0:09:14 time: 0.0855 data_time: 0.0070 memory: 1827 loss: 0.0335 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0335
2022/12/25 15:55:25 - mmengine - INFO - Epoch(train) [12][1400/1567] lr: 1.5397e-02 eta: 0:09:06 time: 0.0841 data_time: 0.0065 memory: 1827 loss: 0.0235 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0235
2022/12/25 15:55:34 - mmengine - INFO - Epoch(train) [12][1500/1567] lr: 1.4947e-02 eta: 0:08:57 time: 0.0860 data_time: 0.0070 memory: 1827 loss: 0.0290 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0290
2022/12/25 15:55:40 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 15:55:40 - mmengine - INFO - Epoch(train) [12][1567/1567] lr: 1.4649e-02 eta: 0:08:51 time: 0.0844 data_time: 0.0068 memory: 1827 loss: 0.1954 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.1954
2022/12/25 15:55:40 - mmengine - INFO - Saving checkpoint at 12 epochs
2022/12/25 15:55:43 - mmengine - INFO - Epoch(val) [12][100/129] eta: 0:00:00 time: 0.0267 data_time: 0.0059 memory: 263
2022/12/25 15:55:44 - mmengine - INFO - Epoch(val) [12][129/129] acc/top1: 0.8305 acc/top5: 0.9706 acc/mean1: 0.8304
2022/12/25 15:55:52 - mmengine - INFO - Epoch(train) [13][ 100/1567] lr: 1.4209e-02 eta: 0:08:43 time: 0.0844 data_time: 0.0069 memory: 1827 loss: 0.0213 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0213
2022/12/25 15:56:01 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 15:56:01 - mmengine - INFO - Epoch(train) [13][ 200/1567] lr: 1.3774e-02 eta: 0:08:34 time: 0.0838 data_time: 0.0065 memory: 1827 loss: 0.0263 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0263
2022/12/25 15:56:09 - mmengine - INFO - Epoch(train) [13][ 300/1567] lr: 1.3345e-02 eta: 0:08:26 time: 0.0857 data_time: 0.0073 memory: 1827 loss: 0.0241 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0241
2022/12/25 15:56:18 - mmengine - INFO - Epoch(train) [13][ 400/1567] lr: 1.2922e-02 eta: 0:08:17 time: 0.0906 data_time: 0.0064 memory: 1827 loss: 0.0204 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0204
2022/12/25 15:56:26 - mmengine - INFO - Epoch(train) [13][ 500/1567] lr: 1.2505e-02 eta: 0:08:09 time: 0.0845 data_time: 0.0064 memory: 1827 loss: 0.0158 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0158
2022/12/25 15:56:35 - mmengine - INFO - Epoch(train) [13][ 600/1567] lr: 1.2093e-02 eta: 0:08:00 time: 0.0857 data_time: 0.0065 memory: 1827 loss: 0.0285 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0285
2022/12/25 15:56:43 - mmengine - INFO - Epoch(train) [13][ 700/1567] lr: 1.1687e-02 eta: 0:07:52 time: 0.0847 data_time: 0.0072 memory: 1827 loss: 0.0164 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0164
2022/12/25 15:56:52 - mmengine - INFO - Epoch(train) [13][ 800/1567] lr: 1.1288e-02 eta: 0:07:43 time: 0.0852 data_time: 0.0065 memory: 1827 loss: 0.0102 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0102
2022/12/25 15:57:00 - mmengine - INFO - Epoch(train) [13][ 900/1567] lr: 1.0894e-02 eta: 0:07:35 time: 0.0840 data_time: 0.0066 memory: 1827 loss: 0.0216 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0216
2022/12/25 15:57:09 - mmengine - INFO - Epoch(train) [13][1000/1567] lr: 1.0507e-02 eta: 0:07:27 time: 0.0838 data_time: 0.0064 memory: 1827 loss: 0.0219 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0219
2022/12/25 15:57:17 - mmengine - INFO - Epoch(train) [13][1100/1567] lr: 1.0126e-02 eta: 0:07:18 time: 0.0862 data_time: 0.0064 memory: 1827 loss: 0.0176 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0176
2022/12/25 15:57:26 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 15:57:26 - mmengine - INFO - Epoch(train) [13][1200/1567] lr: 9.7512e-03 eta: 0:07:10 time: 0.0852 data_time: 0.0066 memory: 1827 loss: 0.0213 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0213
2022/12/25 15:57:34 - mmengine - INFO - Epoch(train) [13][1300/1567] lr: 9.3826e-03 eta: 0:07:01 time: 0.0828 data_time: 0.0063 memory: 1827 loss: 0.0165 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0165
2022/12/25 15:57:43 - mmengine - INFO - Epoch(train) [13][1400/1567] lr: 9.0204e-03 eta: 0:06:53 time: 0.0837 data_time: 0.0065 memory: 1827 loss: 0.0132 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0132
2022/12/25 15:57:51 - mmengine - INFO - Epoch(train) [13][1500/1567] lr: 8.6647e-03 eta: 0:06:44 time: 0.0858 data_time: 0.0065 memory: 1827 loss: 0.0119 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0119
2022/12/25 15:57:57 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 15:57:57 - mmengine - INFO - Epoch(train) [13][1567/1567] lr: 8.4300e-03 eta: 0:06:38 time: 0.0854 data_time: 0.0063 memory: 1827 loss: 0.2220 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2220
2022/12/25 15:57:57 - mmengine - INFO - Saving checkpoint at 13 epochs
2022/12/25 15:58:00 - mmengine - INFO - Epoch(val) [13][100/129] eta: 0:00:00 time: 0.0258 data_time: 0.0058 memory: 263
2022/12/25 15:58:01 - mmengine - INFO - Epoch(val) [13][129/129] acc/top1: 0.8851 acc/top5: 0.9900 acc/mean1: 0.8851
2022/12/25 15:58:01 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_11.pth is removed
2022/12/25 15:58:01 - mmengine - INFO - The best checkpoint with 0.8851 acc/top1 at 13 epoch is saved to best_acc/top1_epoch_13.pth.
2022/12/25 15:58:10 - mmengine - INFO - Epoch(train) [14][ 100/1567] lr: 8.0851e-03 eta: 0:06:30 time: 0.0840 data_time: 0.0064 memory: 1827 loss: 0.0157 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0157
2022/12/25 15:58:18 - mmengine - INFO - Epoch(train) [14][ 200/1567] lr: 7.7469e-03 eta: 0:06:21 time: 0.0835 data_time: 0.0068 memory: 1827 loss: 0.0161 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0161
2022/12/25 15:58:27 - mmengine - INFO - Epoch(train) [14][ 300/1567] lr: 7.4152e-03 eta: 0:06:13 time: 0.0837 data_time: 0.0065 memory: 1827 loss: 0.0109 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0109
2022/12/25 15:58:35 - mmengine - INFO - Epoch(train) [14][ 400/1567] lr: 7.0902e-03 eta: 0:06:04 time: 0.0843 data_time: 0.0069 memory: 1827 loss: 0.0123 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0123
2022/12/25 15:58:44 - mmengine - INFO - Epoch(train) [14][ 500/1567] lr: 6.7720e-03 eta: 0:05:56 time: 0.0867 data_time: 0.0066 memory: 1827 loss: 0.0121 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0121
2022/12/25 15:58:52 - mmengine - INFO - Epoch(train) [14][ 600/1567] lr: 6.4606e-03 eta: 0:05:47 time: 0.0860 data_time: 0.0066 memory: 1827 loss: 0.0096 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0096
2022/12/25 15:58:55 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 15:59:01 - mmengine - INFO - Epoch(train) [14][ 700/1567] lr: 6.1560e-03 eta: 0:05:39 time: 0.0840 data_time: 0.0064 memory: 1827 loss: 0.0102 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0102
2022/12/25 15:59:10 - mmengine - INFO - Epoch(train) [14][ 800/1567] lr: 5.8582e-03 eta: 0:05:31 time: 0.0976 data_time: 0.0066 memory: 1827 loss: 0.0093 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0093
2022/12/25 15:59:18 - mmengine - INFO - Epoch(train) [14][ 900/1567] lr: 5.5675e-03 eta: 0:05:22 time: 0.0839 data_time: 0.0065 memory: 1827 loss: 0.0113 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0113
2022/12/25 15:59:27 - mmengine - INFO - Epoch(train) [14][1000/1567] lr: 5.2836e-03 eta: 0:05:14 time: 0.0831 data_time: 0.0065 memory: 1827 loss: 0.0104 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0104
2022/12/25 15:59:35 - mmengine - INFO - Epoch(train) [14][1100/1567] lr: 5.0068e-03 eta: 0:05:05 time: 0.0849 data_time: 0.0066 memory: 1827 loss: 0.0095 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0095
2022/12/25 15:59:44 - mmengine - INFO - Epoch(train) [14][1200/1567] lr: 4.7371e-03 eta: 0:04:57 time: 0.0863 data_time: 0.0067 memory: 1827 loss: 0.0158 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0158
2022/12/25 15:59:52 - mmengine - INFO - Epoch(train) [14][1300/1567] lr: 4.4745e-03 eta: 0:04:48 time: 0.0864 data_time: 0.0065 memory: 1827 loss: 0.0108 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0108
2022/12/25 16:00:01 - mmengine - INFO - Epoch(train) [14][1400/1567] lr: 4.2190e-03 eta: 0:04:40 time: 0.0865 data_time: 0.0068 memory: 1827 loss: 0.0076 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0076
2022/12/25 16:00:10 - mmengine - INFO - Epoch(train) [14][1500/1567] lr: 3.9707e-03 eta: 0:04:31 time: 0.0866 data_time: 0.0067 memory: 1827 loss: 0.0127 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0127
2022/12/25 16:00:16 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 16:00:16 - mmengine - INFO - Epoch(train) [14][1567/1567] lr: 3.8084e-03 eta: 0:04:26 time: 0.0849 data_time: 0.0065 memory: 1827 loss: 0.1776 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.1776
2022/12/25 16:00:16 - mmengine - INFO - Saving checkpoint at 14 epochs
2022/12/25 16:00:19 - mmengine - INFO - Epoch(val) [14][100/129] eta: 0:00:00 time: 0.0265 data_time: 0.0060 memory: 263
2022/12/25 16:00:20 - mmengine - INFO - Epoch(val) [14][129/129] acc/top1: 0.8863 acc/top5: 0.9894 acc/mean1: 0.8862
2022/12/25 16:00:20 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_13.pth is removed
2022/12/25 16:00:20 - mmengine - INFO - The best checkpoint with 0.8863 acc/top1 at 14 epoch is saved to best_acc/top1_epoch_14.pth.
2022/12/25 16:00:25 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 16:00:29 - mmengine - INFO - Epoch(train) [15][ 100/1567] lr: 3.5722e-03 eta: 0:04:17 time: 0.0831 data_time: 0.0064 memory: 1827 loss: 0.0103 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0103
2022/12/25 16:00:37 - mmengine - INFO - Epoch(train) [15][ 200/1567] lr: 3.3433e-03 eta: 0:04:09 time: 0.0840 data_time: 0.0068 memory: 1827 loss: 0.0085 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0085
2022/12/25 16:00:45 - mmengine - INFO - Epoch(train) [15][ 300/1567] lr: 3.1217e-03 eta: 0:04:00 time: 0.0837 data_time: 0.0066 memory: 1827 loss: 0.0094 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0094
2022/12/25 16:00:54 - mmengine - INFO - Epoch(train) [15][ 400/1567] lr: 2.9075e-03 eta: 0:03:52 time: 0.0843 data_time: 0.0066 memory: 1827 loss: 0.0166 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0166
2022/12/25 16:01:02 - mmengine - INFO - Epoch(train) [15][ 500/1567] lr: 2.7007e-03 eta: 0:03:43 time: 0.0834 data_time: 0.0066 memory: 1827 loss: 0.0091 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0091
2022/12/25 16:01:11 - mmengine - INFO - Epoch(train) [15][ 600/1567] lr: 2.5013e-03 eta: 0:03:35 time: 0.0829 data_time: 0.0065 memory: 1827 loss: 0.0096 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0096
2022/12/25 16:01:19 - mmengine - INFO - Epoch(train) [15][ 700/1567] lr: 2.3093e-03 eta: 0:03:26 time: 0.0831 data_time: 0.0067 memory: 1827 loss: 0.0124 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0124
2022/12/25 16:01:27 - mmengine - INFO - Epoch(train) [15][ 800/1567] lr: 2.1249e-03 eta: 0:03:18 time: 0.0837 data_time: 0.0067 memory: 1827 loss: 0.0082 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0082
2022/12/25 16:01:36 - mmengine - INFO - Epoch(train) [15][ 900/1567] lr: 1.9479e-03 eta: 0:03:09 time: 0.0859 data_time: 0.0067 memory: 1827 loss: 0.0091 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0091
2022/12/25 16:01:45 - mmengine - INFO - Epoch(train) [15][1000/1567] lr: 1.7785e-03 eta: 0:03:01 time: 0.0866 data_time: 0.0073 memory: 1827 loss: 0.0103 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0103
2022/12/25 16:01:50 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 16:01:53 - mmengine - INFO - Epoch(train) [15][1100/1567] lr: 1.6167e-03 eta: 0:02:52 time: 0.0840 data_time: 0.0067 memory: 1827 loss: 0.0133 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0133
2022/12/25 16:02:02 - mmengine - INFO - Epoch(train) [15][1200/1567] lr: 1.4625e-03 eta: 0:02:44 time: 0.0853 data_time: 0.0067 memory: 1827 loss: 0.0096 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0096
2022/12/25 16:02:10 - mmengine - INFO - Epoch(train) [15][1300/1567] lr: 1.3159e-03 eta: 0:02:35 time: 0.0833 data_time: 0.0072 memory: 1827 loss: 0.0104 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0104
2022/12/25 16:02:19 - mmengine - INFO - Epoch(train) [15][1400/1567] lr: 1.1769e-03 eta: 0:02:27 time: 0.0887 data_time: 0.0082 memory: 1827 loss: 0.0089 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0089
2022/12/25 16:02:28 - mmengine - INFO - Epoch(train) [15][1500/1567] lr: 1.0456e-03 eta: 0:02:18 time: 0.0864 data_time: 0.0077 memory: 1827 loss: 0.0088 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0088
2022/12/25 16:02:33 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 16:02:33 - mmengine - INFO - Epoch(train) [15][1567/1567] lr: 9.6196e-04 eta: 0:02:13 time: 0.0830 data_time: 0.0065 memory: 1827 loss: 0.1919 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.1919
2022/12/25 16:02:33 - mmengine - INFO - Saving checkpoint at 15 epochs
2022/12/25 16:02:36 - mmengine - INFO - Epoch(val) [15][100/129] eta: 0:00:00 time: 0.0284 data_time: 0.0063 memory: 263
2022/12/25 16:02:37 - mmengine - INFO - Epoch(val) [15][129/129] acc/top1: 0.8887 acc/top5: 0.9901 acc/mean1: 0.8886
2022/12/25 16:02:37 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_14.pth is removed
2022/12/25 16:02:38 - mmengine - INFO - The best checkpoint with 0.8887 acc/top1 at 15 epoch is saved to best_acc/top1_epoch_15.pth.
2022/12/25 16:02:46 - mmengine - INFO - Epoch(train) [16][ 100/1567] lr: 8.4351e-04 eta: 0:02:04 time: 0.0841 data_time: 0.0065 memory: 1827 loss: 0.0122 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0122
2022/12/25 16:02:55 - mmengine - INFO - Epoch(train) [16][ 200/1567] lr: 7.3277e-04 eta: 0:01:56 time: 0.0958 data_time: 0.0068 memory: 1827 loss: 0.0088 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0088
2022/12/25 16:03:04 - mmengine - INFO - Epoch(train) [16][ 300/1567] lr: 6.2978e-04 eta: 0:01:47 time: 0.0866 data_time: 0.0066 memory: 1827 loss: 0.0083 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0083
2022/12/25 16:03:12 - mmengine - INFO - Epoch(train) [16][ 400/1567] lr: 5.3453e-04 eta: 0:01:39 time: 0.0852 data_time: 0.0079 memory: 1827 loss: 0.0124 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0124
2022/12/25 16:03:21 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 16:03:21 - mmengine - INFO - Epoch(train) [16][ 500/1567] lr: 4.4705e-04 eta: 0:01:30 time: 0.0866 data_time: 0.0077 memory: 1827 loss: 0.0113 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0113
2022/12/25 16:03:29 - mmengine - INFO - Epoch(train) [16][ 600/1567] lr: 3.6735e-04 eta: 0:01:22 time: 0.0837 data_time: 0.0066 memory: 1827 loss: 0.0101 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0101
2022/12/25 16:03:38 - mmengine - INFO - Epoch(train) [16][ 700/1567] lr: 2.9544e-04 eta: 0:01:13 time: 0.0829 data_time: 0.0066 memory: 1827 loss: 0.0156 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0156
2022/12/25 16:03:46 - mmengine - INFO - Epoch(train) [16][ 800/1567] lr: 2.3134e-04 eta: 0:01:05 time: 0.0845 data_time: 0.0065 memory: 1827 loss: 0.0097 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0097
2022/12/25 16:03:55 - mmengine - INFO - Epoch(train) [16][ 900/1567] lr: 1.7505e-04 eta: 0:00:56 time: 0.0843 data_time: 0.0066 memory: 1827 loss: 0.0113 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0113
2022/12/25 16:04:03 - mmengine - INFO - Epoch(train) [16][1000/1567] lr: 1.2658e-04 eta: 0:00:48 time: 0.0842 data_time: 0.0066 memory: 1827 loss: 0.0092 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0092
2022/12/25 16:04:12 - mmengine - INFO - Epoch(train) [16][1100/1567] lr: 8.5947e-05 eta: 0:00:39 time: 0.0851 data_time: 0.0069 memory: 1827 loss: 0.0077 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0077
2022/12/25 16:04:20 - mmengine - INFO - Epoch(train) [16][1200/1567] lr: 5.3147e-05 eta: 0:00:31 time: 0.0854 data_time: 0.0073 memory: 1827 loss: 0.0088 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0088
2022/12/25 16:04:29 - mmengine - INFO - Epoch(train) [16][1300/1567] lr: 2.8190e-05 eta: 0:00:22 time: 0.0859 data_time: 0.0072 memory: 1827 loss: 0.0102 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0102
2022/12/25 16:04:37 - mmengine - INFO - Epoch(train) [16][1400/1567] lr: 1.1078e-05 eta: 0:00:14 time: 0.0870 data_time: 0.0075 memory: 1827 loss: 0.0107 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0107
2022/12/25 16:04:45 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 16:04:46 - mmengine - INFO - Epoch(train) [16][1500/1567] lr: 1.8150e-06 eta: 0:00:05 time: 0.0839 data_time: 0.0078 memory: 1827 loss: 0.0098 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0098
2022/12/25 16:04:51 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20221225_152736
2022/12/25 16:04:51 - mmengine - INFO - Epoch(train) [16][1567/1567] lr: 3.9252e-10 eta: 0:00:00 time: 0.0870 data_time: 0.0072 memory: 1827 loss: 0.1834 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.1834
2022/12/25 16:04:51 - mmengine - INFO - Saving checkpoint at 16 epochs
2022/12/25 16:04:55 - mmengine - INFO - Epoch(val) [16][100/129] eta: 0:00:00 time: 0.0262 data_time: 0.0059 memory: 263
2022/12/25 16:04:55 - mmengine - INFO - Epoch(val) [16][129/129] acc/top1: 0.8898 acc/top5: 0.9898 acc/mean1: 0.8897
2022/12/25 16:04:55 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_15.pth is removed
2022/12/25 16:04:56 - mmengine - INFO - The best checkpoint with 0.8898 acc/top1 at 16 epoch is saved to best_acc/top1_epoch_16.pth.
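
Editor's note: the run above finishes with best_acc/top1_epoch_16.pth as the final best checkpoint (acc/top1 0.8898). The log itself contains no code; the following is only a minimal sketch of how such a checkpoint could be re-evaluated with MMEngine's Runner, assuming the training config file for this experiment is available locally. The config and work-dir paths below are illustrative assumptions, not values taken from this log.

    # Illustrative sketch, not part of the original log.
    # Assumes an MMAction2 + MMEngine environment; adjust paths to the local setup.
    from mmengine.config import Config
    from mmengine.runner import Runner

    # Hypothetical location of the training config used for this run.
    cfg = Config.fromfile(
        'configs/skeleton/stgcnpp/'
        'stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d.py')

    # Best checkpoint reported at the end of the log (path relative to the repo root).
    cfg.load_from = (
        'work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/'
        'best_acc/top1_epoch_16.pth')
    cfg.work_dir = 'work_dirs/eval_best_top1'  # any writable directory
    cfg.launcher = 'none'                      # single-process evaluation

    runner = Runner.from_cfg(cfg)  # builds the runner from the config
    metrics = runner.test()        # loads `load_from`, runs the test loop
    print(metrics)                 # e.g. a dict of accuracy metrics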