2022/12/30 13:04:06 - mmengine - INFO - ------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 644657294
    GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /mnt/petrelfs/share/cuda-11.3
    NVCC: Cuda compilation tools, release 11.3, V11.3.109
    GCC: gcc (GCC) 5.4.0
    PyTorch: 1.11.0
    PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
    TorchVision: 0.12.0
    OpenCV: 4.6.0
    MMEngine: 0.3.2

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    diff_rank_seed: False
    deterministic: False
    Distributed launcher: pytorch
    Distributed training: True
    GPU number: 8
------------------------------------------------------------
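For reference, the key facts in the environment header above (CUDA availability, device name, library versions) can be cross-checked from a Python shell. The snippet below is a minimal sketch that only assumes torch, torchvision, cv2 and mmengine are importable; it reports a subset of the fields in the full MMEngine environment dump.

import sys
import torch, torchvision, cv2, mmengine

print('sys.platform:', sys.platform)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU 0:', torch.cuda.get_device_name(0))
print('PyTorch:', torch.__version__, '| TorchVision:', torchvision.__version__)
print('CUDA runtime:', torch.version.cuda, '| cuDNN:', torch.backends.cudnn.version())
print('OpenCV:', cv2.__version__, '| MMEngine:', mmengine.__version__)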
2022/12/30 13:04:06 - mmengine - INFO - Config:
default_scope = 'mmaction'
default_hooks = dict(
    runtime_info=dict(type='RuntimeInfoHook'),
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=100, ignore_last=False),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', interval=1, save_best='auto'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    sync_buffers=dict(type='SyncBuffersHook'))
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
log_processor = dict(type='LogProcessor', window_size=20, by_epoch=True)
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='ActionVisualizer', vis_backends=[dict(type='LocalVisBackend')])
log_level = 'INFO'
load_from = None
resume = False
model = dict(
    type='RecognizerGCN',
    backbone=dict(
        type='STGCN',
        gcn_adaptive='init',
        gcn_with_res=True,
        tcn_type='mstcn',
        graph_cfg=dict(layout='nturgb+d', mode='spatial')),
    cls_head=dict(type='GCNHead', num_classes=60, in_channels=256))
dataset_type = 'PoseDataset'
ann_file = 'data/skeleton/ntu60_3d.pkl'
train_pipeline = [
    dict(type='PreNormalize3D'),
    dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
    dict(type='UniformSampleFrames', clip_len=100),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(type='PreNormalize3D'),
    dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=1,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(type='PreNormalize3D'),
    dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=10,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
train_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type='PoseDataset',
            ann_file='data/skeleton/ntu60_3d.pkl',
            pipeline=[
                dict(type='PreNormalize3D'),
                dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
                dict(type='UniformSampleFrames', clip_len=100),
                dict(type='PoseDecode'),
                dict(type='FormatGCNInput', num_person=2),
                dict(type='PackActionInputs')
            ],
            split='xsub_train')))
val_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_3d.pkl',
        pipeline=[
            dict(type='PreNormalize3D'),
            dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
            dict(
                type='UniformSampleFrames',
                clip_len=100,
                num_clips=1,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
test_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_3d.pkl',
        pipeline=[
            dict(type='PreNormalize3D'),
            dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
            dict(
                type='UniformSampleFrames',
                clip_len=100,
                num_clips=10,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
val_evaluator = [dict(type='AccMetric')]
test_evaluator = [dict(type='AccMetric')]
train_cfg = dict(
    type='EpochBasedTrainLoop', max_epochs=16, val_begin=1, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
param_scheduler = [
    dict(
        type='CosineAnnealingLR',
        eta_min=0,
        T_max=16,
        by_epoch=True,
        convert_to_iter_based=True)
]
optim_wrapper = dict(
    optimizer=dict(
        type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0005, nesterov=True))
auto_scale_lr = dict(enable=False, base_batch_size=128)
launcher = 'pytorch'
work_dir = './work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d'
randomness = dict(seed=None, diff_rank_seed=False, deterministic=False)
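The dump above is the fully resolved config for this run: an STGCN++ joint-stream RecognizerGCN trained on NTU60 X-Sub 3D skeletons with 8 GPUs x batch 16, SGD (lr=0.1, cosine annealing to 0) for 16 epochs over a 5x-repeated dataset. A minimal sketch of driving such a config through MMEngine's Runner is given below; the config file path is hypothetical (inferred from work_dir), and a single-process launch is used instead of the logged 8-GPU 'pytorch' launcher.

from mmengine.config import Config
from mmengine.runner import Runner

# Hypothetical config path, inferred from the work_dir above.
cfg = Config.fromfile(
    'configs/skeleton/stgcn++/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d.py')
cfg.work_dir = './work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d'
cfg.launcher = 'none'  # the logged run used launcher='pytorch', e.g. via
                       # torchrun --nproc_per_node=8 tools/train.py <config> --launcher pytorch

runner = Runner.from_cfg(cfg)  # builds model, dataloaders, hooks and loops from the dict above
runner.train()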
2022/12/30 13:04:06 - mmengine - INFO - Result has been saved to /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/modules_statistic_results.json
2022/12/30 13:04:07 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
 --------------------
before_train:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DistSamplerSeedHook
 --------------------
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
 --------------------
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
after_train_epoch:
(NORMAL      ) IterTimerHook
(NORMAL      ) SyncBuffersHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_val_epoch:
(NORMAL      ) IterTimerHook
 --------------------
before_val_iter:
(NORMAL      ) IterTimerHook
 --------------------
after_val_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_test_epoch:
(NORMAL      ) IterTimerHook
 --------------------
before_test_iter:
(NORMAL      ) IterTimerHook
 --------------------
after_test_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_run:
(BELOW_NORMAL) LoggerHook
 --------------------
Name of parameter - Initialization information
backbone.data_bn.weight - torch.Size([75]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.data_bn.bias - torch.Size([75]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.conv.weight - torch.Size([192, 3, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.down.0.weight - torch.Size([64, 3, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.down.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.down.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.gcn.down.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.0.1.weight - torch.Size([14]): The value is the
same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.0.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.4.1.bias - torch.Size([10]): The value is the 
same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.0.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.2.0.weight - 
torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.1.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.1.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.gcn.2.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.gcn.2.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.2.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.2.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.conv.weight - torch.Size([192, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.gcn.conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.0.weight - torch.Size([14, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.0.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.1.weight - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.1.bias - torch.Size([14]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.0.3.conv.weight - torch.Size([14, 14, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.0.3.conv.bias - torch.Size([14]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.1.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after 
calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.1.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.1.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.1.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.1.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.1.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.2.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.2.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.2.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.2.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.2.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.2.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.3.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.3.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.3.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.3.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.3.3.conv.weight - torch.Size([10, 10, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.3.3.conv.bias - torch.Size([10]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.3.tcn.branches.4.0.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.4.0.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.4.1.weight - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.4.1.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.5.weight - torch.Size([10, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.branches.5.bias - torch.Size([10]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.transform.0.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.transform.0.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.gcn.3.tcn.transform.2.weight - torch.Size([64, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.transform.2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.3.tcn.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.conv.weight - torch.Size([384, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.down.0.weight - torch.Size([128, 64, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.down.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.down.1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.gcn.down.1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.0.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.1.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and 
after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.2.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.branches.5.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.residual.conv.weight - torch.Size([128, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.4.residual.conv.bias - torch.Size([128]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.gcn.4.residual.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.4.residual.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.conv.weight - torch.Size([384, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.gcn.conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.0.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.1.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): 
The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.5.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.branches.5.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.5.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.conv.weight - torch.Size([384, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.gcn.conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.0.weight - torch.Size([23, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.0.bias - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.1.weight - torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.1.bias 
- torch.Size([23]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.0.3.conv.weight - torch.Size([23, 23, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.0.3.conv.bias - torch.Size([23]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.1.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.1.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.1.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.2.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.2.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.2.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.3.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.3.3.conv.weight - torch.Size([21, 21, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.3.3.conv.bias - torch.Size([21]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.6.tcn.branches.4.0.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.0.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.1.weight - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.4.1.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.gcn.6.tcn.branches.5.weight - torch.Size([21, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.branches.5.bias - torch.Size([21]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.0.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.0.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.2.weight - torch.Size([128, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.transform.2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.bn.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.6.tcn.bn.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.conv.weight - torch.Size([768, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.conv.bias - torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.0.weight - torch.Size([256, 128, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.gcn.down.1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.gcn.7.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.gcn.7.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.residual.conv.weight - torch.Size([256, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.residual.conv.bias - torch.Size([256]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.7.residual.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.7.residual.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.conv.weight - torch.Size([768, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.gcn.conv.bias - torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after 
calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.8.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.8.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.A - torch.Size([3, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.conv.weight - torch.Size([768, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.gcn.conv.bias - 
torch.Size([768]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.0.weight - torch.Size([46, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.0.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.1.weight - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.1.bias - torch.Size([46]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.0.3.conv.weight - torch.Size([46, 46, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.0.3.conv.bias - torch.Size([46]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.1.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.1.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.1.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.1.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.1.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.1.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.2.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.2.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.2.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.2.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.2.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.2.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.3.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.3.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.3.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.3.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.3.3.conv.weight - torch.Size([42, 42, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.gcn.9.tcn.branches.3.3.conv.bias - torch.Size([42]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.gcn.9.tcn.branches.4.0.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.4.0.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.4.1.weight - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.4.1.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.5.weight - torch.Size([42, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.branches.5.bias - torch.Size([42]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.transform.0.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.transform.0.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.transform.2.weight - torch.Size([256, 256, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.transform.2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.bn.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn.9.tcn.bn.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of RecognizerGCN cls_head.fc.weight - torch.Size([60, 256]): NormalInit: mean=0, std=0.01, bias=0 cls_head.fc.bias - torch.Size([60]): NormalInit: mean=0, std=0.01, bias=0 2022/12/30 13:04:41 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d. 
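The initialization summary above ends here. It reports two schemes: most conv/BN parameters keep their constructed values ("The value is the same before and after calling `init_weights`"), the temporal convolutions inside each MS-TCN block are Kaiming-normal initialized (a=0, mode=fan_out, nonlinearity=relu, bias=0), and the classification head uses a normal distribution with std 0.01. The reported shapes also show how each MS-TCN block splits its 256 output channels across six branches as 46 + 5 x 42 = 256. A minimal sketch of what those reported settings correspond to in plain PyTorch (hypothetical helper, not MMEngine's actual `init_weights` code):

```python
import torch.nn as nn

def init_like_log(module: nn.Module) -> None:
    """Illustrative re-statement of the two init schemes reported in the log."""
    for name, m in module.named_modules():
        if isinstance(m, nn.Conv2d) and 'branches' in name:
            # KaimingInit: a=0, mode=fan_out, nonlinearity=relu, bias=0
            nn.init.kaiming_normal_(m.weight, a=0, mode='fan_out',
                                    nonlinearity='relu')
            if m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.Linear) and 'fc' in name:
            # NormalInit: mean=0, std=0.01, bias=0 (classification head)
            nn.init.normal_(m.weight, mean=0, std=0.01)
            nn.init.constant_(m.bias, 0)
```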
2022/12/30 13:05:25 - mmengine - INFO - Epoch(train) [1][ 100/1567] lr: 9.9996e-02 eta: 3:04:10 time: 0.1887 data_time: 0.0066 memory: 2656 loss: 2.8927 top1_acc: 0.1875 top5_acc: 0.4375 loss_cls: 2.8927 2022/12/30 13:05:44 - mmengine - INFO - Epoch(train) [1][ 200/1567] lr: 9.9984e-02 eta: 2:11:29 time: 0.1916 data_time: 0.0066 memory: 2656 loss: 2.1890 top1_acc: 0.2500 top5_acc: 0.8750 loss_cls: 2.1890 2022/12/30 13:06:03 - mmengine - INFO - Epoch(train) [1][ 300/1567] lr: 9.9965e-02 eta: 1:53:20 time: 0.1876 data_time: 0.0066 memory: 2656 loss: 1.8681 top1_acc: 0.4375 top5_acc: 0.8125 loss_cls: 1.8681 2022/12/30 13:06:22 - mmengine - INFO - Epoch(train) [1][ 400/1567] lr: 9.9938e-02 eta: 1:44:26 time: 0.1912 data_time: 0.0067 memory: 2656 loss: 1.3300 top1_acc: 0.5625 top5_acc: 0.8125 loss_cls: 1.3300 2022/12/30 13:06:41 - mmengine - INFO - Epoch(train) [1][ 500/1567] lr: 9.9902e-02 eta: 1:38:35 time: 0.1838 data_time: 0.0066 memory: 2656 loss: 1.2831 top1_acc: 0.6250 top5_acc: 0.9375 loss_cls: 1.2831 2022/12/30 13:07:00 - mmengine - INFO - Epoch(train) [1][ 600/1567] lr: 9.9859e-02 eta: 1:34:40 time: 0.1902 data_time: 0.0066 memory: 2656 loss: 1.0038 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 1.0038 2022/12/30 13:07:19 - mmengine - INFO - Epoch(train) [1][ 700/1567] lr: 9.9808e-02 eta: 1:31:45 time: 0.1855 data_time: 0.0066 memory: 2656 loss: 1.0781 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 1.0781 2022/12/30 13:07:38 - mmengine - INFO - Epoch(train) [1][ 800/1567] lr: 9.9750e-02 eta: 1:29:38 time: 0.1894 data_time: 0.0068 memory: 2656 loss: 1.0742 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 1.0742 2022/12/30 13:07:57 - mmengine - INFO - Epoch(train) [1][ 900/1567] lr: 9.9683e-02 eta: 1:28:03 time: 0.1974 data_time: 0.0068 memory: 2656 loss: 0.9311 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.9311 2022/12/30 13:08:16 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:08:16 - mmengine - INFO - Epoch(train) [1][1000/1567] lr: 9.9609e-02 eta: 1:26:37 time: 0.1882 data_time: 0.0065 memory: 2656 loss: 0.8254 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.8254 2022/12/30 13:08:35 - mmengine - INFO - Epoch(train) [1][1100/1567] lr: 9.9527e-02 eta: 1:25:14 time: 0.1894 data_time: 0.0071 memory: 2656 loss: 0.8078 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.8078 2022/12/30 13:08:54 - mmengine - INFO - Epoch(train) [1][1200/1567] lr: 9.9437e-02 eta: 1:24:04 time: 0.1859 data_time: 0.0066 memory: 2656 loss: 0.8159 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.8159 2022/12/30 13:09:13 - mmengine - INFO - Epoch(train) [1][1300/1567] lr: 9.9339e-02 eta: 1:23:03 time: 0.1890 data_time: 0.0064 memory: 2656 loss: 0.8760 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.8760 2022/12/30 13:09:32 - mmengine - INFO - Epoch(train) [1][1400/1567] lr: 9.9234e-02 eta: 1:22:10 time: 0.1918 data_time: 0.0068 memory: 2656 loss: 0.7013 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.7013 2022/12/30 13:09:51 - mmengine - INFO - Epoch(train) [1][1500/1567] lr: 9.9121e-02 eta: 1:21:19 time: 0.1851 data_time: 0.0068 memory: 2656 loss: 0.8304 top1_acc: 0.6250 top5_acc: 1.0000 loss_cls: 0.8304 2022/12/30 13:10:03 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:10:03 - mmengine - INFO - Epoch(train) [1][1567/1567] lr: 9.9040e-02 eta: 1:20:42 time: 0.1808 data_time: 0.0064 memory: 2656 loss: 0.8725 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.8725 2022/12/30 13:10:03 - mmengine - INFO - Saving 
checkpoint at 1 epochs 2022/12/30 13:10:09 - mmengine - INFO - Epoch(val) [1][100/129] eta: 0:00:01 time: 0.0428 data_time: 0.0122 memory: 378 2022/12/30 13:10:10 - mmengine - INFO - Epoch(val) [1][129/129] acc/top1: 0.5977 acc/top5: 0.9068 acc/mean1: 0.5976 2022/12/30 13:10:10 - mmengine - INFO - The best checkpoint with 0.5977 acc/top1 at 1 epoch is saved to best_acc/top1_epoch_1.pth. 2022/12/30 13:10:30 - mmengine - INFO - Epoch(train) [2][ 100/1567] lr: 9.8914e-02 eta: 1:20:04 time: 0.1871 data_time: 0.0065 memory: 2656 loss: 0.7231 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.7231 2022/12/30 13:10:49 - mmengine - INFO - Epoch(train) [2][ 200/1567] lr: 9.8781e-02 eta: 1:19:25 time: 0.1852 data_time: 0.0066 memory: 2656 loss: 0.6715 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.6715 2022/12/30 13:11:08 - mmengine - INFO - Epoch(train) [2][ 300/1567] lr: 9.8639e-02 eta: 1:18:53 time: 0.1880 data_time: 0.0066 memory: 2656 loss: 0.6547 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.6547 2022/12/30 13:11:27 - mmengine - INFO - Epoch(train) [2][ 400/1567] lr: 9.8491e-02 eta: 1:18:16 time: 0.1918 data_time: 0.0071 memory: 2656 loss: 0.6421 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.6421 2022/12/30 13:11:34 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:11:46 - mmengine - INFO - Epoch(train) [2][ 500/1567] lr: 9.8334e-02 eta: 1:17:41 time: 0.1857 data_time: 0.0067 memory: 2656 loss: 0.5580 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.5580 2022/12/30 13:12:05 - mmengine - INFO - Epoch(train) [2][ 600/1567] lr: 9.8170e-02 eta: 1:17:05 time: 0.1886 data_time: 0.0066 memory: 2656 loss: 0.5650 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.5650 2022/12/30 13:12:24 - mmengine - INFO - Epoch(train) [2][ 700/1567] lr: 9.7998e-02 eta: 1:16:30 time: 0.1867 data_time: 0.0071 memory: 2656 loss: 0.7407 top1_acc: 0.6875 top5_acc: 0.8125 loss_cls: 0.7407 2022/12/30 13:12:42 - mmengine - INFO - Epoch(train) [2][ 800/1567] lr: 9.7819e-02 eta: 1:15:53 time: 0.1797 data_time: 0.0065 memory: 2656 loss: 0.5512 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.5512 2022/12/30 13:13:01 - mmengine - INFO - Epoch(train) [2][ 900/1567] lr: 9.7632e-02 eta: 1:15:20 time: 0.1843 data_time: 0.0066 memory: 2656 loss: 0.5364 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5364 2022/12/30 13:13:19 - mmengine - INFO - Epoch(train) [2][1000/1567] lr: 9.7438e-02 eta: 1:14:47 time: 0.1889 data_time: 0.0065 memory: 2656 loss: 0.6480 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.6480 2022/12/30 13:13:38 - mmengine - INFO - Epoch(train) [2][1100/1567] lr: 9.7236e-02 eta: 1:14:17 time: 0.1868 data_time: 0.0071 memory: 2656 loss: 0.4898 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.4898 2022/12/30 13:13:57 - mmengine - INFO - Epoch(train) [2][1200/1567] lr: 9.7027e-02 eta: 1:13:52 time: 0.1948 data_time: 0.0067 memory: 2656 loss: 0.6753 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.6753 2022/12/30 13:14:16 - mmengine - INFO - Epoch(train) [2][1300/1567] lr: 9.6810e-02 eta: 1:13:25 time: 0.1926 data_time: 0.0072 memory: 2656 loss: 0.5199 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.5199 2022/12/30 13:14:36 - mmengine - INFO - Epoch(train) [2][1400/1567] lr: 9.6587e-02 eta: 1:12:59 time: 0.1931 data_time: 0.0066 memory: 2656 loss: 0.5452 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5452 2022/12/30 13:14:42 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:14:55 - mmengine - INFO - Epoch(train) [2][1500/1567] lr: 
9.6355e-02 eta: 1:12:35 time: 0.1875 data_time: 0.0066 memory: 2656 loss: 0.5966 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.5966 2022/12/30 13:15:07 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:15:07 - mmengine - INFO - Epoch(train) [2][1567/1567] lr: 9.6196e-02 eta: 1:12:12 time: 0.1735 data_time: 0.0063 memory: 2656 loss: 0.8303 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.8303 2022/12/30 13:15:07 - mmengine - INFO - Saving checkpoint at 2 epochs 2022/12/30 13:15:11 - mmengine - INFO - Epoch(val) [2][100/129] eta: 0:00:01 time: 0.0410 data_time: 0.0063 memory: 378 2022/12/30 13:15:13 - mmengine - INFO - Epoch(val) [2][129/129] acc/top1: 0.7324 acc/top5: 0.9512 acc/mean1: 0.7324 2022/12/30 13:15:13 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_1.pth is removed 2022/12/30 13:15:13 - mmengine - INFO - The best checkpoint with 0.7324 acc/top1 at 2 epoch is saved to best_acc/top1_epoch_2.pth. 2022/12/30 13:15:32 - mmengine - INFO - Epoch(train) [3][ 100/1567] lr: 9.5953e-02 eta: 1:11:47 time: 0.1789 data_time: 0.0065 memory: 2656 loss: 0.4608 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.4608 2022/12/30 13:15:52 - mmengine - INFO - Epoch(train) [3][ 200/1567] lr: 9.5703e-02 eta: 1:11:25 time: 0.1831 data_time: 0.0066 memory: 2656 loss: 0.5260 top1_acc: 0.6875 top5_acc: 0.8750 loss_cls: 0.5260 2022/12/30 13:16:10 - mmengine - INFO - Epoch(train) [3][ 300/1567] lr: 9.5445e-02 eta: 1:10:59 time: 0.1808 data_time: 0.0066 memory: 2656 loss: 0.4849 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4849 2022/12/30 13:16:30 - mmengine - INFO - Epoch(train) [3][ 400/1567] lr: 9.5180e-02 eta: 1:10:38 time: 0.1954 data_time: 0.0066 memory: 2656 loss: 0.4801 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4801 2022/12/30 13:16:48 - mmengine - INFO - Epoch(train) [3][ 500/1567] lr: 9.4908e-02 eta: 1:10:09 time: 0.1742 data_time: 0.0067 memory: 2656 loss: 0.6723 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.6723 2022/12/30 13:17:06 - mmengine - INFO - Epoch(train) [3][ 600/1567] lr: 9.4629e-02 eta: 1:09:43 time: 0.1941 data_time: 0.0065 memory: 2656 loss: 0.4955 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4955 2022/12/30 13:17:26 - mmengine - INFO - Epoch(train) [3][ 700/1567] lr: 9.4343e-02 eta: 1:09:21 time: 0.1945 data_time: 0.0066 memory: 2656 loss: 0.4367 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4367 2022/12/30 13:17:44 - mmengine - INFO - Epoch(train) [3][ 800/1567] lr: 9.4050e-02 eta: 1:08:56 time: 0.1853 data_time: 0.0066 memory: 2656 loss: 0.4576 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4576 2022/12/30 13:17:57 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:18:03 - mmengine - INFO - Epoch(train) [3][ 900/1567] lr: 9.3750e-02 eta: 1:08:32 time: 0.1871 data_time: 0.0070 memory: 2656 loss: 0.4746 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.4746 2022/12/30 13:18:22 - mmengine - INFO - Epoch(train) [3][1000/1567] lr: 9.3444e-02 eta: 1:08:08 time: 0.1828 data_time: 0.0066 memory: 2656 loss: 0.5357 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5357 2022/12/30 13:18:40 - mmengine - INFO - Epoch(train) [3][1100/1567] lr: 9.3130e-02 eta: 1:07:42 time: 0.1823 data_time: 0.0065 memory: 2656 loss: 0.5927 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5927 2022/12/30 13:18:58 - mmengine - INFO - Epoch(train) [3][1200/1567] lr: 9.2810e-02 eta: 
1:07:16 time: 0.1725 data_time: 0.0066 memory: 2656 loss: 0.4470 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.4470 2022/12/30 13:19:16 - mmengine - INFO - Epoch(train) [3][1300/1567] lr: 9.2483e-02 eta: 1:06:50 time: 0.1822 data_time: 0.0066 memory: 2656 loss: 0.5107 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.5107 2022/12/30 13:19:34 - mmengine - INFO - Epoch(train) [3][1400/1567] lr: 9.2149e-02 eta: 1:06:25 time: 0.1876 data_time: 0.0066 memory: 2656 loss: 0.4637 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4637 2022/12/30 13:19:53 - mmengine - INFO - Epoch(train) [3][1500/1567] lr: 9.1809e-02 eta: 1:06:02 time: 0.1745 data_time: 0.0065 memory: 2656 loss: 0.4212 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.4212 2022/12/30 13:20:05 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:20:05 - mmengine - INFO - Epoch(train) [3][1567/1567] lr: 9.1577e-02 eta: 1:05:48 time: 0.1861 data_time: 0.0065 memory: 2656 loss: 0.6442 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.6442 2022/12/30 13:20:06 - mmengine - INFO - Saving checkpoint at 3 epochs 2022/12/30 13:20:10 - mmengine - INFO - Epoch(val) [3][100/129] eta: 0:00:01 time: 0.0417 data_time: 0.0063 memory: 378 2022/12/30 13:20:12 - mmengine - INFO - Epoch(val) [3][129/129] acc/top1: 0.5220 acc/top5: 0.8371 acc/mean1: 0.5220 2022/12/30 13:20:31 - mmengine - INFO - Epoch(train) [4][ 100/1567] lr: 9.1226e-02 eta: 1:05:27 time: 0.1887 data_time: 0.0066 memory: 2656 loss: 0.4469 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4469 2022/12/30 13:20:50 - mmengine - INFO - Epoch(train) [4][ 200/1567] lr: 9.0868e-02 eta: 1:05:06 time: 0.1902 data_time: 0.0069 memory: 2656 loss: 0.4385 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.4385 2022/12/30 13:21:09 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:21:09 - mmengine - INFO - Epoch(train) [4][ 300/1567] lr: 9.0504e-02 eta: 1:04:47 time: 0.1876 data_time: 0.0066 memory: 2656 loss: 0.4539 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.4539 2022/12/30 13:21:28 - mmengine - INFO - Epoch(train) [4][ 400/1567] lr: 9.0133e-02 eta: 1:04:27 time: 0.1869 data_time: 0.0065 memory: 2656 loss: 0.5253 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.5253 2022/12/30 13:21:47 - mmengine - INFO - Epoch(train) [4][ 500/1567] lr: 8.9756e-02 eta: 1:04:05 time: 0.1845 data_time: 0.0068 memory: 2656 loss: 0.4130 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4130 2022/12/30 13:22:06 - mmengine - INFO - Epoch(train) [4][ 600/1567] lr: 8.9373e-02 eta: 1:03:45 time: 0.1928 data_time: 0.0071 memory: 2656 loss: 0.4001 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4001 2022/12/30 13:22:25 - mmengine - INFO - Epoch(train) [4][ 700/1567] lr: 8.8984e-02 eta: 1:03:22 time: 0.1740 data_time: 0.0070 memory: 2656 loss: 0.4720 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4720 2022/12/30 13:22:43 - mmengine - INFO - Epoch(train) [4][ 800/1567] lr: 8.8589e-02 eta: 1:03:00 time: 0.1973 data_time: 0.0065 memory: 2656 loss: 0.3877 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3877 2022/12/30 13:23:02 - mmengine - INFO - Epoch(train) [4][ 900/1567] lr: 8.8187e-02 eta: 1:02:40 time: 0.1825 data_time: 0.0069 memory: 2656 loss: 0.4241 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4241 2022/12/30 13:23:22 - mmengine - INFO - Epoch(train) [4][1000/1567] lr: 8.7780e-02 eta: 1:02:20 time: 0.1937 data_time: 0.0075 memory: 2656 loss: 0.4126 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4126 2022/12/30 13:23:40 - mmengine - INFO - 
Epoch(train) [4][1100/1567] lr: 8.7367e-02 eta: 1:01:59 time: 0.1838 data_time: 0.0074 memory: 2656 loss: 0.4435 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.4435 2022/12/30 13:23:59 - mmengine - INFO - Epoch(train) [4][1200/1567] lr: 8.6947e-02 eta: 1:01:37 time: 0.1817 data_time: 0.0065 memory: 2656 loss: 0.3599 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3599 2022/12/30 13:24:17 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:24:18 - mmengine - INFO - Epoch(train) [4][1300/1567] lr: 8.6522e-02 eta: 1:01:16 time: 0.1881 data_time: 0.0067 memory: 2656 loss: 0.5344 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.5344 2022/12/30 13:24:36 - mmengine - INFO - Epoch(train) [4][1400/1567] lr: 8.6092e-02 eta: 1:00:54 time: 0.1800 data_time: 0.0066 memory: 2656 loss: 0.3850 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3850 2022/12/30 13:24:54 - mmengine - INFO - Epoch(train) [4][1500/1567] lr: 8.5655e-02 eta: 1:00:31 time: 0.1849 data_time: 0.0067 memory: 2656 loss: 0.3931 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3931 2022/12/30 13:25:06 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:25:06 - mmengine - INFO - Epoch(train) [4][1567/1567] lr: 8.5360e-02 eta: 1:00:17 time: 0.1785 data_time: 0.0066 memory: 2656 loss: 0.6283 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.6283 2022/12/30 13:25:06 - mmengine - INFO - Saving checkpoint at 4 epochs 2022/12/30 13:25:11 - mmengine - INFO - Epoch(val) [4][100/129] eta: 0:00:01 time: 0.0416 data_time: 0.0063 memory: 378 2022/12/30 13:25:12 - mmengine - INFO - Epoch(val) [4][129/129] acc/top1: 0.6588 acc/top5: 0.9105 acc/mean1: 0.6584 2022/12/30 13:25:32 - mmengine - INFO - Epoch(train) [5][ 100/1567] lr: 8.4914e-02 eta: 0:59:57 time: 0.1850 data_time: 0.0064 memory: 2656 loss: 0.3921 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3921 2022/12/30 13:25:50 - mmengine - INFO - Epoch(train) [5][ 200/1567] lr: 8.4463e-02 eta: 0:59:37 time: 0.1897 data_time: 0.0071 memory: 2656 loss: 0.3707 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.3707 2022/12/30 13:26:09 - mmengine - INFO - Epoch(train) [5][ 300/1567] lr: 8.4006e-02 eta: 0:59:16 time: 0.1902 data_time: 0.0066 memory: 2656 loss: 0.4130 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4130 2022/12/30 13:26:28 - mmengine - INFO - Epoch(train) [5][ 400/1567] lr: 8.3544e-02 eta: 0:58:57 time: 0.1965 data_time: 0.0067 memory: 2656 loss: 0.4261 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4261 2022/12/30 13:26:47 - mmengine - INFO - Epoch(train) [5][ 500/1567] lr: 8.3077e-02 eta: 0:58:35 time: 0.1848 data_time: 0.0067 memory: 2656 loss: 0.3910 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3910 2022/12/30 13:27:05 - mmengine - INFO - Epoch(train) [5][ 600/1567] lr: 8.2605e-02 eta: 0:58:15 time: 0.1910 data_time: 0.0066 memory: 2656 loss: 0.4444 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4444 2022/12/30 13:27:24 - mmengine - INFO - Epoch(train) [5][ 700/1567] lr: 8.2127e-02 eta: 0:57:54 time: 0.2049 data_time: 0.0065 memory: 2656 loss: 0.4286 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4286 2022/12/30 13:27:30 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:27:43 - mmengine - INFO - Epoch(train) [5][ 800/1567] lr: 8.1645e-02 eta: 0:57:34 time: 0.1938 data_time: 0.0066 memory: 2656 loss: 0.4058 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.4058 2022/12/30 13:28:02 - mmengine - INFO - Epoch(train) [5][ 900/1567] lr: 
8.1157e-02 eta: 0:57:15 time: 0.2022 data_time: 0.0068 memory: 2656 loss: 0.4290 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.4290 2022/12/30 13:28:21 - mmengine - INFO - Epoch(train) [5][1000/1567] lr: 8.0665e-02 eta: 0:56:55 time: 0.1846 data_time: 0.0066 memory: 2656 loss: 0.3726 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3726 2022/12/30 13:28:40 - mmengine - INFO - Epoch(train) [5][1100/1567] lr: 8.0167e-02 eta: 0:56:35 time: 0.1911 data_time: 0.0066 memory: 2656 loss: 0.3914 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3914 2022/12/30 13:28:58 - mmengine - INFO - Epoch(train) [5][1200/1567] lr: 7.9665e-02 eta: 0:56:13 time: 0.1857 data_time: 0.0065 memory: 2656 loss: 0.3566 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3566 2022/12/30 13:29:16 - mmengine - INFO - Epoch(train) [5][1300/1567] lr: 7.9159e-02 eta: 0:55:52 time: 0.1819 data_time: 0.0069 memory: 2656 loss: 0.3322 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3322 2022/12/30 13:29:36 - mmengine - INFO - Epoch(train) [5][1400/1567] lr: 7.8647e-02 eta: 0:55:33 time: 0.1953 data_time: 0.0068 memory: 2656 loss: 0.3941 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3941 2022/12/30 13:29:55 - mmengine - INFO - Epoch(train) [5][1500/1567] lr: 7.8132e-02 eta: 0:55:14 time: 0.1818 data_time: 0.0075 memory: 2656 loss: 0.3435 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3435 2022/12/30 13:30:07 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:30:07 - mmengine - INFO - Epoch(train) [5][1567/1567] lr: 7.7784e-02 eta: 0:55:01 time: 0.1843 data_time: 0.0071 memory: 2656 loss: 0.5720 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.5720 2022/12/30 13:30:07 - mmengine - INFO - Saving checkpoint at 5 epochs 2022/12/30 13:30:12 - mmengine - INFO - Epoch(val) [5][100/129] eta: 0:00:01 time: 0.0411 data_time: 0.0059 memory: 378 2022/12/30 13:30:14 - mmengine - INFO - Epoch(val) [5][129/129] acc/top1: 0.7605 acc/top5: 0.9457 acc/mean1: 0.7604 2022/12/30 13:30:14 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_2.pth is removed 2022/12/30 13:30:14 - mmengine - INFO - The best checkpoint with 0.7605 acc/top1 at 5 epoch is saved to best_acc/top1_epoch_5.pth. 
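The `lr` values in the training records above decay from 0.1 following the configured `CosineAnnealingLR` schedule (`T_max=16`, `eta_min=0`, converted to an iter-based schedule over 16 x 1567 iterations). A small sketch of that curve, assuming the standard cosine-annealing formula rather than the scheduler's exact code:

```python
import math

BASE_LR, ETA_MIN = 0.1, 0.0
EPOCHS, ITERS_PER_EPOCH = 16, 1567  # as reported in this run

def cosine_lr(global_iter: int) -> float:
    """eta_min + (base_lr - eta_min) * (1 + cos(pi * t / T)) / 2."""
    t = global_iter / (EPOCHS * ITERS_PER_EPOCH)
    return ETA_MIN + (BASE_LR - ETA_MIN) * (1 + math.cos(math.pi * t)) / 2

# e.g. cosine_lr(5 * 1567) ~= 0.0778, matching the ~7.78e-02 values logged
# at the end of epoch 5 above.
```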
2022/12/30 13:30:33 - mmengine - INFO - Epoch(train) [6][ 100/1567] lr: 7.7261e-02 eta: 0:54:42 time: 0.1841 data_time: 0.0069 memory: 2656 loss: 0.3862 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3862 2022/12/30 13:30:45 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:30:52 - mmengine - INFO - Epoch(train) [6][ 200/1567] lr: 7.6733e-02 eta: 0:54:22 time: 0.1898 data_time: 0.0065 memory: 2656 loss: 0.2956 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2956 2022/12/30 13:31:11 - mmengine - INFO - Epoch(train) [6][ 300/1567] lr: 7.6202e-02 eta: 0:54:02 time: 0.1909 data_time: 0.0066 memory: 2656 loss: 0.3915 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.3915 2022/12/30 13:31:30 - mmengine - INFO - Epoch(train) [6][ 400/1567] lr: 7.5666e-02 eta: 0:53:43 time: 0.1918 data_time: 0.0065 memory: 2656 loss: 0.3614 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.3614 2022/12/30 13:31:49 - mmengine - INFO - Epoch(train) [6][ 500/1567] lr: 7.5126e-02 eta: 0:53:24 time: 0.1847 data_time: 0.0069 memory: 2656 loss: 0.3683 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3683 2022/12/30 13:32:08 - mmengine - INFO - Epoch(train) [6][ 600/1567] lr: 7.4583e-02 eta: 0:53:03 time: 0.1760 data_time: 0.0066 memory: 2656 loss: 0.4424 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.4424 2022/12/30 13:32:26 - mmengine - INFO - Epoch(train) [6][ 700/1567] lr: 7.4035e-02 eta: 0:52:43 time: 0.1798 data_time: 0.0064 memory: 2656 loss: 0.3760 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3760 2022/12/30 13:32:45 - mmengine - INFO - Epoch(train) [6][ 800/1567] lr: 7.3484e-02 eta: 0:52:23 time: 0.1796 data_time: 0.0067 memory: 2656 loss: 0.2881 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2881 2022/12/30 13:33:04 - mmengine - INFO - Epoch(train) [6][ 900/1567] lr: 7.2929e-02 eta: 0:52:03 time: 0.1852 data_time: 0.0066 memory: 2656 loss: 0.3359 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3359 2022/12/30 13:33:23 - mmengine - INFO - Epoch(train) [6][1000/1567] lr: 7.2371e-02 eta: 0:51:44 time: 0.1988 data_time: 0.0065 memory: 2656 loss: 0.3187 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3187 2022/12/30 13:33:42 - mmengine - INFO - Epoch(train) [6][1100/1567] lr: 7.1809e-02 eta: 0:51:25 time: 0.1804 data_time: 0.0065 memory: 2656 loss: 0.2554 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2554 2022/12/30 13:33:54 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:34:01 - mmengine - INFO - Epoch(train) [6][1200/1567] lr: 7.1243e-02 eta: 0:51:05 time: 0.1868 data_time: 0.0071 memory: 2656 loss: 0.3840 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3840 2022/12/30 13:34:19 - mmengine - INFO - Epoch(train) [6][1300/1567] lr: 7.0674e-02 eta: 0:50:45 time: 0.1899 data_time: 0.0070 memory: 2656 loss: 0.2754 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2754 2022/12/30 13:34:38 - mmengine - INFO - Epoch(train) [6][1400/1567] lr: 7.0102e-02 eta: 0:50:25 time: 0.1766 data_time: 0.0066 memory: 2656 loss: 0.3191 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3191 2022/12/30 13:34:57 - mmengine - INFO - Epoch(train) [6][1500/1567] lr: 6.9527e-02 eta: 0:50:06 time: 0.1864 data_time: 0.0067 memory: 2656 loss: 0.3440 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3440 2022/12/30 13:35:09 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:35:09 - mmengine - INFO - Epoch(train) [6][1567/1567] lr: 6.9140e-02 eta: 0:49:52 time: 0.1831 data_time: 0.0066 memory: 2656 
loss: 0.5360 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.5360 2022/12/30 13:35:09 - mmengine - INFO - Saving checkpoint at 6 epochs 2022/12/30 13:35:14 - mmengine - INFO - Epoch(val) [6][100/129] eta: 0:00:01 time: 0.0408 data_time: 0.0067 memory: 378 2022/12/30 13:35:15 - mmengine - INFO - Epoch(val) [6][129/129] acc/top1: 0.6665 acc/top5: 0.8943 acc/mean1: 0.6664 2022/12/30 13:35:34 - mmengine - INFO - Epoch(train) [7][ 100/1567] lr: 6.8560e-02 eta: 0:49:32 time: 0.1827 data_time: 0.0068 memory: 2656 loss: 0.3194 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3194 2022/12/30 13:35:52 - mmengine - INFO - Epoch(train) [7][ 200/1567] lr: 6.7976e-02 eta: 0:49:12 time: 0.1895 data_time: 0.0065 memory: 2656 loss: 0.2786 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2786 2022/12/30 13:36:11 - mmengine - INFO - Epoch(train) [7][ 300/1567] lr: 6.7390e-02 eta: 0:48:53 time: 0.1863 data_time: 0.0065 memory: 2656 loss: 0.2616 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2616 2022/12/30 13:36:30 - mmengine - INFO - Epoch(train) [7][ 400/1567] lr: 6.6802e-02 eta: 0:48:33 time: 0.1856 data_time: 0.0065 memory: 2656 loss: 0.2940 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2940 2022/12/30 13:36:48 - mmengine - INFO - Epoch(train) [7][ 500/1567] lr: 6.6210e-02 eta: 0:48:12 time: 0.1879 data_time: 0.0069 memory: 2656 loss: 0.3098 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3098 2022/12/30 13:37:06 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:37:06 - mmengine - INFO - Epoch(train) [7][ 600/1567] lr: 6.5616e-02 eta: 0:47:52 time: 0.1815 data_time: 0.0066 memory: 2656 loss: 0.3241 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3241 2022/12/30 13:37:25 - mmengine - INFO - Epoch(train) [7][ 700/1567] lr: 6.5020e-02 eta: 0:47:32 time: 0.1767 data_time: 0.0065 memory: 2656 loss: 0.3108 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3108 2022/12/30 13:37:44 - mmengine - INFO - Epoch(train) [7][ 800/1567] lr: 6.4421e-02 eta: 0:47:13 time: 0.2027 data_time: 0.0071 memory: 2656 loss: 0.3361 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3361 2022/12/30 13:38:02 - mmengine - INFO - Epoch(train) [7][ 900/1567] lr: 6.3820e-02 eta: 0:46:54 time: 0.1940 data_time: 0.0071 memory: 2656 loss: 0.2569 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2569 2022/12/30 13:38:21 - mmengine - INFO - Epoch(train) [7][1000/1567] lr: 6.3217e-02 eta: 0:46:33 time: 0.1825 data_time: 0.0067 memory: 2656 loss: 0.3172 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3172 2022/12/30 13:38:39 - mmengine - INFO - Epoch(train) [7][1100/1567] lr: 6.2612e-02 eta: 0:46:13 time: 0.1810 data_time: 0.0065 memory: 2656 loss: 0.3162 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3162 2022/12/30 13:38:58 - mmengine - INFO - Epoch(train) [7][1200/1567] lr: 6.2005e-02 eta: 0:45:53 time: 0.1709 data_time: 0.0068 memory: 2656 loss: 0.2727 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2727 2022/12/30 13:39:17 - mmengine - INFO - Epoch(train) [7][1300/1567] lr: 6.1396e-02 eta: 0:45:35 time: 0.1960 data_time: 0.0072 memory: 2656 loss: 0.2903 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2903 2022/12/30 13:39:35 - mmengine - INFO - Epoch(train) [7][1400/1567] lr: 6.0785e-02 eta: 0:45:15 time: 0.1762 data_time: 0.0071 memory: 2656 loss: 0.4340 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4340 2022/12/30 13:39:54 - mmengine - INFO - Epoch(train) [7][1500/1567] lr: 6.0172e-02 eta: 0:44:56 time: 0.1957 data_time: 0.0067 memory: 2656 loss: 0.3027 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.3027 
2022/12/30 13:40:06 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:40:06 - mmengine - INFO - Epoch(train) [7][1567/1567] lr: 5.9761e-02 eta: 0:44:42 time: 0.1772 data_time: 0.0064 memory: 2656 loss: 0.4421 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.4421 2022/12/30 13:40:06 - mmengine - INFO - Saving checkpoint at 7 epochs 2022/12/30 13:40:11 - mmengine - INFO - Epoch(val) [7][100/129] eta: 0:00:01 time: 0.0416 data_time: 0.0062 memory: 378 2022/12/30 13:40:12 - mmengine - INFO - Epoch(val) [7][129/129] acc/top1: 0.6873 acc/top5: 0.8923 acc/mean1: 0.6871 2022/12/30 13:40:18 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:40:31 - mmengine - INFO - Epoch(train) [8][ 100/1567] lr: 5.9145e-02 eta: 0:44:23 time: 0.1989 data_time: 0.0070 memory: 2656 loss: 0.3058 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3058 2022/12/30 13:40:50 - mmengine - INFO - Epoch(train) [8][ 200/1567] lr: 5.8529e-02 eta: 0:44:03 time: 0.1880 data_time: 0.0068 memory: 2656 loss: 0.2977 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2977 2022/12/30 13:41:08 - mmengine - INFO - Epoch(train) [8][ 300/1567] lr: 5.7911e-02 eta: 0:43:44 time: 0.1951 data_time: 0.0066 memory: 2656 loss: 0.3087 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3087 2022/12/30 13:41:27 - mmengine - INFO - Epoch(train) [8][ 400/1567] lr: 5.7292e-02 eta: 0:43:25 time: 0.1838 data_time: 0.0066 memory: 2656 loss: 0.2179 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2179 2022/12/30 13:41:46 - mmengine - INFO - Epoch(train) [8][ 500/1567] lr: 5.6671e-02 eta: 0:43:05 time: 0.1836 data_time: 0.0065 memory: 2656 loss: 0.2585 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2585 2022/12/30 13:42:05 - mmengine - INFO - Epoch(train) [8][ 600/1567] lr: 5.6050e-02 eta: 0:42:46 time: 0.2001 data_time: 0.0069 memory: 2656 loss: 0.2880 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.2880 2022/12/30 13:42:23 - mmengine - INFO - Epoch(train) [8][ 700/1567] lr: 5.5427e-02 eta: 0:42:27 time: 0.1831 data_time: 0.0065 memory: 2656 loss: 0.3256 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.3256 2022/12/30 13:42:42 - mmengine - INFO - Epoch(train) [8][ 800/1567] lr: 5.4804e-02 eta: 0:42:07 time: 0.1913 data_time: 0.0065 memory: 2656 loss: 0.2380 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2380 2022/12/30 13:43:01 - mmengine - INFO - Epoch(train) [8][ 900/1567] lr: 5.4180e-02 eta: 0:41:48 time: 0.1873 data_time: 0.0069 memory: 2656 loss: 0.2739 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.2739 2022/12/30 13:43:19 - mmengine - INFO - Epoch(train) [8][1000/1567] lr: 5.3556e-02 eta: 0:41:28 time: 0.1857 data_time: 0.0068 memory: 2656 loss: 0.3247 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3247 2022/12/30 13:43:25 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:43:38 - mmengine - INFO - Epoch(train) [8][1100/1567] lr: 5.2930e-02 eta: 0:41:09 time: 0.1766 data_time: 0.0068 memory: 2656 loss: 0.2360 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.2360 2022/12/30 13:43:56 - mmengine - INFO - Epoch(train) [8][1200/1567] lr: 5.2305e-02 eta: 0:40:49 time: 0.1866 data_time: 0.0066 memory: 2656 loss: 0.2383 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2383 2022/12/30 13:44:15 - mmengine - INFO - Epoch(train) [8][1300/1567] lr: 5.1679e-02 eta: 0:40:30 time: 0.1880 data_time: 0.0065 memory: 2656 loss: 0.2784 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2784 2022/12/30 13:44:34 - mmengine - INFO 
- Epoch(train) [8][1400/1567] lr: 5.1052e-02 eta: 0:40:11 time: 0.1876 data_time: 0.0066 memory: 2656 loss: 0.2710 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2710 2022/12/30 13:44:52 - mmengine - INFO - Epoch(train) [8][1500/1567] lr: 5.0426e-02 eta: 0:39:51 time: 0.1835 data_time: 0.0066 memory: 2656 loss: 0.2681 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2681 2022/12/30 13:45:04 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:45:04 - mmengine - INFO - Epoch(train) [8][1567/1567] lr: 5.0006e-02 eta: 0:39:38 time: 0.1830 data_time: 0.0063 memory: 2656 loss: 0.3737 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3737 2022/12/30 13:45:04 - mmengine - INFO - Saving checkpoint at 8 epochs 2022/12/30 13:45:09 - mmengine - INFO - Epoch(val) [8][100/129] eta: 0:00:01 time: 0.0432 data_time: 0.0093 memory: 378 2022/12/30 13:45:10 - mmengine - INFO - Epoch(val) [8][129/129] acc/top1: 0.6219 acc/top5: 0.8435 acc/mean1: 0.6216 2022/12/30 13:45:30 - mmengine - INFO - Epoch(train) [9][ 100/1567] lr: 4.9380e-02 eta: 0:39:19 time: 0.1765 data_time: 0.0066 memory: 2656 loss: 0.2062 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2062 2022/12/30 13:45:48 - mmengine - INFO - Epoch(train) [9][ 200/1567] lr: 4.8753e-02 eta: 0:38:59 time: 0.1808 data_time: 0.0066 memory: 2656 loss: 0.2472 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2472 2022/12/30 13:46:07 - mmengine - INFO - Epoch(train) [9][ 300/1567] lr: 4.8127e-02 eta: 0:38:41 time: 0.1802 data_time: 0.0071 memory: 2656 loss: 0.2340 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2340 2022/12/30 13:46:25 - mmengine - INFO - Epoch(train) [9][ 400/1567] lr: 4.7501e-02 eta: 0:38:21 time: 0.1754 data_time: 0.0065 memory: 2656 loss: 0.1886 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1886 2022/12/30 13:46:37 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:46:44 - mmengine - INFO - Epoch(train) [9][ 500/1567] lr: 4.6876e-02 eta: 0:38:02 time: 0.1953 data_time: 0.0066 memory: 2656 loss: 0.2606 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2606 2022/12/30 13:47:03 - mmengine - INFO - Epoch(train) [9][ 600/1567] lr: 4.6251e-02 eta: 0:37:43 time: 0.1848 data_time: 0.0066 memory: 2656 loss: 0.2622 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2622 2022/12/30 13:47:21 - mmengine - INFO - Epoch(train) [9][ 700/1567] lr: 4.5626e-02 eta: 0:37:23 time: 0.1774 data_time: 0.0065 memory: 2656 loss: 0.2147 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2147 2022/12/30 13:47:40 - mmengine - INFO - Epoch(train) [9][ 800/1567] lr: 4.5003e-02 eta: 0:37:04 time: 0.1844 data_time: 0.0073 memory: 2656 loss: 0.2366 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2366 2022/12/30 13:47:58 - mmengine - INFO - Epoch(train) [9][ 900/1567] lr: 4.4380e-02 eta: 0:36:45 time: 0.1803 data_time: 0.0077 memory: 2656 loss: 0.2845 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2845 2022/12/30 13:48:17 - mmengine - INFO - Epoch(train) [9][1000/1567] lr: 4.3757e-02 eta: 0:36:25 time: 0.1893 data_time: 0.0065 memory: 2656 loss: 0.1960 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1960 2022/12/30 13:48:35 - mmengine - INFO - Epoch(train) [9][1100/1567] lr: 4.3136e-02 eta: 0:36:06 time: 0.1793 data_time: 0.0067 memory: 2656 loss: 0.1928 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1928 2022/12/30 13:48:53 - mmengine - INFO - Epoch(train) [9][1200/1567] lr: 4.2516e-02 eta: 0:35:46 time: 0.1900 data_time: 0.0066 memory: 2656 loss: 0.1782 top1_acc: 1.0000 top5_acc: 1.0000 
loss_cls: 0.1782 2022/12/30 13:49:11 - mmengine - INFO - Epoch(train) [9][1300/1567] lr: 4.1897e-02 eta: 0:35:27 time: 0.1837 data_time: 0.0065 memory: 2656 loss: 0.1523 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1523 2022/12/30 13:49:31 - mmengine - INFO - Epoch(train) [9][1400/1567] lr: 4.1280e-02 eta: 0:35:08 time: 0.1902 data_time: 0.0070 memory: 2656 loss: 0.1823 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.1823 2022/12/30 13:49:42 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:49:49 - mmengine - INFO - Epoch(train) [9][1500/1567] lr: 4.0664e-02 eta: 0:34:49 time: 0.1835 data_time: 0.0066 memory: 2656 loss: 0.1941 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1941 2022/12/30 13:50:01 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:50:01 - mmengine - INFO - Epoch(train) [9][1567/1567] lr: 4.0252e-02 eta: 0:34:36 time: 0.1781 data_time: 0.0063 memory: 2656 loss: 0.2803 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.2803 2022/12/30 13:50:01 - mmengine - INFO - Saving checkpoint at 9 epochs 2022/12/30 13:50:06 - mmengine - INFO - Epoch(val) [9][100/129] eta: 0:00:01 time: 0.0420 data_time: 0.0061 memory: 378 2022/12/30 13:50:08 - mmengine - INFO - Epoch(val) [9][129/129] acc/top1: 0.8311 acc/top5: 0.9706 acc/mean1: 0.8310 2022/12/30 13:50:08 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_5.pth is removed 2022/12/30 13:50:08 - mmengine - INFO - The best checkpoint with 0.8311 acc/top1 at 9 epoch is saved to best_acc/top1_epoch_9.pth. 2022/12/30 13:50:27 - mmengine - INFO - Epoch(train) [10][ 100/1567] lr: 3.9638e-02 eta: 0:34:17 time: 0.1848 data_time: 0.0066 memory: 2656 loss: 0.1464 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1464 2022/12/30 13:50:45 - mmengine - INFO - Epoch(train) [10][ 200/1567] lr: 3.9026e-02 eta: 0:33:57 time: 0.1779 data_time: 0.0068 memory: 2656 loss: 0.1974 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1974 2022/12/30 13:51:04 - mmengine - INFO - Epoch(train) [10][ 300/1567] lr: 3.8415e-02 eta: 0:33:38 time: 0.1812 data_time: 0.0069 memory: 2656 loss: 0.1452 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1452 2022/12/30 13:51:23 - mmengine - INFO - Epoch(train) [10][ 400/1567] lr: 3.7807e-02 eta: 0:33:19 time: 0.1908 data_time: 0.0068 memory: 2656 loss: 0.1588 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1588 2022/12/30 13:51:42 - mmengine - INFO - Epoch(train) [10][ 500/1567] lr: 3.7200e-02 eta: 0:33:00 time: 0.1859 data_time: 0.0067 memory: 2656 loss: 0.1747 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1747 2022/12/30 13:52:01 - mmengine - INFO - Epoch(train) [10][ 600/1567] lr: 3.6596e-02 eta: 0:32:41 time: 0.1892 data_time: 0.0072 memory: 2656 loss: 0.2077 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2077 2022/12/30 13:52:20 - mmengine - INFO - Epoch(train) [10][ 700/1567] lr: 3.5993e-02 eta: 0:32:23 time: 0.1914 data_time: 0.0070 memory: 2656 loss: 0.2042 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2042 2022/12/30 13:52:39 - mmengine - INFO - Epoch(train) [10][ 800/1567] lr: 3.5393e-02 eta: 0:32:04 time: 0.1908 data_time: 0.0071 memory: 2656 loss: 0.1429 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1429 2022/12/30 13:52:58 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:52:58 - mmengine - INFO - Epoch(train) [10][ 900/1567] lr: 
3.4795e-02 eta: 0:31:45 time: 0.1925 data_time: 0.0070 memory: 2656 loss: 0.1655 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1655 2022/12/30 13:53:17 - mmengine - INFO - Epoch(train) [10][1000/1567] lr: 3.4199e-02 eta: 0:31:26 time: 0.1894 data_time: 0.0067 memory: 2656 loss: 0.1563 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1563 2022/12/30 13:53:36 - mmengine - INFO - Epoch(train) [10][1100/1567] lr: 3.3606e-02 eta: 0:31:07 time: 0.1867 data_time: 0.0068 memory: 2656 loss: 0.1163 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1163 2022/12/30 13:53:55 - mmengine - INFO - Epoch(train) [10][1200/1567] lr: 3.3015e-02 eta: 0:30:48 time: 0.1972 data_time: 0.0068 memory: 2656 loss: 0.2025 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2025 2022/12/30 13:54:14 - mmengine - INFO - Epoch(train) [10][1300/1567] lr: 3.2428e-02 eta: 0:30:30 time: 0.1914 data_time: 0.0068 memory: 2656 loss: 0.1133 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1133 2022/12/30 13:54:33 - mmengine - INFO - Epoch(train) [10][1400/1567] lr: 3.1842e-02 eta: 0:30:11 time: 0.1841 data_time: 0.0071 memory: 2656 loss: 0.1261 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1261 2022/12/30 13:54:52 - mmengine - INFO - Epoch(train) [10][1500/1567] lr: 3.1260e-02 eta: 0:29:52 time: 0.1854 data_time: 0.0070 memory: 2656 loss: 0.1626 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1626 2022/12/30 13:55:04 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:55:04 - mmengine - INFO - Epoch(train) [10][1567/1567] lr: 3.0872e-02 eta: 0:29:39 time: 0.1800 data_time: 0.0071 memory: 2656 loss: 0.3652 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3652 2022/12/30 13:55:04 - mmengine - INFO - Saving checkpoint at 10 epochs 2022/12/30 13:55:09 - mmengine - INFO - Epoch(val) [10][100/129] eta: 0:00:01 time: 0.0409 data_time: 0.0064 memory: 378 2022/12/30 13:55:11 - mmengine - INFO - Epoch(val) [10][129/129] acc/top1: 0.8370 acc/top5: 0.9676 acc/mean1: 0.8369 2022/12/30 13:55:11 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_9.pth is removed 2022/12/30 13:55:11 - mmengine - INFO - The best checkpoint with 0.8370 acc/top1 at 10 epoch is saved to best_acc/top1_epoch_10.pth. 
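With `save_best='auto'` the `CheckpointHook` keeps a single best checkpoint: whenever `acc/top1` improves (0.5977, 0.7324, 0.7605, 0.8311, 0.8370 so far in this run), the previous `best_acc/top1_epoch_*.pth` is removed and a new one is written, as the messages above show. A minimal sketch of that keep-only-the-best bookkeeping in plain PyTorch (hypothetical class and paths, not the hook's actual implementation):

```python
import os
import torch

class BestCheckpointKeeper:
    """Keep one 'best' checkpoint, deleting the previous best on improvement."""

    def __init__(self, work_dir: str):
        self.work_dir = work_dir
        self.best_score = float('-inf')
        self.best_path = None

    def update(self, model: torch.nn.Module, epoch: int, top1: float) -> None:
        if top1 <= self.best_score:
            return
        if self.best_path is not None and os.path.exists(self.best_path):
            os.remove(self.best_path)                   # drop the old best
        self.best_score = top1
        self.best_path = os.path.join(
            self.work_dir, f'best_acc_top1_epoch_{epoch}.pth')
        torch.save(model.state_dict(), self.best_path)  # save the new best
```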
2022/12/30 13:55:30 - mmengine - INFO - Epoch(train) [11][ 100/1567] lr: 3.0294e-02 eta: 0:29:20 time: 0.1915 data_time: 0.0070 memory: 2656 loss: 0.1246 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1246 2022/12/30 13:55:49 - mmengine - INFO - Epoch(train) [11][ 200/1567] lr: 2.9720e-02 eta: 0:29:01 time: 0.1910 data_time: 0.0074 memory: 2656 loss: 0.1365 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1365 2022/12/30 13:56:08 - mmengine - INFO - Epoch(train) [11][ 300/1567] lr: 2.9149e-02 eta: 0:28:42 time: 0.1967 data_time: 0.0068 memory: 2656 loss: 0.1503 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1503 2022/12/30 13:56:14 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:56:27 - mmengine - INFO - Epoch(train) [11][ 400/1567] lr: 2.8581e-02 eta: 0:28:23 time: 0.1879 data_time: 0.0067 memory: 2656 loss: 0.1456 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1456 2022/12/30 13:56:47 - mmengine - INFO - Epoch(train) [11][ 500/1567] lr: 2.8017e-02 eta: 0:28:05 time: 0.1930 data_time: 0.0068 memory: 2656 loss: 0.0906 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0906 2022/12/30 13:57:06 - mmengine - INFO - Epoch(train) [11][ 600/1567] lr: 2.7456e-02 eta: 0:27:46 time: 0.1974 data_time: 0.0067 memory: 2656 loss: 0.1428 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1428 2022/12/30 13:57:25 - mmengine - INFO - Epoch(train) [11][ 700/1567] lr: 2.6898e-02 eta: 0:27:27 time: 0.1869 data_time: 0.0069 memory: 2656 loss: 0.0932 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0932 2022/12/30 13:57:45 - mmengine - INFO - Epoch(train) [11][ 800/1567] lr: 2.6345e-02 eta: 0:27:09 time: 0.1971 data_time: 0.0069 memory: 2656 loss: 0.1083 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1083 2022/12/30 13:58:04 - mmengine - INFO - Epoch(train) [11][ 900/1567] lr: 2.5794e-02 eta: 0:26:50 time: 0.1965 data_time: 0.0072 memory: 2656 loss: 0.1179 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1179 2022/12/30 13:58:24 - mmengine - INFO - Epoch(train) [11][1000/1567] lr: 2.5248e-02 eta: 0:26:31 time: 0.1916 data_time: 0.0071 memory: 2656 loss: 0.0828 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0828 2022/12/30 13:58:43 - mmengine - INFO - Epoch(train) [11][1100/1567] lr: 2.4706e-02 eta: 0:26:13 time: 0.1899 data_time: 0.0070 memory: 2656 loss: 0.1021 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1021 2022/12/30 13:59:03 - mmengine - INFO - Epoch(train) [11][1200/1567] lr: 2.4167e-02 eta: 0:25:54 time: 0.1964 data_time: 0.0073 memory: 2656 loss: 0.1317 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1317 2022/12/30 13:59:23 - mmengine - INFO - Epoch(train) [11][1300/1567] lr: 2.3633e-02 eta: 0:25:35 time: 0.1951 data_time: 0.0068 memory: 2656 loss: 0.0998 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0998 2022/12/30 13:59:29 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 13:59:42 - mmengine - INFO - Epoch(train) [11][1400/1567] lr: 2.3103e-02 eta: 0:25:17 time: 0.1922 data_time: 0.0069 memory: 2656 loss: 0.0989 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0989 2022/12/30 14:00:01 - mmengine - INFO - Epoch(train) [11][1500/1567] lr: 2.2577e-02 eta: 0:24:58 time: 0.1944 data_time: 0.0069 memory: 2656 loss: 0.0828 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0828 2022/12/30 14:00:14 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356 2022/12/30 14:00:14 - mmengine - INFO - Epoch(train) [11][1567/1567] lr: 2.2227e-02 eta: 0:24:45 time: 0.1819 data_time: 
0.0069 memory: 2656 loss: 0.2383 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.2383
2022/12/30 14:00:14 - mmengine - INFO - Saving checkpoint at 11 epochs
2022/12/30 14:00:19 - mmengine - INFO - Epoch(val) [11][100/129] eta: 0:00:01 time: 0.0396 data_time: 0.0083 memory: 378
2022/12/30 14:00:20 - mmengine - INFO - Epoch(val) [11][129/129] acc/top1: 0.8696 acc/top5: 0.9768 acc/mean1: 0.8695
2022/12/30 14:00:20 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_10.pth is removed
2022/12/30 14:00:20 - mmengine - INFO - The best checkpoint with 0.8696 acc/top1 at 11 epoch is saved to best_acc/top1_epoch_11.pth.
2022/12/30 14:00:40 - mmengine - INFO - Epoch(train) [12][ 100/1567] lr: 2.1708e-02 eta: 0:24:26 time: 0.1981 data_time: 0.0070 memory: 2656 loss: 0.0515 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0515
2022/12/30 14:00:59 - mmengine - INFO - Epoch(train) [12][ 200/1567] lr: 2.1194e-02 eta: 0:24:08 time: 0.1879 data_time: 0.0069 memory: 2656 loss: 0.0885 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0885
2022/12/30 14:01:19 - mmengine - INFO - Epoch(train) [12][ 300/1567] lr: 2.0684e-02 eta: 0:23:49 time: 0.1910 data_time: 0.0068 memory: 2656 loss: 0.0571 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0571
2022/12/30 14:01:38 - mmengine - INFO - Epoch(train) [12][ 400/1567] lr: 2.0179e-02 eta: 0:23:30 time: 0.1931 data_time: 0.0067 memory: 2656 loss: 0.0628 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0628
2022/12/30 14:01:57 - mmengine - INFO - Epoch(train) [12][ 500/1567] lr: 1.9678e-02 eta: 0:23:11 time: 0.1913 data_time: 0.0069 memory: 2656 loss: 0.0437 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0437
2022/12/30 14:02:17 - mmengine - INFO - Epoch(train) [12][ 600/1567] lr: 1.9182e-02 eta: 0:22:52 time: 0.1941 data_time: 0.0072 memory: 2656 loss: 0.0434 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0434
2022/12/30 14:02:36 - mmengine - INFO - Epoch(train) [12][ 700/1567] lr: 1.8691e-02 eta: 0:22:33 time: 0.1919 data_time: 0.0070 memory: 2656 loss: 0.0614 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0614
2022/12/30 14:02:48 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:02:55 - mmengine - INFO - Epoch(train) [12][ 800/1567] lr: 1.8205e-02 eta: 0:22:15 time: 0.1840 data_time: 0.0069 memory: 2656 loss: 0.0776 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0776
2022/12/30 14:03:14 - mmengine - INFO - Epoch(train) [12][ 900/1567] lr: 1.7724e-02 eta: 0:21:56 time: 0.1897 data_time: 0.0068 memory: 2656 loss: 0.0555 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0555
2022/12/30 14:03:33 - mmengine - INFO - Epoch(train) [12][1000/1567] lr: 1.7248e-02 eta: 0:21:37 time: 0.1855 data_time: 0.0071 memory: 2656 loss: 0.0444 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0444
2022/12/30 14:03:52 - mmengine - INFO - Epoch(train) [12][1100/1567] lr: 1.6778e-02 eta: 0:21:18 time: 0.1935 data_time: 0.0067 memory: 2656 loss: 0.0665 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0665
2022/12/30 14:04:11 - mmengine - INFO - Epoch(train) [12][1200/1567] lr: 1.6312e-02 eta: 0:20:59 time: 0.1900 data_time: 0.0067 memory: 2656 loss: 0.0810 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0810
2022/12/30 14:04:30 - mmengine - INFO - Epoch(train) [12][1300/1567] lr: 1.5852e-02 eta: 0:20:40 time: 0.1973 data_time: 0.0066 memory: 2656 loss: 0.0508 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0508
2022/12/30 14:04:50 - mmengine - INFO - Epoch(train) [12][1400/1567] lr: 1.5397e-02 eta: 0:20:21 time: 0.1833 data_time: 0.0067 memory: 2656 loss: 0.0732 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0732
2022/12/30 14:05:08 - mmengine - INFO - Epoch(train) [12][1500/1567] lr: 1.4947e-02 eta: 0:20:02 time: 0.1894 data_time: 0.0067 memory: 2656 loss: 0.0702 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0702
2022/12/30 14:05:21 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:05:21 - mmengine - INFO - Epoch(train) [12][1567/1567] lr: 1.4649e-02 eta: 0:19:49 time: 0.1839 data_time: 0.0065 memory: 2656 loss: 0.2321 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2321
2022/12/30 14:05:21 - mmengine - INFO - Saving checkpoint at 12 epochs
2022/12/30 14:05:26 - mmengine - INFO - Epoch(val) [12][100/129] eta: 0:00:01 time: 0.0411 data_time: 0.0062 memory: 378
2022/12/30 14:05:27 - mmengine - INFO - Epoch(val) [12][129/129] acc/top1: 0.8706 acc/top5: 0.9762 acc/mean1: 0.8705
2022/12/30 14:05:27 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_11.pth is removed
2022/12/30 14:05:28 - mmengine - INFO - The best checkpoint with 0.8706 acc/top1 at 12 epoch is saved to best_acc/top1_epoch_12.pth.
2022/12/30 14:05:47 - mmengine - INFO - Epoch(train) [13][ 100/1567] lr: 1.4209e-02 eta: 0:19:30 time: 0.1834 data_time: 0.0067 memory: 2656 loss: 0.0510 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0510
2022/12/30 14:06:05 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:06:06 - mmengine - INFO - Epoch(train) [13][ 200/1567] lr: 1.3774e-02 eta: 0:19:11 time: 0.1828 data_time: 0.0066 memory: 2656 loss: 0.0242 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0242
2022/12/30 14:06:25 - mmengine - INFO - Epoch(train) [13][ 300/1567] lr: 1.3345e-02 eta: 0:18:52 time: 0.1872 data_time: 0.0068 memory: 2656 loss: 0.0223 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0223
2022/12/30 14:06:44 - mmengine - INFO - Epoch(train) [13][ 400/1567] lr: 1.2922e-02 eta: 0:18:33 time: 0.1786 data_time: 0.0066 memory: 2656 loss: 0.0412 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0412
2022/12/30 14:07:02 - mmengine - INFO - Epoch(train) [13][ 500/1567] lr: 1.2505e-02 eta: 0:18:14 time: 0.1751 data_time: 0.0067 memory: 2656 loss: 0.0245 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0245
2022/12/30 14:07:21 - mmengine - INFO - Epoch(train) [13][ 600/1567] lr: 1.2093e-02 eta: 0:17:55 time: 0.1822 data_time: 0.0066 memory: 2656 loss: 0.0417 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0417
2022/12/30 14:07:40 - mmengine - INFO - Epoch(train) [13][ 700/1567] lr: 1.1687e-02 eta: 0:17:36 time: 0.1908 data_time: 0.0066 memory: 2656 loss: 0.0216 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0216
2022/12/30 14:07:59 - mmengine - INFO - Epoch(train) [13][ 800/1567] lr: 1.1288e-02 eta: 0:17:17 time: 0.1896 data_time: 0.0066 memory: 2656 loss: 0.0439 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0439
2022/12/30 14:08:18 - mmengine - INFO - Epoch(train) [13][ 900/1567] lr: 1.0894e-02 eta: 0:16:58 time: 0.1908 data_time: 0.0071 memory: 2656 loss: 0.0212 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0212
2022/12/30 14:08:37 - mmengine - INFO - Epoch(train) [13][1000/1567] lr: 1.0507e-02 eta: 0:16:39 time: 0.1926 data_time: 0.0069 memory: 2656 loss: 0.0250 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0250
2022/12/30 14:08:55 - mmengine - INFO - Epoch(train) [13][1100/1567] lr: 1.0126e-02 eta: 0:16:20 time: 0.1809 data_time: 0.0067 memory: 2656 loss: 0.0169 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0169
2022/12/30 14:09:13 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:09:14 - mmengine - INFO - Epoch(train) [13][1200/1567] lr: 9.7512e-03 eta: 0:16:01 time: 0.1993 data_time: 0.0069 memory: 2656 loss: 0.0178 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0178
2022/12/30 14:09:33 - mmengine - INFO - Epoch(train) [13][1300/1567] lr: 9.3826e-03 eta: 0:15:42 time: 0.1843 data_time: 0.0067 memory: 2656 loss: 0.0282 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0282
2022/12/30 14:09:51 - mmengine - INFO - Epoch(train) [13][1400/1567] lr: 9.0204e-03 eta: 0:15:23 time: 0.1726 data_time: 0.0066 memory: 2656 loss: 0.0359 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0359
2022/12/30 14:10:09 - mmengine - INFO - Epoch(train) [13][1500/1567] lr: 8.6647e-03 eta: 0:15:04 time: 0.1742 data_time: 0.0067 memory: 2656 loss: 0.0164 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0164
2022/12/30 14:10:22 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:10:22 - mmengine - INFO - Epoch(train) [13][1567/1567] lr: 8.4300e-03 eta: 0:14:51 time: 0.1873 data_time: 0.0075 memory: 2656 loss: 0.1893 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.1893
2022/12/30 14:10:22 - mmengine - INFO - Saving checkpoint at 13 epochs
2022/12/30 14:10:27 - mmengine - INFO - Epoch(val) [13][100/129] eta: 0:00:01 time: 0.0412 data_time: 0.0064 memory: 378
2022/12/30 14:10:28 - mmengine - INFO - Epoch(val) [13][129/129] acc/top1: 0.8773 acc/top5: 0.9772 acc/mean1: 0.8772
2022/12/30 14:10:28 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_12.pth is removed
2022/12/30 14:10:28 - mmengine - INFO - The best checkpoint with 0.8773 acc/top1 at 13 epoch is saved to best_acc/top1_epoch_13.pth.
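
For reference, the lr values in the Epoch(train) lines above trace a cosine-annealing decay towards zero over the 16 x 1567 training iterations. The short sketch below approximately reproduces the logged values; the base learning rate of 0.1, the helper name cosine_lr, and the exact global-iteration offset are illustrative assumptions inferred from the logged numbers, so computed and logged values agree only to within a fraction of a percent.

import math

# Assumed setup, inferred from the logged lr values: base lr 0.1 annealed to 0
# with an iteration-based cosine schedule over 16 epochs x 1567 iterations per epoch.
BASE_LR = 0.1
ETA_MIN = 0.0
ITERS_PER_EPOCH = 1567
TOTAL_ITERS = 16 * ITERS_PER_EPOCH


def cosine_lr(epoch: int, it: int) -> float:
    """Learning rate at 1-based epoch `epoch` and iteration `it` within that epoch."""
    t = (epoch - 1) * ITERS_PER_EPOCH + it  # global iteration (approximate offset)
    return ETA_MIN + 0.5 * (BASE_LR - ETA_MIN) * (1.0 + math.cos(math.pi * t / TOTAL_ITERS))


print(f"{cosine_lr(12, 100):.4e}")   # ~2.17e-02, close to the logged "lr: 2.1708e-02"
print(f"{cosine_lr(16, 1500):.4e}")  # ~1.8e-06, close to the logged "lr: 1.8150e-06"
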
2022/12/30 14:10:47 - mmengine - INFO - Epoch(train) [14][ 100/1567] lr: 8.0851e-03 eta: 0:14:32 time: 0.1860 data_time: 0.0066 memory: 2656 loss: 0.0132 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0132
2022/12/30 14:11:07 - mmengine - INFO - Epoch(train) [14][ 200/1567] lr: 7.7469e-03 eta: 0:14:13 time: 0.1897 data_time: 0.0066 memory: 2656 loss: 0.0100 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0100
2022/12/30 14:11:25 - mmengine - INFO - Epoch(train) [14][ 300/1567] lr: 7.4152e-03 eta: 0:13:54 time: 0.1753 data_time: 0.0066 memory: 2656 loss: 0.0229 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0229
2022/12/30 14:11:44 - mmengine - INFO - Epoch(train) [14][ 400/1567] lr: 7.0902e-03 eta: 0:13:35 time: 0.1869 data_time: 0.0066 memory: 2656 loss: 0.0136 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0136
2022/12/30 14:12:02 - mmengine - INFO - Epoch(train) [14][ 500/1567] lr: 6.7720e-03 eta: 0:13:16 time: 0.1849 data_time: 0.0066 memory: 2656 loss: 0.0119 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0119
2022/12/30 14:12:21 - mmengine - INFO - Epoch(train) [14][ 600/1567] lr: 6.4606e-03 eta: 0:12:57 time: 0.1890 data_time: 0.0071 memory: 2656 loss: 0.0136 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0136
2022/12/30 14:12:26 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:12:40 - mmengine - INFO - Epoch(train) [14][ 700/1567] lr: 6.1560e-03 eta: 0:12:38 time: 0.1857 data_time: 0.0065 memory: 2656 loss: 0.0180 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0180
2022/12/30 14:12:58 - mmengine - INFO - Epoch(train) [14][ 800/1567] lr: 5.8582e-03 eta: 0:12:19 time: 0.1896 data_time: 0.0067 memory: 2656 loss: 0.0148 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0148
2022/12/30 14:13:18 - mmengine - INFO - Epoch(train) [14][ 900/1567] lr: 5.5675e-03 eta: 0:12:00 time: 0.2021 data_time: 0.0084 memory: 2656 loss: 0.0163 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0163
2022/12/30 14:13:37 - mmengine - INFO - Epoch(train) [14][1000/1567] lr: 5.2836e-03 eta: 0:11:41 time: 0.1872 data_time: 0.0066 memory: 2656 loss: 0.0097 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0097
2022/12/30 14:13:56 - mmengine - INFO - Epoch(train) [14][1100/1567] lr: 5.0068e-03 eta: 0:11:22 time: 0.1958 data_time: 0.0067 memory: 2656 loss: 0.0115 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0115
2022/12/30 14:14:15 - mmengine - INFO - Epoch(train) [14][1200/1567] lr: 4.7371e-03 eta: 0:11:03 time: 0.1871 data_time: 0.0066 memory: 2656 loss: 0.0117 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0117
2022/12/30 14:14:33 - mmengine - INFO - Epoch(train) [14][1300/1567] lr: 4.4745e-03 eta: 0:10:44 time: 0.1984 data_time: 0.0076 memory: 2656 loss: 0.0098 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0098
2022/12/30 14:14:52 - mmengine - INFO - Epoch(train) [14][1400/1567] lr: 4.2190e-03 eta: 0:10:25 time: 0.1927 data_time: 0.0066 memory: 2656 loss: 0.0135 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0135
2022/12/30 14:15:12 - mmengine - INFO - Epoch(train) [14][1500/1567] lr: 3.9707e-03 eta: 0:10:06 time: 0.2014 data_time: 0.0070 memory: 2656 loss: 0.0130 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0130
2022/12/30 14:15:24 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:15:24 - mmengine - INFO - Epoch(train) [14][1567/1567] lr: 3.8084e-03 eta: 0:09:54 time: 0.1842 data_time: 0.0064 memory: 2656 loss: 0.1947 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.1947
2022/12/30 14:15:24 - mmengine - INFO - Saving checkpoint at 14 epochs
2022/12/30 14:15:29 - mmengine - INFO - Epoch(val) [14][100/129] eta: 0:00:01 time: 0.0415 data_time: 0.0067 memory: 378
2022/12/30 14:15:30 - mmengine - INFO - Epoch(val) [14][129/129] acc/top1: 0.8796 acc/top5: 0.9777 acc/mean1: 0.8795
2022/12/30 14:15:30 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_13.pth is removed
2022/12/30 14:15:31 - mmengine - INFO - The best checkpoint with 0.8796 acc/top1 at 14 epoch is saved to best_acc/top1_epoch_14.pth.
2022/12/30 14:15:43 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:15:50 - mmengine - INFO - Epoch(train) [15][ 100/1567] lr: 3.5722e-03 eta: 0:09:35 time: 0.1838 data_time: 0.0071 memory: 2656 loss: 0.0123 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0123
2022/12/30 14:16:08 - mmengine - INFO - Epoch(train) [15][ 200/1567] lr: 3.3433e-03 eta: 0:09:16 time: 0.1814 data_time: 0.0066 memory: 2656 loss: 0.0084 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0084
2022/12/30 14:16:27 - mmengine - INFO - Epoch(train) [15][ 300/1567] lr: 3.1217e-03 eta: 0:08:57 time: 0.1866 data_time: 0.0070 memory: 2656 loss: 0.0117 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0117
2022/12/30 14:16:46 - mmengine - INFO - Epoch(train) [15][ 400/1567] lr: 2.9075e-03 eta: 0:08:38 time: 0.1858 data_time: 0.0067 memory: 2656 loss: 0.0104 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0104
2022/12/30 14:17:05 - mmengine - INFO - Epoch(train) [15][ 500/1567] lr: 2.7007e-03 eta: 0:08:19 time: 0.1782 data_time: 0.0073 memory: 2656 loss: 0.0121 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0121
2022/12/30 14:17:23 - mmengine - INFO - Epoch(train) [15][ 600/1567] lr: 2.5013e-03 eta: 0:08:00 time: 0.1885 data_time: 0.0070 memory: 2656 loss: 0.0122 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0122
2022/12/30 14:17:42 - mmengine - INFO - Epoch(train) [15][ 700/1567] lr: 2.3093e-03 eta: 0:07:41 time: 0.1855 data_time: 0.0068 memory: 2656 loss: 0.0093 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0093
2022/12/30 14:18:01 - mmengine - INFO - Epoch(train) [15][ 800/1567] lr: 2.1249e-03 eta: 0:07:22 time: 0.1893 data_time: 0.0067 memory: 2656 loss: 0.0145 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0145
2022/12/30 14:18:20 - mmengine - INFO - Epoch(train) [15][ 900/1567] lr: 1.9479e-03 eta: 0:07:03 time: 0.1852 data_time: 0.0066 memory: 2656 loss: 0.0107 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0107
2022/12/30 14:18:39 - mmengine - INFO - Epoch(train) [15][1000/1567] lr: 1.7785e-03 eta: 0:06:44 time: 0.1951 data_time: 0.0067 memory: 2656 loss: 0.0126 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0126
2022/12/30 14:18:51 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:18:58 - mmengine - INFO - Epoch(train) [15][1100/1567] lr: 1.6167e-03 eta: 0:06:25 time: 0.1973 data_time: 0.0067 memory: 2656 loss: 0.0121 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0121
2022/12/30 14:19:17 - mmengine - INFO - Epoch(train) [15][1200/1567] lr: 1.4625e-03 eta: 0:06:06 time: 0.1891 data_time: 0.0069 memory: 2656 loss: 0.0093 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0093
2022/12/30 14:19:36 - mmengine - INFO - Epoch(train) [15][1300/1567] lr: 1.3159e-03 eta: 0:05:47 time: 0.1996 data_time: 0.0070 memory: 2656 loss: 0.0092 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0092
2022/12/30 14:19:55 - mmengine - INFO - Epoch(train) [15][1400/1567] lr: 1.1769e-03 eta: 0:05:28 time: 0.1857 data_time: 0.0068 memory: 2656 loss: 0.0104 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0104
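
Each Epoch(val) summary line reports acc/top1, acc/top5 and acc/mean1 (top-1 accuracy averaged over classes). The following is a generic, self-contained sketch of how such numbers can be computed from a score matrix and ground-truth labels; it illustrates the metrics only and is not the evaluator code used to produce this log.

import numpy as np


def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Fraction of samples whose true class is among the k highest-scoring classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))


def mean_class_accuracy(scores: np.ndarray, labels: np.ndarray) -> float:
    """Top-1 accuracy averaged over classes (an acc/mean1-style metric)."""
    pred = scores.argmax(axis=1)
    per_class = [float(np.mean(pred[labels == c] == c)) for c in np.unique(labels)]
    return float(np.mean(per_class))


# Toy example: 4 samples, 3 classes.
scores = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.3, 0.3, 0.4],
                   [0.6, 0.3, 0.1]])
labels = np.array([0, 1, 2, 1])
print(top_k_accuracy(scores, labels, 1))    # 0.75
print(mean_class_accuracy(scores, labels))  # ~0.83
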
2022/12/30 14:20:14 - mmengine - INFO - Epoch(train) [15][1500/1567] lr: 1.0456e-03 eta: 0:05:09 time: 0.1839 data_time: 0.0069 memory: 2656 loss: 0.0117 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0117
2022/12/30 14:20:27 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:20:27 - mmengine - INFO - Epoch(train) [15][1567/1567] lr: 9.6196e-04 eta: 0:04:56 time: 0.1797 data_time: 0.0068 memory: 2656 loss: 0.1615 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1615
2022/12/30 14:20:27 - mmengine - INFO - Saving checkpoint at 15 epochs
2022/12/30 14:20:32 - mmengine - INFO - Epoch(val) [15][100/129] eta: 0:00:01 time: 0.0457 data_time: 0.0101 memory: 378
2022/12/30 14:20:33 - mmengine - INFO - Epoch(val) [15][129/129] acc/top1: 0.8885 acc/top5: 0.9795 acc/mean1: 0.8884
2022/12/30 14:20:33 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/work_dirs/stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_14.pth is removed
2022/12/30 14:20:34 - mmengine - INFO - The best checkpoint with 0.8885 acc/top1 at 15 epoch is saved to best_acc/top1_epoch_15.pth.
2022/12/30 14:20:53 - mmengine - INFO - Epoch(train) [16][ 100/1567] lr: 8.4351e-04 eta: 0:04:38 time: 0.1924 data_time: 0.0066 memory: 2656 loss: 0.0138 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0138
2022/12/30 14:21:12 - mmengine - INFO - Epoch(train) [16][ 200/1567] lr: 7.3277e-04 eta: 0:04:19 time: 0.1950 data_time: 0.0066 memory: 2656 loss: 0.0075 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0075
2022/12/30 14:21:31 - mmengine - INFO - Epoch(train) [16][ 300/1567] lr: 6.2978e-04 eta: 0:04:00 time: 0.1958 data_time: 0.0068 memory: 2656 loss: 0.0094 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0094
2022/12/30 14:21:49 - mmengine - INFO - Epoch(train) [16][ 400/1567] lr: 5.3453e-04 eta: 0:03:41 time: 0.1774 data_time: 0.0065 memory: 2656 loss: 0.0107 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0107
2022/12/30 14:22:07 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:22:08 - mmengine - INFO - Epoch(train) [16][ 500/1567] lr: 4.4705e-04 eta: 0:03:22 time: 0.1826 data_time: 0.0068 memory: 2656 loss: 0.0097 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0097
2022/12/30 14:22:26 - mmengine - INFO - Epoch(train) [16][ 600/1567] lr: 3.6735e-04 eta: 0:03:03 time: 0.1881 data_time: 0.0069 memory: 2656 loss: 0.0131 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0131
2022/12/30 14:22:45 - mmengine - INFO - Epoch(train) [16][ 700/1567] lr: 2.9544e-04 eta: 0:02:44 time: 0.1827 data_time: 0.0072 memory: 2656 loss: 0.0138 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0138
2022/12/30 14:23:04 - mmengine - INFO - Epoch(train) [16][ 800/1567] lr: 2.3134e-04 eta: 0:02:25 time: 0.1942 data_time: 0.0073 memory: 2656 loss: 0.0122 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0122
2022/12/30 14:23:22 - mmengine - INFO - Epoch(train) [16][ 900/1567] lr: 1.7505e-04 eta: 0:02:06 time: 0.1896 data_time: 0.0073 memory: 2656 loss: 0.0071 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0071
2022/12/30 14:23:41 - mmengine - INFO - Epoch(train) [16][1000/1567] lr: 1.2658e-04 eta: 0:01:47 time: 0.1892 data_time: 0.0066 memory: 2656 loss: 0.0091 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0091
2022/12/30 14:24:00 - mmengine - INFO - Epoch(train) [16][1100/1567] lr: 8.5947e-05 eta: 0:01:28 time: 0.1785 data_time: 0.0069 memory: 2656 loss: 0.0134 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0134
2022/12/30 14:24:18 - mmengine - INFO - Epoch(train) [16][1200/1567] lr: 5.3147e-05 eta: 0:01:09 time: 0.1744 data_time: 0.0080 memory: 2656 loss: 0.0134 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0134
2022/12/30 14:24:36 - mmengine - INFO - Epoch(train) [16][1300/1567] lr: 2.8190e-05 eta: 0:00:50 time: 0.1771 data_time: 0.0066 memory: 2656 loss: 0.0097 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0097
2022/12/30 14:24:54 - mmengine - INFO - Epoch(train) [16][1400/1567] lr: 1.1078e-05 eta: 0:00:31 time: 0.1792 data_time: 0.0067 memory: 2656 loss: 0.0090 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0090
2022/12/30 14:25:13 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:25:13 - mmengine - INFO - Epoch(train) [16][1500/1567] lr: 1.8150e-06 eta: 0:00:12 time: 0.1834 data_time: 0.0072 memory: 2656 loss: 0.0125 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0125
2022/12/30 14:25:26 - mmengine - INFO - Exp name: stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20221230_130356
2022/12/30 14:25:26 - mmengine - INFO - Epoch(train) [16][1567/1567] lr: 3.9252e-10 eta: 0:00:00 time: 0.1804 data_time: 0.0064 memory: 2656 loss: 0.1913 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.1913
2022/12/30 14:25:26 - mmengine - INFO - Saving checkpoint at 16 epochs
2022/12/30 14:25:31 - mmengine - INFO - Epoch(val) [16][100/129] eta: 0:00:01 time: 0.0531 data_time: 0.0164 memory: 378
2022/12/30 14:25:32 - mmengine - INFO - Epoch(val) [16][129/129] acc/top1: 0.8863 acc/top5: 0.9786 acc/mean1: 0.8862
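
The per-epoch validation summaries scattered through this log can be collected with a small parser for plotting or comparison across runs. A minimal sketch follows; the log path is a placeholder, and the regular expression only matches the final Epoch(val) [E][129/129] summary lines that carry the accuracy fields.

import re

# Placeholder path; point this at the actual log file in the work_dir.
LOG_PATH = "stgcn++_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d.log"

# Matches e.g. "Epoch(val) [15][129/129] acc/top1: 0.8885 acc/top5: 0.9795 acc/mean1: 0.8884"
PATTERN = re.compile(
    r"Epoch\(val\) \[(\d+)\]\[\d+/\d+\]\s+"
    r"acc/top1: ([\d.]+)\s+acc/top5: ([\d.]+)\s+acc/mean1: ([\d.]+)"
)

with open(LOG_PATH) as f:
    for line in f:
        m = PATTERN.search(line)
        if m:
            epoch, top1, top5, mean1 = m.groups()
            print(f"epoch {epoch}: top1={top1} top5={top5} mean1={mean1}")
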