2023/03/06 22:31:30 - mmengine - INFO -
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 778356497
    GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /mnt/petrelfs/share/cuda-11.3
    NVCC: Cuda compilation tools, release 11.3, V11.3.109
    GCC: gcc (GCC) 5.4.0
    PyTorch: 1.11.0
    PyTorch compiling details: PyTorch built with:
      - GCC 7.3
      - C++ Version: 201402
      - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
      - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
      - OpenMP 201511 (a.k.a. OpenMP 4.5)
      - LAPACK is enabled (usually provided by MKL)
      - NNPACK is enabled
      - CPU capability usage: AVX2
      - CUDA Runtime 11.3
      - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
      - CuDNN 8.2
      - Magma 2.5.2
      - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
    TorchVision: 0.12.0
    OpenCV: 4.6.0
    MMEngine: 0.6.0

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    diff_rank_seed: False
    deterministic: False
    Distributed launcher: slurm
    Distributed training: True
    GPU number: 8
------------------------------------------------------------
2023/03/06 22:31:30 - mmengine - INFO - Config:
default_scope = 'mmaction'
default_hooks = dict(
    runtime_info=dict(type='RuntimeInfoHook', _scope_='mmaction'),
    timer=dict(type='IterTimerHook', _scope_='mmaction'),
    logger=dict(
        type='LoggerHook', interval=100, ignore_last=False,
        _scope_='mmaction'),
    param_scheduler=dict(type='ParamSchedulerHook', _scope_='mmaction'),
    checkpoint=dict(
        type='CheckpointHook', interval=1, save_best='auto',
        _scope_='mmaction'),
    sampler_seed=dict(type='DistSamplerSeedHook', _scope_='mmaction'),
    sync_buffers=dict(type='SyncBuffersHook', _scope_='mmaction'))
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
log_processor = dict(
    type='LogProcessor', window_size=20, by_epoch=True, _scope_='mmaction')
vis_backends = [dict(type='LocalVisBackend', _scope_='mmaction')]
visualizer = dict(
    type='ActionVisualizer',
    vis_backends=[dict(type='LocalVisBackend')],
    _scope_='mmaction')
log_level = 'INFO'
load_from = None
resume = False
custom_imports = dict(imports='models')
model = dict(
    type='RecognizerGCN',
    backbone=dict(
        type='MSG3D', graph_cfg=dict(layout='nturgb+d', mode='binary_adj')),
    cls_head=dict(type='GCNHead', num_classes=60, in_channels=384))
dataset_type = 'PoseDataset'
ann_file = 'data/skeleton/ntu60_3d.pkl'
train_pipeline = [
    dict(type='PreNormalize3D'),
    dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
    dict(type='UniformSampleFrames', clip_len=100),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(type='PreNormalize3D'),
    dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=1,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(type='PreNormalize3D'),
    dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=10,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
train_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type='PoseDataset',
            ann_file='data/skeleton/ntu60_3d.pkl',
            pipeline=[
                dict(type='PreNormalize3D'),
                dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
                dict(type='UniformSampleFrames', clip_len=100),
                dict(type='PoseDecode'),
                dict(type='FormatGCNInput', num_person=2),
                dict(type='PackActionInputs')
            ],
            split='xsub_train')))
val_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_3d.pkl',
        pipeline=[
            dict(type='PreNormalize3D'),
            dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
            dict(
                type='UniformSampleFrames', clip_len=100, num_clips=1,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
test_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_3d.pkl',
        pipeline=[
            dict(type='PreNormalize3D'),
            dict(type='GenSkeFeat', dataset='nturgb+d', feats=['j']),
            dict(
                type='UniformSampleFrames', clip_len=100, num_clips=10,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
val_evaluator = [dict(type='AccMetric')]
test_evaluator = [dict(type='AccMetric')]
train_cfg = dict(
    type='EpochBasedTrainLoop', max_epochs=16, val_begin=1, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
param_scheduler = [
    dict(
        type='CosineAnnealingLR',
        eta_min=0,
        T_max=16,
        by_epoch=True,
        convert_to_iter_based=True)
]
optim_wrapper = dict(
    optimizer=dict(
        type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0005, nesterov=True))
auto_scale_lr = dict(enable=False, base_batch_size=128)
launcher = 'slurm'
work_dir = './work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d'
randomness = dict(seed=None, diff_rank_seed=False, deterministic=False)
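The config dumped above trains a RecognizerGCN with an MSG3D backbone on NTU60 XSub 3D skeletons (joint stream, uniformly sampled 100-frame clips, 8 GPUs x batch 16). The snippet below is a minimal sketch, not part of the log, of how such a dumped config can be loaded and run with the standard MMEngine Config/Runner API; the config file path is hypothetical, and the actual run was launched through Slurm.

    from mmengine.config import Config
    from mmengine.runner import Runner

    # Hypothetical path: point this at the config file that produced the dump above.
    cfg = Config.fromfile('configs/skeleton/msg3d_joint_ntu60_xsub_3d.py')
    cfg.work_dir = './work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d'
    cfg.launcher = 'none'  # the logged run used launcher='slurm' across 8 GPUs

    # custom_imports = dict(imports='models') in the config means a local
    # `models` package (providing the MSG3D backbone) must be importable so
    # the registry can resolve type='MSG3D'.
    runner = Runner.from_cfg(cfg)
    runner.train()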
2023/03/06 22:31:31 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
 --------------------
before_train:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DistSamplerSeedHook
 --------------------
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
 --------------------
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
after_train_epoch:
(NORMAL      ) IterTimerHook
(NORMAL      ) SyncBuffersHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_val_epoch:
(NORMAL      ) IterTimerHook
 --------------------
before_val_iter:
(NORMAL      ) IterTimerHook
 --------------------
after_val_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 --------------------
before_test_epoch:
(NORMAL      ) IterTimerHook
 --------------------
before_test_iter:
(NORMAL      ) IterTimerHook
 --------------------
after_test_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_run:
(BELOW_NORMAL) LoggerHook
 --------------------
Name of parameter - Initialization information

backbone.data_bn.weight - torch.Size([150]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.data_bn.bias - torch.Size([150]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.PA - torch.Size([6, 75, 75]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.mlp.layers.0.weight - torch.Size([96, 18, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.out_conv.weight - torch.Size([96, 96, 1, 3, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.out_conv.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.out_bn.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.out_bn.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.1.gcn3d.1.PA - torch.Size([6, 125, 125]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.1.gcn3d.1.mlp.layers.0.weight -
torch.Size([96, 18, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.gcn3d.1.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.gcn3d.1.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.gcn3d.1.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.out_conv.weight - torch.Size([96, 96, 1, 5, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.out_conv.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.out_bn.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.out_bn.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.PA - torch.Size([13, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.mlp.layers.0.weight - torch.Size([96, 39, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.0.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.0.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.0.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.0.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.0.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.0.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.0.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.0.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.1.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.1.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.1.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.1.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.1.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: 
a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.1.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.1.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.1.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.2.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.2.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.2.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.2.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.2.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.2.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.2.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.2.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.3.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.3.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.3.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.3.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.3.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.3.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.3.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.3.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.4.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.4.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.4.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.4.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.4.4.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.4.4.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.sgcn1.1.branches.5.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.5.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.5.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.5.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.0.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.0.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.0.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.0.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.0.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.0.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.0.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.0.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.1.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.1.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.1.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.1.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.1.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.1.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.1.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.1.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.2.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.2.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.2.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.2.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.2.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.2.3.conv.bias - 
torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.2.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.2.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.3.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.3.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.3.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.3.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.3.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.3.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.3.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.3.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.4.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.4.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.4.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.4.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.4.4.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.4.4.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.5.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.5.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.5.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.5.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.0.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.0.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.0.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.0.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.0.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, 
bias=0 backbone.tcn1.branches.0.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.0.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.0.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.1.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.1.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.1.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.1.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.1.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.1.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.1.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.1.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.2.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.2.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.2.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.2.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.2.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.2.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.2.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.2.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.3.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.3.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.3.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.3.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.3.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.3.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.3.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.tcn1.branches.3.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.4.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.4.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.4.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.4.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.4.4.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.4.4.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.5.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.5.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.5.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.5.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.PA - torch.Size([6, 75, 75]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.mlp.layers.0.weight - torch.Size([96, 576, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.out_conv.weight - torch.Size([192, 96, 1, 3, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.out_conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.out_bn.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.out_bn.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.PA - torch.Size([6, 125, 125]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.mlp.layers.0.weight - torch.Size([96, 576, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.gcn3d2.gcn3d.1.out_conv.weight - torch.Size([192, 96, 1, 5, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.out_conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.out_bn.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.out_bn.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.PA - torch.Size([13, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.mlp.layers.0.weight - torch.Size([96, 1248, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.0.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.0.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.0.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.0.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.0.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.0.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.0.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.0.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.1.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.1.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.1.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.1.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.1.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.1.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.1.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.1.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.2.0.weight - torch.Size([32, 
96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.2.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.2.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.2.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.2.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.2.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.2.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.2.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.3.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.3.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.3.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.3.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.3.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.3.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.3.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.3.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.4.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.4.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.4.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.4.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.4.4.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.4.4.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.5.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.5.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.5.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.5.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.sgcn2.1.residual.conv.weight - torch.Size([192, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.residual.conv.bias - torch.Size([192]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.residual.bn.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.residual.bn.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.0.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.0.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.0.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.0.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.0.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.0.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.0.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.0.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.1.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.1.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.1.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.1.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.1.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.1.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.1.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.1.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.2.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.2.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.2.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.2.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.2.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.sgcn2.2.branches.2.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.2.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.2.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.3.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.3.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.3.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.3.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.3.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.3.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.3.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.3.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.4.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.4.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.4.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.4.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.4.4.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.4.4.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.5.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.5.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.5.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.5.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.0.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.0.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.0.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.0.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.0.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, 
nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.0.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.0.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.0.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.1.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.1.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.1.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.1.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.1.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.1.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.1.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.1.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.2.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.2.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.2.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.2.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.2.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.2.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.2.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.2.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.3.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.3.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.3.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.3.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.3.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.3.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.3.3.bn.weight - torch.Size([32]): The value is the same 
before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.3.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.4.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.4.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.4.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.4.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.4.4.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.4.4.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.5.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.5.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.5.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.5.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.PA - torch.Size([6, 75, 75]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.mlp.layers.0.weight - torch.Size([192, 1152, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.mlp.layers.0.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.mlp.layers.1.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.mlp.layers.1.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.out_conv.weight - torch.Size([384, 192, 1, 3, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.out_conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.out_bn.weight - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.out_bn.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.PA - torch.Size([6, 125, 125]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.mlp.layers.0.weight - torch.Size([192, 1152, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.mlp.layers.0.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.mlp.layers.1.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.mlp.layers.1.bias - torch.Size([192]): The value is 
the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.out_conv.weight - torch.Size([384, 192, 1, 5, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.out_conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.out_bn.weight - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.out_bn.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.PA - torch.Size([13, 25, 25]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.mlp.layers.0.weight - torch.Size([192, 2496, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.mlp.layers.0.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.mlp.layers.1.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.mlp.layers.1.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.0.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.0.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.0.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.0.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.0.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.0.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.0.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.0.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.1.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.1.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.1.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.1.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.1.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.1.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.1.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.1.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.sgcn3.1.branches.2.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.2.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.2.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.2.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.2.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.2.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.2.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.2.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.3.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.3.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.3.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.3.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.3.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.3.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.3.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.3.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.4.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.4.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.4.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.4.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.4.4.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.4.4.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.5.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.5.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.5.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.5.1.bias - torch.Size([64]): 
The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.residual.conv.weight - torch.Size([384, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.residual.conv.bias - torch.Size([384]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.residual.bn.weight - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.residual.bn.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.0.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.0.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.0.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.0.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.0.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.0.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.0.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.0.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.1.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.1.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.1.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.1.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.1.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.1.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.1.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.1.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.2.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.2.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.2.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.2.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.2.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, 
distribution =normal, bias=0 backbone.sgcn3.2.branches.2.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.2.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.2.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.3.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.3.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.3.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.3.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.3.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.3.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.3.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.3.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.4.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.4.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.4.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.4.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.4.4.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.4.4.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.5.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.5.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.5.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.5.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.0.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.0.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.0.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.0.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.0.3.conv.weight - torch.Size([64, 64, 3, 1]): 
KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.0.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.0.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.0.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.1.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.1.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.1.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.1.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.1.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.1.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.1.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.1.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.2.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.2.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.2.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.2.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.2.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.2.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.2.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.2.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.3.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.3.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.3.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.3.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.3.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.3.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.3.3.bn.weight - 
torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.3.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.4.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.4.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.4.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.4.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.4.4.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.4.4.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.5.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.5.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.5.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.5.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN cls_head.fc.weight - torch.Size([60, 384]): NormalInit: mean=0, std=0.01, bias=0 cls_head.fc.bias - torch.Size([60]): NormalInit: mean=0, std=0.01, bias=0 2023/03/06 22:31:44 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d. 
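Note on the initialization report above: the convolutions inside the multi-scale GCN/TCN branches and the residual paths are re-initialized with Kaiming-normal (a=0, mode=fan_out, nonlinearity=relu) and zero bias, the classification head fc uses Normal(mean=0, std=0.01) with zero bias, and every parameter reported as "the same before and after calling `init_weights`" (BatchNorm affine parameters, the learned adjacency PA, the MS-G3D out_conv/MLP layers) simply keeps its construction-time values. Below is a minimal plain-PyTorch sketch of the two initializers that appear in the report; it is illustrative only, not the mmengine/MMAction2 implementation, and the function names are made up.

import torch.nn as nn

def kaiming_init_like_log(conv: nn.Conv2d) -> None:
    # "KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution=normal, bias=0"
    nn.init.kaiming_normal_(conv.weight, a=0, mode='fan_out', nonlinearity='relu')
    if conv.bias is not None:
        nn.init.constant_(conv.bias, 0)

def normal_init_like_log(fc: nn.Linear) -> None:
    # "NormalInit: mean=0, std=0.01, bias=0" (cls_head.fc above)
    nn.init.normal_(fc.weight, mean=0.0, std=0.01)
    nn.init.constant_(fc.bias, 0)

# e.g. kaiming_init_like_log(nn.Conv2d(192, 64, kernel_size=1))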
2023/03/06 22:32:07 - mmengine - INFO - Epoch(train) [1][ 100/1567] lr: 9.9996e-02 eta: 1:36:08 time: 0.1541 data_time: 0.0066 memory: 6166 loss: 3.1291 top1_acc: 0.2500 top5_acc: 0.5000 loss_cls: 3.1291 2023/03/06 22:32:23 - mmengine - INFO - Epoch(train) [1][ 200/1567] lr: 9.9984e-02 eta: 1:19:45 time: 0.1538 data_time: 0.0069 memory: 6166 loss: 2.6892 top1_acc: 0.1875 top5_acc: 0.6875 loss_cls: 2.6892 2023/03/06 22:32:38 - mmengine - INFO - Epoch(train) [1][ 300/1567] lr: 9.9965e-02 eta: 1:14:04 time: 0.1529 data_time: 0.0063 memory: 6166 loss: 2.1964 top1_acc: 0.5000 top5_acc: 0.8750 loss_cls: 2.1964 2023/03/06 22:32:53 - mmengine - INFO - Epoch(train) [1][ 400/1567] lr: 9.9938e-02 eta: 1:11:06 time: 0.1536 data_time: 0.0062 memory: 6166 loss: 1.7563 top1_acc: 0.6250 top5_acc: 0.8125 loss_cls: 1.7563 2023/03/06 22:33:09 - mmengine - INFO - Epoch(train) [1][ 500/1567] lr: 9.9902e-02 eta: 1:09:11 time: 0.1521 data_time: 0.0064 memory: 6166 loss: 1.6214 top1_acc: 0.5625 top5_acc: 0.6875 loss_cls: 1.6214 2023/03/06 22:33:24 - mmengine - INFO - Epoch(train) [1][ 600/1567] lr: 9.9859e-02 eta: 1:07:45 time: 0.1519 data_time: 0.0062 memory: 6166 loss: 1.3557 top1_acc: 0.8125 top5_acc: 0.8750 loss_cls: 1.3557 2023/03/06 22:33:39 - mmengine - INFO - Epoch(train) [1][ 700/1567] lr: 9.9808e-02 eta: 1:06:41 time: 0.1563 data_time: 0.0062 memory: 6166 loss: 1.4918 top1_acc: 0.4375 top5_acc: 0.8750 loss_cls: 1.4918 2023/03/06 22:33:54 - mmengine - INFO - Epoch(train) [1][ 800/1567] lr: 9.9750e-02 eta: 1:05:54 time: 0.1535 data_time: 0.0061 memory: 6166 loss: 1.2359 top1_acc: 0.5625 top5_acc: 0.9375 loss_cls: 1.2359 2023/03/06 22:34:10 - mmengine - INFO - Epoch(train) [1][ 900/1567] lr: 9.9683e-02 eta: 1:05:12 time: 0.1534 data_time: 0.0061 memory: 6166 loss: 1.2010 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 1.2010 2023/03/06 22:34:25 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:34:25 - mmengine - INFO - Epoch(train) [1][1000/1567] lr: 9.9609e-02 eta: 1:04:31 time: 0.1516 data_time: 0.0062 memory: 6166 loss: 1.1189 top1_acc: 0.6250 top5_acc: 0.8750 loss_cls: 1.1189 2023/03/06 22:34:40 - mmengine - INFO - Epoch(train) [1][1100/1567] lr: 9.9527e-02 eta: 1:03:57 time: 0.1520 data_time: 0.0061 memory: 6166 loss: 1.0370 top1_acc: 0.8125 top5_acc: 0.8750 loss_cls: 1.0370 2023/03/06 22:34:55 - mmengine - INFO - Epoch(train) [1][1200/1567] lr: 9.9437e-02 eta: 1:03:25 time: 0.1526 data_time: 0.0061 memory: 6166 loss: 0.9394 top1_acc: 0.6250 top5_acc: 0.8750 loss_cls: 0.9394 2023/03/06 22:35:11 - mmengine - INFO - Epoch(train) [1][1300/1567] lr: 9.9339e-02 eta: 1:02:56 time: 0.1519 data_time: 0.0061 memory: 6166 loss: 0.8968 top1_acc: 0.6875 top5_acc: 0.8750 loss_cls: 0.8968 2023/03/06 22:35:26 - mmengine - INFO - Epoch(train) [1][1400/1567] lr: 9.9234e-02 eta: 1:02:30 time: 0.1529 data_time: 0.0061 memory: 6166 loss: 0.7683 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.7683 2023/03/06 22:35:41 - mmengine - INFO - Epoch(train) [1][1500/1567] lr: 9.9121e-02 eta: 1:02:05 time: 0.1525 data_time: 0.0061 memory: 6166 loss: 0.8013 top1_acc: 0.8125 top5_acc: 0.8750 loss_cls: 0.8013 2023/03/06 22:35:51 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:35:51 - mmengine - INFO - Epoch(train) [1][1567/1567] lr: 9.9040e-02 eta: 1:01:49 time: 0.1517 data_time: 0.0059 memory: 6166 loss: 1.0877 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 1.0877 2023/03/06 22:35:51 - mmengine - INFO - Saving checkpoint 
at 1 epochs 2023/03/06 22:35:57 - mmengine - INFO - Epoch(val) [1][100/129] eta: 0:00:01 time: 0.0447 data_time: 0.0058 memory: 1242 2023/03/06 22:35:58 - mmengine - INFO - Epoch(val) [1][129/129] acc/top1: 0.6448 acc/top5: 0.9079 acc/mean1: 0.6444 2023/03/06 22:35:59 - mmengine - INFO - The best checkpoint with 0.6448 acc/top1 at 1 epoch is saved to best_acc/top1_epoch_1.pth. 2023/03/06 22:36:14 - mmengine - INFO - Epoch(train) [2][ 100/1567] lr: 9.8914e-02 eta: 1:01:29 time: 0.1548 data_time: 0.0068 memory: 6166 loss: 0.8508 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.8508 2023/03/06 22:36:30 - mmengine - INFO - Epoch(train) [2][ 200/1567] lr: 9.8781e-02 eta: 1:01:09 time: 0.1552 data_time: 0.0063 memory: 6166 loss: 0.8857 top1_acc: 0.5625 top5_acc: 0.9375 loss_cls: 0.8857 2023/03/06 22:36:45 - mmengine - INFO - Epoch(train) [2][ 300/1567] lr: 9.8639e-02 eta: 1:00:49 time: 0.1531 data_time: 0.0063 memory: 6166 loss: 0.7748 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.7748 2023/03/06 22:37:00 - mmengine - INFO - Epoch(train) [2][ 400/1567] lr: 9.8491e-02 eta: 1:00:28 time: 0.1525 data_time: 0.0062 memory: 6166 loss: 0.5938 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5938 2023/03/06 22:37:05 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:37:16 - mmengine - INFO - Epoch(train) [2][ 500/1567] lr: 9.8334e-02 eta: 1:00:07 time: 0.1524 data_time: 0.0063 memory: 6166 loss: 0.7368 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.7368 2023/03/06 22:37:31 - mmengine - INFO - Epoch(train) [2][ 600/1567] lr: 9.8170e-02 eta: 0:59:46 time: 0.1523 data_time: 0.0062 memory: 6166 loss: 0.7077 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.7077 2023/03/06 22:37:46 - mmengine - INFO - Epoch(train) [2][ 700/1567] lr: 9.7998e-02 eta: 0:59:27 time: 0.1529 data_time: 0.0062 memory: 6166 loss: 0.6136 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.6136 2023/03/06 22:38:01 - mmengine - INFO - Epoch(train) [2][ 800/1567] lr: 9.7819e-02 eta: 0:59:07 time: 0.1520 data_time: 0.0063 memory: 6166 loss: 0.7295 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.7295 2023/03/06 22:38:17 - mmengine - INFO - Epoch(train) [2][ 900/1567] lr: 9.7632e-02 eta: 0:58:48 time: 0.1523 data_time: 0.0062 memory: 6166 loss: 0.6460 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.6460 2023/03/06 22:38:32 - mmengine - INFO - Epoch(train) [2][1000/1567] lr: 9.7438e-02 eta: 0:58:30 time: 0.1533 data_time: 0.0062 memory: 6166 loss: 0.6323 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.6323 2023/03/06 22:38:47 - mmengine - INFO - Epoch(train) [2][1100/1567] lr: 9.7236e-02 eta: 0:58:11 time: 0.1519 data_time: 0.0063 memory: 6166 loss: 0.6782 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.6782 2023/03/06 22:39:02 - mmengine - INFO - Epoch(train) [2][1200/1567] lr: 9.7027e-02 eta: 0:57:52 time: 0.1521 data_time: 0.0063 memory: 6166 loss: 0.5965 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.5965 2023/03/06 22:39:18 - mmengine - INFO - Epoch(train) [2][1300/1567] lr: 9.6810e-02 eta: 0:57:34 time: 0.1523 data_time: 0.0062 memory: 6166 loss: 0.6074 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.6074 2023/03/06 22:39:33 - mmengine - INFO - Epoch(train) [2][1400/1567] lr: 9.6587e-02 eta: 0:57:16 time: 0.1520 data_time: 0.0063 memory: 6166 loss: 0.6038 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.6038 2023/03/06 22:39:38 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:39:48 - mmengine - INFO - Epoch(train) [2][1500/1567] lr: 9.6355e-02 
eta: 0:56:58 time: 0.1520 data_time: 0.0062 memory: 6166 loss: 0.4645 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.4645 2023/03/06 22:39:58 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:39:58 - mmengine - INFO - Epoch(train) [2][1567/1567] lr: 9.6196e-02 eta: 0:56:45 time: 0.1484 data_time: 0.0060 memory: 6166 loss: 0.6357 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.6357 2023/03/06 22:39:58 - mmengine - INFO - Saving checkpoint at 2 epochs 2023/03/06 22:40:03 - mmengine - INFO - Epoch(val) [2][100/129] eta: 0:00:01 time: 0.0448 data_time: 0.0059 memory: 1242 2023/03/06 22:40:05 - mmengine - INFO - Epoch(val) [2][129/129] acc/top1: 0.6878 acc/top5: 0.9307 acc/mean1: 0.6876 2023/03/06 22:40:05 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_1.pth is removed 2023/03/06 22:40:05 - mmengine - INFO - The best checkpoint with 0.6878 acc/top1 at 2 epoch is saved to best_acc/top1_epoch_2.pth. 2023/03/06 22:40:21 - mmengine - INFO - Epoch(train) [3][ 100/1567] lr: 9.5953e-02 eta: 0:56:29 time: 0.1529 data_time: 0.0063 memory: 6166 loss: 0.5567 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.5567 2023/03/06 22:40:36 - mmengine - INFO - Epoch(train) [3][ 200/1567] lr: 9.5703e-02 eta: 0:56:12 time: 0.1535 data_time: 0.0063 memory: 6166 loss: 0.5735 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.5735 2023/03/06 22:40:51 - mmengine - INFO - Epoch(train) [3][ 300/1567] lr: 9.5445e-02 eta: 0:55:56 time: 0.1535 data_time: 0.0065 memory: 6166 loss: 0.4586 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4586 2023/03/06 22:41:07 - mmengine - INFO - Epoch(train) [3][ 400/1567] lr: 9.5180e-02 eta: 0:55:40 time: 0.1537 data_time: 0.0064 memory: 6166 loss: 0.5681 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.5681 2023/03/06 22:41:22 - mmengine - INFO - Epoch(train) [3][ 500/1567] lr: 9.4908e-02 eta: 0:55:24 time: 0.1554 data_time: 0.0063 memory: 6166 loss: 0.4760 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4760 2023/03/06 22:41:38 - mmengine - INFO - Epoch(train) [3][ 600/1567] lr: 9.4629e-02 eta: 0:55:08 time: 0.1547 data_time: 0.0064 memory: 6166 loss: 0.4797 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.4797 2023/03/06 22:41:53 - mmengine - INFO - Epoch(train) [3][ 700/1567] lr: 9.4343e-02 eta: 0:54:52 time: 0.1543 data_time: 0.0064 memory: 6166 loss: 0.6039 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.6039 2023/03/06 22:42:08 - mmengine - INFO - Epoch(train) [3][ 800/1567] lr: 9.4050e-02 eta: 0:54:35 time: 0.1531 data_time: 0.0064 memory: 6166 loss: 0.6480 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.6480 2023/03/06 22:42:19 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:42:24 - mmengine - INFO - Epoch(train) [3][ 900/1567] lr: 9.3750e-02 eta: 0:54:19 time: 0.1536 data_time: 0.0065 memory: 6166 loss: 0.5259 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.5259 2023/03/06 22:42:39 - mmengine - INFO - Epoch(train) [3][1000/1567] lr: 9.3444e-02 eta: 0:54:03 time: 0.1522 data_time: 0.0064 memory: 6166 loss: 0.4096 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.4096 2023/03/06 22:42:54 - mmengine - INFO - Epoch(train) [3][1100/1567] lr: 9.3130e-02 eta: 0:53:45 time: 0.1517 data_time: 0.0064 memory: 6166 loss: 0.4323 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4323 2023/03/06 22:43:09 - mmengine - INFO - Epoch(train) [3][1200/1567] lr: 9.2810e-02 eta: 
0:53:29 time: 0.1519 data_time: 0.0064 memory: 6166 loss: 0.5664 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.5664 2023/03/06 22:43:25 - mmengine - INFO - Epoch(train) [3][1300/1567] lr: 9.2483e-02 eta: 0:53:11 time: 0.1515 data_time: 0.0066 memory: 6166 loss: 0.5649 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.5649 2023/03/06 22:43:40 - mmengine - INFO - Epoch(train) [3][1400/1567] lr: 9.2149e-02 eta: 0:52:54 time: 0.1505 data_time: 0.0063 memory: 6166 loss: 0.4717 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4717 2023/03/06 22:43:55 - mmengine - INFO - Epoch(train) [3][1500/1567] lr: 9.1809e-02 eta: 0:52:37 time: 0.1507 data_time: 0.0063 memory: 6166 loss: 0.5548 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.5548 2023/03/06 22:44:05 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:44:05 - mmengine - INFO - Epoch(train) [3][1567/1567] lr: 9.1577e-02 eta: 0:52:25 time: 0.1463 data_time: 0.0062 memory: 6166 loss: 0.7120 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.7120 2023/03/06 22:44:05 - mmengine - INFO - Saving checkpoint at 3 epochs 2023/03/06 22:44:10 - mmengine - INFO - Epoch(val) [3][100/129] eta: 0:00:01 time: 0.0450 data_time: 0.0059 memory: 1242 2023/03/06 22:44:11 - mmengine - INFO - Epoch(val) [3][129/129] acc/top1: 0.7362 acc/top5: 0.9444 acc/mean1: 0.7359 2023/03/06 22:44:11 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_2.pth is removed 2023/03/06 22:44:12 - mmengine - INFO - The best checkpoint with 0.7362 acc/top1 at 3 epoch is saved to best_acc/top1_epoch_3.pth. 2023/03/06 22:44:27 - mmengine - INFO - Epoch(train) [4][ 100/1567] lr: 9.1226e-02 eta: 0:52:10 time: 0.1532 data_time: 0.0066 memory: 6166 loss: 0.5373 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.5373 2023/03/06 22:44:42 - mmengine - INFO - Epoch(train) [4][ 200/1567] lr: 9.0868e-02 eta: 0:51:53 time: 0.1512 data_time: 0.0063 memory: 6166 loss: 0.4522 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4522 2023/03/06 22:44:57 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:44:57 - mmengine - INFO - Epoch(train) [4][ 300/1567] lr: 9.0504e-02 eta: 0:51:36 time: 0.1507 data_time: 0.0065 memory: 6166 loss: 0.4223 top1_acc: 0.8750 top5_acc: 0.8750 loss_cls: 0.4223 2023/03/06 22:45:13 - mmengine - INFO - Epoch(train) [4][ 400/1567] lr: 9.0133e-02 eta: 0:51:20 time: 0.1518 data_time: 0.0064 memory: 6166 loss: 0.4638 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4638 2023/03/06 22:45:28 - mmengine - INFO - Epoch(train) [4][ 500/1567] lr: 8.9756e-02 eta: 0:51:03 time: 0.1513 data_time: 0.0063 memory: 6166 loss: 0.3923 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3923 2023/03/06 22:45:43 - mmengine - INFO - Epoch(train) [4][ 600/1567] lr: 8.9373e-02 eta: 0:50:47 time: 0.1540 data_time: 0.0065 memory: 6166 loss: 0.4389 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.4389 2023/03/06 22:45:58 - mmengine - INFO - Epoch(train) [4][ 700/1567] lr: 8.8984e-02 eta: 0:50:32 time: 0.1537 data_time: 0.0065 memory: 6166 loss: 0.3795 top1_acc: 0.8750 top5_acc: 0.8750 loss_cls: 0.3795 2023/03/06 22:46:14 - mmengine - INFO - Epoch(train) [4][ 800/1567] lr: 8.8589e-02 eta: 0:50:16 time: 0.1518 data_time: 0.0063 memory: 6166 loss: 0.3675 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3675 2023/03/06 22:46:29 - mmengine - INFO - Epoch(train) [4][ 900/1567] lr: 8.8187e-02 eta: 0:49:59 
time: 0.1518 data_time: 0.0064 memory: 6166 loss: 0.3473 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3473 2023/03/06 22:46:44 - mmengine - INFO - Epoch(train) [4][1000/1567] lr: 8.7780e-02 eta: 0:49:43 time: 0.1524 data_time: 0.0065 memory: 6166 loss: 0.3963 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.3963 2023/03/06 22:46:59 - mmengine - INFO - Epoch(train) [4][1100/1567] lr: 8.7367e-02 eta: 0:49:27 time: 0.1516 data_time: 0.0064 memory: 6166 loss: 0.3849 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3849 2023/03/06 22:47:14 - mmengine - INFO - Epoch(train) [4][1200/1567] lr: 8.6947e-02 eta: 0:49:11 time: 0.1511 data_time: 0.0064 memory: 6166 loss: 0.3953 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.3953 2023/03/06 22:47:30 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:47:30 - mmengine - INFO - Epoch(train) [4][1300/1567] lr: 8.6522e-02 eta: 0:48:55 time: 0.1522 data_time: 0.0064 memory: 6166 loss: 0.3345 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3345 2023/03/06 22:47:45 - mmengine - INFO - Epoch(train) [4][1400/1567] lr: 8.6092e-02 eta: 0:48:39 time: 0.1525 data_time: 0.0064 memory: 6166 loss: 0.3256 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3256 2023/03/06 22:48:00 - mmengine - INFO - Epoch(train) [4][1500/1567] lr: 8.5655e-02 eta: 0:48:23 time: 0.1527 data_time: 0.0065 memory: 6166 loss: 0.4037 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4037 2023/03/06 22:48:10 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:48:10 - mmengine - INFO - Epoch(train) [4][1567/1567] lr: 8.5360e-02 eta: 0:48:12 time: 0.1475 data_time: 0.0062 memory: 6166 loss: 0.5543 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.5543 2023/03/06 22:48:10 - mmengine - INFO - Saving checkpoint at 4 epochs 2023/03/06 22:48:15 - mmengine - INFO - Epoch(val) [4][100/129] eta: 0:00:01 time: 0.0447 data_time: 0.0059 memory: 1242 2023/03/06 22:48:17 - mmengine - INFO - Epoch(val) [4][129/129] acc/top1: 0.6500 acc/top5: 0.9107 acc/mean1: 0.6501 2023/03/06 22:48:33 - mmengine - INFO - Epoch(train) [5][ 100/1567] lr: 8.4914e-02 eta: 0:47:57 time: 0.1536 data_time: 0.0064 memory: 6166 loss: 0.4500 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.4500 2023/03/06 22:48:48 - mmengine - INFO - Epoch(train) [5][ 200/1567] lr: 8.4463e-02 eta: 0:47:42 time: 0.1511 data_time: 0.0063 memory: 6166 loss: 0.4331 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.4331 2023/03/06 22:49:03 - mmengine - INFO - Epoch(train) [5][ 300/1567] lr: 8.4006e-02 eta: 0:47:25 time: 0.1511 data_time: 0.0063 memory: 6166 loss: 0.3853 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3853 2023/03/06 22:49:18 - mmengine - INFO - Epoch(train) [5][ 400/1567] lr: 8.3544e-02 eta: 0:47:09 time: 0.1509 data_time: 0.0064 memory: 6166 loss: 0.3851 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3851 2023/03/06 22:49:33 - mmengine - INFO - Epoch(train) [5][ 500/1567] lr: 8.3077e-02 eta: 0:46:53 time: 0.1511 data_time: 0.0065 memory: 6166 loss: 0.3925 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.3925 2023/03/06 22:49:48 - mmengine - INFO - Epoch(train) [5][ 600/1567] lr: 8.2605e-02 eta: 0:46:37 time: 0.1529 data_time: 0.0063 memory: 6166 loss: 0.2833 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2833 2023/03/06 22:50:04 - mmengine - INFO - Epoch(train) [5][ 700/1567] lr: 8.2127e-02 eta: 0:46:22 time: 0.1535 data_time: 0.0064 memory: 6166 loss: 0.3905 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3905 2023/03/06 22:50:09 - mmengine - INFO - Exp name: 
msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:50:19 - mmengine - INFO - Epoch(train) [5][ 800/1567] lr: 8.1645e-02 eta: 0:46:06 time: 0.1541 data_time: 0.0064 memory: 6166 loss: 0.3587 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3587 2023/03/06 22:50:34 - mmengine - INFO - Epoch(train) [5][ 900/1567] lr: 8.1157e-02 eta: 0:45:51 time: 0.1537 data_time: 0.0063 memory: 6166 loss: 0.3810 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3810 2023/03/06 22:50:49 - mmengine - INFO - Epoch(train) [5][1000/1567] lr: 8.0665e-02 eta: 0:45:35 time: 0.1506 data_time: 0.0063 memory: 6166 loss: 0.3420 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3420 2023/03/06 22:51:04 - mmengine - INFO - Epoch(train) [5][1100/1567] lr: 8.0167e-02 eta: 0:45:19 time: 0.1508 data_time: 0.0064 memory: 6166 loss: 0.3765 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3765 2023/03/06 22:51:20 - mmengine - INFO - Epoch(train) [5][1200/1567] lr: 7.9665e-02 eta: 0:45:03 time: 0.1508 data_time: 0.0063 memory: 6166 loss: 0.3831 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.3831 2023/03/06 22:51:35 - mmengine - INFO - Epoch(train) [5][1300/1567] lr: 7.9159e-02 eta: 0:44:47 time: 0.1516 data_time: 0.0065 memory: 6166 loss: 0.3009 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3009 2023/03/06 22:51:50 - mmengine - INFO - Epoch(train) [5][1400/1567] lr: 7.8647e-02 eta: 0:44:31 time: 0.1506 data_time: 0.0063 memory: 6166 loss: 0.3428 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3428 2023/03/06 22:52:05 - mmengine - INFO - Epoch(train) [5][1500/1567] lr: 7.8132e-02 eta: 0:44:15 time: 0.1506 data_time: 0.0063 memory: 6166 loss: 0.3134 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.3134 2023/03/06 22:52:15 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:52:15 - mmengine - INFO - Epoch(train) [5][1567/1567] lr: 7.7784e-02 eta: 0:44:04 time: 0.1486 data_time: 0.0063 memory: 6166 loss: 0.5744 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.5744 2023/03/06 22:52:15 - mmengine - INFO - Saving checkpoint at 5 epochs 2023/03/06 22:52:20 - mmengine - INFO - Epoch(val) [5][100/129] eta: 0:00:01 time: 0.0448 data_time: 0.0059 memory: 1242 2023/03/06 22:52:22 - mmengine - INFO - Epoch(val) [5][129/129] acc/top1: 0.7056 acc/top5: 0.9347 acc/mean1: 0.7056 2023/03/06 22:52:37 - mmengine - INFO - Epoch(train) [6][ 100/1567] lr: 7.7261e-02 eta: 0:43:50 time: 0.1550 data_time: 0.0068 memory: 6166 loss: 0.3462 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3462 2023/03/06 22:52:47 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:52:53 - mmengine - INFO - Epoch(train) [6][ 200/1567] lr: 7.6733e-02 eta: 0:43:35 time: 0.1565 data_time: 0.0064 memory: 6166 loss: 0.2763 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2763 2023/03/06 22:53:08 - mmengine - INFO - Epoch(train) [6][ 300/1567] lr: 7.6202e-02 eta: 0:43:20 time: 0.1542 data_time: 0.0064 memory: 6166 loss: 0.3039 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3039 2023/03/06 22:53:24 - mmengine - INFO - Epoch(train) [6][ 400/1567] lr: 7.5666e-02 eta: 0:43:05 time: 0.1529 data_time: 0.0064 memory: 6166 loss: 0.2987 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2987 2023/03/06 22:53:39 - mmengine - INFO - Epoch(train) [6][ 500/1567] lr: 7.5126e-02 eta: 0:42:49 time: 0.1541 data_time: 0.0063 memory: 6166 loss: 0.4370 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4370 2023/03/06 22:53:55 - mmengine - INFO - Epoch(train) [6][ 600/1567] lr: 7.4583e-02 eta: 
0:42:34 time: 0.1553 data_time: 0.0066 memory: 6166 loss: 0.2787 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2787 2023/03/06 22:54:10 - mmengine - INFO - Epoch(train) [6][ 700/1567] lr: 7.4035e-02 eta: 0:42:19 time: 0.1542 data_time: 0.0068 memory: 6166 loss: 0.2949 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2949 2023/03/06 22:54:25 - mmengine - INFO - Epoch(train) [6][ 800/1567] lr: 7.3484e-02 eta: 0:42:03 time: 0.1526 data_time: 0.0064 memory: 6166 loss: 0.2498 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2498 2023/03/06 22:54:41 - mmengine - INFO - Epoch(train) [6][ 900/1567] lr: 7.2929e-02 eta: 0:41:48 time: 0.1564 data_time: 0.0064 memory: 6166 loss: 0.3132 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3132 2023/03/06 22:54:56 - mmengine - INFO - Epoch(train) [6][1000/1567] lr: 7.2371e-02 eta: 0:41:33 time: 0.1536 data_time: 0.0064 memory: 6166 loss: 0.3486 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3486 2023/03/06 22:55:12 - mmengine - INFO - Epoch(train) [6][1100/1567] lr: 7.1809e-02 eta: 0:41:18 time: 0.1533 data_time: 0.0064 memory: 6166 loss: 0.2921 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2921 2023/03/06 22:55:22 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:55:27 - mmengine - INFO - Epoch(train) [6][1200/1567] lr: 7.1243e-02 eta: 0:41:03 time: 0.1555 data_time: 0.0066 memory: 6166 loss: 0.2857 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2857 2023/03/06 22:55:42 - mmengine - INFO - Epoch(train) [6][1300/1567] lr: 7.0674e-02 eta: 0:40:47 time: 0.1540 data_time: 0.0065 memory: 6166 loss: 0.2540 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.2540 2023/03/06 22:55:58 - mmengine - INFO - Epoch(train) [6][1400/1567] lr: 7.0102e-02 eta: 0:40:32 time: 0.1545 data_time: 0.0063 memory: 6166 loss: 0.3434 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3434 2023/03/06 22:56:13 - mmengine - INFO - Epoch(train) [6][1500/1567] lr: 6.9527e-02 eta: 0:40:17 time: 0.1535 data_time: 0.0064 memory: 6166 loss: 0.3505 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3505 2023/03/06 22:56:24 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:56:24 - mmengine - INFO - Epoch(train) [6][1567/1567] lr: 6.9140e-02 eta: 0:40:07 time: 0.1516 data_time: 0.0064 memory: 6166 loss: 0.4998 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.4998 2023/03/06 22:56:24 - mmengine - INFO - Saving checkpoint at 6 epochs 2023/03/06 22:56:29 - mmengine - INFO - Epoch(val) [6][100/129] eta: 0:00:01 time: 0.0455 data_time: 0.0058 memory: 1242 2023/03/06 22:56:30 - mmengine - INFO - Epoch(val) [6][129/129] acc/top1: 0.7914 acc/top5: 0.9640 acc/mean1: 0.7913 2023/03/06 22:56:30 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_3.pth is removed 2023/03/06 22:56:31 - mmengine - INFO - The best checkpoint with 0.7914 acc/top1 at 6 epoch is saved to best_acc/top1_epoch_6.pth. 
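Note on the lr column: it follows the CosineAnnealingLR schedule from the config (base lr 0.1, eta_min=0, T_max=16 epochs, converted to an iteration-based schedule over 16 x 1567 iterations). The sketch below is a quick check of that formula against two end-of-epoch values already logged above; it is a minimal sketch, with the iteration counts taken from the log.

import math

base_lr, eta_min = 0.1, 0.0
iters_per_epoch, max_epochs = 1567, 16
total_iters = iters_per_epoch * max_epochs

def cosine_lr(cur_iter: int) -> float:
    # eta_min + 0.5 * (base_lr - eta_min) * (1 + cos(pi * t / T))
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * cur_iter / total_iters))

print(round(cosine_lr(4 * iters_per_epoch - 1), 6))  # ~0.08536, cf. "lr: 8.5360e-02" at Epoch(train) [4][1567/1567]
print(round(cosine_lr(6 * iters_per_epoch - 1), 6))  # ~0.06914, cf. "lr: 6.9140e-02" at Epoch(train) [6][1567/1567]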
2023/03/06 22:56:46 - mmengine - INFO - Epoch(train) [7][ 100/1567] lr: 6.8560e-02 eta: 0:39:52 time: 0.1540 data_time: 0.0065 memory: 6166 loss: 0.3234 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3234 2023/03/06 22:57:02 - mmengine - INFO - Epoch(train) [7][ 200/1567] lr: 6.7976e-02 eta: 0:39:36 time: 0.1534 data_time: 0.0065 memory: 6166 loss: 0.2984 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2984 2023/03/06 22:57:17 - mmengine - INFO - Epoch(train) [7][ 300/1567] lr: 6.7390e-02 eta: 0:39:21 time: 0.1543 data_time: 0.0064 memory: 6166 loss: 0.2462 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.2462 2023/03/06 22:57:33 - mmengine - INFO - Epoch(train) [7][ 400/1567] lr: 6.6802e-02 eta: 0:39:06 time: 0.1531 data_time: 0.0064 memory: 6166 loss: 0.2815 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2815 2023/03/06 22:57:49 - mmengine - INFO - Epoch(train) [7][ 500/1567] lr: 6.6210e-02 eta: 0:38:51 time: 0.1541 data_time: 0.0066 memory: 6166 loss: 0.2363 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2363 2023/03/06 22:58:04 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 22:58:04 - mmengine - INFO - Epoch(train) [7][ 600/1567] lr: 6.5616e-02 eta: 0:38:36 time: 0.1553 data_time: 0.0067 memory: 6166 loss: 0.2096 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2096 2023/03/06 22:58:19 - mmengine - INFO - Epoch(train) [7][ 700/1567] lr: 6.5020e-02 eta: 0:38:21 time: 0.1541 data_time: 0.0064 memory: 6166 loss: 0.3229 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.3229 2023/03/06 22:58:35 - mmengine - INFO - Epoch(train) [7][ 800/1567] lr: 6.4421e-02 eta: 0:38:05 time: 0.1529 data_time: 0.0065 memory: 6166 loss: 0.3031 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3031 2023/03/06 22:58:50 - mmengine - INFO - Epoch(train) [7][ 900/1567] lr: 6.3820e-02 eta: 0:37:50 time: 0.1544 data_time: 0.0068 memory: 6166 loss: 0.2099 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2099 2023/03/06 22:59:06 - mmengine - INFO - Epoch(train) [7][1000/1567] lr: 6.3217e-02 eta: 0:37:35 time: 0.1549 data_time: 0.0066 memory: 6166 loss: 0.2621 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2621 2023/03/06 22:59:21 - mmengine - INFO - Epoch(train) [7][1100/1567] lr: 6.2612e-02 eta: 0:37:20 time: 0.1563 data_time: 0.0068 memory: 6166 loss: 0.2701 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2701 2023/03/06 22:59:37 - mmengine - INFO - Epoch(train) [7][1200/1567] lr: 6.2005e-02 eta: 0:37:04 time: 0.1565 data_time: 0.0065 memory: 6166 loss: 0.2142 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.2142 2023/03/06 22:59:52 - mmengine - INFO - Epoch(train) [7][1300/1567] lr: 6.1396e-02 eta: 0:36:49 time: 0.1542 data_time: 0.0066 memory: 6166 loss: 0.1930 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.1930 2023/03/06 23:00:08 - mmengine - INFO - Epoch(train) [7][1400/1567] lr: 6.0785e-02 eta: 0:36:34 time: 0.1538 data_time: 0.0068 memory: 6166 loss: 0.2374 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.2374 2023/03/06 23:00:23 - mmengine - INFO - Epoch(train) [7][1500/1567] lr: 6.0172e-02 eta: 0:36:18 time: 0.1542 data_time: 0.0064 memory: 6166 loss: 0.2327 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2327 2023/03/06 23:00:33 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:00:33 - mmengine - INFO - Epoch(train) [7][1567/1567] lr: 5.9761e-02 eta: 0:36:08 time: 0.1496 data_time: 0.0062 memory: 6166 loss: 0.3584 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3584 2023/03/06 23:00:33 - mmengine - INFO - Saving checkpoint 
at 7 epochs 2023/03/06 23:00:39 - mmengine - INFO - Epoch(val) [7][100/129] eta: 0:00:01 time: 0.0460 data_time: 0.0068 memory: 1242 2023/03/06 23:00:40 - mmengine - INFO - Epoch(val) [7][129/129] acc/top1: 0.7209 acc/top5: 0.9388 acc/mean1: 0.7208 2023/03/06 23:00:45 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:00:56 - mmengine - INFO - Epoch(train) [8][ 100/1567] lr: 5.9145e-02 eta: 0:35:53 time: 0.1537 data_time: 0.0064 memory: 6166 loss: 0.2654 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2654 2023/03/06 23:01:11 - mmengine - INFO - Epoch(train) [8][ 200/1567] lr: 5.8529e-02 eta: 0:35:38 time: 0.1553 data_time: 0.0065 memory: 6166 loss: 0.2499 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2499 2023/03/06 23:01:26 - mmengine - INFO - Epoch(train) [8][ 300/1567] lr: 5.7911e-02 eta: 0:35:22 time: 0.1535 data_time: 0.0065 memory: 6166 loss: 0.2233 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2233 2023/03/06 23:01:42 - mmengine - INFO - Epoch(train) [8][ 400/1567] lr: 5.7292e-02 eta: 0:35:07 time: 0.1550 data_time: 0.0064 memory: 6166 loss: 0.2238 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2238 2023/03/06 23:01:58 - mmengine - INFO - Epoch(train) [8][ 500/1567] lr: 5.6671e-02 eta: 0:34:52 time: 0.1594 data_time: 0.0065 memory: 6166 loss: 0.2004 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2004 2023/03/06 23:02:13 - mmengine - INFO - Epoch(train) [8][ 600/1567] lr: 5.6050e-02 eta: 0:34:36 time: 0.1543 data_time: 0.0065 memory: 6166 loss: 0.2366 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2366 2023/03/06 23:02:28 - mmengine - INFO - Epoch(train) [8][ 700/1567] lr: 5.5427e-02 eta: 0:34:21 time: 0.1551 data_time: 0.0063 memory: 6166 loss: 0.1896 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1896 2023/03/06 23:02:44 - mmengine - INFO - Epoch(train) [8][ 800/1567] lr: 5.4804e-02 eta: 0:34:06 time: 0.1539 data_time: 0.0064 memory: 6166 loss: 0.2197 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2197 2023/03/06 23:02:59 - mmengine - INFO - Epoch(train) [8][ 900/1567] lr: 5.4180e-02 eta: 0:33:51 time: 0.1539 data_time: 0.0064 memory: 6166 loss: 0.2095 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2095 2023/03/06 23:03:15 - mmengine - INFO - Epoch(train) [8][1000/1567] lr: 5.3556e-02 eta: 0:33:35 time: 0.1539 data_time: 0.0064 memory: 6166 loss: 0.2242 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2242 2023/03/06 23:03:20 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:03:30 - mmengine - INFO - Epoch(train) [8][1100/1567] lr: 5.2930e-02 eta: 0:33:20 time: 0.1542 data_time: 0.0063 memory: 6166 loss: 0.2215 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2215 2023/03/06 23:03:46 - mmengine - INFO - Epoch(train) [8][1200/1567] lr: 5.2305e-02 eta: 0:33:05 time: 0.1564 data_time: 0.0063 memory: 6166 loss: 0.2099 top1_acc: 0.9375 top5_acc: 0.9375 loss_cls: 0.2099 2023/03/06 23:04:01 - mmengine - INFO - Epoch(train) [8][1300/1567] lr: 5.1679e-02 eta: 0:32:49 time: 0.1550 data_time: 0.0063 memory: 6166 loss: 0.1809 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1809 2023/03/06 23:04:17 - mmengine - INFO - Epoch(train) [8][1400/1567] lr: 5.1052e-02 eta: 0:32:35 time: 0.1684 data_time: 0.0064 memory: 6166 loss: 0.1827 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.1827 2023/03/06 23:04:34 - mmengine - INFO - Epoch(train) [8][1500/1567] lr: 5.0426e-02 eta: 0:32:21 time: 0.1595 data_time: 0.0063 memory: 6166 loss: 0.2166 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2166 2023/03/06 
23:04:44 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:04:44 - mmengine - INFO - Epoch(train) [8][1567/1567] lr: 5.0006e-02 eta: 0:32:10 time: 0.1506 data_time: 0.0062 memory: 6166 loss: 0.2949 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.2949 2023/03/06 23:04:44 - mmengine - INFO - Saving checkpoint at 8 epochs 2023/03/06 23:04:49 - mmengine - INFO - Epoch(val) [8][100/129] eta: 0:00:01 time: 0.0451 data_time: 0.0060 memory: 1242 2023/03/06 23:04:51 - mmengine - INFO - Epoch(val) [8][129/129] acc/top1: 0.7835 acc/top5: 0.9612 acc/mean1: 0.7833 2023/03/06 23:05:06 - mmengine - INFO - Epoch(train) [9][ 100/1567] lr: 4.9380e-02 eta: 0:31:55 time: 0.1536 data_time: 0.0064 memory: 6166 loss: 0.1989 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1989 2023/03/06 23:05:22 - mmengine - INFO - Epoch(train) [9][ 200/1567] lr: 4.8753e-02 eta: 0:31:40 time: 0.1539 data_time: 0.0067 memory: 6166 loss: 0.1816 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1816 2023/03/06 23:05:37 - mmengine - INFO - Epoch(train) [9][ 300/1567] lr: 4.8127e-02 eta: 0:31:24 time: 0.1531 data_time: 0.0065 memory: 6166 loss: 0.1665 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1665 2023/03/06 23:05:53 - mmengine - INFO - Epoch(train) [9][ 400/1567] lr: 4.7501e-02 eta: 0:31:09 time: 0.1552 data_time: 0.0063 memory: 6166 loss: 0.1424 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1424 2023/03/06 23:06:03 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:06:08 - mmengine - INFO - Epoch(train) [9][ 500/1567] lr: 4.6876e-02 eta: 0:30:54 time: 0.1547 data_time: 0.0073 memory: 6166 loss: 0.1389 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1389 2023/03/06 23:06:24 - mmengine - INFO - Epoch(train) [9][ 600/1567] lr: 4.6251e-02 eta: 0:30:38 time: 0.1530 data_time: 0.0064 memory: 6166 loss: 0.1680 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1680 2023/03/06 23:06:39 - mmengine - INFO - Epoch(train) [9][ 700/1567] lr: 4.5626e-02 eta: 0:30:23 time: 0.1530 data_time: 0.0065 memory: 6166 loss: 0.1371 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1371 2023/03/06 23:06:54 - mmengine - INFO - Epoch(train) [9][ 800/1567] lr: 4.5003e-02 eta: 0:30:07 time: 0.1554 data_time: 0.0065 memory: 6166 loss: 0.1628 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1628 2023/03/06 23:07:10 - mmengine - INFO - Epoch(train) [9][ 900/1567] lr: 4.4380e-02 eta: 0:29:52 time: 0.1567 data_time: 0.0064 memory: 6166 loss: 0.1411 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1411 2023/03/06 23:07:26 - mmengine - INFO - Epoch(train) [9][1000/1567] lr: 4.3757e-02 eta: 0:29:37 time: 0.1532 data_time: 0.0064 memory: 6166 loss: 0.1325 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1325 2023/03/06 23:07:41 - mmengine - INFO - Epoch(train) [9][1100/1567] lr: 4.3136e-02 eta: 0:29:21 time: 0.1548 data_time: 0.0064 memory: 6166 loss: 0.1487 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1487 2023/03/06 23:07:56 - mmengine - INFO - Epoch(train) [9][1200/1567] lr: 4.2516e-02 eta: 0:29:06 time: 0.1538 data_time: 0.0064 memory: 6166 loss: 0.1334 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1334 2023/03/06 23:08:12 - mmengine - INFO - Epoch(train) [9][1300/1567] lr: 4.1897e-02 eta: 0:28:51 time: 0.1545 data_time: 0.0064 memory: 6166 loss: 0.1351 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1351 2023/03/06 23:08:27 - mmengine - INFO - Epoch(train) [9][1400/1567] lr: 4.1280e-02 eta: 0:28:35 time: 0.1535 data_time: 0.0064 memory: 6166 loss: 0.1850 top1_acc: 1.0000 
top5_acc: 1.0000 loss_cls: 0.1850 2023/03/06 23:08:37 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:08:43 - mmengine - INFO - Epoch(train) [9][1500/1567] lr: 4.0664e-02 eta: 0:28:20 time: 0.1529 data_time: 0.0064 memory: 6166 loss: 0.1715 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1715 2023/03/06 23:08:53 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:08:53 - mmengine - INFO - Epoch(train) [9][1567/1567] lr: 4.0252e-02 eta: 0:28:09 time: 0.1495 data_time: 0.0061 memory: 6166 loss: 0.3311 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3311 2023/03/06 23:08:53 - mmengine - INFO - Saving checkpoint at 9 epochs 2023/03/06 23:08:58 - mmengine - INFO - Epoch(val) [9][100/129] eta: 0:00:01 time: 0.0453 data_time: 0.0059 memory: 1242 2023/03/06 23:09:00 - mmengine - INFO - Epoch(val) [9][129/129] acc/top1: 0.8051 acc/top5: 0.9697 acc/mean1: 0.8050 2023/03/06 23:09:00 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_6.pth is removed 2023/03/06 23:09:00 - mmengine - INFO - The best checkpoint with 0.8051 acc/top1 at 9 epoch is saved to best_acc/top1_epoch_9.pth. 2023/03/06 23:09:16 - mmengine - INFO - Epoch(train) [10][ 100/1567] lr: 3.9638e-02 eta: 0:27:54 time: 0.1569 data_time: 0.0064 memory: 6166 loss: 0.1322 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1322 2023/03/06 23:09:31 - mmengine - INFO - Epoch(train) [10][ 200/1567] lr: 3.9026e-02 eta: 0:27:39 time: 0.1536 data_time: 0.0065 memory: 6166 loss: 0.0847 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0847 2023/03/06 23:09:47 - mmengine - INFO - Epoch(train) [10][ 300/1567] lr: 3.8415e-02 eta: 0:27:23 time: 0.1556 data_time: 0.0066 memory: 6166 loss: 0.1004 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1004 2023/03/06 23:10:03 - mmengine - INFO - Epoch(train) [10][ 400/1567] lr: 3.7807e-02 eta: 0:27:08 time: 0.1554 data_time: 0.0064 memory: 6166 loss: 0.1242 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1242 2023/03/06 23:10:18 - mmengine - INFO - Epoch(train) [10][ 500/1567] lr: 3.7200e-02 eta: 0:26:53 time: 0.1554 data_time: 0.0064 memory: 6166 loss: 0.1267 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1267 2023/03/06 23:10:34 - mmengine - INFO - Epoch(train) [10][ 600/1567] lr: 3.6596e-02 eta: 0:26:38 time: 0.1559 data_time: 0.0065 memory: 6166 loss: 0.1328 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1328 2023/03/06 23:10:49 - mmengine - INFO - Epoch(train) [10][ 700/1567] lr: 3.5993e-02 eta: 0:26:22 time: 0.1534 data_time: 0.0064 memory: 6166 loss: 0.1223 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1223 2023/03/06 23:11:05 - mmengine - INFO - Epoch(train) [10][ 800/1567] lr: 3.5393e-02 eta: 0:26:07 time: 0.1568 data_time: 0.0064 memory: 6166 loss: 0.1268 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1268 2023/03/06 23:11:20 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:11:20 - mmengine - INFO - Epoch(train) [10][ 900/1567] lr: 3.4795e-02 eta: 0:25:52 time: 0.1545 data_time: 0.0066 memory: 6166 loss: 0.1267 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1267 2023/03/06 23:11:36 - mmengine - INFO - Epoch(train) [10][1000/1567] lr: 3.4199e-02 eta: 0:25:36 time: 0.1571 data_time: 0.0068 memory: 6166 loss: 0.0711 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0711 2023/03/06 23:11:51 - mmengine - INFO - Epoch(train) 
[10][1100/1567] lr: 3.3606e-02 eta: 0:25:21 time: 0.1533 data_time: 0.0065 memory: 6166 loss: 0.0554 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0554 2023/03/06 23:12:07 - mmengine - INFO - Epoch(train) [10][1200/1567] lr: 3.3015e-02 eta: 0:25:05 time: 0.1548 data_time: 0.0067 memory: 6166 loss: 0.0776 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0776 2023/03/06 23:12:22 - mmengine - INFO - Epoch(train) [10][1300/1567] lr: 3.2428e-02 eta: 0:24:50 time: 0.1544 data_time: 0.0066 memory: 6166 loss: 0.1076 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1076 2023/03/06 23:12:38 - mmengine - INFO - Epoch(train) [10][1400/1567] lr: 3.1842e-02 eta: 0:24:35 time: 0.1543 data_time: 0.0066 memory: 6166 loss: 0.0712 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0712 2023/03/06 23:12:53 - mmengine - INFO - Epoch(train) [10][1500/1567] lr: 3.1260e-02 eta: 0:24:19 time: 0.1534 data_time: 0.0065 memory: 6166 loss: 0.1073 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1073 2023/03/06 23:13:03 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:13:03 - mmengine - INFO - Epoch(train) [10][1567/1567] lr: 3.0872e-02 eta: 0:24:09 time: 0.1493 data_time: 0.0062 memory: 6166 loss: 0.2493 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2493 2023/03/06 23:13:03 - mmengine - INFO - Saving checkpoint at 10 epochs 2023/03/06 23:13:08 - mmengine - INFO - Epoch(val) [10][100/129] eta: 0:00:01 time: 0.0448 data_time: 0.0059 memory: 1242 2023/03/06 23:13:10 - mmengine - INFO - Epoch(val) [10][129/129] acc/top1: 0.8488 acc/top5: 0.9723 acc/mean1: 0.8487 2023/03/06 23:13:10 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_9.pth is removed 2023/03/06 23:13:10 - mmengine - INFO - The best checkpoint with 0.8488 acc/top1 at 10 epoch is saved to best_acc/top1_epoch_10.pth. 
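Note on the accuracy numbers: the per-iteration top1_acc/top5_acc during training are computed on a single per-GPU batch of 16 clips, which is why they are typically multiples of 1/16 (0.0625), while the Epoch(val) lines report AccMetric over the whole xsub_val split: acc/top1 and acc/top5 are top-k accuracies and acc/mean1 is the mean per-class accuracy. An illustrative sketch of those three quantities follows (not the MMAction2 AccMetric implementation).

import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    # scores: (num_samples, num_classes), labels: (num_samples,)
    topk = np.argsort(scores, axis=1)[:, -k:]
    return float(np.mean([label in row for label, row in zip(labels, topk)]))

def mean_class_accuracy(scores: np.ndarray, labels: np.ndarray) -> float:
    # average of per-class top-1 accuracies ("acc/mean1" above)
    preds = scores.argmax(axis=1)
    return float(np.mean([np.mean(preds[labels == c] == c) for c in np.unique(labels)]))

# e.g. "Epoch(val) [10][129/129] acc/top1: 0.8488 acc/top5: 0.9723 acc/mean1: 0.8487"
# corresponds to top_k_accuracy(scores, labels, 1), top_k_accuracy(scores, labels, 5)
# and mean_class_accuracy(scores, labels) over the validation set.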
2023/03/06 23:13:26 - mmengine - INFO - Epoch(train) [11][ 100/1567] lr: 3.0294e-02 eta: 0:23:53 time: 0.1549 data_time: 0.0065 memory: 6166 loss: 0.1123 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.1123 2023/03/06 23:13:41 - mmengine - INFO - Epoch(train) [11][ 200/1567] lr: 2.9720e-02 eta: 0:23:38 time: 0.1551 data_time: 0.0064 memory: 6166 loss: 0.0626 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0626 2023/03/06 23:13:57 - mmengine - INFO - Epoch(train) [11][ 300/1567] lr: 2.9149e-02 eta: 0:23:23 time: 0.1543 data_time: 0.0064 memory: 6166 loss: 0.0626 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0626 2023/03/06 23:14:02 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:14:12 - mmengine - INFO - Epoch(train) [11][ 400/1567] lr: 2.8581e-02 eta: 0:23:07 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0653 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0653 2023/03/06 23:14:28 - mmengine - INFO - Epoch(train) [11][ 500/1567] lr: 2.8017e-02 eta: 0:22:52 time: 0.1554 data_time: 0.0066 memory: 6166 loss: 0.0800 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.0800 2023/03/06 23:14:43 - mmengine - INFO - Epoch(train) [11][ 600/1567] lr: 2.7456e-02 eta: 0:22:37 time: 0.1539 data_time: 0.0065 memory: 6166 loss: 0.0632 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0632 2023/03/06 23:14:59 - mmengine - INFO - Epoch(train) [11][ 700/1567] lr: 2.6898e-02 eta: 0:22:21 time: 0.1541 data_time: 0.0064 memory: 6166 loss: 0.0551 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0551 2023/03/06 23:15:14 - mmengine - INFO - Epoch(train) [11][ 800/1567] lr: 2.6345e-02 eta: 0:22:06 time: 0.1545 data_time: 0.0065 memory: 6166 loss: 0.0674 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0674 2023/03/06 23:15:30 - mmengine - INFO - Epoch(train) [11][ 900/1567] lr: 2.5794e-02 eta: 0:21:50 time: 0.1549 data_time: 0.0065 memory: 6166 loss: 0.0431 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0431 2023/03/06 23:15:45 - mmengine - INFO - Epoch(train) [11][1000/1567] lr: 2.5248e-02 eta: 0:21:35 time: 0.1539 data_time: 0.0064 memory: 6166 loss: 0.0723 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0723 2023/03/06 23:16:01 - mmengine - INFO - Epoch(train) [11][1100/1567] lr: 2.4706e-02 eta: 0:21:19 time: 0.1539 data_time: 0.0063 memory: 6166 loss: 0.0651 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0651 2023/03/06 23:16:16 - mmengine - INFO - Epoch(train) [11][1200/1567] lr: 2.4167e-02 eta: 0:21:04 time: 0.1543 data_time: 0.0064 memory: 6166 loss: 0.0655 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0655 2023/03/06 23:16:31 - mmengine - INFO - Epoch(train) [11][1300/1567] lr: 2.3633e-02 eta: 0:20:49 time: 0.1538 data_time: 0.0065 memory: 6166 loss: 0.0682 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0682 2023/03/06 23:16:36 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:16:47 - mmengine - INFO - Epoch(train) [11][1400/1567] lr: 2.3103e-02 eta: 0:20:33 time: 0.1538 data_time: 0.0064 memory: 6166 loss: 0.0589 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0589 2023/03/06 23:17:02 - mmengine - INFO - Epoch(train) [11][1500/1567] lr: 2.2577e-02 eta: 0:20:18 time: 0.1534 data_time: 0.0063 memory: 6166 loss: 0.0493 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0493 2023/03/06 23:17:12 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:17:12 - mmengine - INFO - Epoch(train) [11][1567/1567] lr: 2.2227e-02 eta: 0:20:07 time: 0.1494 data_time: 0.0062 
memory: 6166 loss: 0.1815 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1815 2023/03/06 23:17:12 - mmengine - INFO - Saving checkpoint at 11 epochs 2023/03/06 23:17:18 - mmengine - INFO - Epoch(val) [11][100/129] eta: 0:00:01 time: 0.0449 data_time: 0.0059 memory: 1242 2023/03/06 23:17:19 - mmengine - INFO - Epoch(val) [11][129/129] acc/top1: 0.8604 acc/top5: 0.9745 acc/mean1: 0.8603 2023/03/06 23:17:19 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_10.pth is removed 2023/03/06 23:17:20 - mmengine - INFO - The best checkpoint with 0.8604 acc/top1 at 11 epoch is saved to best_acc/top1_epoch_11.pth. 2023/03/06 23:17:35 - mmengine - INFO - Epoch(train) [12][ 100/1567] lr: 2.1708e-02 eta: 0:19:52 time: 0.1533 data_time: 0.0063 memory: 6166 loss: 0.0324 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0324 2023/03/06 23:17:50 - mmengine - INFO - Epoch(train) [12][ 200/1567] lr: 2.1194e-02 eta: 0:19:37 time: 0.1526 data_time: 0.0064 memory: 6166 loss: 0.0253 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0253 2023/03/06 23:18:06 - mmengine - INFO - Epoch(train) [12][ 300/1567] lr: 2.0684e-02 eta: 0:19:21 time: 0.1530 data_time: 0.0066 memory: 6166 loss: 0.0355 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0355 2023/03/06 23:18:22 - mmengine - INFO - Epoch(train) [12][ 400/1567] lr: 2.0179e-02 eta: 0:19:06 time: 0.1525 data_time: 0.0063 memory: 6166 loss: 0.0213 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0213 2023/03/06 23:18:37 - mmengine - INFO - Epoch(train) [12][ 500/1567] lr: 1.9678e-02 eta: 0:18:50 time: 0.1527 data_time: 0.0064 memory: 6166 loss: 0.0414 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0414 2023/03/06 23:18:52 - mmengine - INFO - Epoch(train) [12][ 600/1567] lr: 1.9182e-02 eta: 0:18:35 time: 0.1526 data_time: 0.0071 memory: 6166 loss: 0.0224 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0224 2023/03/06 23:19:08 - mmengine - INFO - Epoch(train) [12][ 700/1567] lr: 1.8691e-02 eta: 0:18:19 time: 0.1522 data_time: 0.0064 memory: 6166 loss: 0.0330 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0330 2023/03/06 23:19:17 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122 2023/03/06 23:19:23 - mmengine - INFO - Epoch(train) [12][ 800/1567] lr: 1.8205e-02 eta: 0:18:04 time: 0.1521 data_time: 0.0065 memory: 6166 loss: 0.0176 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0176 2023/03/06 23:19:38 - mmengine - INFO - Epoch(train) [12][ 900/1567] lr: 1.7724e-02 eta: 0:17:48 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0167 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0167 2023/03/06 23:19:53 - mmengine - INFO - Epoch(train) [12][1000/1567] lr: 1.7248e-02 eta: 0:17:33 time: 0.1526 data_time: 0.0063 memory: 6166 loss: 0.0214 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0214 2023/03/06 23:20:09 - mmengine - INFO - Epoch(train) [12][1100/1567] lr: 1.6778e-02 eta: 0:17:18 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0301 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0301 2023/03/06 23:20:24 - mmengine - INFO - Epoch(train) [12][1200/1567] lr: 1.6312e-02 eta: 0:17:02 time: 0.1539 data_time: 0.0063 memory: 6166 loss: 0.0205 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0205 2023/03/06 23:20:39 - mmengine - INFO - Epoch(train) [12][1300/1567] lr: 1.5852e-02 eta: 0:16:47 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0236 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0236 2023/03/06 23:20:54 - mmengine 
2023/03/06 23:20:54 - mmengine - INFO - Epoch(train) [12][1400/1567] lr: 1.5397e-02 eta: 0:16:31 time: 0.1522 data_time: 0.0064 memory: 6166 loss: 0.0139 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0139
2023/03/06 23:21:10 - mmengine - INFO - Epoch(train) [12][1500/1567] lr: 1.4947e-02 eta: 0:16:16 time: 0.1527 data_time: 0.0063 memory: 6166 loss: 0.0210 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0210
2023/03/06 23:21:20 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:21:20 - mmengine - INFO - Epoch(train) [12][1567/1567] lr: 1.4649e-02 eta: 0:16:05 time: 0.1484 data_time: 0.0061 memory: 6166 loss: 0.1864 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.1864
2023/03/06 23:21:20 - mmengine - INFO - Saving checkpoint at 12 epochs
2023/03/06 23:21:25 - mmengine - INFO - Epoch(val) [12][100/129] eta: 0:00:01 time: 0.0484 data_time: 0.0059 memory: 1242
2023/03/06 23:21:26 - mmengine - INFO - Epoch(val) [12][129/129] acc/top1: 0.8803 acc/top5: 0.9781 acc/mean1: 0.8803
2023/03/06 23:21:26 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_11.pth is removed
2023/03/06 23:21:27 - mmengine - INFO - The best checkpoint with 0.8803 acc/top1 at 12 epoch is saved to best_acc/top1_epoch_12.pth.
2023/03/06 23:21:42 - mmengine - INFO - Epoch(train) [13][ 100/1567] lr: 1.4209e-02 eta: 0:15:50 time: 0.1540 data_time: 0.0063 memory: 6166 loss: 0.0214 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0214
2023/03/06 23:21:57 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:21:58 - mmengine - INFO - Epoch(train) [13][ 200/1567] lr: 1.3774e-02 eta: 0:15:34 time: 0.1525 data_time: 0.0070 memory: 6166 loss: 0.0210 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0210
2023/03/06 23:22:13 - mmengine - INFO - Epoch(train) [13][ 300/1567] lr: 1.3345e-02 eta: 0:15:19 time: 0.1526 data_time: 0.0065 memory: 6166 loss: 0.0213 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0213
2023/03/06 23:22:28 - mmengine - INFO - Epoch(train) [13][ 400/1567] lr: 1.2922e-02 eta: 0:15:04 time: 0.1537 data_time: 0.0063 memory: 6166 loss: 0.0177 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0177
2023/03/06 23:22:44 - mmengine - INFO - Epoch(train) [13][ 500/1567] lr: 1.2505e-02 eta: 0:14:48 time: 0.1533 data_time: 0.0064 memory: 6166 loss: 0.0157 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0157
2023/03/06 23:22:59 - mmengine - INFO - Epoch(train) [13][ 600/1567] lr: 1.2093e-02 eta: 0:14:33 time: 0.1522 data_time: 0.0065 memory: 6166 loss: 0.0178 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0178
2023/03/06 23:23:14 - mmengine - INFO - Epoch(train) [13][ 700/1567] lr: 1.1687e-02 eta: 0:14:17 time: 0.1526 data_time: 0.0063 memory: 6166 loss: 0.0211 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0211
2023/03/06 23:23:29 - mmengine - INFO - Epoch(train) [13][ 800/1567] lr: 1.1288e-02 eta: 0:14:02 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0158 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0158
2023/03/06 23:23:45 - mmengine - INFO - Epoch(train) [13][ 900/1567] lr: 1.0894e-02 eta: 0:13:46 time: 0.1522 data_time: 0.0064 memory: 6166 loss: 0.0152 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0152
2023/03/06 23:24:00 - mmengine - INFO - Epoch(train) [13][1000/1567] lr: 1.0507e-02 eta: 0:13:31 time: 0.1521 data_time: 0.0064 memory: 6166 loss: 0.0197 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0197
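The eta field in these training entries is, to a good approximation, the number of remaining training iterations multiplied by an average time per iteration. A quick check against the entry just above (16 epochs of 1567 iterations each are read off the log; the exact running average the logger uses is not shown, so the estimate is only approximate):

iters_per_epoch = 1567
total_iters = 16 * iters_per_epoch   # 25072 training iterations in total
# Iterations finished at Epoch(train) [13][1000/1567] (epochs are 1-indexed in the log):
done = 12 * iters_per_epoch + 1000
remaining = total_iters - done       # 5268 iterations still to run
eta_seconds = remaining * 0.1521     # "time: 0.1521" from that entry
print(remaining, round(eta_seconds)) # 5268, ~801 s vs. the logged eta: 0:13:31 (811 s)

The small gap comes from which average iteration time is used; the logged value is based on the runner's own smoothed timing, not the single window value shown here.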
2023/03/06 23:24:15 - mmengine - INFO - Epoch(train) [13][1100/1567] lr: 1.0126e-02 eta: 0:13:15 time: 0.1529 data_time: 0.0064 memory: 6166 loss: 0.0130 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0130
2023/03/06 23:24:30 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:24:31 - mmengine - INFO - Epoch(train) [13][1200/1567] lr: 9.7512e-03 eta: 0:13:00 time: 0.1531 data_time: 0.0063 memory: 6166 loss: 0.0136 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0136
2023/03/06 23:24:46 - mmengine - INFO - Epoch(train) [13][1300/1567] lr: 9.3826e-03 eta: 0:12:45 time: 0.1558 data_time: 0.0063 memory: 6166 loss: 0.0142 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0142
2023/03/06 23:25:02 - mmengine - INFO - Epoch(train) [13][1400/1567] lr: 9.0204e-03 eta: 0:12:29 time: 0.1561 data_time: 0.0063 memory: 6166 loss: 0.0126 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0126
2023/03/06 23:25:17 - mmengine - INFO - Epoch(train) [13][1500/1567] lr: 8.6647e-03 eta: 0:12:14 time: 0.1531 data_time: 0.0071 memory: 6166 loss: 0.0138 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0138
2023/03/06 23:25:27 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:25:27 - mmengine - INFO - Epoch(train) [13][1567/1567] lr: 8.4300e-03 eta: 0:12:04 time: 0.1484 data_time: 0.0061 memory: 6166 loss: 0.1908 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.1908
2023/03/06 23:25:27 - mmengine - INFO - Saving checkpoint at 13 epochs
2023/03/06 23:25:33 - mmengine - INFO - Epoch(val) [13][100/129] eta: 0:00:01 time: 0.0451 data_time: 0.0058 memory: 1242
2023/03/06 23:25:35 - mmengine - INFO - Epoch(val) [13][129/129] acc/top1: 0.8857 acc/top5: 0.9806 acc/mean1: 0.8856
2023/03/06 23:25:35 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_12.pth is removed
2023/03/06 23:25:35 - mmengine - INFO - The best checkpoint with 0.8857 acc/top1 at 13 epoch is saved to best_acc/top1_epoch_13.pth.
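Each validation pass in this log ends with the same bookkeeping: when acc/top1 improves, the previous best_acc/top1_epoch_*.pth file is removed and a new best checkpoint is written. A minimal, illustrative re-implementation of that policy follows; it is not mmengine's CheckpointHook, and the class name BestCheckpointKeeper and the save_fn callable are hypothetical.

import os

class BestCheckpointKeeper:
    """Illustrative only: keep a single best checkpoint, keyed on validation acc/top1."""

    def __init__(self, work_dir):
        self.work_dir = work_dir
        self.best_score = float("-inf")
        self.best_path = None

    def update(self, epoch, top1_acc, save_fn):
        # save_fn(path) is a hypothetical callable that writes the model weights.
        if top1_acc <= self.best_score:
            return
        if self.best_path is not None and os.path.exists(self.best_path):
            os.remove(self.best_path)  # "The previous best checkpoint ... is removed"
        self.best_score = top1_acc
        self.best_path = os.path.join(self.work_dir, f"best_acc/top1_epoch_{epoch}.pth")
        os.makedirs(os.path.dirname(self.best_path), exist_ok=True)
        save_fn(self.best_path)        # "The best checkpoint ... is saved to ..."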
2023/03/06 23:25:51 - mmengine - INFO - Epoch(train) [14][ 100/1567] lr: 8.0851e-03 eta: 0:11:48 time: 0.1542 data_time: 0.0063 memory: 6166 loss: 0.0131 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0131
2023/03/06 23:26:06 - mmengine - INFO - Epoch(train) [14][ 200/1567] lr: 7.7469e-03 eta: 0:11:33 time: 0.1553 data_time: 0.0063 memory: 6166 loss: 0.0130 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0130
2023/03/06 23:26:22 - mmengine - INFO - Epoch(train) [14][ 300/1567] lr: 7.4152e-03 eta: 0:11:17 time: 0.1547 data_time: 0.0063 memory: 6166 loss: 0.0108 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0108
2023/03/06 23:26:37 - mmengine - INFO - Epoch(train) [14][ 400/1567] lr: 7.0902e-03 eta: 0:11:02 time: 0.1551 data_time: 0.0063 memory: 6166 loss: 0.0143 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0143
2023/03/06 23:26:53 - mmengine - INFO - Epoch(train) [14][ 500/1567] lr: 6.7720e-03 eta: 0:10:47 time: 0.1535 data_time: 0.0063 memory: 6166 loss: 0.0094 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0094
2023/03/06 23:27:08 - mmengine - INFO - Epoch(train) [14][ 600/1567] lr: 6.4606e-03 eta: 0:10:31 time: 0.1533 data_time: 0.0063 memory: 6166 loss: 0.0155 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0155
2023/03/06 23:27:12 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:27:23 - mmengine - INFO - Epoch(train) [14][ 700/1567] lr: 6.1560e-03 eta: 0:10:16 time: 0.1536 data_time: 0.0063 memory: 6166 loss: 0.0183 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0183
2023/03/06 23:27:39 - mmengine - INFO - Epoch(train) [14][ 800/1567] lr: 5.8582e-03 eta: 0:10:00 time: 0.1541 data_time: 0.0063 memory: 6166 loss: 0.0133 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0133
2023/03/06 23:27:54 - mmengine - INFO - Epoch(train) [14][ 900/1567] lr: 5.5675e-03 eta: 0:09:45 time: 0.1538 data_time: 0.0064 memory: 6166 loss: 0.0101 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0101
2023/03/06 23:28:09 - mmengine - INFO - Epoch(train) [14][1000/1567] lr: 5.2836e-03 eta: 0:09:30 time: 0.1520 data_time: 0.0062 memory: 6166 loss: 0.0117 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0117
2023/03/06 23:28:25 - mmengine - INFO - Epoch(train) [14][1100/1567] lr: 5.0068e-03 eta: 0:09:14 time: 0.1544 data_time: 0.0063 memory: 6166 loss: 0.0122 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0122
2023/03/06 23:28:40 - mmengine - INFO - Epoch(train) [14][1200/1567] lr: 4.7371e-03 eta: 0:08:59 time: 0.1524 data_time: 0.0063 memory: 6166 loss: 0.0089 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0089
2023/03/06 23:28:56 - mmengine - INFO - Epoch(train) [14][1300/1567] lr: 4.4745e-03 eta: 0:08:43 time: 0.1523 data_time: 0.0063 memory: 6166 loss: 0.0130 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0130
2023/03/06 23:29:11 - mmengine - INFO - Epoch(train) [14][1400/1567] lr: 4.2190e-03 eta: 0:08:28 time: 0.1537 data_time: 0.0063 memory: 6166 loss: 0.0118 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0118
2023/03/06 23:29:26 - mmengine - INFO - Epoch(train) [14][1500/1567] lr: 3.9707e-03 eta: 0:08:13 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0088 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0088
2023/03/06 23:29:36 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:29:36 - mmengine - INFO - Epoch(train) [14][1567/1567] lr: 3.8084e-03 eta: 0:08:02 time: 0.1481 data_time: 0.0061 memory: 6166 loss: 0.2178 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2178
2023/03/06 23:29:36 - mmengine - INFO - Saving checkpoint at 14 epochs
2023/03/06 23:29:42 - mmengine - INFO - Epoch(val) [14][100/129] eta: 0:00:01 time: 0.0452 data_time: 0.0062 memory: 1242
2023/03/06 23:29:43 - mmengine - INFO - Epoch(val) [14][129/129] acc/top1: 0.8899 acc/top5: 0.9809 acc/mean1: 0.8898
2023/03/06 23:29:43 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_13.pth is removed
2023/03/06 23:29:44 - mmengine - INFO - The best checkpoint with 0.8899 acc/top1 at 14 epoch is saved to best_acc/top1_epoch_14.pth.
2023/03/06 23:29:53 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:29:59 - mmengine - INFO - Epoch(train) [15][ 100/1567] lr: 3.5722e-03 eta: 0:07:47 time: 0.1532 data_time: 0.0069 memory: 6166 loss: 0.0110 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0110
2023/03/06 23:30:14 - mmengine - INFO - Epoch(train) [15][ 200/1567] lr: 3.3433e-03 eta: 0:07:31 time: 0.1528 data_time: 0.0064 memory: 6166 loss: 0.0108 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0108
2023/03/06 23:30:30 - mmengine - INFO - Epoch(train) [15][ 300/1567] lr: 3.1217e-03 eta: 0:07:16 time: 0.1523 data_time: 0.0063 memory: 6166 loss: 0.0115 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0115
2023/03/06 23:30:45 - mmengine - INFO - Epoch(train) [15][ 400/1567] lr: 2.9075e-03 eta: 0:07:01 time: 0.1529 data_time: 0.0063 memory: 6166 loss: 0.0141 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0141
2023/03/06 23:31:00 - mmengine - INFO - Epoch(train) [15][ 500/1567] lr: 2.7007e-03 eta: 0:06:45 time: 0.1542 data_time: 0.0064 memory: 6166 loss: 0.0086 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0086
2023/03/06 23:31:16 - mmengine - INFO - Epoch(train) [15][ 600/1567] lr: 2.5013e-03 eta: 0:06:30 time: 0.1545 data_time: 0.0063 memory: 6166 loss: 0.0106 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0106
2023/03/06 23:31:31 - mmengine - INFO - Epoch(train) [15][ 700/1567] lr: 2.3093e-03 eta: 0:06:14 time: 0.1542 data_time: 0.0063 memory: 6166 loss: 0.0080 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0080
2023/03/06 23:31:47 - mmengine - INFO - Epoch(train) [15][ 800/1567] lr: 2.1249e-03 eta: 0:05:59 time: 0.1557 data_time: 0.0064 memory: 6166 loss: 0.0089 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0089
2023/03/06 23:32:02 - mmengine - INFO - Epoch(train) [15][ 900/1567] lr: 1.9479e-03 eta: 0:05:44 time: 0.1543 data_time: 0.0063 memory: 6166 loss: 0.0081 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0081
2023/03/06 23:32:18 - mmengine - INFO - Epoch(train) [15][1000/1567] lr: 1.7785e-03 eta: 0:05:28 time: 0.1542 data_time: 0.0063 memory: 6166 loss: 0.0090 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0090
2023/03/06 23:32:27 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:32:33 - mmengine - INFO - Epoch(train) [15][1100/1567] lr: 1.6167e-03 eta: 0:05:13 time: 0.1537 data_time: 0.0065 memory: 6166 loss: 0.0123 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0123
2023/03/06 23:32:48 - mmengine - INFO - Epoch(train) [15][1200/1567] lr: 1.4625e-03 eta: 0:04:57 time: 0.1528 data_time: 0.0063 memory: 6166 loss: 0.0093 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0093
2023/03/06 23:33:04 - mmengine - INFO - Epoch(train) [15][1300/1567] lr: 1.3159e-03 eta: 0:04:42 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0087 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0087
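The Epoch(val) entries in this log report acc/top1, acc/top5 and acc/mean1 (mean per-class top-1). The sketch below shows one way such numbers can be computed from score matrices and labels; it is an illustration with assumed function names (topk_accuracy, mean_class_accuracy) and random data, not the evaluator used to produce this log.

import numpy as np

def topk_accuracy(scores, labels, k):
    # scores: (N, num_classes) logits or probabilities; labels: (N,) integer class ids
    topk = np.argsort(scores, axis=1)[:, -k:]
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

def mean_class_accuracy(scores, labels):
    # Average of per-class top-1 accuracies (what "mean1" style metrics report).
    pred = scores.argmax(axis=1)
    accs = [np.mean(pred[labels == c] == c) for c in np.unique(labels)]
    return float(np.mean(accs))

# Random data just to exercise the functions (60 classes, as in the NTU60 setup named above):
rng = np.random.default_rng(0)
scores = rng.normal(size=(8, 60))
labels = rng.integers(0, 60, size=8)
print(topk_accuracy(scores, labels, 1), topk_accuracy(scores, labels, 5), mean_class_accuracy(scores, labels))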
2023/03/06 23:33:19 - mmengine - INFO - Epoch(train) [15][1400/1567] lr: 1.1769e-03 eta: 0:04:27 time: 0.1529 data_time: 0.0063 memory: 6166 loss: 0.0109 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0109
2023/03/06 23:33:34 - mmengine - INFO - Epoch(train) [15][1500/1567] lr: 1.0456e-03 eta: 0:04:11 time: 0.1541 data_time: 0.0064 memory: 6166 loss: 0.0086 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0086
2023/03/06 23:33:44 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:33:44 - mmengine - INFO - Epoch(train) [15][1567/1567] lr: 9.6196e-04 eta: 0:04:01 time: 0.1486 data_time: 0.0061 memory: 6166 loss: 0.1367 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1367
2023/03/06 23:33:45 - mmengine - INFO - Saving checkpoint at 15 epochs
2023/03/06 23:33:50 - mmengine - INFO - Epoch(val) [15][100/129] eta: 0:00:01 time: 0.0460 data_time: 0.0068 memory: 1242
2023/03/06 23:33:52 - mmengine - INFO - Epoch(val) [15][129/129] acc/top1: 0.8936 acc/top5: 0.9811 acc/mean1: 0.8935
2023/03/06 23:33:52 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_14.pth is removed
2023/03/06 23:33:52 - mmengine - INFO - The best checkpoint with 0.8936 acc/top1 at 15 epoch is saved to best_acc/top1_epoch_15.pth.
2023/03/06 23:34:08 - mmengine - INFO - Epoch(train) [16][ 100/1567] lr: 8.4351e-04 eta: 0:03:45 time: 0.1532 data_time: 0.0063 memory: 6166 loss: 0.0104 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0104
2023/03/06 23:34:23 - mmengine - INFO - Epoch(train) [16][ 200/1567] lr: 7.3277e-04 eta: 0:03:30 time: 0.1532 data_time: 0.0064 memory: 6166 loss: 0.0093 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0093
2023/03/06 23:34:39 - mmengine - INFO - Epoch(train) [16][ 300/1567] lr: 6.2978e-04 eta: 0:03:15 time: 0.1533 data_time: 0.0063 memory: 6166 loss: 0.0090 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0090
2023/03/06 23:34:54 - mmengine - INFO - Epoch(train) [16][ 400/1567] lr: 5.3453e-04 eta: 0:02:59 time: 0.1528 data_time: 0.0063 memory: 6166 loss: 0.0079 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0079
2023/03/06 23:35:08 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:35:09 - mmengine - INFO - Epoch(train) [16][ 500/1567] lr: 4.4705e-04 eta: 0:02:44 time: 0.1529 data_time: 0.0063 memory: 6166 loss: 0.0168 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0168
2023/03/06 23:35:25 - mmengine - INFO - Epoch(train) [16][ 600/1567] lr: 3.6735e-04 eta: 0:02:28 time: 0.1530 data_time: 0.0063 memory: 6166 loss: 0.0091 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0091
2023/03/06 23:35:40 - mmengine - INFO - Epoch(train) [16][ 700/1567] lr: 2.9544e-04 eta: 0:02:13 time: 0.1529 data_time: 0.0063 memory: 6166 loss: 0.0097 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0097
2023/03/06 23:35:55 - mmengine - INFO - Epoch(train) [16][ 800/1567] lr: 2.3134e-04 eta: 0:01:58 time: 0.1531 data_time: 0.0063 memory: 6166 loss: 0.0073 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0073
2023/03/06 23:36:10 - mmengine - INFO - Epoch(train) [16][ 900/1567] lr: 1.7505e-04 eta: 0:01:42 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0112 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0112
2023/03/06 23:36:26 - mmengine - INFO - Epoch(train) [16][1000/1567] lr: 1.2658e-04 eta: 0:01:27 time: 0.1524 data_time: 0.0063 memory: 6166 loss: 0.0106 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0106
2023/03/06 23:36:41 - mmengine - INFO - Epoch(train) [16][1100/1567] lr: 8.5947e-05 eta: 0:01:11 time: 0.1522 data_time: 0.0064 memory: 6166 loss: 0.0153 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0153
2023/03/06 23:36:56 - mmengine - INFO - Epoch(train) [16][1200/1567] lr: 5.3147e-05 eta: 0:00:56 time: 0.1522 data_time: 0.0063 memory: 6166 loss: 0.0105 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0105
2023/03/06 23:37:12 - mmengine - INFO - Epoch(train) [16][1300/1567] lr: 2.8190e-05 eta: 0:00:41 time: 0.1552 data_time: 0.0063 memory: 6166 loss: 0.0103 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0103
2023/03/06 23:37:27 - mmengine - INFO - Epoch(train) [16][1400/1567] lr: 1.1078e-05 eta: 0:00:25 time: 0.1547 data_time: 0.0064 memory: 6166 loss: 0.0073 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0073
2023/03/06 23:37:42 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:37:43 - mmengine - INFO - Epoch(train) [16][1500/1567] lr: 1.8150e-06 eta: 0:00:10 time: 0.1549 data_time: 0.0064 memory: 6166 loss: 0.0056 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0056
2023/03/06 23:37:53 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d_20230306_223122
2023/03/06 23:37:53 - mmengine - INFO - Epoch(train) [16][1567/1567] lr: 3.9252e-10 eta: 0:00:00 time: 0.1491 data_time: 0.0062 memory: 6166 loss: 0.1798 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.1798
2023/03/06 23:37:53 - mmengine - INFO - Saving checkpoint at 16 epochs
2023/03/06 23:37:58 - mmengine - INFO - Epoch(val) [16][100/129] eta: 0:00:01 time: 0.0448 data_time: 0.0059 memory: 1242
2023/03/06 23:38:00 - mmengine - INFO - Epoch(val) [16][129/129] acc/top1: 0.8940 acc/top5: 0.9809 acc/mean1: 0.8939
2023/03/06 23:38:00 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-3d/best_acc/top1_epoch_15.pth is removed
2023/03/06 23:38:00 - mmengine - INFO - The best checkpoint with 0.8940 acc/top1 at 16 epoch is saved to best_acc/top1_epoch_16.pth.
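Over the epochs shown here, validation acc/top1 climbs from 0.8604 at epoch 11 to 0.8940 at epoch 16, and the final best checkpoint is best_acc/top1_epoch_16.pth. A small helper for pulling that trajectory out of a log file like this one; the pattern name VAL_RE, the function val_curve and the file name in the comment are placeholders, not part of the run.

import re

# Matches e.g. "Epoch(val) [16][129/129] acc/top1: 0.8940 acc/top5: 0.9809 acc/mean1: 0.8939"
VAL_RE = re.compile(r"Epoch\(val\) \[(\d+)\]\[\d+/\d+\].*?acc/top1: ([\d.]+).*?acc/top5: ([\d.]+)")

def val_curve(log_path):
    # Return {epoch: (top1, top5)} for every per-epoch validation summary in the log.
    curve = {}
    with open(log_path) as f:
        for line in f:
            m = VAL_RE.search(line)
            if m:
                curve[int(m.group(1))] = (float(m.group(2)), float(m.group(3)))
    return curve

# e.g. val_curve("20230306_223122.log")
# -> {11: (0.8604, 0.9745), 12: (0.8803, 0.9781), ..., 16: (0.894, 0.9809)} for this log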