2023/03/08 20:17:21 - mmengine - INFO - ------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 1974345934
    GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /mnt/petrelfs/share/cuda-11.3
    NVCC: Cuda compilation tools, release 11.3, V11.3.109
    GCC: gcc (GCC) 5.4.0
    PyTorch: 1.11.0
    PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
    TorchVision: 0.12.0
    OpenCV: 4.6.0
    MMEngine: 0.6.0

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    diff_rank_seed: False
    deterministic: False
    Distributed launcher: pytorch
    Distributed training: True
    GPU number: 8
------------------------------------------------------------

2023/03/08 20:17:21 - mmengine - INFO - Config:
default_scope = 'mmaction'
default_hooks = dict(
    runtime_info=dict(type='RuntimeInfoHook', _scope_='mmaction'),
    timer=dict(type='IterTimerHook', _scope_='mmaction'),
    logger=dict(
        type='LoggerHook',
        interval=100,
        ignore_last=False,
        _scope_='mmaction'),
    param_scheduler=dict(type='ParamSchedulerHook', _scope_='mmaction'),
    checkpoint=dict(
        type='CheckpointHook',
        interval=1,
        save_best='auto',
        _scope_='mmaction'),
    sampler_seed=dict(type='DistSamplerSeedHook', _scope_='mmaction'),
    sync_buffers=dict(type='SyncBuffersHook', _scope_='mmaction'))
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
log_processor = dict(
    type='LogProcessor', window_size=20, by_epoch=True, _scope_='mmaction')
vis_backends = [dict(type='LocalVisBackend', _scope_='mmaction')]
visualizer = dict(
    type='ActionVisualizer',
    vis_backends=[dict(type='LocalVisBackend')],
    _scope_='mmaction')
log_level = 'INFO'
load_from = None
resume = False
custom_imports = dict(imports='models')
model = dict(
    type='RecognizerGCN',
    backbone=dict(
        type='MSG3D', graph_cfg=dict(layout='coco', mode='binary_adj')),
    cls_head=dict(type='GCNHead', num_classes=60, in_channels=384))
dataset_type = 'PoseDataset'
ann_file = 'data/skeleton/ntu60_2d.pkl'
train_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['j']),
    dict(type='UniformSampleFrames', clip_len=100),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['j']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=1,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(type='PreNormalize2D'),
    dict(type='GenSkeFeat', dataset='coco', feats=['j']),
    dict(
        type='UniformSampleFrames', clip_len=100, num_clips=10,
        test_mode=True),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=2),
    dict(type='PackActionInputs')
]
train_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type='PoseDataset',
            ann_file='data/skeleton/ntu60_2d.pkl',
            pipeline=[
                dict(type='PreNormalize2D'),
                dict(type='GenSkeFeat', dataset='coco', feats=['j']),
                dict(type='UniformSampleFrames', clip_len=100),
                dict(type='PoseDecode'),
                dict(type='FormatGCNInput', num_person=2),
                dict(type='PackActionInputs')
            ],
            split='xsub_train')))
val_dataloader = dict(
    batch_size=16,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_2d.pkl',
        pipeline=[
            dict(type='PreNormalize2D'),
            dict(type='GenSkeFeat', dataset='coco', feats=['j']),
            dict(
                type='UniformSampleFrames',
                clip_len=100,
                num_clips=1,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
test_dataloader = dict(
    batch_size=1,
    num_workers=2,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PoseDataset',
        ann_file='data/skeleton/ntu60_2d.pkl',
        pipeline=[
            dict(type='PreNormalize2D'),
            dict(type='GenSkeFeat', dataset='coco', feats=['j']),
            dict(
                type='UniformSampleFrames',
                clip_len=100,
                num_clips=10,
                test_mode=True),
            dict(type='PoseDecode'),
            dict(type='FormatGCNInput', num_person=2),
            dict(type='PackActionInputs')
        ],
        split='xsub_val',
        test_mode=True))
val_evaluator = [dict(type='AccMetric')]
test_evaluator = [dict(type='AccMetric')]
train_cfg = dict(
    type='EpochBasedTrainLoop', max_epochs=16, val_begin=1, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
param_scheduler = [
    dict(
        type='CosineAnnealingLR',
        eta_min=0,
        T_max=16,
        by_epoch=True,
        convert_to_iter_based=True)
]
optim_wrapper = dict(
    optimizer=dict(
        type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0005,
        nesterov=True))
auto_scale_lr = dict(enable=False, base_batch_size=128)
launcher = 'pytorch'
work_dir = './work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d'
randomness = dict(seed=None, diff_rank_seed=False, deterministic=False)

2023/03/08 20:17:21 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
 --------------------
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
 --------------------
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
 --------------------
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
 --------------------
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
 --------------------
after_train_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) SyncBuffersHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
 --------------------
before_val_epoch:
(NORMAL ) IterTimerHook
 --------------------
before_val_iter:
(NORMAL ) IterTimerHook
 --------------------
after_val_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
 --------------------
before_test_epoch:
(NORMAL ) IterTimerHook
 --------------------
before_test_iter:
(NORMAL ) IterTimerHook
 --------------------
after_test_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 --------------------
after_run:
(BELOW_NORMAL) LoggerHook
 --------------------
Name of parameter - Initialization information
backbone.data_bn.weight - torch.Size([102]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.data_bn.bias - torch.Size([102]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.PA - torch.Size([6, 51, 51]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.mlp.layers.0.weight - torch.Size([96, 18, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.gcn3d.1.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.out_conv.weight - torch.Size([96, 96, 1, 3, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.out_conv.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.out_bn.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.0.out_bn.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.1.gcn3d.1.PA - torch.Size([6, 85, 85]): The value is the same before and after calling `init_weights` of RecognizerGCN
backbone.gcn3d1.gcn3d.1.gcn3d.1.mlp.layers.0.weight - torch.Size([96, 18, 1,
1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.gcn3d.1.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.gcn3d.1.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.gcn3d.1.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.out_conv.weight - torch.Size([96, 96, 1, 5, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.out_conv.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.out_bn.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d1.gcn3d.1.out_bn.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.PA - torch.Size([13, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.mlp.layers.0.weight - torch.Size([96, 39, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.0.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.0.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.0.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.0.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.0.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.0.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.0.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.0.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.0.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.1.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.1.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.1.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.1.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.1.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, 
nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.1.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.1.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.1.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.2.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.2.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.2.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.2.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.2.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.2.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.2.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.2.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.3.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.3.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.3.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.3.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.3.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.3.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.3.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.3.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.4.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.4.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.4.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.4.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.4.4.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.4.4.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.sgcn1.1.branches.5.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.5.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.1.branches.5.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.1.branches.5.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.0.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.0.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.0.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.0.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.0.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.0.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.0.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.0.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.1.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.1.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.1.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.1.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.1.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.1.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.1.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.1.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.2.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.2.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.2.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.2.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.2.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.2.3.conv.bias - 
torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.2.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.2.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.3.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.3.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.3.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.3.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.3.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.3.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.3.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.3.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.4.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.4.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.4.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.4.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.4.4.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.4.4.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.5.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.5.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn1.2.branches.5.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn1.2.branches.5.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.0.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.0.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.0.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.0.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.0.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, 
bias=0 backbone.tcn1.branches.0.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.0.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.0.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.1.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.1.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.1.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.1.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.1.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.1.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.1.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.1.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.2.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.2.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.2.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.2.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.2.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.2.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.2.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.2.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.3.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.3.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.3.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.3.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.3.3.conv.weight - torch.Size([16, 16, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.3.3.conv.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.3.3.bn.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.tcn1.branches.3.3.bn.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.4.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.4.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.4.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.4.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.4.4.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.4.4.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.5.0.weight - torch.Size([16, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.5.0.bias - torch.Size([16]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn1.branches.5.1.weight - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn1.branches.5.1.bias - torch.Size([16]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.PA - torch.Size([6, 51, 51]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.mlp.layers.0.weight - torch.Size([96, 576, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.gcn3d.1.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.out_conv.weight - torch.Size([192, 96, 1, 3, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.out_conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.out_bn.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.0.out_bn.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.PA - torch.Size([6, 85, 85]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.mlp.layers.0.weight - torch.Size([96, 576, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.gcn3d.1.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.gcn3d2.gcn3d.1.out_conv.weight - torch.Size([192, 96, 1, 5, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.out_conv.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.out_bn.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d2.gcn3d.1.out_bn.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.PA - torch.Size([13, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.mlp.layers.0.weight - torch.Size([96, 1248, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.mlp.layers.0.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.mlp.layers.1.weight - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.0.mlp.layers.1.bias - torch.Size([96]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.0.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.0.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.0.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.0.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.0.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.0.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.0.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.0.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.1.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.1.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.1.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.1.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.1.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.1.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.1.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.1.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.2.0.weight - torch.Size([32, 
96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.2.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.2.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.2.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.2.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.2.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.2.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.2.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.3.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.3.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.3.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.3.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.3.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.3.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.3.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.3.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.4.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.4.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.4.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.4.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.4.4.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.4.4.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.5.0.weight - torch.Size([32, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.5.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.branches.5.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.branches.5.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of 
RecognizerGCN backbone.sgcn2.1.residual.conv.weight - torch.Size([192, 96, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.residual.conv.bias - torch.Size([192]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.1.residual.bn.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.1.residual.bn.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.0.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.0.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.0.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.0.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.0.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.0.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.0.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.0.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.1.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.1.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.1.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.1.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.1.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.1.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.1.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.1.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.2.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.2.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.2.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.2.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.2.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 
backbone.sgcn2.2.branches.2.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.2.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.2.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.3.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.3.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.3.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.3.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.3.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.3.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.3.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.3.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.4.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.4.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.4.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.4.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.4.4.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.4.4.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.5.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.5.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn2.2.branches.5.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn2.2.branches.5.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.0.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.0.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.0.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.0.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.0.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, 
nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.0.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.0.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.0.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.1.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.1.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.1.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.1.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.1.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.1.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.1.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.1.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.2.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.2.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.2.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.2.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.2.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.2.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.2.3.bn.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.2.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.3.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.3.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.3.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.3.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.3.3.conv.weight - torch.Size([32, 32, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.3.3.conv.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.3.3.bn.weight - torch.Size([32]): The value is the same 
before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.3.3.bn.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.4.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.4.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.4.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.4.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.4.4.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.4.4.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.5.0.weight - torch.Size([32, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.5.0.bias - torch.Size([32]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn2.branches.5.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn2.branches.5.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.PA - torch.Size([6, 51, 51]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.mlp.layers.0.weight - torch.Size([192, 1152, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.mlp.layers.0.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.mlp.layers.1.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.gcn3d.1.mlp.layers.1.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.out_conv.weight - torch.Size([384, 192, 1, 3, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.out_conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.out_bn.weight - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.0.out_bn.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.PA - torch.Size([6, 85, 85]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.mlp.layers.0.weight - torch.Size([192, 1152, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.mlp.layers.0.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.mlp.layers.1.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.gcn3d.1.mlp.layers.1.bias - torch.Size([192]): The value is the 
same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.out_conv.weight - torch.Size([384, 192, 1, 5, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.out_conv.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.out_bn.weight - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.gcn3d3.gcn3d.1.out_bn.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.PA - torch.Size([13, 17, 17]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.mlp.layers.0.weight - torch.Size([192, 2496, 1, 1]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.mlp.layers.0.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.mlp.layers.1.weight - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.0.mlp.layers.1.bias - torch.Size([192]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.0.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.0.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.0.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.0.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.0.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.0.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.0.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.0.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.1.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.1.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.1.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.1.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.1.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.1.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.1.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.1.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN 
backbone.sgcn3.1.branches.2.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.2.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.2.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.2.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.2.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.2.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.2.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.2.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.3.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.3.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.3.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.3.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.3.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.3.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.3.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.3.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.4.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.4.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.4.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.4.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.4.4.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.4.4.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.5.0.weight - torch.Size([64, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.5.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.branches.5.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.branches.5.1.bias - torch.Size([64]): The value is 
the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.residual.conv.weight - torch.Size([384, 192, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.residual.conv.bias - torch.Size([384]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.1.residual.bn.weight - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.1.residual.bn.bias - torch.Size([384]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.0.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.0.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.0.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.0.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.0.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.0.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.0.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.0.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.1.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.1.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.1.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.1.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.1.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.1.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.1.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.1.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.2.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.2.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.2.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.2.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.2.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution 
=normal, bias=0 backbone.sgcn3.2.branches.2.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.2.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.2.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.3.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.3.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.3.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.3.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.3.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.3.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.3.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.3.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.4.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.4.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.4.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.4.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.4.4.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.4.4.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.5.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.5.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.sgcn3.2.branches.5.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.sgcn3.2.branches.5.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.0.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.0.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.0.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.0.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.0.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: 
a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.0.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.0.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.0.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.1.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.1.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.1.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.1.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.1.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.1.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.1.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.1.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.2.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.2.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.2.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.2.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.2.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.2.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.2.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.2.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.3.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.3.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.3.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.3.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.3.3.conv.weight - torch.Size([64, 64, 3, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.3.3.conv.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.3.3.bn.weight - torch.Size([64]): 
The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.3.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.4.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.4.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.4.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.4.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.4.4.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.4.4.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.5.0.weight - torch.Size([64, 384, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.5.0.bias - torch.Size([64]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0 backbone.tcn3.branches.5.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN backbone.tcn3.branches.5.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of RecognizerGCN cls_head.fc.weight - torch.Size([60, 384]): NormalInit: mean=0, std=0.01, bias=0 cls_head.fc.bias - torch.Size([60]): NormalInit: mean=0, std=0.01, bias=0 2023/03/08 20:17:29 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d. 
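The dump above shows two initialization schemes: the graph/temporal convolution layers use KaimingInit (a=0, mode=fan_out, nonlinearity=relu, normal distribution, bias=0), while the classifier cls_head.fc uses NormalInit (mean=0, std=0.01, bias=0); batch-norm and the learned adjacency (PA) parameters are reported unchanged by `init_weights`. A minimal PyTorch sketch of the same settings, assuming plain nn.Conv2d / nn.Linear modules (illustrative only, not the RecognizerGCN init code):

import torch.nn as nn

def init_like_logged(module: nn.Module) -> None:
    # Conv layers: Kaiming normal, a=0, mode='fan_out', nonlinearity='relu', bias=0.
    if isinstance(module, nn.Conv2d):
        nn.init.kaiming_normal_(module.weight, a=0, mode='fan_out', nonlinearity='relu')
        if module.bias is not None:
            nn.init.zeros_(module.bias)
    # Classifier fc: Normal(mean=0, std=0.01), bias=0.
    elif isinstance(module, nn.Linear):
        nn.init.normal_(module.weight, mean=0.0, std=0.01)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# model.apply(init_like_logged)  # hypothetical usage on the recognizer's module tree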
2023/03/08 20:17:47 - mmengine - INFO - Epoch(train) [1][ 100/1567] lr: 9.9996e-02 eta: 1:14:47 time: 0.1152 data_time: 0.0061 memory: 4206 loss: 3.4589 top1_acc: 0.0000 top5_acc: 0.5625 loss_cls: 3.4589 2023/03/08 20:17:59 - mmengine - INFO - Epoch(train) [1][ 200/1567] lr: 9.9984e-02 eta: 1:01:19 time: 0.1147 data_time: 0.0061 memory: 4206 loss: 3.1064 top1_acc: 0.0625 top5_acc: 0.3125 loss_cls: 3.1064 2023/03/08 20:18:10 - mmengine - INFO - Epoch(train) [1][ 300/1567] lr: 9.9965e-02 eta: 0:56:17 time: 0.1135 data_time: 0.0061 memory: 4206 loss: 2.4400 top1_acc: 0.1250 top5_acc: 0.6250 loss_cls: 2.4400 2023/03/08 20:18:21 - mmengine - INFO - Epoch(train) [1][ 400/1567] lr: 9.9938e-02 eta: 0:53:46 time: 0.1141 data_time: 0.0060 memory: 4206 loss: 1.9389 top1_acc: 0.4375 top5_acc: 0.9375 loss_cls: 1.9389 2023/03/08 20:18:33 - mmengine - INFO - Epoch(train) [1][ 500/1567] lr: 9.9902e-02 eta: 0:52:06 time: 0.1127 data_time: 0.0061 memory: 4206 loss: 1.7399 top1_acc: 0.6875 top5_acc: 0.8750 loss_cls: 1.7399 2023/03/08 20:18:44 - mmengine - INFO - Epoch(train) [1][ 600/1567] lr: 9.9859e-02 eta: 0:51:04 time: 0.1140 data_time: 0.0064 memory: 4206 loss: 1.4118 top1_acc: 0.6250 top5_acc: 0.8750 loss_cls: 1.4118 2023/03/08 20:18:55 - mmengine - INFO - Epoch(train) [1][ 700/1567] lr: 9.9808e-02 eta: 0:50:07 time: 0.1113 data_time: 0.0060 memory: 4206 loss: 1.2725 top1_acc: 0.6250 top5_acc: 1.0000 loss_cls: 1.2725 2023/03/08 20:19:07 - mmengine - INFO - Epoch(train) [1][ 800/1567] lr: 9.9750e-02 eta: 0:49:19 time: 0.1118 data_time: 0.0061 memory: 4206 loss: 0.9981 top1_acc: 0.6250 top5_acc: 0.8750 loss_cls: 0.9981 2023/03/08 20:19:18 - mmengine - INFO - Epoch(train) [1][ 900/1567] lr: 9.9683e-02 eta: 0:48:41 time: 0.1137 data_time: 0.0060 memory: 4206 loss: 0.8579 top1_acc: 0.5625 top5_acc: 1.0000 loss_cls: 0.8579 2023/03/08 20:19:29 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:19:29 - mmengine - INFO - Epoch(train) [1][1000/1567] lr: 9.9609e-02 eta: 0:48:12 time: 0.1137 data_time: 0.0060 memory: 4206 loss: 0.7764 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.7764 2023/03/08 20:19:40 - mmengine - INFO - Epoch(train) [1][1100/1567] lr: 9.9527e-02 eta: 0:47:42 time: 0.1117 data_time: 0.0061 memory: 4206 loss: 0.7351 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.7351 2023/03/08 20:19:52 - mmengine - INFO - Epoch(train) [1][1200/1567] lr: 9.9437e-02 eta: 0:47:15 time: 0.1120 data_time: 0.0061 memory: 4206 loss: 0.6826 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.6826 2023/03/08 20:20:03 - mmengine - INFO - Epoch(train) [1][1300/1567] lr: 9.9339e-02 eta: 0:46:53 time: 0.1134 data_time: 0.0061 memory: 4206 loss: 0.6437 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.6437 2023/03/08 20:20:14 - mmengine - INFO - Epoch(train) [1][1400/1567] lr: 9.9234e-02 eta: 0:46:32 time: 0.1123 data_time: 0.0061 memory: 4206 loss: 0.5618 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.5618 2023/03/08 20:20:25 - mmengine - INFO - Epoch(train) [1][1500/1567] lr: 9.9121e-02 eta: 0:46:10 time: 0.1125 data_time: 0.0061 memory: 4206 loss: 0.5440 top1_acc: 0.6875 top5_acc: 1.0000 loss_cls: 0.5440 2023/03/08 20:20:33 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:20:33 - mmengine - INFO - Epoch(train) [1][1567/1567] lr: 9.9040e-02 eta: 0:45:56 time: 0.1113 data_time: 0.0059 memory: 4206 loss: 0.6849 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.6849 2023/03/08 20:20:33 - mmengine - INFO - Saving checkpoint 
at 1 epochs 2023/03/08 20:20:37 - mmengine - INFO - Epoch(val) [1][100/129] eta: 0:00:01 time: 0.0327 data_time: 0.0058 memory: 856 2023/03/08 20:20:38 - mmengine - INFO - Epoch(val) [1][129/129] acc/top1: 0.6222 acc/top5: 0.9457 acc/mean1: 0.6222 2023/03/08 20:20:38 - mmengine - INFO - The best checkpoint with 0.6222 acc/top1 at 1 epoch is saved to best_acc/top1_epoch_1.pth. 2023/03/08 20:20:50 - mmengine - INFO - Epoch(train) [2][ 100/1567] lr: 9.8914e-02 eta: 0:45:37 time: 0.1111 data_time: 0.0062 memory: 4206 loss: 0.4653 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4653 2023/03/08 20:21:01 - mmengine - INFO - Epoch(train) [2][ 200/1567] lr: 9.8781e-02 eta: 0:45:18 time: 0.1112 data_time: 0.0061 memory: 4206 loss: 0.5810 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.5810 2023/03/08 20:21:12 - mmengine - INFO - Epoch(train) [2][ 300/1567] lr: 9.8639e-02 eta: 0:44:59 time: 0.1115 data_time: 0.0062 memory: 4206 loss: 0.4264 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4264 2023/03/08 20:21:23 - mmengine - INFO - Epoch(train) [2][ 400/1567] lr: 9.8491e-02 eta: 0:44:42 time: 0.1111 data_time: 0.0061 memory: 4206 loss: 0.5190 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.5190 2023/03/08 20:21:27 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:21:34 - mmengine - INFO - Epoch(train) [2][ 500/1567] lr: 9.8334e-02 eta: 0:44:25 time: 0.1112 data_time: 0.0062 memory: 4206 loss: 0.5451 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.5451 2023/03/08 20:21:45 - mmengine - INFO - Epoch(train) [2][ 600/1567] lr: 9.8170e-02 eta: 0:44:09 time: 0.1112 data_time: 0.0062 memory: 4206 loss: 0.4413 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4413 2023/03/08 20:21:56 - mmengine - INFO - Epoch(train) [2][ 700/1567] lr: 9.7998e-02 eta: 0:43:52 time: 0.1112 data_time: 0.0062 memory: 4206 loss: 0.4324 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4324 2023/03/08 20:22:07 - mmengine - INFO - Epoch(train) [2][ 800/1567] lr: 9.7819e-02 eta: 0:43:37 time: 0.1118 data_time: 0.0062 memory: 4206 loss: 0.4311 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4311 2023/03/08 20:22:19 - mmengine - INFO - Epoch(train) [2][ 900/1567] lr: 9.7632e-02 eta: 0:43:23 time: 0.1147 data_time: 0.0062 memory: 4206 loss: 0.4079 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.4079 2023/03/08 20:22:30 - mmengine - INFO - Epoch(train) [2][1000/1567] lr: 9.7438e-02 eta: 0:43:11 time: 0.1145 data_time: 0.0062 memory: 4206 loss: 0.4839 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.4839 2023/03/08 20:22:42 - mmengine - INFO - Epoch(train) [2][1100/1567] lr: 9.7236e-02 eta: 0:42:58 time: 0.1123 data_time: 0.0063 memory: 4206 loss: 0.4507 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4507 2023/03/08 20:22:53 - mmengine - INFO - Epoch(train) [2][1200/1567] lr: 9.7027e-02 eta: 0:42:44 time: 0.1125 data_time: 0.0062 memory: 4206 loss: 0.4623 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4623 2023/03/08 20:23:04 - mmengine - INFO - Epoch(train) [2][1300/1567] lr: 9.6810e-02 eta: 0:42:30 time: 0.1114 data_time: 0.0061 memory: 4206 loss: 0.3894 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3894 2023/03/08 20:23:15 - mmengine - INFO - Epoch(train) [2][1400/1567] lr: 9.6587e-02 eta: 0:42:17 time: 0.1149 data_time: 0.0063 memory: 4206 loss: 0.2960 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2960 2023/03/08 20:23:19 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:23:27 - mmengine - INFO - Epoch(train) [2][1500/1567] lr: 9.6355e-02 eta: 
0:42:06 time: 0.1139 data_time: 0.0062 memory: 4206 loss: 0.3562 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3562 2023/03/08 20:23:34 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:23:34 - mmengine - INFO - Epoch(train) [2][1567/1567] lr: 9.6196e-02 eta: 0:41:57 time: 0.1113 data_time: 0.0060 memory: 4206 loss: 0.5368 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.5368 2023/03/08 20:23:34 - mmengine - INFO - Saving checkpoint at 2 epochs 2023/03/08 20:23:38 - mmengine - INFO - Epoch(val) [2][100/129] eta: 0:00:00 time: 0.0322 data_time: 0.0057 memory: 856 2023/03/08 20:23:39 - mmengine - INFO - Epoch(val) [2][129/129] acc/top1: 0.2631 acc/top5: 0.8490 acc/mean1: 0.2632 2023/03/08 20:23:51 - mmengine - INFO - Epoch(train) [3][ 100/1567] lr: 9.5953e-02 eta: 0:41:47 time: 0.1160 data_time: 0.0061 memory: 4206 loss: 0.3824 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.3824 2023/03/08 20:24:02 - mmengine - INFO - Epoch(train) [3][ 200/1567] lr: 9.5703e-02 eta: 0:41:36 time: 0.1144 data_time: 0.0062 memory: 4206 loss: 0.2898 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2898 2023/03/08 20:24:14 - mmengine - INFO - Epoch(train) [3][ 300/1567] lr: 9.5445e-02 eta: 0:41:24 time: 0.1151 data_time: 0.0062 memory: 4206 loss: 0.3890 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3890 2023/03/08 20:24:25 - mmengine - INFO - Epoch(train) [3][ 400/1567] lr: 9.5180e-02 eta: 0:41:13 time: 0.1155 data_time: 0.0061 memory: 4206 loss: 0.3334 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3334 2023/03/08 20:24:37 - mmengine - INFO - Epoch(train) [3][ 500/1567] lr: 9.4908e-02 eta: 0:41:01 time: 0.1132 data_time: 0.0063 memory: 4206 loss: 0.3052 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3052 2023/03/08 20:24:48 - mmengine - INFO - Epoch(train) [3][ 600/1567] lr: 9.4629e-02 eta: 0:40:48 time: 0.1128 data_time: 0.0061 memory: 4206 loss: 0.2611 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2611 2023/03/08 20:24:59 - mmengine - INFO - Epoch(train) [3][ 700/1567] lr: 9.4343e-02 eta: 0:40:36 time: 0.1125 data_time: 0.0062 memory: 4206 loss: 0.3039 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3039 2023/03/08 20:25:11 - mmengine - INFO - Epoch(train) [3][ 800/1567] lr: 9.4050e-02 eta: 0:40:23 time: 0.1114 data_time: 0.0061 memory: 4206 loss: 0.3789 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.3789 2023/03/08 20:25:18 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:25:22 - mmengine - INFO - Epoch(train) [3][ 900/1567] lr: 9.3750e-02 eta: 0:40:10 time: 0.1131 data_time: 0.0061 memory: 4206 loss: 0.2659 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2659 2023/03/08 20:25:33 - mmengine - INFO - Epoch(train) [3][1000/1567] lr: 9.3444e-02 eta: 0:39:57 time: 0.1118 data_time: 0.0069 memory: 4206 loss: 0.3328 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.3328 2023/03/08 20:25:44 - mmengine - INFO - Epoch(train) [3][1100/1567] lr: 9.3130e-02 eta: 0:39:44 time: 0.1113 data_time: 0.0062 memory: 4206 loss: 0.2844 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2844 2023/03/08 20:25:55 - mmengine - INFO - Epoch(train) [3][1200/1567] lr: 9.2810e-02 eta: 0:39:32 time: 0.1130 data_time: 0.0061 memory: 4206 loss: 0.3289 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3289 2023/03/08 20:26:07 - mmengine - INFO - Epoch(train) [3][1300/1567] lr: 9.2483e-02 eta: 0:39:20 time: 0.1129 data_time: 0.0061 memory: 4206 loss: 0.4136 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.4136 2023/03/08 20:26:18 - mmengine - INFO - 
Epoch(train) [3][1400/1567] lr: 9.2149e-02 eta: 0:39:10 time: 0.1148 data_time: 0.0061 memory: 4206 loss: 0.3128 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.3128 2023/03/08 20:26:30 - mmengine - INFO - Epoch(train) [3][1500/1567] lr: 9.1809e-02 eta: 0:38:58 time: 0.1133 data_time: 0.0062 memory: 4206 loss: 0.2915 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2915 2023/03/08 20:26:37 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:26:37 - mmengine - INFO - Epoch(train) [3][1567/1567] lr: 9.1577e-02 eta: 0:38:50 time: 0.1095 data_time: 0.0059 memory: 4206 loss: 0.3956 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3956 2023/03/08 20:26:37 - mmengine - INFO - Saving checkpoint at 3 epochs 2023/03/08 20:26:41 - mmengine - INFO - Epoch(val) [3][100/129] eta: 0:00:00 time: 0.0321 data_time: 0.0056 memory: 856 2023/03/08 20:26:42 - mmengine - INFO - Epoch(val) [3][129/129] acc/top1: 0.6054 acc/top5: 0.9640 acc/mean1: 0.6055 2023/03/08 20:26:53 - mmengine - INFO - Epoch(train) [4][ 100/1567] lr: 9.1226e-02 eta: 0:38:37 time: 0.1117 data_time: 0.0062 memory: 4206 loss: 0.2851 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2851 2023/03/08 20:27:05 - mmengine - INFO - Epoch(train) [4][ 200/1567] lr: 9.0868e-02 eta: 0:38:26 time: 0.1138 data_time: 0.0062 memory: 4206 loss: 0.2300 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.2300 2023/03/08 20:27:16 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:27:16 - mmengine - INFO - Epoch(train) [4][ 300/1567] lr: 9.0504e-02 eta: 0:38:14 time: 0.1148 data_time: 0.0061 memory: 4206 loss: 0.3183 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3183 2023/03/08 20:27:28 - mmengine - INFO - Epoch(train) [4][ 400/1567] lr: 9.0133e-02 eta: 0:38:03 time: 0.1152 data_time: 0.0061 memory: 4206 loss: 0.2444 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2444 2023/03/08 20:27:39 - mmengine - INFO - Epoch(train) [4][ 500/1567] lr: 8.9756e-02 eta: 0:37:51 time: 0.1115 data_time: 0.0062 memory: 4206 loss: 0.2730 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2730 2023/03/08 20:27:50 - mmengine - INFO - Epoch(train) [4][ 600/1567] lr: 8.9373e-02 eta: 0:37:38 time: 0.1114 data_time: 0.0061 memory: 4206 loss: 0.2550 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2550 2023/03/08 20:28:01 - mmengine - INFO - Epoch(train) [4][ 700/1567] lr: 8.8984e-02 eta: 0:37:26 time: 0.1118 data_time: 0.0061 memory: 4206 loss: 0.2776 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2776 2023/03/08 20:28:12 - mmengine - INFO - Epoch(train) [4][ 800/1567] lr: 8.8589e-02 eta: 0:37:14 time: 0.1115 data_time: 0.0061 memory: 4206 loss: 0.3090 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.3090 2023/03/08 20:28:24 - mmengine - INFO - Epoch(train) [4][ 900/1567] lr: 8.8187e-02 eta: 0:37:01 time: 0.1116 data_time: 0.0061 memory: 4206 loss: 0.2657 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2657 2023/03/08 20:28:35 - mmengine - INFO - Epoch(train) [4][1000/1567] lr: 8.7780e-02 eta: 0:36:51 time: 0.1163 data_time: 0.0061 memory: 4206 loss: 0.3301 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.3301 2023/03/08 20:28:47 - mmengine - INFO - Epoch(train) [4][1100/1567] lr: 8.7367e-02 eta: 0:36:39 time: 0.1148 data_time: 0.0062 memory: 4206 loss: 0.2322 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2322 2023/03/08 20:28:58 - mmengine - INFO - Epoch(train) [4][1200/1567] lr: 8.6947e-02 eta: 0:36:28 time: 0.1116 data_time: 0.0061 memory: 4206 loss: 0.2825 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 
0.2825 2023/03/08 20:29:09 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:29:09 - mmengine - INFO - Epoch(train) [4][1300/1567] lr: 8.6522e-02 eta: 0:36:16 time: 0.1114 data_time: 0.0060 memory: 4206 loss: 0.2672 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2672 2023/03/08 20:29:20 - mmengine - INFO - Epoch(train) [4][1400/1567] lr: 8.6092e-02 eta: 0:36:04 time: 0.1131 data_time: 0.0062 memory: 4206 loss: 0.2577 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2577 2023/03/08 20:29:32 - mmengine - INFO - Epoch(train) [4][1500/1567] lr: 8.5655e-02 eta: 0:35:52 time: 0.1128 data_time: 0.0062 memory: 4206 loss: 0.2224 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2224 2023/03/08 20:29:39 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:29:39 - mmengine - INFO - Epoch(train) [4][1567/1567] lr: 8.5360e-02 eta: 0:35:44 time: 0.1119 data_time: 0.0059 memory: 4206 loss: 0.3968 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3968 2023/03/08 20:29:39 - mmengine - INFO - Saving checkpoint at 4 epochs 2023/03/08 20:29:43 - mmengine - INFO - Epoch(val) [4][100/129] eta: 0:00:00 time: 0.0322 data_time: 0.0057 memory: 856 2023/03/08 20:29:44 - mmengine - INFO - Epoch(val) [4][129/129] acc/top1: 0.7859 acc/top5: 0.9741 acc/mean1: 0.7858 2023/03/08 20:29:44 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_1.pth is removed 2023/03/08 20:29:45 - mmengine - INFO - The best checkpoint with 0.7859 acc/top1 at 4 epoch is saved to best_acc/top1_epoch_4.pth. 2023/03/08 20:29:56 - mmengine - INFO - Epoch(train) [5][ 100/1567] lr: 8.4914e-02 eta: 0:35:33 time: 0.1131 data_time: 0.0061 memory: 4206 loss: 0.2239 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2239 2023/03/08 20:30:07 - mmengine - INFO - Epoch(train) [5][ 200/1567] lr: 8.4463e-02 eta: 0:35:21 time: 0.1131 data_time: 0.0062 memory: 4206 loss: 0.2161 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2161 2023/03/08 20:30:18 - mmengine - INFO - Epoch(train) [5][ 300/1567] lr: 8.4006e-02 eta: 0:35:09 time: 0.1116 data_time: 0.0061 memory: 4206 loss: 0.1609 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1609 2023/03/08 20:30:30 - mmengine - INFO - Epoch(train) [5][ 400/1567] lr: 8.3544e-02 eta: 0:34:57 time: 0.1126 data_time: 0.0061 memory: 4206 loss: 0.2760 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2760 2023/03/08 20:30:41 - mmengine - INFO - Epoch(train) [5][ 500/1567] lr: 8.3077e-02 eta: 0:34:46 time: 0.1125 data_time: 0.0062 memory: 4206 loss: 0.2026 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2026 2023/03/08 20:30:52 - mmengine - INFO - Epoch(train) [5][ 600/1567] lr: 8.2605e-02 eta: 0:34:34 time: 0.1134 data_time: 0.0061 memory: 4206 loss: 0.2282 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2282 2023/03/08 20:31:04 - mmengine - INFO - Epoch(train) [5][ 700/1567] lr: 8.2127e-02 eta: 0:34:23 time: 0.1158 data_time: 0.0061 memory: 4206 loss: 0.2732 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.2732 2023/03/08 20:31:08 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:31:15 - mmengine - INFO - Epoch(train) [5][ 800/1567] lr: 8.1645e-02 eta: 0:34:11 time: 0.1143 data_time: 0.0061 memory: 4206 loss: 0.2169 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2169 2023/03/08 20:31:27 - mmengine - INFO - Epoch(train) [5][ 900/1567] lr: 8.1157e-02 eta: 
0:34:00 time: 0.1113 data_time: 0.0062 memory: 4206 loss: 0.2102 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2102 2023/03/08 20:31:38 - mmengine - INFO - Epoch(train) [5][1000/1567] lr: 8.0665e-02 eta: 0:33:48 time: 0.1115 data_time: 0.0062 memory: 4206 loss: 0.1951 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1951 2023/03/08 20:31:49 - mmengine - INFO - Epoch(train) [5][1100/1567] lr: 8.0167e-02 eta: 0:33:36 time: 0.1138 data_time: 0.0062 memory: 4206 loss: 0.1748 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1748 2023/03/08 20:32:00 - mmengine - INFO - Epoch(train) [5][1200/1567] lr: 7.9665e-02 eta: 0:33:25 time: 0.1136 data_time: 0.0062 memory: 4206 loss: 0.1934 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1934 2023/03/08 20:32:12 - mmengine - INFO - Epoch(train) [5][1300/1567] lr: 7.9159e-02 eta: 0:33:13 time: 0.1142 data_time: 0.0062 memory: 4206 loss: 0.1650 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1650 2023/03/08 20:32:23 - mmengine - INFO - Epoch(train) [5][1400/1567] lr: 7.8647e-02 eta: 0:33:02 time: 0.1144 data_time: 0.0062 memory: 4206 loss: 0.2635 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2635 2023/03/08 20:32:35 - mmengine - INFO - Epoch(train) [5][1500/1567] lr: 7.8132e-02 eta: 0:32:51 time: 0.1154 data_time: 0.0062 memory: 4206 loss: 0.2401 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.2401 2023/03/08 20:32:42 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:32:42 - mmengine - INFO - Epoch(train) [5][1567/1567] lr: 7.7784e-02 eta: 0:32:43 time: 0.1111 data_time: 0.0060 memory: 4206 loss: 0.4895 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.4895 2023/03/08 20:32:42 - mmengine - INFO - Saving checkpoint at 5 epochs 2023/03/08 20:32:46 - mmengine - INFO - Epoch(val) [5][100/129] eta: 0:00:00 time: 0.0322 data_time: 0.0057 memory: 856 2023/03/08 20:32:47 - mmengine - INFO - Epoch(val) [5][129/129] acc/top1: 0.7889 acc/top5: 0.9769 acc/mean1: 0.7888 2023/03/08 20:32:47 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_4.pth is removed 2023/03/08 20:32:47 - mmengine - INFO - The best checkpoint with 0.7889 acc/top1 at 5 epoch is saved to best_acc/top1_epoch_5.pth. 
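The lr values logged above follow the iter-based cosine annealing schedule (base lr 0.1, eta_min 0, T_max of 16 epochs x 1567 iterations per epoch). A short sketch that reproduces the logged values; the 0-based step indexing is an assumption about where the scheduler is sampled:

import math

def cosine_lr(step, base_lr=0.1, eta_min=0.0, total_steps=16 * 1567):
    # CosineAnnealingLR converted to iter-based: T_max = 16 epochs * 1567 iters/epoch.
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * step / total_steps)) / 2

print(f"{cosine_lr(99):.4e}")     # ~9.9996e-02, cf. Epoch(train) [1][ 100/1567]
print(f"{cosine_lr(12535):.4e}")  # ~5.0006e-02, cf. Epoch(train) [8][1567/1567]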
2023/03/08 20:32:59 - mmengine - INFO - Epoch(train) [6][ 100/1567] lr: 7.7261e-02 eta: 0:32:32 time: 0.1133 data_time: 0.0061 memory: 4206 loss: 0.1852 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1852 2023/03/08 20:33:06 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:33:10 - mmengine - INFO - Epoch(train) [6][ 200/1567] lr: 7.6733e-02 eta: 0:32:20 time: 0.1116 data_time: 0.0061 memory: 4206 loss: 0.1930 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1930 2023/03/08 20:33:21 - mmengine - INFO - Epoch(train) [6][ 300/1567] lr: 7.6202e-02 eta: 0:32:08 time: 0.1114 data_time: 0.0061 memory: 4206 loss: 0.1894 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1894 2023/03/08 20:33:32 - mmengine - INFO - Epoch(train) [6][ 400/1567] lr: 7.5666e-02 eta: 0:31:56 time: 0.1113 data_time: 0.0061 memory: 4206 loss: 0.1684 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1684 2023/03/08 20:33:44 - mmengine - INFO - Epoch(train) [6][ 500/1567] lr: 7.5126e-02 eta: 0:31:45 time: 0.1141 data_time: 0.0061 memory: 4206 loss: 0.1934 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1934 2023/03/08 20:33:55 - mmengine - INFO - Epoch(train) [6][ 600/1567] lr: 7.4583e-02 eta: 0:31:33 time: 0.1112 data_time: 0.0062 memory: 4206 loss: 0.1512 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1512 2023/03/08 20:34:06 - mmengine - INFO - Epoch(train) [6][ 700/1567] lr: 7.4035e-02 eta: 0:31:22 time: 0.1143 data_time: 0.0062 memory: 4206 loss: 0.1941 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1941 2023/03/08 20:34:18 - mmengine - INFO - Epoch(train) [6][ 800/1567] lr: 7.3484e-02 eta: 0:31:10 time: 0.1131 data_time: 0.0062 memory: 4206 loss: 0.1954 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1954 2023/03/08 20:34:29 - mmengine - INFO - Epoch(train) [6][ 900/1567] lr: 7.2929e-02 eta: 0:30:59 time: 0.1115 data_time: 0.0062 memory: 4206 loss: 0.1918 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1918 2023/03/08 20:34:40 - mmengine - INFO - Epoch(train) [6][1000/1567] lr: 7.2371e-02 eta: 0:30:47 time: 0.1118 data_time: 0.0061 memory: 4206 loss: 0.1590 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1590 2023/03/08 20:34:51 - mmengine - INFO - Epoch(train) [6][1100/1567] lr: 7.1809e-02 eta: 0:30:35 time: 0.1126 data_time: 0.0062 memory: 4206 loss: 0.2322 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.2322 2023/03/08 20:34:59 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:35:03 - mmengine - INFO - Epoch(train) [6][1200/1567] lr: 7.1243e-02 eta: 0:30:24 time: 0.1129 data_time: 0.0062 memory: 4206 loss: 0.1760 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1760 2023/03/08 20:35:14 - mmengine - INFO - Epoch(train) [6][1300/1567] lr: 7.0674e-02 eta: 0:30:12 time: 0.1140 data_time: 0.0061 memory: 4206 loss: 0.1638 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1638 2023/03/08 20:35:25 - mmengine - INFO - Epoch(train) [6][1400/1567] lr: 7.0102e-02 eta: 0:30:01 time: 0.1140 data_time: 0.0061 memory: 4206 loss: 0.1528 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1528 2023/03/08 20:35:37 - mmengine - INFO - Epoch(train) [6][1500/1567] lr: 6.9527e-02 eta: 0:29:50 time: 0.1138 data_time: 0.0062 memory: 4206 loss: 0.1915 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1915 2023/03/08 20:35:44 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:35:44 - mmengine - INFO - Epoch(train) [6][1567/1567] lr: 6.9140e-02 eta: 0:29:42 time: 0.1108 data_time: 0.0060 memory: 4206 loss: 
0.2714 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2714 2023/03/08 20:35:44 - mmengine - INFO - Saving checkpoint at 6 epochs 2023/03/08 20:35:48 - mmengine - INFO - Epoch(val) [6][100/129] eta: 0:00:00 time: 0.0323 data_time: 0.0057 memory: 856 2023/03/08 20:35:49 - mmengine - INFO - Epoch(val) [6][129/129] acc/top1: 0.8304 acc/top5: 0.9808 acc/mean1: 0.8303 2023/03/08 20:35:49 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_5.pth is removed 2023/03/08 20:35:50 - mmengine - INFO - The best checkpoint with 0.8304 acc/top1 at 6 epoch is saved to best_acc/top1_epoch_6.pth. 2023/03/08 20:36:01 - mmengine - INFO - Epoch(train) [7][ 100/1567] lr: 6.8560e-02 eta: 0:29:31 time: 0.1157 data_time: 0.0061 memory: 4206 loss: 0.1851 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1851 2023/03/08 20:36:13 - mmengine - INFO - Epoch(train) [7][ 200/1567] lr: 6.7976e-02 eta: 0:29:20 time: 0.1172 data_time: 0.0063 memory: 4206 loss: 0.1699 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1699 2023/03/08 20:36:24 - mmengine - INFO - Epoch(train) [7][ 300/1567] lr: 6.7390e-02 eta: 0:29:09 time: 0.1141 data_time: 0.0062 memory: 4206 loss: 0.1641 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1641 2023/03/08 20:36:36 - mmengine - INFO - Epoch(train) [7][ 400/1567] lr: 6.6802e-02 eta: 0:28:57 time: 0.1149 data_time: 0.0062 memory: 4206 loss: 0.1380 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1380 2023/03/08 20:36:47 - mmengine - INFO - Epoch(train) [7][ 500/1567] lr: 6.6210e-02 eta: 0:28:46 time: 0.1151 data_time: 0.0062 memory: 4206 loss: 0.1881 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1881 2023/03/08 20:36:59 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:36:59 - mmengine - INFO - Epoch(train) [7][ 600/1567] lr: 6.5616e-02 eta: 0:28:35 time: 0.1150 data_time: 0.0062 memory: 4206 loss: 0.1380 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1380 2023/03/08 20:37:10 - mmengine - INFO - Epoch(train) [7][ 700/1567] lr: 6.5020e-02 eta: 0:28:24 time: 0.1163 data_time: 0.0061 memory: 4206 loss: 0.1709 top1_acc: 0.7500 top5_acc: 1.0000 loss_cls: 0.1709 2023/03/08 20:37:22 - mmengine - INFO - Epoch(train) [7][ 800/1567] lr: 6.4421e-02 eta: 0:28:12 time: 0.1131 data_time: 0.0062 memory: 4206 loss: 0.1606 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1606 2023/03/08 20:37:33 - mmengine - INFO - Epoch(train) [7][ 900/1567] lr: 6.3820e-02 eta: 0:28:01 time: 0.1132 data_time: 0.0062 memory: 4206 loss: 0.1663 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1663 2023/03/08 20:37:45 - mmengine - INFO - Epoch(train) [7][1000/1567] lr: 6.3217e-02 eta: 0:27:50 time: 0.1156 data_time: 0.0066 memory: 4206 loss: 0.1498 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1498 2023/03/08 20:37:56 - mmengine - INFO - Epoch(train) [7][1100/1567] lr: 6.2612e-02 eta: 0:27:38 time: 0.1113 data_time: 0.0063 memory: 4206 loss: 0.1128 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1128 2023/03/08 20:38:07 - mmengine - INFO - Epoch(train) [7][1200/1567] lr: 6.2005e-02 eta: 0:27:27 time: 0.1113 data_time: 0.0062 memory: 4206 loss: 0.1387 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1387 2023/03/08 20:38:18 - mmengine - INFO - Epoch(train) [7][1300/1567] lr: 6.1396e-02 eta: 0:27:15 time: 0.1125 data_time: 0.0062 memory: 4206 loss: 0.1291 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1291 2023/03/08 20:38:30 - mmengine - INFO - Epoch(train) [7][1400/1567] 
lr: 6.0785e-02 eta: 0:27:03 time: 0.1120 data_time: 0.0062 memory: 4206 loss: 0.1411 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1411 2023/03/08 20:38:41 - mmengine - INFO - Epoch(train) [7][1500/1567] lr: 6.0172e-02 eta: 0:26:52 time: 0.1142 data_time: 0.0062 memory: 4206 loss: 0.1245 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.1245 2023/03/08 20:38:49 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:38:49 - mmengine - INFO - Epoch(train) [7][1567/1567] lr: 5.9761e-02 eta: 0:26:44 time: 0.1109 data_time: 0.0059 memory: 4206 loss: 0.3371 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.3371 2023/03/08 20:38:49 - mmengine - INFO - Saving checkpoint at 7 epochs 2023/03/08 20:38:52 - mmengine - INFO - Epoch(val) [7][100/129] eta: 0:00:00 time: 0.0323 data_time: 0.0057 memory: 856 2023/03/08 20:38:54 - mmengine - INFO - Epoch(val) [7][129/129] acc/top1: 0.8180 acc/top5: 0.9850 acc/mean1: 0.8180 2023/03/08 20:38:57 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:39:05 - mmengine - INFO - Epoch(train) [8][ 100/1567] lr: 5.9145e-02 eta: 0:26:33 time: 0.1143 data_time: 0.0062 memory: 4206 loss: 0.0902 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0902 2023/03/08 20:39:16 - mmengine - INFO - Epoch(train) [8][ 200/1567] lr: 5.8529e-02 eta: 0:26:22 time: 0.1143 data_time: 0.0067 memory: 4206 loss: 0.1535 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1535 2023/03/08 20:39:28 - mmengine - INFO - Epoch(train) [8][ 300/1567] lr: 5.7911e-02 eta: 0:26:10 time: 0.1135 data_time: 0.0062 memory: 4206 loss: 0.1437 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1437 2023/03/08 20:39:39 - mmengine - INFO - Epoch(train) [8][ 400/1567] lr: 5.7292e-02 eta: 0:25:59 time: 0.1132 data_time: 0.0061 memory: 4206 loss: 0.1075 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1075 2023/03/08 20:39:51 - mmengine - INFO - Epoch(train) [8][ 500/1567] lr: 5.6671e-02 eta: 0:25:48 time: 0.1154 data_time: 0.0062 memory: 4206 loss: 0.1216 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1216 2023/03/08 20:40:02 - mmengine - INFO - Epoch(train) [8][ 600/1567] lr: 5.6050e-02 eta: 0:25:37 time: 0.1155 data_time: 0.0062 memory: 4206 loss: 0.1239 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1239 2023/03/08 20:40:14 - mmengine - INFO - Epoch(train) [8][ 700/1567] lr: 5.5427e-02 eta: 0:25:25 time: 0.1150 data_time: 0.0063 memory: 4206 loss: 0.0967 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.0967 2023/03/08 20:40:25 - mmengine - INFO - Epoch(train) [8][ 800/1567] lr: 5.4804e-02 eta: 0:25:14 time: 0.1152 data_time: 0.0061 memory: 4206 loss: 0.1050 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1050 2023/03/08 20:40:37 - mmengine - INFO - Epoch(train) [8][ 900/1567] lr: 5.4180e-02 eta: 0:25:03 time: 0.1129 data_time: 0.0062 memory: 4206 loss: 0.0886 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0886 2023/03/08 20:40:48 - mmengine - INFO - Epoch(train) [8][1000/1567] lr: 5.3556e-02 eta: 0:24:51 time: 0.1131 data_time: 0.0061 memory: 4206 loss: 0.1177 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1177 2023/03/08 20:40:52 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:40:59 - mmengine - INFO - Epoch(train) [8][1100/1567] lr: 5.2930e-02 eta: 0:24:40 time: 0.1141 data_time: 0.0062 memory: 4206 loss: 0.1162 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1162 2023/03/08 20:41:11 - mmengine - INFO - Epoch(train) [8][1200/1567] lr: 5.2305e-02 eta: 0:24:28 time: 0.1117 
data_time: 0.0062 memory: 4206 loss: 0.1728 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1728 2023/03/08 20:41:22 - mmengine - INFO - Epoch(train) [8][1300/1567] lr: 5.1679e-02 eta: 0:24:17 time: 0.1145 data_time: 0.0061 memory: 4206 loss: 0.1596 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1596 2023/03/08 20:41:33 - mmengine - INFO - Epoch(train) [8][1400/1567] lr: 5.1052e-02 eta: 0:24:06 time: 0.1147 data_time: 0.0061 memory: 4206 loss: 0.1466 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1466 2023/03/08 20:41:45 - mmengine - INFO - Epoch(train) [8][1500/1567] lr: 5.0426e-02 eta: 0:23:55 time: 0.1168 data_time: 0.0062 memory: 4206 loss: 0.1474 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1474 2023/03/08 20:41:53 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:41:53 - mmengine - INFO - Epoch(train) [8][1567/1567] lr: 5.0006e-02 eta: 0:23:47 time: 0.1134 data_time: 0.0060 memory: 4206 loss: 0.2860 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.2860 2023/03/08 20:41:53 - mmengine - INFO - Saving checkpoint at 8 epochs 2023/03/08 20:41:57 - mmengine - INFO - Epoch(val) [8][100/129] eta: 0:00:00 time: 0.0324 data_time: 0.0057 memory: 856 2023/03/08 20:41:58 - mmengine - INFO - Epoch(val) [8][129/129] acc/top1: 0.8511 acc/top5: 0.9869 acc/mean1: 0.8511 2023/03/08 20:41:58 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_6.pth is removed 2023/03/08 20:41:58 - mmengine - INFO - The best checkpoint with 0.8511 acc/top1 at 8 epoch is saved to best_acc/top1_epoch_8.pth. 2023/03/08 20:42:10 - mmengine - INFO - Epoch(train) [9][ 100/1567] lr: 4.9380e-02 eta: 0:23:36 time: 0.1148 data_time: 0.0061 memory: 4206 loss: 0.1280 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1280 2023/03/08 20:42:21 - mmengine - INFO - Epoch(train) [9][ 200/1567] lr: 4.8753e-02 eta: 0:23:24 time: 0.1144 data_time: 0.0062 memory: 4206 loss: 0.0908 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.0908 2023/03/08 20:42:33 - mmengine - INFO - Epoch(train) [9][ 300/1567] lr: 4.8127e-02 eta: 0:23:13 time: 0.1118 data_time: 0.0061 memory: 4206 loss: 0.1297 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1297 2023/03/08 20:42:44 - mmengine - INFO - Epoch(train) [9][ 400/1567] lr: 4.7501e-02 eta: 0:23:01 time: 0.1119 data_time: 0.0061 memory: 4206 loss: 0.0808 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0808 2023/03/08 20:42:51 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:42:55 - mmengine - INFO - Epoch(train) [9][ 500/1567] lr: 4.6876e-02 eta: 0:22:50 time: 0.1138 data_time: 0.0062 memory: 4206 loss: 0.0824 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0824 2023/03/08 20:43:06 - mmengine - INFO - Epoch(train) [9][ 600/1567] lr: 4.6251e-02 eta: 0:22:39 time: 0.1158 data_time: 0.0063 memory: 4206 loss: 0.0955 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0955 2023/03/08 20:43:18 - mmengine - INFO - Epoch(train) [9][ 700/1567] lr: 4.5626e-02 eta: 0:22:27 time: 0.1117 data_time: 0.0061 memory: 4206 loss: 0.1154 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1154 2023/03/08 20:43:29 - mmengine - INFO - Epoch(train) [9][ 800/1567] lr: 4.5003e-02 eta: 0:22:16 time: 0.1117 data_time: 0.0062 memory: 4206 loss: 0.0828 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.0828 2023/03/08 20:43:40 - mmengine - INFO - Epoch(train) [9][ 900/1567] lr: 4.4380e-02 eta: 0:22:04 time: 0.1124 data_time: 
0.0062 memory: 4206 loss: 0.1138 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1138 2023/03/08 20:43:51 - mmengine - INFO - Epoch(train) [9][1000/1567] lr: 4.3757e-02 eta: 0:21:52 time: 0.1120 data_time: 0.0069 memory: 4206 loss: 0.1159 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1159 2023/03/08 20:44:03 - mmengine - INFO - Epoch(train) [9][1100/1567] lr: 4.3136e-02 eta: 0:21:41 time: 0.1161 data_time: 0.0063 memory: 4206 loss: 0.1213 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.1213 2023/03/08 20:44:14 - mmengine - INFO - Epoch(train) [9][1200/1567] lr: 4.2516e-02 eta: 0:21:30 time: 0.1129 data_time: 0.0063 memory: 4206 loss: 0.0737 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.0737 2023/03/08 20:44:25 - mmengine - INFO - Epoch(train) [9][1300/1567] lr: 4.1897e-02 eta: 0:21:18 time: 0.1144 data_time: 0.0061 memory: 4206 loss: 0.1132 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1132 2023/03/08 20:44:37 - mmengine - INFO - Epoch(train) [9][1400/1567] lr: 4.1280e-02 eta: 0:21:07 time: 0.1144 data_time: 0.0062 memory: 4206 loss: 0.0911 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.0911 2023/03/08 20:44:44 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:44:48 - mmengine - INFO - Epoch(train) [9][1500/1567] lr: 4.0664e-02 eta: 0:20:55 time: 0.1115 data_time: 0.0062 memory: 4206 loss: 0.1086 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.1086 2023/03/08 20:44:56 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:44:56 - mmengine - INFO - Epoch(train) [9][1567/1567] lr: 4.0252e-02 eta: 0:20:48 time: 0.1104 data_time: 0.0059 memory: 4206 loss: 0.2164 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.2164 2023/03/08 20:44:56 - mmengine - INFO - Saving checkpoint at 9 epochs 2023/03/08 20:44:59 - mmengine - INFO - Epoch(val) [9][100/129] eta: 0:00:00 time: 0.0323 data_time: 0.0057 memory: 856 2023/03/08 20:45:01 - mmengine - INFO - Epoch(val) [9][129/129] acc/top1: 0.8560 acc/top5: 0.9852 acc/mean1: 0.8559 2023/03/08 20:45:01 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_8.pth is removed 2023/03/08 20:45:01 - mmengine - INFO - The best checkpoint with 0.8560 acc/top1 at 9 epoch is saved to best_acc/top1_epoch_9.pth. 
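The per-epoch validation numbers (acc/top1, acc/top5, acc/mean1) correspond to standard top-k accuracy and mean per-class accuracy computed over the 129 validation batches. A minimal NumPy sketch of those definitions, for reference only (not the AccMetric implementation itself):

import numpy as np

def top_k_accuracy(scores, labels, k):
    # Fraction of samples whose ground-truth label is among the k highest scores.
    topk = np.argsort(scores, axis=1)[:, -k:]
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

def mean_class_accuracy(scores, labels):
    # Mean of per-class top-1 accuracy (the 'acc/mean1'-style number).
    preds = scores.argmax(axis=1)
    per_class = [np.mean(preds[labels == c] == c) for c in np.unique(labels)]
    return float(np.mean(per_class))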
2023/03/08 20:45:12 - mmengine - INFO - Epoch(train) [10][ 100/1567] lr: 3.9638e-02 eta: 0:20:36 time: 0.1127 data_time: 0.0061 memory: 4206 loss: 0.0493 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0493 2023/03/08 20:45:24 - mmengine - INFO - Epoch(train) [10][ 200/1567] lr: 3.9026e-02 eta: 0:20:25 time: 0.1127 data_time: 0.0062 memory: 4206 loss: 0.0691 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0691 2023/03/08 20:45:35 - mmengine - INFO - Epoch(train) [10][ 300/1567] lr: 3.8415e-02 eta: 0:20:13 time: 0.1123 data_time: 0.0071 memory: 4206 loss: 0.0548 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0548 2023/03/08 20:45:46 - mmengine - INFO - Epoch(train) [10][ 400/1567] lr: 3.7807e-02 eta: 0:20:02 time: 0.1120 data_time: 0.0062 memory: 4206 loss: 0.0834 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0834 2023/03/08 20:45:57 - mmengine - INFO - Epoch(train) [10][ 500/1567] lr: 3.7200e-02 eta: 0:19:50 time: 0.1133 data_time: 0.0062 memory: 4206 loss: 0.0799 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0799 2023/03/08 20:46:09 - mmengine - INFO - Epoch(train) [10][ 600/1567] lr: 3.6596e-02 eta: 0:19:39 time: 0.1126 data_time: 0.0061 memory: 4206 loss: 0.0885 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0885 2023/03/08 20:46:20 - mmengine - INFO - Epoch(train) [10][ 700/1567] lr: 3.5993e-02 eta: 0:19:28 time: 0.1121 data_time: 0.0061 memory: 4206 loss: 0.0793 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0793 2023/03/08 20:46:31 - mmengine - INFO - Epoch(train) [10][ 800/1567] lr: 3.5393e-02 eta: 0:19:16 time: 0.1124 data_time: 0.0061 memory: 4206 loss: 0.0802 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0802 2023/03/08 20:46:42 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:46:42 - mmengine - INFO - Epoch(train) [10][ 900/1567] lr: 3.4795e-02 eta: 0:19:05 time: 0.1116 data_time: 0.0063 memory: 4206 loss: 0.0695 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0695 2023/03/08 20:46:54 - mmengine - INFO - Epoch(train) [10][1000/1567] lr: 3.4199e-02 eta: 0:18:53 time: 0.1125 data_time: 0.0061 memory: 4206 loss: 0.0967 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0967 2023/03/08 20:47:05 - mmengine - INFO - Epoch(train) [10][1100/1567] lr: 3.3606e-02 eta: 0:18:42 time: 0.1134 data_time: 0.0061 memory: 4206 loss: 0.0597 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0597 2023/03/08 20:47:16 - mmengine - INFO - Epoch(train) [10][1200/1567] lr: 3.3015e-02 eta: 0:18:30 time: 0.1117 data_time: 0.0061 memory: 4206 loss: 0.0614 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0614 2023/03/08 20:47:28 - mmengine - INFO - Epoch(train) [10][1300/1567] lr: 3.2428e-02 eta: 0:18:19 time: 0.1142 data_time: 0.0061 memory: 4206 loss: 0.0585 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0585 2023/03/08 20:47:39 - mmengine - INFO - Epoch(train) [10][1400/1567] lr: 3.1842e-02 eta: 0:18:08 time: 0.1132 data_time: 0.0061 memory: 4206 loss: 0.0660 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0660 2023/03/08 20:47:50 - mmengine - INFO - Epoch(train) [10][1500/1567] lr: 3.1260e-02 eta: 0:17:56 time: 0.1126 data_time: 0.0061 memory: 4206 loss: 0.0786 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0786 2023/03/08 20:47:58 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710 2023/03/08 20:47:58 - mmengine - INFO - Epoch(train) [10][1567/1567] lr: 3.0872e-02 eta: 0:17:48 time: 0.1094 data_time: 0.0059 memory: 4206 loss: 0.2624 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2624 2023/03/08 20:47:58 - mmengine - INFO - 
Saving checkpoint at 10 epochs
2023/03/08 20:48:01 - mmengine - INFO - Epoch(val) [10][100/129] eta: 0:00:00 time: 0.0322 data_time: 0.0056 memory: 856
2023/03/08 20:48:03 - mmengine - INFO - Epoch(val) [10][129/129] acc/top1: 0.8223 acc/top5: 0.9674 acc/mean1: 0.8221
2023/03/08 20:48:14 - mmengine - INFO - Epoch(train) [11][ 100/1567] lr: 3.0294e-02 eta: 0:17:37 time: 0.1143 data_time: 0.0061 memory: 4206 loss: 0.0533 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0533
2023/03/08 20:48:26 - mmengine - INFO - Epoch(train) [11][ 200/1567] lr: 2.9720e-02 eta: 0:17:26 time: 0.1158 data_time: 0.0062 memory: 4206 loss: 0.0487 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0487
2023/03/08 20:48:37 - mmengine - INFO - Epoch(train) [11][ 300/1567] lr: 2.9149e-02 eta: 0:17:15 time: 0.1163 data_time: 0.0063 memory: 4206 loss: 0.0497 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0497
2023/03/08 20:48:41 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:48:49 - mmengine - INFO - Epoch(train) [11][ 400/1567] lr: 2.8581e-02 eta: 0:17:03 time: 0.1147 data_time: 0.0062 memory: 4206 loss: 0.0332 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0332
2023/03/08 20:49:00 - mmengine - INFO - Epoch(train) [11][ 500/1567] lr: 2.8017e-02 eta: 0:16:52 time: 0.1129 data_time: 0.0062 memory: 4206 loss: 0.0422 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0422
2023/03/08 20:49:11 - mmengine - INFO - Epoch(train) [11][ 600/1567] lr: 2.7456e-02 eta: 0:16:40 time: 0.1117 data_time: 0.0061 memory: 4206 loss: 0.0540 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0540
2023/03/08 20:49:23 - mmengine - INFO - Epoch(train) [11][ 700/1567] lr: 2.6898e-02 eta: 0:16:29 time: 0.1136 data_time: 0.0062 memory: 4206 loss: 0.0394 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0394
2023/03/08 20:49:34 - mmengine - INFO - Epoch(train) [11][ 800/1567] lr: 2.6345e-02 eta: 0:16:18 time: 0.1118 data_time: 0.0062 memory: 4206 loss: 0.0294 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0294
2023/03/08 20:49:45 - mmengine - INFO - Epoch(train) [11][ 900/1567] lr: 2.5794e-02 eta: 0:16:06 time: 0.1117 data_time: 0.0062 memory: 4206 loss: 0.0584 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0584
2023/03/08 20:49:56 - mmengine - INFO - Epoch(train) [11][1000/1567] lr: 2.5248e-02 eta: 0:15:55 time: 0.1116 data_time: 0.0062 memory: 4206 loss: 0.0281 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0281
2023/03/08 20:50:07 - mmengine - INFO - Epoch(train) [11][1100/1567] lr: 2.4706e-02 eta: 0:15:43 time: 0.1117 data_time: 0.0062 memory: 4206 loss: 0.0418 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0418
2023/03/08 20:50:19 - mmengine - INFO - Epoch(train) [11][1200/1567] lr: 2.4167e-02 eta: 0:15:32 time: 0.1121 data_time: 0.0062 memory: 4206 loss: 0.0337 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0337
2023/03/08 20:50:30 - mmengine - INFO - Epoch(train) [11][1300/1567] lr: 2.3633e-02 eta: 0:15:20 time: 0.1119 data_time: 0.0062 memory: 4206 loss: 0.0312 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0312
2023/03/08 20:50:33 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:50:41 - mmengine - INFO - Epoch(train) [11][1400/1567] lr: 2.3103e-02 eta: 0:15:09 time: 0.1119 data_time: 0.0061 memory: 4206 loss: 0.0366 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0366
2023/03/08 20:50:52 - mmengine - INFO - Epoch(train) [11][1500/1567] lr: 2.2577e-02 eta: 0:14:57 time: 0.1120 data_time: 0.0062 memory: 4206 loss: 0.0296 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0296
2023/03/08 20:51:00 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:51:00 - mmengine - INFO - Epoch(train) [11][1567/1567] lr: 2.2227e-02 eta: 0:14:50 time: 0.1094 data_time: 0.0059 memory: 4206 loss: 0.2232 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2232
2023/03/08 20:51:00 - mmengine - INFO - Saving checkpoint at 11 epochs
2023/03/08 20:51:04 - mmengine - INFO - Epoch(val) [11][100/129] eta: 0:00:00 time: 0.0321 data_time: 0.0056 memory: 856
2023/03/08 20:51:05 - mmengine - INFO - Epoch(val) [11][129/129] acc/top1: 0.8151 acc/top5: 0.9689 acc/mean1: 0.8151
2023/03/08 20:51:16 - mmengine - INFO - Epoch(train) [12][ 100/1567] lr: 2.1708e-02 eta: 0:14:38 time: 0.1144 data_time: 0.0062 memory: 4206 loss: 0.0417 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0417
2023/03/08 20:51:28 - mmengine - INFO - Epoch(train) [12][ 200/1567] lr: 2.1194e-02 eta: 0:14:27 time: 0.1138 data_time: 0.0062 memory: 4206 loss: 0.0321 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0321
2023/03/08 20:51:39 - mmengine - INFO - Epoch(train) [12][ 300/1567] lr: 2.0684e-02 eta: 0:14:16 time: 0.1138 data_time: 0.0062 memory: 4206 loss: 0.0346 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0346
2023/03/08 20:51:50 - mmengine - INFO - Epoch(train) [12][ 400/1567] lr: 2.0179e-02 eta: 0:14:04 time: 0.1120 data_time: 0.0061 memory: 4206 loss: 0.0304 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0304
2023/03/08 20:52:02 - mmengine - INFO - Epoch(train) [12][ 500/1567] lr: 1.9678e-02 eta: 0:13:53 time: 0.1152 data_time: 0.0062 memory: 4206 loss: 0.0442 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0442
2023/03/08 20:52:13 - mmengine - INFO - Epoch(train) [12][ 600/1567] lr: 1.9182e-02 eta: 0:13:42 time: 0.1170 data_time: 0.0063 memory: 4206 loss: 0.0304 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0304
2023/03/08 20:52:25 - mmengine - INFO - Epoch(train) [12][ 700/1567] lr: 1.8691e-02 eta: 0:13:30 time: 0.1154 data_time: 0.0061 memory: 4206 loss: 0.0454 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0454
2023/03/08 20:52:32 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:52:36 - mmengine - INFO - Epoch(train) [12][ 800/1567] lr: 1.8205e-02 eta: 0:13:19 time: 0.1168 data_time: 0.0063 memory: 4206 loss: 0.0252 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0252
2023/03/08 20:52:48 - mmengine - INFO - Epoch(train) [12][ 900/1567] lr: 1.7724e-02 eta: 0:13:08 time: 0.1158 data_time: 0.0062 memory: 4206 loss: 0.0179 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0179
2023/03/08 20:52:59 - mmengine - INFO - Epoch(train) [12][1000/1567] lr: 1.7248e-02 eta: 0:12:56 time: 0.1116 data_time: 0.0062 memory: 4206 loss: 0.0290 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0290
2023/03/08 20:53:11 - mmengine - INFO - Epoch(train) [12][1100/1567] lr: 1.6778e-02 eta: 0:12:45 time: 0.1150 data_time: 0.0062 memory: 4206 loss: 0.0389 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0389
2023/03/08 20:53:22 - mmengine - INFO - Epoch(train) [12][1200/1567] lr: 1.6312e-02 eta: 0:12:34 time: 0.1137 data_time: 0.0062 memory: 4206 loss: 0.0237 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0237
2023/03/08 20:53:34 - mmengine - INFO - Epoch(train) [12][1300/1567] lr: 1.5852e-02 eta: 0:12:22 time: 0.1136 data_time: 0.0061 memory: 4206 loss: 0.0193 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0193
2023/03/08 20:53:45 - mmengine - INFO - Epoch(train) [12][1400/1567] lr: 1.5397e-02 eta: 0:12:11 time: 0.1135 data_time: 0.0063 memory: 4206 loss: 0.0128 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0128
2023/03/08 20:53:56 - mmengine - INFO - Epoch(train) [12][1500/1567] lr: 1.4947e-02 eta: 0:12:00 time: 0.1146 data_time: 0.0063 memory: 4206 loss: 0.0134 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0134
2023/03/08 20:54:04 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:54:04 - mmengine - INFO - Epoch(train) [12][1567/1567] lr: 1.4649e-02 eta: 0:11:52 time: 0.1116 data_time: 0.0060 memory: 4206 loss: 0.2072 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2072
2023/03/08 20:54:04 - mmengine - INFO - Saving checkpoint at 12 epochs
2023/03/08 20:54:08 - mmengine - INFO - Epoch(val) [12][100/129] eta: 0:00:00 time: 0.0320 data_time: 0.0056 memory: 856
2023/03/08 20:54:09 - mmengine - INFO - Epoch(val) [12][129/129] acc/top1: 0.8908 acc/top5: 0.9910 acc/mean1: 0.8906
2023/03/08 20:54:09 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_9.pth is removed
2023/03/08 20:54:09 - mmengine - INFO - The best checkpoint with 0.8908 acc/top1 at 12 epoch is saved to best_acc/top1_epoch_12.pth.
2023/03/08 20:54:21 - mmengine - INFO - Epoch(train) [13][ 100/1567] lr: 1.4209e-02 eta: 0:11:41 time: 0.1151 data_time: 0.0061 memory: 4206 loss: 0.0359 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0359
2023/03/08 20:54:32 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:54:32 - mmengine - INFO - Epoch(train) [13][ 200/1567] lr: 1.3774e-02 eta: 0:11:29 time: 0.1175 data_time: 0.0062 memory: 4206 loss: 0.0165 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0165
2023/03/08 20:54:44 - mmengine - INFO - Epoch(train) [13][ 300/1567] lr: 1.3345e-02 eta: 0:11:18 time: 0.1121 data_time: 0.0063 memory: 4206 loss: 0.0242 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0242
2023/03/08 20:54:55 - mmengine - INFO - Epoch(train) [13][ 400/1567] lr: 1.2922e-02 eta: 0:11:07 time: 0.1118 data_time: 0.0062 memory: 4206 loss: 0.0194 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0194
2023/03/08 20:55:06 - mmengine - INFO - Epoch(train) [13][ 500/1567] lr: 1.2505e-02 eta: 0:10:55 time: 0.1123 data_time: 0.0063 memory: 4206 loss: 0.0179 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0179
2023/03/08 20:55:17 - mmengine - INFO - Epoch(train) [13][ 600/1567] lr: 1.2093e-02 eta: 0:10:44 time: 0.1120 data_time: 0.0062 memory: 4206 loss: 0.0180 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0180
2023/03/08 20:55:29 - mmengine - INFO - Epoch(train) [13][ 700/1567] lr: 1.1687e-02 eta: 0:10:32 time: 0.1121 data_time: 0.0062 memory: 4206 loss: 0.0153 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0153
2023/03/08 20:55:40 - mmengine - INFO - Epoch(train) [13][ 800/1567] lr: 1.1288e-02 eta: 0:10:21 time: 0.1119 data_time: 0.0062 memory: 4206 loss: 0.0125 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0125
2023/03/08 20:55:51 - mmengine - INFO - Epoch(train) [13][ 900/1567] lr: 1.0894e-02 eta: 0:10:10 time: 0.1121 data_time: 0.0062 memory: 4206 loss: 0.0144 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0144
2023/03/08 20:56:02 - mmengine - INFO - Epoch(train) [13][1000/1567] lr: 1.0507e-02 eta: 0:09:58 time: 0.1127 data_time: 0.0063 memory: 4206 loss: 0.0197 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0197
2023/03/08 20:56:14 - mmengine - INFO - Epoch(train) [13][1100/1567] lr: 1.0126e-02 eta: 0:09:47 time: 0.1163 data_time: 0.0063 memory: 4206 loss: 0.0135 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0135
2023/03/08 20:56:25 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:56:25 - mmengine - INFO - Epoch(train) [13][1200/1567] lr: 9.7512e-03 eta: 0:09:36 time: 0.1152 data_time: 0.0062 memory: 4206 loss: 0.0178 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0178
2023/03/08 20:56:37 - mmengine - INFO - Epoch(train) [13][1300/1567] lr: 9.3826e-03 eta: 0:09:24 time: 0.1157 data_time: 0.0062 memory: 4206 loss: 0.0246 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0246
2023/03/08 20:56:48 - mmengine - INFO - Epoch(train) [13][1400/1567] lr: 9.0204e-03 eta: 0:09:13 time: 0.1157 data_time: 0.0063 memory: 4206 loss: 0.0103 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0103
2023/03/08 20:57:00 - mmengine - INFO - Epoch(train) [13][1500/1567] lr: 8.6647e-03 eta: 0:09:02 time: 0.1157 data_time: 0.0064 memory: 4206 loss: 0.0096 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0096
2023/03/08 20:57:08 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:57:08 - mmengine - INFO - Epoch(train) [13][1567/1567] lr: 8.4300e-03 eta: 0:08:54 time: 0.1123 data_time: 0.0059 memory: 4206 loss: 0.2135 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2135
2023/03/08 20:57:08 - mmengine - INFO - Saving checkpoint at 13 epochs
2023/03/08 20:57:11 - mmengine - INFO - Epoch(val) [13][100/129] eta: 0:00:00 time: 0.0323 data_time: 0.0056 memory: 856
2023/03/08 20:57:13 - mmengine - INFO - Epoch(val) [13][129/129] acc/top1: 0.9124 acc/top5: 0.9914 acc/mean1: 0.9123
2023/03/08 20:57:13 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_12.pth is removed
2023/03/08 20:57:13 - mmengine - INFO - The best checkpoint with 0.9124 acc/top1 at 13 epoch is saved to best_acc/top1_epoch_13.pth.
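The learning-rate column in the entries above decays smoothly from 3.0294e-02 at the start of epoch 11 to 8.4300e-03 at the end of epoch 13, consistent with a per-iteration cosine-annealing schedule. The following is a minimal sketch that reproduces the logged values, assuming a base learning rate of 0.1, a minimum of 0, and 16 epochs of 1567 iterations; the base rate, the step indexing, and the helper name are assumptions for illustration, not part of the training code.

import math

# Assumed schedule parameters (inferred from the log entries; treat as assumptions).
BASE_LR = 0.1            # assumed starting learning rate
ETA_MIN = 0.0            # assumed minimum learning rate
ITERS_PER_EPOCH = 1567   # from the "[epoch][iter/1567]" counters above
MAX_EPOCHS = 16          # last epoch that appears in this log

def cosine_lr(epoch: int, it: int) -> float:
    """Cosine-annealed learning rate at 1-based (epoch, iteration)."""
    step = (epoch - 1) * ITERS_PER_EPOCH + it - 1   # 0-based global step of the logged iteration
    total = MAX_EPOCHS * ITERS_PER_EPOCH
    return ETA_MIN + 0.5 * (BASE_LR - ETA_MIN) * (1 + math.cos(math.pi * step / total))

print(f"{cosine_lr(11, 100):.4e}")   # compare with 3.0294e-02 logged at Epoch(train) [11][ 100/1567]
print(f"{cosine_lr(13, 1567):.4e}")  # compare with 8.4300e-03 logged at Epoch(train) [13][1567/1567]
print(f"{cosine_lr(16, 1567):.4e}")  # compare with 3.9252e-10 logged at Epoch(train) [16][1567/1567]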
2023/03/08 20:57:25 - mmengine - INFO - Epoch(train) [14][ 100/1567] lr: 8.0851e-03 eta: 0:08:43 time: 0.1121 data_time: 0.0062 memory: 4206 loss: 0.0109 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0109
2023/03/08 20:57:36 - mmengine - INFO - Epoch(train) [14][ 200/1567] lr: 7.7469e-03 eta: 0:08:31 time: 0.1123 data_time: 0.0063 memory: 4206 loss: 0.0111 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0111
2023/03/08 20:57:47 - mmengine - INFO - Epoch(train) [14][ 300/1567] lr: 7.4152e-03 eta: 0:08:20 time: 0.1122 data_time: 0.0063 memory: 4206 loss: 0.0125 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0125
2023/03/08 20:57:58 - mmengine - INFO - Epoch(train) [14][ 400/1567] lr: 7.0902e-03 eta: 0:08:08 time: 0.1138 data_time: 0.0062 memory: 4206 loss: 0.0084 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0084
2023/03/08 20:58:10 - mmengine - INFO - Epoch(train) [14][ 500/1567] lr: 6.7720e-03 eta: 0:07:57 time: 0.1156 data_time: 0.0063 memory: 4206 loss: 0.0074 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0074
2023/03/08 20:58:21 - mmengine - INFO - Epoch(train) [14][ 600/1567] lr: 6.4606e-03 eta: 0:07:46 time: 0.1124 data_time: 0.0065 memory: 4206 loss: 0.0088 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0088
2023/03/08 20:58:25 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 20:58:33 - mmengine - INFO - Epoch(train) [14][ 700/1567] lr: 6.1560e-03 eta: 0:07:34 time: 0.1137 data_time: 0.0062 memory: 4206 loss: 0.0096 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0096
2023/03/08 20:58:44 - mmengine - INFO - Epoch(train) [14][ 800/1567] lr: 5.8582e-03 eta: 0:07:23 time: 0.1138 data_time: 0.0064 memory: 4206 loss: 0.0110 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0110
2023/03/08 20:58:55 - mmengine - INFO - Epoch(train) [14][ 900/1567] lr: 5.5675e-03 eta: 0:07:12 time: 0.1136 data_time: 0.0062 memory: 4206 loss: 0.0062 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0062
2023/03/08 20:59:07 - mmengine - INFO - Epoch(train) [14][1000/1567] lr: 5.2836e-03 eta: 0:07:00 time: 0.1140 data_time: 0.0062 memory: 4206 loss: 0.0089 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0089
2023/03/08 20:59:18 - mmengine - INFO - Epoch(train) [14][1100/1567] lr: 5.0068e-03 eta: 0:06:49 time: 0.1141 data_time: 0.0063 memory: 4206 loss: 0.0175 top1_acc: 0.9375 top5_acc: 1.0000 loss_cls: 0.0175
2023/03/08 20:59:29 - mmengine - INFO - Epoch(train) [14][1200/1567] lr: 4.7371e-03 eta: 0:06:37 time: 0.1123 data_time: 0.0062 memory: 4206 loss: 0.0089 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0089
2023/03/08 20:59:41 - mmengine - INFO - Epoch(train) [14][1300/1567] lr: 4.4745e-03 eta: 0:06:26 time: 0.1123 data_time: 0.0062 memory: 4206 loss: 0.0080 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0080
2023/03/08 20:59:52 - mmengine - INFO - Epoch(train) [14][1400/1567] lr: 4.2190e-03 eta: 0:06:15 time: 0.1134 data_time: 0.0063 memory: 4206 loss: 0.0110 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0110
2023/03/08 21:00:04 - mmengine - INFO - Epoch(train) [14][1500/1567] lr: 3.9707e-03 eta: 0:06:03 time: 0.1159 data_time: 0.0065 memory: 4206 loss: 0.0109 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0109
2023/03/08 21:00:11 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 21:00:11 - mmengine - INFO - Epoch(train) [14][1567/1567] lr: 3.8084e-03 eta: 0:05:56 time: 0.1095 data_time: 0.0060 memory: 4206 loss: 0.1802 top1_acc: 0.0000 top5_acc: 1.0000 loss_cls: 0.1802
2023/03/08 21:00:11 - mmengine - INFO - Saving checkpoint at 14 epochs
2023/03/08 21:00:15 - mmengine - INFO - Epoch(val) [14][100/129] eta: 0:00:00 time: 0.0320 data_time: 0.0056 memory: 856
2023/03/08 21:00:16 - mmengine - INFO - Epoch(val) [14][129/129] acc/top1: 0.9165 acc/top5: 0.9941 acc/mean1: 0.9164
2023/03/08 21:00:16 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_13.pth is removed
2023/03/08 21:00:16 - mmengine - INFO - The best checkpoint with 0.9165 acc/top1 at 14 epoch is saved to best_acc/top1_epoch_14.pth.
2023/03/08 21:00:24 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 21:00:28 - mmengine - INFO - Epoch(train) [15][ 100/1567] lr: 3.5722e-03 eta: 0:05:44 time: 0.1158 data_time: 0.0062 memory: 4206 loss: 0.0077 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0077
2023/03/08 21:00:40 - mmengine - INFO - Epoch(train) [15][ 200/1567] lr: 3.3433e-03 eta: 0:05:33 time: 0.1164 data_time: 0.0062 memory: 4206 loss: 0.0129 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0129
2023/03/08 21:00:51 - mmengine - INFO - Epoch(train) [15][ 300/1567] lr: 3.1217e-03 eta: 0:05:22 time: 0.1152 data_time: 0.0062 memory: 4206 loss: 0.0087 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0087
2023/03/08 21:01:03 - mmengine - INFO - Epoch(train) [15][ 400/1567] lr: 2.9075e-03 eta: 0:05:10 time: 0.1123 data_time: 0.0062 memory: 4206 loss: 0.0075 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0075
2023/03/08 21:01:14 - mmengine - INFO - Epoch(train) [15][ 500/1567] lr: 2.7007e-03 eta: 0:04:59 time: 0.1152 data_time: 0.0063 memory: 4206 loss: 0.0095 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0095
2023/03/08 21:01:26 - mmengine - INFO - Epoch(train) [15][ 600/1567] lr: 2.5013e-03 eta: 0:04:48 time: 0.1144 data_time: 0.0062 memory: 4206 loss: 0.0073 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0073
2023/03/08 21:01:37 - mmengine - INFO - Epoch(train) [15][ 700/1567] lr: 2.3093e-03 eta: 0:04:36 time: 0.1143 data_time: 0.0062 memory: 4206 loss: 0.0088 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0088
2023/03/08 21:01:49 - mmengine - INFO - Epoch(train) [15][ 800/1567] lr: 2.1249e-03 eta: 0:04:25 time: 0.1142 data_time: 0.0063 memory: 4206 loss: 0.0068 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0068
2023/03/08 21:02:00 - mmengine - INFO - Epoch(train) [15][ 900/1567] lr: 1.9479e-03 eta: 0:04:14 time: 0.1143 data_time: 0.0063 memory: 4206 loss: 0.0107 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0107
2023/03/08 21:02:11 - mmengine - INFO - Epoch(train) [15][1000/1567] lr: 1.7785e-03 eta: 0:04:02 time: 0.1138 data_time: 0.0063 memory: 4206 loss: 0.0109 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0109
2023/03/08 21:02:18 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 21:02:23 - mmengine - INFO - Epoch(train) [15][1100/1567] lr: 1.6167e-03 eta: 0:03:51 time: 0.1137 data_time: 0.0063 memory: 4206 loss: 0.0107 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0107
2023/03/08 21:02:34 - mmengine - INFO - Epoch(train) [15][1200/1567] lr: 1.4625e-03 eta: 0:03:39 time: 0.1140 data_time: 0.0063 memory: 4206 loss: 0.0073 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0073
2023/03/08 21:02:45 - mmengine - INFO - Epoch(train) [15][1300/1567] lr: 1.3159e-03 eta: 0:03:28 time: 0.1144 data_time: 0.0062 memory: 4206 loss: 0.0090 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0090
2023/03/08 21:02:57 - mmengine - INFO - Epoch(train) [15][1400/1567] lr: 1.1769e-03 eta: 0:03:17 time: 0.1145 data_time: 0.0063 memory: 4206 loss: 0.0068 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0068
2023/03/08 21:03:08 - mmengine - INFO - Epoch(train) [15][1500/1567] lr: 1.0456e-03 eta: 0:03:05 time: 0.1138 data_time: 0.0062 memory: 4206 loss: 0.0057 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0057
2023/03/08 21:03:16 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 21:03:16 - mmengine - INFO - Epoch(train) [15][1567/1567] lr: 9.6196e-04 eta: 0:02:58 time: 0.1111 data_time: 0.0059 memory: 4206 loss: 0.1798 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.1798
2023/03/08 21:03:16 - mmengine - INFO - Saving checkpoint at 15 epochs
2023/03/08 21:03:20 - mmengine - INFO - Epoch(val) [15][100/129] eta: 0:00:00 time: 0.0326 data_time: 0.0058 memory: 856
2023/03/08 21:03:21 - mmengine - INFO - Epoch(val) [15][129/129] acc/top1: 0.9199 acc/top5: 0.9938 acc/mean1: 0.9199
2023/03/08 21:03:21 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_14.pth is removed
2023/03/08 21:03:21 - mmengine - INFO - The best checkpoint with 0.9199 acc/top1 at 15 epoch is saved to best_acc/top1_epoch_15.pth.
2023/03/08 21:03:33 - mmengine - INFO - Epoch(train) [16][ 100/1567] lr: 8.4351e-04 eta: 0:02:46 time: 0.1156 data_time: 0.0068 memory: 4206 loss: 0.0064 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0064
2023/03/08 21:03:44 - mmengine - INFO - Epoch(train) [16][ 200/1567] lr: 7.3277e-04 eta: 0:02:35 time: 0.1166 data_time: 0.0063 memory: 4206 loss: 0.0089 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0089
2023/03/08 21:03:56 - mmengine - INFO - Epoch(train) [16][ 300/1567] lr: 6.2978e-04 eta: 0:02:24 time: 0.1132 data_time: 0.0062 memory: 4206 loss: 0.0096 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0096
2023/03/08 21:04:08 - mmengine - INFO - Epoch(train) [16][ 400/1567] lr: 5.3453e-04 eta: 0:02:12 time: 0.1153 data_time: 0.0062 memory: 4206 loss: 0.0072 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0072
2023/03/08 21:04:19 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 21:04:19 - mmengine - INFO - Epoch(train) [16][ 500/1567] lr: 4.4705e-04 eta: 0:02:01 time: 0.1154 data_time: 0.0063 memory: 4206 loss: 0.0089 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0089
2023/03/08 21:04:31 - mmengine - INFO - Epoch(train) [16][ 600/1567] lr: 3.6735e-04 eta: 0:01:50 time: 0.1153 data_time: 0.0063 memory: 4206 loss: 0.0092 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0092
2023/03/08 21:04:42 - mmengine - INFO - Epoch(train) [16][ 700/1567] lr: 2.9544e-04 eta: 0:01:38 time: 0.1142 data_time: 0.0063 memory: 4206 loss: 0.0077 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0077
2023/03/08 21:04:54 - mmengine - INFO - Epoch(train) [16][ 800/1567] lr: 2.3134e-04 eta: 0:01:27 time: 0.1150 data_time: 0.0063 memory: 4206 loss: 0.0070 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0070
2023/03/08 21:05:05 - mmengine - INFO - Epoch(train) [16][ 900/1567] lr: 1.7505e-04 eta: 0:01:15 time: 0.1172 data_time: 0.0062 memory: 4206 loss: 0.0090 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0090
2023/03/08 21:05:17 - mmengine - INFO - Epoch(train) [16][1000/1567] lr: 1.2658e-04 eta: 0:01:04 time: 0.1148 data_time: 0.0062 memory: 4206 loss: 0.0071 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0071
2023/03/08 21:05:28 - mmengine - INFO - Epoch(train) [16][1100/1567] lr: 8.5947e-05 eta: 0:00:53 time: 0.1136 data_time: 0.0063 memory: 4206 loss: 0.0066 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0066
2023/03/08 21:05:40 - mmengine - INFO - Epoch(train) [16][1200/1567] lr: 5.3147e-05 eta: 0:00:41 time: 0.1139 data_time: 0.0063 memory: 4206 loss: 0.0054 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0054
2023/03/08 21:05:51 - mmengine - INFO - Epoch(train) [16][1300/1567] lr: 2.8190e-05 eta: 0:00:30 time: 0.1163 data_time: 0.0062 memory: 4206 loss: 0.0083 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0083
2023/03/08 21:06:03 - mmengine - INFO - Epoch(train) [16][1400/1567] lr: 1.1078e-05 eta: 0:00:19 time: 0.1156 data_time: 0.0063 memory: 4206 loss: 0.0078 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0078
2023/03/08 21:06:14 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 21:06:14 - mmengine - INFO - Epoch(train) [16][1500/1567] lr: 1.8150e-06 eta: 0:00:07 time: 0.1177 data_time: 0.0064 memory: 4206 loss: 0.0061 top1_acc: 1.0000 top5_acc: 1.0000 loss_cls: 0.0061
2023/03/08 21:06:22 - mmengine - INFO - Exp name: msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710
2023/03/08 21:06:22 - mmengine - INFO - Epoch(train) [16][1567/1567] lr: 3.9252e-10 eta: 0:00:00 time: 0.1136 data_time: 0.0061 memory: 4206 loss: 0.2155 top1_acc: 0.0000 top5_acc: 0.0000 loss_cls: 0.2155
2023/03/08 21:06:22 - mmengine - INFO - Saving checkpoint at 16 epochs
2023/03/08 21:06:26 - mmengine - INFO - Epoch(val) [16][100/129] eta: 0:00:00 time: 0.0321 data_time: 0.0056 memory: 856
2023/03/08 21:06:27 - mmengine - INFO - Epoch(val) [16][129/129] acc/top1: 0.9204 acc/top5: 0.9938 acc/mean1: 0.9204
2023/03/08 21:06:27 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_15.pth is removed
2023/03/08 21:06:28 - mmengine - INFO - The best checkpoint with 0.9204 acc/top1 at 16 epoch is saved to best_acc/top1_epoch_16.pth.
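Validation accuracy in the entries above climbs from 0.8223 top-1 after epoch 10 to 0.9204 after epoch 16. To tabulate the per-epoch results from a log like this one, a rough parsing sketch; the file name is a placeholder, and the regex relies only on the Epoch(val) summary format visible above.

import re

# Placeholder path; point this at the actual log file on disk.
LOG_PATH = "msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d_20230308_201710.log"

# Matches the per-epoch summary lines, e.g.
#   "... Epoch(val) [14][129/129] acc/top1: 0.9165 acc/top5: 0.9941 acc/mean1: 0.9164"
VAL_RE = re.compile(
    r"Epoch\(val\) \[(\d+)\]\[\d+/\d+\]\s+acc/top1: ([\d.]+)\s+acc/top5: ([\d.]+)")

results = {}
with open(LOG_PATH) as f:
    for line in f:
        m = VAL_RE.search(line)
        if m:
            results[int(m.group(1))] = (float(m.group(2)), float(m.group(3)))

for epoch in sorted(results):
    top1, top5 = results[epoch]
    print(f"epoch {epoch:2d}  top1={top1:.4f}  top5={top5:.4f}")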
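The run ends with best_acc/top1_epoch_16.pth kept as the best checkpoint (0.9204 top-1). A small sketch for inspecting that file with plain PyTorch; the absolute directory is pieced together from the "previous best checkpoint ... is removed" lines above, so treat the full path as an assumption.

import torch

# Directory inferred from the removal messages above; file name from the final log line.
CKPT = ("/mnt/petrelfs/daiwenxun/mmlab/mmaction2/projects/msg3d/work_dirs/"
        "msg3d_8xb16-joint-u100-80e_ntu60-xsub-keypoint-2d/best_acc/top1_epoch_16.pth")

ckpt = torch.load(CKPT, map_location="cpu")
# Checkpoints saved during training are typically dicts; list whatever top-level keys exist.
print(type(ckpt).__name__, list(ckpt) if isinstance(ckpt, dict) else "")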