2023/02/14 12:33:31 - mmengine - INFO - ------------------------------------------------------------ System environment: sys.platform: linux Python: 3.7.0 (default, Oct 9 2018, 10:31:47) [GCC 7.3.0] CUDA available: True numpy_random_seed: 908254618 GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB CUDA_HOME: /mnt/cache/share/cuda-11.1 NVCC: Cuda compilation tools, release 11.1, V11.1.74 GCC: gcc (GCC) 5.4.0 PyTorch: 1.9.0+cu111 PyTorch compiling details: PyTorch built with: - GCC 7.3 - C++ Version: 201402 - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications - Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb) - OpenMP 201511 (a.k.a. OpenMP 4.5) - NNPACK is enabled - CPU capability usage: AVX2 - CUDA Runtime 11.1 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86 - CuDNN 8.0.5 - Magma 2.5.2 - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, TorchVision: 0.10.0+cu111 OpenCV: 4.6.0 MMEngine: 0.5.0 Runtime environment: cudnn_benchmark: False mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0} dist_cfg: {'backend': 'nccl'} seed: None diff_rank_seed: False deterministic: False Distributed launcher: pytorch Distributed training: True GPU number: 8 ------------------------------------------------------------ 2023/02/14 12:33:32 - mmengine - INFO - Config: default_scope = 'mmaction' default_hooks = dict( runtime_info=dict(type='RuntimeInfoHook'), timer=dict(type='IterTimerHook'), logger=dict(type='LoggerHook', interval=100, ignore_last=False), param_scheduler=dict(type='ParamSchedulerHook'), checkpoint=dict( type='CheckpointHook', interval=3, save_best='auto', max_keep_ckpts=5), sampler_seed=dict(type='DistSamplerSeedHook'), sync_buffers=dict(type='SyncBuffersHook')) env_cfg = dict( cudnn_benchmark=False, mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), dist_cfg=dict(backend='nccl')) log_processor = dict(type='LogProcessor', window_size=20, by_epoch=True) vis_backends = [ dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend') ] visualizer = dict( type='ActionVisualizer', vis_backends=[ dict(type='LocalVisBackend'), 
dict(type='TensorboardVisBackend') ]) log_level = 'INFO' load_from = None resume = False num_frames = 8 model = dict( type='Recognizer3D', backbone=dict( type='UniFormerV2', input_resolution=224, patch_size=16, width=768, layers=12, heads=12, t_size=8, dw_reduction=1.5, backbone_drop_path_rate=0.0, temporal_downsample=False, no_lmhra=True, double_lmhra=True, return_list=[8, 9, 10, 11], n_layers=4, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0.0, mlp_dropout=[0.5, 0.5, 0.5, 0.5], clip_pretrained=False, init_cfg=dict( type='Pretrained', checkpoint= 'https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth', prefix='backbone.')), cls_head=dict( type='UniFormerHead', dropout_ratio=0.5, num_classes=400, in_channels=768, average_clips='prob', channel_map= 'configs/recognition/uniformerv2/k710_channel_map/map_k400.json', init_cfg=dict( type='Pretrained', checkpoint= 'https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth', prefix='cls_head.')), data_preprocessor=dict( type='ActionDataPreprocessor', mean=[114.75, 114.75, 114.75], std=[57.375, 57.375, 57.375], format_shape='NCTHW')) dataset_type = 'VideoDataset' data_root = 'data/kinetics400/videos_train' data_root_val = 'data/kinetics400/videos_val' ann_file_train = 'data/kinetics400/kinetics400_train_list_videos.txt' ann_file_val = 'data/kinetics400/kinetics400_val_list_videos.txt' ann_file_test = 'data/kinetics400/kinetics400_val_list_videos.txt' file_client_args = dict( io_backend='petrel', path_mapping=dict({ 'data/kinetics400': 's254:s3://openmmlab/datasets/action/Kinetics400' })) train_pipeline = [ dict( type='DecordInit', io_backend='petrel', path_mapping=dict({ 'data/kinetics400': 's254:s3://openmmlab/datasets/action/Kinetics400' })), dict(type='UniformSample', clip_len=8, num_clips=1), dict(type='DecordDecode'), dict(type='Resize', scale=(-1, 256)), dict( type='PytorchVideoWrapper', op='RandAugment', magnitude=7, num_layers=4), dict(type='RandomResizedCrop'), dict(type='Resize', scale=(224, 224), keep_ratio=False), dict(type='Flip', flip_ratio=0.5), dict(type='FormatShape', input_format='NCTHW'), dict(type='PackActionInputs') ] val_pipeline = [ dict( type='DecordInit', io_backend='petrel', path_mapping=dict({ 'data/kinetics400': 's254:s3://openmmlab/datasets/action/Kinetics400' })), dict(type='UniformSample', clip_len=8, num_clips=1, test_mode=True), dict(type='DecordDecode'), dict(type='Resize', scale=(-1, 224)), dict(type='CenterCrop', crop_size=224), dict(type='FormatShape', input_format='NCTHW'), dict(type='PackActionInputs') ] test_pipeline = [ dict( type='DecordInit', io_backend='petrel', path_mapping=dict({ 'data/kinetics400': 's254:s3://openmmlab/datasets/action/Kinetics400' })), dict(type='UniformSample', clip_len=8, num_clips=4, test_mode=True), dict(type='DecordDecode'), dict(type='Resize', scale=(-1, 224)), dict(type='ThreeCrop', crop_size=224), dict(type='FormatShape', input_format='NCTHW'), dict(type='PackActionInputs') ] train_dataloader = dict( batch_size=32, num_workers=8, persistent_workers=True, sampler=dict(type='DefaultSampler', shuffle=True), dataset=dict( type='VideoDataset', ann_file='data/kinetics400/kinetics400_train_list_videos.txt', data_prefix=dict(video='data/kinetics400/videos_train'), pipeline=[ dict( type='DecordInit', io_backend='petrel', path_mapping=dict({ 'data/kinetics400': 
's254:s3://openmmlab/datasets/action/Kinetics400' })), dict(type='UniformSample', clip_len=8, num_clips=1), dict(type='DecordDecode'), dict(type='Resize', scale=(-1, 256)), dict( type='PytorchVideoWrapper', op='RandAugment', magnitude=7, num_layers=4), dict(type='RandomResizedCrop'), dict(type='Resize', scale=(224, 224), keep_ratio=False), dict(type='Flip', flip_ratio=0.5), dict(type='FormatShape', input_format='NCTHW'), dict(type='PackActionInputs') ])) val_dataloader = dict( batch_size=8, num_workers=8, persistent_workers=True, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type='VideoDataset', ann_file='data/kinetics400/kinetics400_val_list_videos.txt', data_prefix=dict(video='data/kinetics400/videos_val'), pipeline=[ dict( type='DecordInit', io_backend='petrel', path_mapping=dict({ 'data/kinetics400': 's254:s3://openmmlab/datasets/action/Kinetics400' })), dict( type='UniformSample', clip_len=8, num_clips=1, test_mode=True), dict(type='DecordDecode'), dict(type='Resize', scale=(-1, 224)), dict(type='CenterCrop', crop_size=224), dict(type='FormatShape', input_format='NCTHW'), dict(type='PackActionInputs') ], test_mode=True)) test_dataloader = dict( batch_size=8, num_workers=8, persistent_workers=True, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type='VideoDataset', ann_file='data/kinetics400/kinetics400_val_list_videos.txt', data_prefix=dict(video='data/kinetics400/videos_val'), pipeline=[ dict( type='DecordInit', io_backend='petrel', path_mapping=dict({ 'data/kinetics400': 's254:s3://openmmlab/datasets/action/Kinetics400' })), dict( type='UniformSample', clip_len=8, num_clips=4, test_mode=True), dict(type='DecordDecode'), dict(type='Resize', scale=(-1, 224)), dict(type='ThreeCrop', crop_size=224), dict(type='FormatShape', input_format='NCTHW'), dict(type='PackActionInputs') ], test_mode=True)) val_evaluator = dict(type='AccMetric') test_evaluator = dict(type='AccMetric') train_cfg = dict( type='EpochBasedTrainLoop', max_epochs=5, val_begin=1, val_interval=1) val_cfg = dict(type='ValLoop') test_cfg = dict(type='TestLoop') base_lr = 2e-06 optim_wrapper = dict( optimizer=dict( type='AdamW', lr=2e-06, betas=(0.9, 0.999), weight_decay=0.05), paramwise_cfg=dict(norm_decay_mult=0.0, bias_decay_mult=0.0), clip_grad=dict(max_norm=20, norm_type=2)) param_scheduler = [ dict( type='LinearLR', start_factor=0.5, by_epoch=True, begin=0, end=1, convert_to_iter_based=True), dict( type='CosineAnnealingLR', T_max=4, eta_min_ratio=0.5, by_epoch=True, begin=1, end=5, convert_to_iter_based=True) ] auto_scale_lr = dict(enable=True, base_batch_size=256) launcher = 'pytorch' work_dir = 'work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_fix-load-cls-head' randomness = dict(seed=None, diff_rank_seed=False, deterministic=False) 2023/02/14 12:33:35 - mmengine - INFO - Drop path rate: 0.0 2023/02/14 12:33:35 - mmengine - INFO - No L_MHRA: True 2023/02/14 12:33:35 - mmengine - INFO - Double L_MHRA: True 2023/02/14 12:33:35 - mmengine - INFO - Drop path rate: 0.0 2023/02/14 12:33:35 - mmengine - INFO - No L_MHRA: True 2023/02/14 12:33:35 - mmengine - INFO - Double L_MHRA: True 2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0 2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True 2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True 2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0 2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True 2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True 2023/02/14 
12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - No L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Double L_MHRA: True
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - Drop path rate: 0.0
2023/02/14 12:33:36 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) SyncBuffersHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_val_epoch:
(NORMAL ) IterTimerHook
--------------------
before_val_iter:
(NORMAL ) IterTimerHook
--------------------
after_val_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_test_epoch:
(NORMAL ) IterTimerHook
--------------------
before_test_iter:
(NORMAL ) IterTimerHook
--------------------
after_test_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.ln_pre.weight:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.ln_pre.bias:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options --
backbone.transformer.resblocks.0.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.1.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.1.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.1.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.1.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.1.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.1.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.1.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.2.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.2.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.2.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.2.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.2.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.2.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.2.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.3.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.3.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.3.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.3.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.3.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.3.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.3.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.4.attn.out_proj.bias:weight_decay=0.0 
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.4.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.4.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.4.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.4.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.4.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.4.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.5.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.5.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.5.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.5.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.5.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.5.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.5.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.6.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.6.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.6.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.6.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.6.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.6.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.6.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.7.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.7.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.7.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.7.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.7.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.7.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.7.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.8.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- 
backbone.transformer.resblocks.8.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.8.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.8.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.8.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.8.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.8.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.9.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.9.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.9.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.9.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.9.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.9.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.9.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.10.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.10.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.10.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.10.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.10.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.10.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.10.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.11.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.11.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.11.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.11.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.11.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.11.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.11.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dpe.0.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dpe.1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine 
- INFO - paramwise_options -- backbone.transformer.dpe.2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dpe.3.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.ln_3.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.0.ln_3.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.ln_3.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.1.ln_3.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.ln_1.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.ln_1.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.mlp.c_fc.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.mlp.c_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.ln_2.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.ln_2.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.ln_3.weight:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.2.ln_3.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.3.attn.out_proj.bias:weight_decay=0.0 2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- 
backbone.transformer.dec.3.ln_1.weight:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.3.ln_1.bias:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.3.mlp.c_fc.bias:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.3.mlp.c_proj.bias:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.3.ln_2.weight:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.3.ln_2.bias:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.3.ln_3.weight:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.dec.3.ln_3.bias:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.norm.weight:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- backbone.transformer.norm.bias:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - paramwise_options -- cls_head.fc_cls.bias:weight_decay=0.0
2023/02/14 12:33:39 - mmengine - INFO - LR is set based on batch size of 256 and the current batch size is 256. Scaling the original LR by 1.0.
2023/02/14 12:33:40 - mmengine - INFO - load backbone. in model from: https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
2023/02/14 12:33:41 - mmengine - INFO - load pretrained model from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
2023/02/14 12:33:42 - mmengine - INFO - Name of parameter - Initialization information
backbone.class_embedding - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
backbone.positional_embedding - torch.Size([197, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
backbone.conv1.weight - torch.Size([768, 3, 1, 16, 16]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
backbone.ln_pre.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
backbone.ln_pre.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
backbone.transformer.temporal_cls_token - torch.Size([1, 1, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
backbone.transformer.balance - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
backbone.transformer.resblocks.0.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.0.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from 
https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.1.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.attn.out_proj.bias 
- torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.2.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth 
backbone.transformer.resblocks.3.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.3.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from 
https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.4.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.mlp.c_proj.weight - 
torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.5.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth 
backbone.transformer.resblocks.6.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.6.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.7.ln_2.bias - torch.Size([768]): PretrainedInit: load from 
https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth

PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth for every backbone.transformer parameter listed below:

backbone.transformer.resblocks.{8,9,10,11}, per block:
    attn.in_proj_weight  - torch.Size([2304, 768])
    attn.in_proj_bias    - torch.Size([2304])
    attn.out_proj.weight - torch.Size([768, 768])
    attn.out_proj.bias   - torch.Size([768])
    ln_1.weight          - torch.Size([768])
    ln_1.bias            - torch.Size([768])
    mlp.c_fc.weight      - torch.Size([3072, 768])
    mlp.c_fc.bias        - torch.Size([3072])
    mlp.c_proj.weight    - torch.Size([768, 3072])
    mlp.c_proj.bias      - torch.Size([768])
    ln_2.weight          - torch.Size([768])
    ln_2.bias            - torch.Size([768])

backbone.transformer.dpe.{0,1,2,3}, per layer:
    weight - torch.Size([768, 1, 3, 3, 3])
    bias   - torch.Size([768])

backbone.transformer.dec.{0,1,2,3}, per block:
    attn.in_proj_weight  - torch.Size([2304, 768])
    attn.in_proj_bias    - torch.Size([2304])
    attn.out_proj.weight - torch.Size([768, 768])
    attn.out_proj.bias   - torch.Size([768])
    ln_1.weight          - torch.Size([768])
    ln_1.bias            - torch.Size([768])
    mlp.c_fc.weight      - torch.Size([3072, 768])
    mlp.c_fc.bias        - torch.Size([3072])
    mlp.c_proj.weight    - torch.Size([768, 3072])
    mlp.c_proj.bias      - torch.Size([768])
    ln_2.weight          - torch.Size([768])
    ln_2.bias            - torch.Size([768])
    ln_3.weight          - torch.Size([768])
    ln_3.bias            - torch.Size([768])

backbone.transformer.norm.weight - torch.Size([768])
backbone.transformer.norm.bias   - torch.Size([768])

cls_head.fc_cls.weight - torch.Size([400, 768]): Initialized by user-defined `init_weights` in UniFormerHead
cls_head.fc_cls.bias - torch.Size([400]): Initialized by user-defined `init_weights` in UniFormerHead

2023/02/14 12:33:42 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/lilin/Repos/mmact_dev/mmaction2/work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_fix-load-cls-head.
2023/02/14 12:36:06 - mmengine - INFO - Epoch(train) [1][100/940] lr: 1.1054e-06 eta: 1:50:28 time: 1.3643 data_time: 0.0590 memory: 52497 grad_norm: 24.3483 loss: 0.8410 top1_acc: 0.7500 top5_acc: 0.9062 loss_cls: 0.8410
2023/02/14 12:38:16 - mmengine - INFO - Epoch(train) [1][200/940] lr: 1.2119e-06 eta: 1:42:35 time: 1.2953 data_time: 0.0287 memory: 52497 grad_norm: 24.5193 loss: 0.8397 top1_acc: 0.8125 top5_acc: 0.9062 loss_cls: 0.8397
2023/02/14 12:40:23 - mmengine - INFO - Epoch(train) [1][300/940] lr: 1.3184e-06 eta: 1:38:03 time: 1.2570 data_time: 0.0279 memory: 52497 grad_norm: 24.5271 loss: 0.8613 top1_acc: 0.7500 top5_acc: 0.9062 loss_cls: 0.8613
2023/02/14 12:42:31 - mmengine - INFO - Epoch(train) [1][400/940] lr: 1.4249e-06 eta: 1:34:43 time: 1.2474 data_time: 0.0574 memory: 52497 grad_norm: 25.6639 loss: 0.9309 top1_acc: 0.7812 top5_acc: 0.9062 loss_cls: 0.9309
2023/02/14 12:44:39 - mmengine - INFO - Epoch(train) [1][500/940] lr: 1.5314e-06 eta: 1:31:58 time: 1.2509 data_time: 0.1248 memory: 52497 grad_norm: 24.3827 loss: 0.7340 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 0.7340
2023/02/14 12:46:48 - mmengine - INFO - Epoch(train) [1][600/940] lr: 1.6379e-06 eta: 1:29:32 time: 1.2960 data_time: 0.1850 memory: 52497 grad_norm: 24.0113 loss: 0.9039 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.9039
2023/02/14 12:48:55 - mmengine - INFO - Epoch(train) [1][700/940] lr: 1.7444e-06 eta: 1:26:56 time: 1.2525 data_time: 0.1335 memory: 52497 grad_norm: 25.5866 loss: 0.9337 top1_acc: 0.6875 top5_acc: 0.8438 loss_cls: 0.9337
2023/02/14 12:51:03 - mmengine - INFO - Epoch(train) [1][800/940] lr: 1.8509e-06 eta: 1:24:34 time: 1.3042 data_time: 0.1930 memory: 52497 grad_norm: 25.3509 loss: 0.8424 top1_acc: 0.8750 top5_acc: 0.9688 loss_cls: 0.8424
2023/02/14 12:53:10 - mmengine - INFO - Epoch(train) [1][900/940] lr: 1.9574e-06 eta: 1:22:12 time: 1.2692 data_time: 0.1649 memory: 52497 grad_norm: 24.3541 loss: 0.8326 top1_acc: 0.9375 top5_acc: 0.9688 loss_cls: 0.8326
2023/02/14 12:53:58 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 12:53:58 - mmengine - INFO - Epoch(train) [1][940/940] lr: 2.0000e-06 eta: 1:21:02 time: 1.1035 data_time: 0.0533 memory: 52497 grad_norm: 24.4097 loss: 0.9806 top1_acc: 0.7143 top5_acc: 0.7143 loss_cls: 0.9806
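The initialization summary above shows a fully pretrained backbone (loaded from the Kinetics-710 checkpoint) and a 400-way classification layer re-initialized by UniFormerHead itself. A minimal sketch of how this kind of initialization is usually expressed in an MMEngine-style config is given below; the "PretrainedInit: load from ..." messages are what MMEngine prints for an init_cfg of type 'Pretrained'. The recognizer type and exact field layout are illustrative assumptions, not copied from the config used for this run.

# Sketch only: module and field names mirror the parameter names in the log;
# 'Recognizer3D' and the keyword layout below are assumptions.
model = dict(
    type='Recognizer3D',
    backbone=dict(
        type='UniFormerV2',
        # An init_cfg of type 'Pretrained' is what produces the
        # "PretrainedInit: load from ..." lines seen above.
        init_cfg=dict(
            type='Pretrained',
            checkpoint='https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth')),
    cls_head=dict(
        type='UniFormerHead',  # its own init_weights() re-initializes fc_cls
        num_classes=400,       # matches cls_head.fc_cls.weight: torch.Size([400, 768])
        in_channels=768))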
2023/02/14 12:54:24 - mmengine - INFO - Epoch(val) [1][100/310] eta: 0:00:54 time: 0.1925 data_time: 0.1126 memory: 2891
2023/02/14 12:54:44 - mmengine - INFO - Epoch(val) [1][200/310] eta: 0:00:25 time: 0.1874 data_time: 0.1079 memory: 2891
2023/02/14 12:55:05 - mmengine - INFO - Epoch(val) [1][300/310] eta: 0:00:02 time: 0.1967 data_time: 0.1176 memory: 2891
2023/02/14 12:55:09 - mmengine - INFO - Epoch(val) [1][310/310] acc/top1: 0.8391 acc/top5: 0.9630 acc/mean1: 0.8391
2023/02/14 12:55:12 - mmengine - INFO - The best checkpoint with 0.8391 acc/top1 at 1 epoch is saved to best_acc/top1_epoch_1.pth.
2023/02/14 12:56:39 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 12:57:31 - mmengine - INFO - Epoch(train) [2][100/940] lr: 1.9983e-06 eta: 1:19:26 time: 1.3575 data_time: 0.0329 memory: 52497 grad_norm: 24.6726 loss: 0.8945 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 0.8945
2023/02/14 12:59:38 - mmengine - INFO - Epoch(train) [2][200/940] lr: 1.9931e-06 eta: 1:17:05 time: 1.2654 data_time: 0.0280 memory: 52497 grad_norm: 24.1096 loss: 0.8797 top1_acc: 0.7188 top5_acc: 0.8438 loss_cls: 0.8797
2023/02/14 13:01:46 - mmengine - INFO - Epoch(train) [2][300/940] lr: 1.9845e-06 eta: 1:14:51 time: 1.3255 data_time: 0.0650 memory: 52497 grad_norm: 24.8388 loss: 0.9918 top1_acc: 0.7812 top5_acc: 0.8750 loss_cls: 0.9918
2023/02/14 13:03:55 - mmengine - INFO - Epoch(train) [2][400/940] lr: 1.9725e-06 eta: 1:12:38 time: 1.3070 data_time: 0.0256 memory: 52497 grad_norm: 24.1161 loss: 0.8337 top1_acc: 0.8125 top5_acc: 0.8750 loss_cls: 0.8337
2023/02/14 13:06:02 - mmengine - INFO - Epoch(train) [2][500/940] lr: 1.9572e-06 eta: 1:10:24 time: 1.3163 data_time: 0.0311 memory: 52497 grad_norm: 24.7461 loss: 0.9243 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.9243
2023/02/14 13:08:11 - mmengine - INFO - Epoch(train) [2][600/940] lr: 1.9387e-06 eta: 1:08:13 time: 1.3194 data_time: 0.0308 memory: 52497 grad_norm: 23.6711 loss: 0.8444 top1_acc: 0.7812 top5_acc: 0.8750 loss_cls: 0.8444
2023/02/14 13:10:16 - mmengine - INFO - Epoch(train) [2][700/940] lr: 1.9171e-06 eta: 1:05:54 time: 1.2145 data_time: 0.0262 memory: 52497 grad_norm: 24.1686 loss: 0.7862 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 0.7862
2023/02/14 13:12:24 - mmengine - INFO - Epoch(train) [2][800/940] lr: 1.8927e-06 eta: 1:03:43 time: 1.2579 data_time: 0.1459 memory: 52497 grad_norm: 25.1057 loss: 0.9311 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.9311
2023/02/14 13:14:32 - mmengine - INFO - Epoch(train) [2][900/940] lr: 1.8655e-06 eta: 1:01:32 time: 1.2941 data_time: 0.0442 memory: 52497 grad_norm: 24.8962 loss: 0.9267 top1_acc: 0.7812 top5_acc: 0.8438 loss_cls: 0.9267
2023/02/14 13:15:21 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 13:15:21 - mmengine - INFO - Epoch(train) [2][940/940] lr: 1.8538e-06 eta: 1:00:36 time: 1.0792 data_time: 0.0194 memory: 52497 grad_norm: 25.1357 loss: 0.9312 top1_acc: 0.8571 top5_acc: 1.0000 loss_cls: 0.9312
2023/02/14 13:15:42 - mmengine - INFO - Epoch(val) [2][100/310] eta: 0:00:43 time: 0.1930 data_time: 0.1127 memory: 2891
2023/02/14 13:16:02 - mmengine - INFO - Epoch(val) [2][200/310] eta: 0:00:22 time: 0.1992 data_time: 0.1184 memory: 2891
2023/02/14 13:16:23 - mmengine - INFO - Epoch(val) [2][300/310] eta: 0:00:02 time: 0.2097 data_time: 0.1297 memory: 2891
2023/02/14 13:16:27 - mmengine - INFO - Epoch(val) [2][310/310] acc/top1: 0.8405 acc/top5: 0.9631 acc/mean1: 0.8405
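For reference, top1_acc and top5_acc in the train entries are per-batch accuracies, while acc/top1, acc/top5 and acc/mean1 in the val entries are computed over the full validation split, acc/mean1 being the mean of per-class accuracies. The following is a small, self-contained sketch of these metrics; it is illustrative and not the MMAction2 implementation.

import torch

def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, k: int = 1) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = logits.topk(k, dim=1).indices                 # (N, k) predicted class indices
    correct = (topk == labels.unsqueeze(1)).any(dim=1)   # True if the label appears in the top k
    return correct.float().mean().item()

def mean_class_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Average of per-class top-1 recall (what acc/mean1 reports)."""
    preds = logits.argmax(dim=1)
    per_class = []
    for c in labels.unique():
        mask = labels == c
        per_class.append((preds[mask] == c).float().mean())
    return torch.stack(per_class).mean().item()

# Toy example: 4 clips, 400 classes (matching the head size in this run)
logits = torch.randn(4, 400)
labels = torch.tensor([3, 10, 3, 250])
print(topk_accuracy(logits, labels, k=1), topk_accuracy(logits, labels, k=5))
print(mean_class_accuracy(logits, labels))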
2023/02/14 13:16:27 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/lilin/Repos/mmact_dev/mmaction2/work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_fix-load-cls-head/best_acc/top1_epoch_1.pth is removed
2023/02/14 13:16:30 - mmengine - INFO - The best checkpoint with 0.8405 acc/top1 at 2 epoch is saved to best_acc/top1_epoch_2.pth.
2023/02/14 13:18:49 - mmengine - INFO - Epoch(train) [3][100/940] lr: 1.8231e-06 eta: 0:58:41 time: 1.4213 data_time: 0.3335 memory: 52497 grad_norm: 24.4137 loss: 0.8897 top1_acc: 0.6875 top5_acc: 0.8125 loss_cls: 0.8897
2023/02/14 13:19:14 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 13:20:58 - mmengine - INFO - Epoch(train) [3][200/940] lr: 1.7902e-06 eta: 0:56:30 time: 1.2706 data_time: 0.1641 memory: 52497 grad_norm: 23.9303 loss: 0.9088 top1_acc: 0.8438 top5_acc: 0.8750 loss_cls: 0.9088
2023/02/14 13:23:05 - mmengine - INFO - Epoch(train) [3][300/940] lr: 1.7552e-06 eta: 0:54:17 time: 1.3593 data_time: 0.2473 memory: 52497 grad_norm: 24.7871 loss: 0.9149 top1_acc: 0.5625 top5_acc: 0.8438 loss_cls: 0.9149
2023/02/14 13:25:12 - mmengine - INFO - Epoch(train) [3][400/940] lr: 1.7184e-06 eta: 0:52:06 time: 1.2091 data_time: 0.0749 memory: 52497 grad_norm: 24.6842 loss: 0.8452 top1_acc: 0.7812 top5_acc: 0.9375 loss_cls: 0.8452
2023/02/14 13:27:22 - mmengine - INFO - Epoch(train) [3][500/940] lr: 1.6801e-06 eta: 0:49:57 time: 1.2436 data_time: 0.0308 memory: 52497 grad_norm: 24.8858 loss: 0.9253 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 0.9253
2023/02/14 13:29:31 - mmengine - INFO - Epoch(train) [3][600/940] lr: 1.6405e-06 eta: 0:47:48 time: 1.3920 data_time: 0.0260 memory: 52497 grad_norm: 24.7111 loss: 0.7910 top1_acc: 0.8125 top5_acc: 0.9062 loss_cls: 0.7910
2023/02/14 13:31:36 - mmengine - INFO - Epoch(train) [3][700/940] lr: 1.6000e-06 eta: 0:45:36 time: 1.2263 data_time: 0.0298 memory: 52497 grad_norm: 24.5111 loss: 0.9385 top1_acc: 0.8125 top5_acc: 0.9688 loss_cls: 0.9385
2023/02/14 13:33:46 - mmengine - INFO - Epoch(train) [3][800/940] lr: 1.5588e-06 eta: 0:43:27 time: 1.2682 data_time: 0.0285 memory: 52497 grad_norm: 24.4405 loss: 0.7723 top1_acc: 0.7188 top5_acc: 0.8750 loss_cls: 0.7723
2023/02/14 13:35:54 - mmengine - INFO - Epoch(train) [3][900/940] lr: 1.5171e-06 eta: 0:41:17 time: 1.2811 data_time: 0.0309 memory: 52497 grad_norm: 24.2863 loss: 0.8232 top1_acc: 0.8438 top5_acc: 0.9062 loss_cls: 0.8232
2023/02/14 13:36:41 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 13:36:41 - mmengine - INFO - Epoch(train) [3][940/940] lr: 1.5004e-06 eta: 0:40:23 time: 1.0856 data_time: 0.0218 memory: 52497 grad_norm: 27.0151 loss: 0.8271 top1_acc: 0.5714 top5_acc: 1.0000 loss_cls: 0.8271
2023/02/14 13:36:41 - mmengine - INFO - Saving checkpoint at 3 epochs
2023/02/14 13:37:08 - mmengine - INFO - Epoch(val) [3][100/310] eta: 0:00:44 time: 0.2018 data_time: 0.1147 memory: 2891
2023/02/14 13:37:28 - mmengine - INFO - Epoch(val) [3][200/310] eta: 0:00:22 time: 0.1784 data_time: 0.0958 memory: 2891
2023/02/14 13:37:48 - mmengine - INFO - Epoch(val) [3][300/310] eta: 0:00:02 time: 0.1669 data_time: 0.0888 memory: 2891
2023/02/14 13:37:51 - mmengine - INFO - Epoch(val) [3][310/310] acc/top1: 0.8416 acc/top5: 0.9635 acc/mean1: 0.8416
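The lr column rises roughly linearly from about 1.1e-06 to the peak of 2.0e-06 during epoch 1 and then decays smoothly to 1.0e-06 by the end of epoch 5, which is the shape produced by a linear warmup followed by cosine annealing. Below is a hypothetical param_scheduler that would give a similar curve; the scheduler types exist in MMEngine, but every number and the optimizer choice are inferred from the log rather than taken from the real config.

# Inferred sketch, not the actual training config for this run.
param_scheduler = [
    # Warmup over epoch 1: lr climbs from 0.5 * base_lr (1.0e-06) to base_lr (2.0e-06).
    dict(type='LinearLR', start_factor=0.5, by_epoch=True, begin=0, end=1,
         convert_to_iter_based=True),
    # Cosine decay over epochs 1-5 down to eta_min, matching the final lr of 1.0e-06.
    dict(type='CosineAnnealingLR', eta_min=1e-06, by_epoch=True, begin=1, end=5,
         convert_to_iter_based=True),
]
# Base learning rate implied by the peak reached at the end of warmup;
# the optimizer type is an assumption.
optim_wrapper = dict(optimizer=dict(type='AdamW', lr=2e-06))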
2023/02/14 13:37:51 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/lilin/Repos/mmact_dev/mmaction2/work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_fix-load-cls-head/best_acc/top1_epoch_2.pth is removed
2023/02/14 13:37:54 - mmengine - INFO - The best checkpoint with 0.8416 acc/top1 at 3 epoch is saved to best_acc/top1_epoch_3.pth.
2023/02/14 13:40:13 - mmengine - INFO - Epoch(train) [4][100/940] lr: 1.4587e-06 eta: 0:38:20 time: 1.2835 data_time: 0.1470 memory: 52497 grad_norm: 23.4497 loss: 0.8608 top1_acc: 0.6875 top5_acc: 0.9062 loss_cls: 0.8608
2023/02/14 13:41:59 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 13:42:22 - mmengine - INFO - Epoch(train) [4][200/940] lr: 1.4172e-06 eta: 0:36:11 time: 1.1635 data_time: 0.0500 memory: 52497 grad_norm: 23.1218 loss: 0.9282 top1_acc: 0.8125 top5_acc: 0.9062 loss_cls: 0.9282
2023/02/14 13:44:29 - mmengine - INFO - Epoch(train) [4][300/940] lr: 1.3764e-06 eta: 0:34:01 time: 1.2768 data_time: 0.1732 memory: 52497 grad_norm: 24.8899 loss: 0.9034 top1_acc: 0.8125 top5_acc: 0.9062 loss_cls: 0.9034
2023/02/14 13:46:37 - mmengine - INFO - Epoch(train) [4][400/940] lr: 1.3364e-06 eta: 0:31:51 time: 1.2793 data_time: 0.0891 memory: 52497 grad_norm: 24.5591 loss: 0.9618 top1_acc: 0.6875 top5_acc: 0.8750 loss_cls: 0.9618
2023/02/14 13:48:47 - mmengine - INFO - Epoch(train) [4][500/940] lr: 1.2975e-06 eta: 0:29:42 time: 1.3550 data_time: 0.2446 memory: 52497 grad_norm: 24.4752 loss: 0.8256 top1_acc: 0.8750 top5_acc: 0.9688 loss_cls: 0.8256
2023/02/14 13:50:56 - mmengine - INFO - Epoch(train) [4][600/940] lr: 1.2601e-06 eta: 0:27:33 time: 1.3477 data_time: 0.0742 memory: 52497 grad_norm: 24.2778 loss: 1.0059 top1_acc: 0.7812 top5_acc: 0.8438 loss_cls: 1.0059
2023/02/14 13:53:05 - mmengine - INFO - Epoch(train) [4][700/940] lr: 1.2243e-06 eta: 0:25:23 time: 1.2982 data_time: 0.1875 memory: 52497 grad_norm: 25.1057 loss: 0.8687 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 0.8687
2023/02/14 13:55:12 - mmengine - INFO - Epoch(train) [4][800/940] lr: 1.1905e-06 eta: 0:23:14 time: 1.2203 data_time: 0.0868 memory: 52497 grad_norm: 24.6399 loss: 0.8230 top1_acc: 0.8125 top5_acc: 1.0000 loss_cls: 0.8230
2023/02/14 13:57:22 - mmengine - INFO - Epoch(train) [4][900/940] lr: 1.1588e-06 eta: 0:21:05 time: 1.3253 data_time: 0.2214 memory: 52497 grad_norm: 24.0846 loss: 0.8103 top1_acc: 0.8438 top5_acc: 0.9375 loss_cls: 0.8103
2023/02/14 13:58:08 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 13:58:08 - mmengine - INFO - Epoch(train) [4][940/940] lr: 1.1467e-06 eta: 0:20:12 time: 1.1107 data_time: 0.0530 memory: 52497 grad_norm: 25.2941 loss: 0.8294 top1_acc: 0.7143 top5_acc: 0.7143 loss_cls: 0.8294
2023/02/14 13:58:29 - mmengine - INFO - Epoch(val) [4][100/310] eta: 0:00:44 time: 0.1985 data_time: 0.1179 memory: 2891
2023/02/14 13:58:49 - mmengine - INFO - Epoch(val) [4][200/310] eta: 0:00:22 time: 0.1970 data_time: 0.1170 memory: 2891
2023/02/14 13:59:10 - mmengine - INFO - Epoch(val) [4][300/310] eta: 0:00:02 time: 0.2080 data_time: 0.1282 memory: 2891
2023/02/14 13:59:15 - mmengine - INFO - Epoch(val) [4][310/310] acc/top1: 0.8428 acc/top5: 0.9634 acc/mean1: 0.8428
2023/02/14 13:59:15 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/lilin/Repos/mmact_dev/mmaction2/work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_fix-load-cls-head/best_acc/top1_epoch_3.pth is removed
2023/02/14 13:59:18 - mmengine - INFO - The best checkpoint with 0.8428 acc/top1 at 4 epoch is saved to best_acc/top1_epoch_4.pth.
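Validation summaries like the ones above are easy to post-process into a per-epoch table or plot. A small illustrative parser for logs in this format follows; the log file name is assumed from the experiment timestamp and is not confirmed by this excerpt.

import re

# Matches entries such as:
#   "2023/02/14 13:59:15 - mmengine - INFO - Epoch(val) [4][310/310] acc/top1: 0.8428 acc/top5: 0.9634 acc/mean1: 0.8428"
VAL_RE = re.compile(r'Epoch\(val\) \[(\d+)\]\[\d+/\d+\]\s+acc/top1: ([\d.]+)\s+acc/top5: ([\d.]+)')

def collect_val_accuracy(log_path: str) -> dict:
    """Map epoch -> (acc/top1, acc/top5) extracted from an MMEngine training log."""
    results = {}
    with open(log_path) as f:
        for line in f:
            m = VAL_RE.search(line)
            if m:
                results[int(m.group(1))] = (float(m.group(2)), float(m.group(3)))
    return results

# Expected output for this run: {1: (0.8391, 0.963), 2: (0.8405, 0.9631), 3: (0.8416, 0.9635), ...}
print(collect_val_accuracy('20230214_123321.log'))  # hypothetical file name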
2023/02/14 14:01:37 - mmengine - INFO - Epoch(train) [5][100/940] lr: 1.1184e-06 eta: 0:18:05 time: 1.3054 data_time: 0.1905 memory: 52497 grad_norm: 24.5264 loss: 0.8237 top1_acc: 0.7188 top5_acc: 0.9375 loss_cls: 0.8237
2023/02/14 14:03:44 - mmengine - INFO - Epoch(train) [5][200/940] lr: 1.0928e-06 eta: 0:15:55 time: 1.2814 data_time: 0.1723 memory: 52497 grad_norm: 23.8874 loss: 0.8680 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.8680
2023/02/14 14:04:36 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 14:05:53 - mmengine - INFO - Epoch(train) [5][300/940] lr: 1.0700e-06 eta: 0:13:46 time: 1.3470 data_time: 0.0624 memory: 52497 grad_norm: 23.3899 loss: 0.8502 top1_acc: 0.8125 top5_acc: 0.9688 loss_cls: 0.8502
2023/02/14 14:08:01 - mmengine - INFO - Epoch(train) [5][400/940] lr: 1.0502e-06 eta: 0:11:37 time: 1.3179 data_time: 0.1986 memory: 52497 grad_norm: 24.1348 loss: 0.8463 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.8463
2023/02/14 14:10:11 - mmengine - INFO - Epoch(train) [5][500/940] lr: 1.0336e-06 eta: 0:09:28 time: 1.2909 data_time: 0.1716 memory: 52497 grad_norm: 23.6634 loss: 0.8269 top1_acc: 0.6250 top5_acc: 0.7812 loss_cls: 0.8269
2023/02/14 14:12:20 - mmengine - INFO - Epoch(train) [5][600/940] lr: 1.0202e-06 eta: 0:07:19 time: 1.3023 data_time: 0.1928 memory: 52497 grad_norm: 24.3488 loss: 0.8662 top1_acc: 0.6875 top5_acc: 0.8438 loss_cls: 0.8662
2023/02/14 14:14:25 - mmengine - INFO - Epoch(train) [5][700/940] lr: 1.0101e-06 eta: 0:05:09 time: 1.2601 data_time: 0.1392 memory: 52497 grad_norm: 24.5356 loss: 0.8943 top1_acc: 0.8750 top5_acc: 0.9062 loss_cls: 0.8943
2023/02/14 14:16:32 - mmengine - INFO - Epoch(train) [5][800/940] lr: 1.0035e-06 eta: 0:03:00 time: 1.2894 data_time: 0.1290 memory: 52497 grad_norm: 23.8213 loss: 0.8753 top1_acc: 0.7812 top5_acc: 0.8750 loss_cls: 0.8753
2023/02/14 14:18:41 - mmengine - INFO - Epoch(train) [5][900/940] lr: 1.0003e-06 eta: 0:00:51 time: 1.3201 data_time: 0.0975 memory: 52497 grad_norm: 23.7347 loss: 0.8083 top1_acc: 0.7188 top5_acc: 0.9062 loss_cls: 0.8083
2023/02/14 14:19:28 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_20230214_123321
2023/02/14 14:19:28 - mmengine - INFO - Epoch(train) [5][940/940] lr: 1.0000e-06 eta: 0:00:00 time: 1.0904 data_time: 0.0315 memory: 52497 grad_norm: 25.8882 loss: 0.8291 top1_acc: 0.8571 top5_acc: 0.8571 loss_cls: 0.8291
2023/02/14 14:19:28 - mmengine - INFO - Saving checkpoint at 5 epochs
2023/02/14 14:19:55 - mmengine - INFO - Epoch(val) [5][100/310] eta: 0:00:44 time: 0.2163 data_time: 0.1343 memory: 2891
2023/02/14 14:20:15 - mmengine - INFO - Epoch(val) [5][200/310] eta: 0:00:22 time: 0.1953 data_time: 0.1147 memory: 2891
2023/02/14 14:20:35 - mmengine - INFO - Epoch(val) [5][300/310] eta: 0:00:02 time: 0.1666 data_time: 0.0888 memory: 2891
2023/02/14 14:20:38 - mmengine - INFO - Epoch(val) [5][310/310] acc/top1: 0.8439 acc/top5: 0.9634 acc/mean1: 0.8439
2023/02/14 14:20:38 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/lilin/Repos/mmact_dev/mmaction2/work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_fix-load-cls-head/best_acc/top1_epoch_4.pth is removed
2023/02/14 14:20:41 - mmengine - INFO - The best checkpoint with 0.8439 acc/top1 at 5 epoch is saved to best_acc/top1_epoch_5.pth.
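Because save_best is enabled on the CheckpointHook, only the latest best checkpoint survives (each earlier best_acc/top1_epoch_*.pth is removed, as logged above), so the file to evaluate or deploy from this run is best_acc/top1_epoch_5.pth inside the work_dir. Below is a minimal sketch for inspecting it; the relative path and the dictionary keys follow the usual MMEngine checkpoint layout and are assumptions rather than something verified in this excerpt.

import torch

# Assumed path: the work_dir reported at the start of training plus the file name
# from the final "best checkpoint ... is saved" message.
ckpt_path = ('work_dirs/uniformer_train/'
             'uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics400-rgb_fix-load-cls-head/'
             'best_acc/top1_epoch_5.pth')

ckpt = torch.load(ckpt_path, map_location='cpu')
print(sorted(ckpt.keys()))                 # MMEngine checkpoints usually hold 'meta', 'state_dict', ...
state_dict = ckpt.get('state_dict', ckpt)  # fall back if it is a bare state dict
print(state_dict['cls_head.fc_cls.weight'].shape)  # expected torch.Size([400, 768]) per the log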