2023/02/15 20:48:02 - mmengine - INFO - 
------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.7.0 (default, Oct 9 2018, 10:31:47) [GCC 7.3.0]
    CUDA available: True
    numpy_random_seed: 1793613696
    GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
    CUDA_HOME: /mnt/cache/share/cuda-11.1
    NVCC: Cuda compilation tools, release 11.1, V11.1.74
    GCC: gcc (GCC) 5.4.0
    PyTorch: 1.9.0+cu111
    PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.1
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.0.5
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON
    TorchVision: 0.10.0+cu111
    OpenCV: 4.6.0
    MMEngine: 0.5.0

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    diff_rank_seed: False
    deterministic: False
    Distributed launcher: pytorch
    Distributed training: True
    GPU number: 8
------------------------------------------------------------

2023/02/15 20:48:03 - mmengine - INFO - Config:
default_scope = 'mmaction'
default_hooks = dict(
    runtime_info=dict(type='RuntimeInfoHook'),
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=100, ignore_last=False),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', interval=3, save_best='auto', max_keep_ckpts=5),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    sync_buffers=dict(type='SyncBuffersHook'))
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
log_processor = dict(type='LogProcessor', window_size=20, by_epoch=True)
vis_backends = [dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend')]
visualizer = dict(
    type='ActionVisualizer',
    vis_backends=[dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend')])
log_level = 'INFO'
load_from = None
resume = False
num_frames = 8
model = dict(
    type='Recognizer3D',
    backbone=dict(
        type='UniFormerV2',
        input_resolution=224,
        patch_size=16,
        width=768,
        layers=12,
        heads=12,
        t_size=8,
        dw_reduction=1.5,
        backbone_drop_path_rate=0.0,
        temporal_downsample=False,
        no_lmhra=True,
        double_lmhra=True,
        return_list=[8, 9, 10, 11],
        n_layers=4,
        n_dim=768,
        n_head=12,
        mlp_factor=4.0,
        drop_path_rate=0.0,
        mlp_dropout=[0.5, 0.5, 0.5, 0.5],
        clip_pretrained=False,
        init_cfg=dict(
            type='Pretrained',
            checkpoint='https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth',
            prefix='backbone.')),
    cls_head=dict(
        type='UniFormerHead',
        dropout_ratio=0.5,
        num_classes=600,
        in_channels=768,
        average_clips='prob',
        channel_map='configs/recognition/uniformerv2/k710_channel_map/map_k600.json',
        init_cfg=dict(
            type='Pretrained',
            checkpoint='https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth',
            prefix='cls_head.')),
    data_preprocessor=dict(
        type='ActionDataPreprocessor',
        mean=[114.75, 114.75, 114.75],
        std=[57.375, 57.375, 57.375],
        format_shape='NCTHW'))
dataset_type = 'VideoDataset'
data_root = 'data/kinetics600/videos'
data_root_val = 'data/kinetics600/videos'
ann_file_train = 'data/kinetics600/kinetics600_train_list_videos.txt'
ann_file_val = 'data/kinetics600/kinetics600_val_list_videos.txt'
ann_file_test = 'data/kinetics600/kinetics600_val_list_videos.txt'
file_client_args = dict(
    io_backend='petrel',
    path_mapping=dict({'data/kinetics600': 's254:s3://openmmlab/datasets/action/Kinetics600'}))
train_pipeline = [
    dict(type='DecordInit', io_backend='petrel',
         path_mapping=dict({'data/kinetics600': 's254:s3://openmmlab/datasets/action/Kinetics600'})),
    dict(type='UniformSample', clip_len=8, num_clips=1),
    dict(type='DecordDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='PytorchVideoWrapper', op='RandAugment', magnitude=7, num_layers=4),
    dict(type='RandomResizedCrop'),
    dict(type='Resize', scale=(224, 224), keep_ratio=False),
    dict(type='Flip', flip_ratio=0.5),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='PackActionInputs')
]
val_pipeline = [
    dict(type='DecordInit', io_backend='petrel',
         path_mapping=dict({'data/kinetics600': 's254:s3://openmmlab/datasets/action/Kinetics600'})),
    dict(type='UniformSample', clip_len=8, num_clips=1, test_mode=True),
    dict(type='DecordDecode'),
    dict(type='Resize', scale=(-1, 224)),
    dict(type='CenterCrop', crop_size=224),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='PackActionInputs')
]
test_pipeline = [
    dict(type='DecordInit'),
    dict(type='UniformSample', clip_len=8, num_clips=4, test_mode=True),
    dict(type='DecordDecode'),
    dict(type='Resize', scale=(-1, 224)),
    dict(type='ThreeCrop', crop_size=224),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='PackActionInputs')
]
train_dataloader = dict(
    batch_size=32,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='VideoDataset',
        ann_file='data/kinetics600/kinetics600_train_list_videos.txt',
        data_prefix=dict(video='data/kinetics600/videos'),
        pipeline=[
            dict(type='DecordInit', io_backend='petrel',
                 path_mapping=dict({'data/kinetics600': 's254:s3://openmmlab/datasets/action/Kinetics600'})),
            dict(type='UniformSample', clip_len=8, num_clips=1),
            dict(type='DecordDecode'),
            dict(type='Resize', scale=(-1, 256)),
            dict(type='PytorchVideoWrapper', op='RandAugment', magnitude=7, num_layers=4),
            dict(type='RandomResizedCrop'),
            dict(type='Resize', scale=(224, 224), keep_ratio=False),
            dict(type='Flip', flip_ratio=0.5),
            dict(type='FormatShape', input_format='NCTHW'),
            dict(type='PackActionInputs')
        ]))
val_dataloader = dict(
    batch_size=8,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='VideoDataset',
        ann_file='data/kinetics600/kinetics600_val_list_videos.txt',
        data_prefix=dict(video='data/kinetics600/videos'),
        pipeline=[
            dict(type='DecordInit', io_backend='petrel',
                 path_mapping=dict({'data/kinetics600': 's254:s3://openmmlab/datasets/action/Kinetics600'})),
            dict(type='UniformSample', clip_len=8, num_clips=1, test_mode=True),
            dict(type='DecordDecode'),
            dict(type='Resize', scale=(-1, 224)),
            dict(type='CenterCrop', crop_size=224),
            dict(type='FormatShape', input_format='NCTHW'),
            dict(type='PackActionInputs')
        ],
        test_mode=True))
test_dataloader = dict(
    batch_size=8,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='VideoDataset',
        ann_file='data/kinetics600/kinetics600_val_list_videos.txt',
        data_prefix=dict(video='data/kinetics600/videos'),
        pipeline=[
            dict(type='DecordInit'),
            dict(type='UniformSample', clip_len=8, num_clips=4, test_mode=True),
            dict(type='DecordDecode'),
            dict(type='Resize', scale=(-1, 224)),
            dict(type='ThreeCrop', crop_size=224),
            dict(type='FormatShape', input_format='NCTHW'),
            dict(type='PackActionInputs')
        ],
        test_mode=True))
val_evaluator = dict(type='AccMetric')
test_evaluator = dict(type='AccMetric')
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=5, val_begin=1, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
base_lr = 2e-06
optim_wrapper = dict(
    optimizer=dict(type='AdamW', lr=2e-06, betas=(0.9, 0.999), weight_decay=0.05),
    paramwise_cfg=dict(norm_decay_mult=0.0, bias_decay_mult=0.0),
    clip_grad=dict(max_norm=20, norm_type=2))
param_scheduler = [
    dict(type='LinearLR', start_factor=0.5, by_epoch=True, begin=0, end=1, convert_to_iter_based=True),
    dict(type='CosineAnnealingLR', T_max=4, eta_min_ratio=0.5, by_epoch=True, begin=1, end=5, convert_to_iter_based=True)
]
auto_scale_lr = dict(enable=True, base_batch_size=256)
launcher = 'pytorch'
work_dir = 'work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb'
randomness = dict(seed=None, diff_rank_seed=False, deterministic=False)
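As a quick sanity check on the settings above: with 8 GPUs and batch_size=32, the effective batch size is 256, so auto_scale_lr (base_batch_size=256) leaves base_lr = 2e-06 unchanged, and the schedule warms up linearly from 0.5x over the first epoch before cosine-annealing towards 0.5x by epoch 5. A rough plain-Python sketch of that schedule shape follows; it is an approximation only, not MMEngine's per-iteration LinearLR/CosineAnnealingLR implementation.

import math

# Values taken from the config above.
base_lr = 2e-6
gpus, samples_per_gpu = 8, 32
effective_batch = gpus * samples_per_gpu      # 256
scale = effective_batch / 256                 # auto_scale_lr: base_batch_size=256 -> 1.0
lr = base_lr * scale

def approx_lr(epoch, max_epochs=5, warmup_epochs=1, start_factor=0.5, eta_min_ratio=0.5):
    """Rough shape of the schedule: linear warmup for one epoch, then cosine decay."""
    if epoch < warmup_epochs:                                     # LinearLR phase
        return lr * (start_factor + (1 - start_factor) * epoch / warmup_epochs)
    t = (epoch - warmup_epochs) / (max_epochs - warmup_epochs)    # CosineAnnealingLR, T_max=4
    eta_min = lr * eta_min_ratio
    return eta_min + 0.5 * (lr - eta_min) * (1 + math.cos(math.pi * t))

print(f'effective batch size: {effective_batch}, LR scale: {scale:.1f}')
for e in (0, 0.5, 1, 3, 5):
    print(f'epoch {e:>3}: lr ~ {approx_lr(e):.2e}')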
2023/02/15 20:48:06 - mmengine - INFO - Drop path rate: 0.0
2023/02/15 20:48:06 - mmengine - INFO - No L_MHRA: True
2023/02/15 20:48:06 - mmengine - INFO - Double L_MHRA: True
[While the UniFormerV2 backbone is built, the three lines above are logged 12 times in total (timestamps 20:48:06-20:48:07), followed by four standalone "Drop path rate: 0.0" lines.]
2023/02/15 20:48:07 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
 -------------------- 
before_train:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(VERY_LOW    ) CheckpointHook
 -------------------- 
before_train_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(NORMAL      ) DistSamplerSeedHook
 -------------------- 
before_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
 -------------------- 
after_train_iter:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 -------------------- 
after_train_epoch:
(NORMAL      ) IterTimerHook
(NORMAL      ) SyncBuffersHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 -------------------- 
before_val_epoch:
(NORMAL      ) IterTimerHook
 -------------------- 
before_val_iter:
(NORMAL      ) IterTimerHook
 -------------------- 
after_val_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 -------------------- 
after_val_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW         ) ParamSchedulerHook
(VERY_LOW    ) CheckpointHook
 -------------------- 
before_test_epoch:
(NORMAL      ) IterTimerHook
 -------------------- 
before_test_iter:
(NORMAL      ) IterTimerHook
 -------------------- 
after_test_iter:
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 -------------------- 
after_test_epoch:
(VERY_HIGH   ) RuntimeInfoHook
(NORMAL      ) IterTimerHook
(BELOW_NORMAL) LoggerHook
 -------------------- 
after_run:
(BELOW_NORMAL) LoggerHook
 -------------------- 
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.ln_pre.weight:weight_decay=0.0
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.ln_pre.bias:weight_decay=0.0
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.attn.out_proj.bias:weight_decay=0.0
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.ln_1.weight:weight_decay=0.0
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.ln_1.bias:weight_decay=0.0
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.mlp.c_fc.bias:weight_decay=0.0
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.mlp.c_proj.bias:weight_decay=0.0
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.ln_2.weight:weight_decay=0.0
2023/02/15 20:48:10 - mmengine - INFO - paramwise_options -- backbone.transformer.resblocks.0.ln_2.bias:weight_decay=0.0
[Identical paramwise_options lines (weight_decay=0.0, all at 20:48:10) follow for the same parameters of backbone.transformer.resblocks.1 through .11, for backbone.transformer.dpe.0-3.bias, for the attn.out_proj.bias, ln_1, mlp.c_fc.bias, mlp.c_proj.bias, ln_2 and ln_3 parameters of backbone.transformer.dec.0 through .3, and for backbone.transformer.norm.weight, backbone.transformer.norm.bias and cls_head.fc_cls.bias.]
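The weight_decay=0.0 entries above come from paramwise_cfg=dict(norm_decay_mult=0.0, bias_decay_mult=0.0) in optim_wrapper: LayerNorm parameters and all bias terms are excluded from AdamW's 0.05 weight decay. The snippet below is a simplified illustration of that rule only; it is not MMEngine's DefaultOptimWrapperConstructor, and the helper function name is made up for this sketch.

import torch.nn as nn

def effective_weight_decay(param_name, owner_module, base_wd=0.05,
                           norm_decay_mult=0.0, bias_decay_mult=0.0):
    """Toy rule: normalization-layer parameters and bias terms get weight_decay=0.0."""
    if isinstance(owner_module, (nn.LayerNorm, nn.BatchNorm3d, nn.GroupNorm)):
        return base_wd * norm_decay_mult   # e.g. backbone.ln_pre.weight -> 0.0
    if param_name.endswith('.bias'):
        return base_wd * bias_decay_mult   # e.g. ...attn.out_proj.bias -> 0.0
    return base_wd                         # other weights keep weight_decay=0.05

print(effective_weight_decay('backbone.ln_pre.weight', nn.LayerNorm(768)))                                  # 0.0
print(effective_weight_decay('backbone.transformer.resblocks.0.attn.out_proj.bias', nn.Linear(768, 768)))   # 0.0
print(effective_weight_decay('backbone.conv1.weight', nn.Conv3d(3, 768, (1, 16, 16))))                      # 0.05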
2023/02/15 20:48:10 - mmengine - INFO - LR is set based on batch size of 256 and the current batch size is 256. Scaling the original LR by 1.0.
2023/02/15 20:48:11 - mmengine - INFO - load backbone. in model from: https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
2023/02/15 20:48:12 - mmengine - INFO - load pretrained model from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth
2023/02/15 20:48:13 - mmengine - INFO - Name of parameter - Initialization information
backbone.class_embedding - torch.Size([768]): 
PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth 
backbone.positional_embedding - torch.Size([197, 768]): 
PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth 
backbone.conv1.weight - torch.Size([768, 3, 1, 16, 16]): 
PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth 
[The initialization listing continues in the same format for every remaining parameter in the excerpt: backbone.ln_pre.weight/bias (torch.Size([768])), backbone.transformer.temporal_cls_token (torch.Size([1, 1, 768])), backbone.transformer.balance (torch.Size([768])), and, for backbone.transformer.resblocks.0 through .11, attn.in_proj_weight/in_proj_bias (torch.Size([2304, 768]) / torch.Size([2304])), attn.out_proj.weight/bias (torch.Size([768, 768]) / torch.Size([768])), ln_1 and ln_2 weight/bias (torch.Size([768])), mlp.c_fc.weight/bias (torch.Size([3072, 768]) / torch.Size([3072])) and mlp.c_proj.weight/bias (torch.Size([768, 3072]) / torch.Size([768])). Every entry is annotated "PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth". The excerpt breaks off mid-listing at backbone.transformer.resblocks.11.ln_1.bias.]
backbone.transformer.resblocks.11.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.11.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.11.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.11.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.11.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.resblocks.11.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dpe.0.weight - torch.Size([768, 1, 3, 3, 3]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dpe.0.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dpe.1.weight - torch.Size([768, 1, 3, 3, 3]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dpe.1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dpe.2.weight - torch.Size([768, 1, 3, 3, 3]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dpe.2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dpe.3.weight - torch.Size([768, 1, 3, 3, 3]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dpe.3.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.attn.in_proj_weight - 
torch.Size([2304, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.ln_3.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.0.ln_3.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from 
https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.ln_3.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.1.ln_3.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from 
https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.ln_3.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.2.ln_3.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.attn.in_proj_weight - torch.Size([2304, 768]): PretrainedInit: load from 
https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.attn.in_proj_bias - torch.Size([2304]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.attn.out_proj.weight - torch.Size([768, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.attn.out_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.ln_1.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.ln_1.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.mlp.c_fc.weight - torch.Size([3072, 768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.mlp.c_fc.bias - torch.Size([3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.mlp.c_proj.weight - torch.Size([768, 3072]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.mlp.c_proj.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.ln_2.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.ln_2.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.ln_3.weight - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.dec.3.ln_3.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.norm.weight - torch.Size([768]): PretrainedInit: load from 
https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth backbone.transformer.norm.bias - torch.Size([768]): PretrainedInit: load from https://download.openmmlab.com/mmaction/v1.0/recognition/uniformerv2/kinetics710/uniformerv2-base-p16-res224_clip-pre_u8_kinetics710-rgb_20221219-77d34f81.pth cls_head.fc_cls.weight - torch.Size([600, 768]): Initialized by user-defined `init_weights` in UniFormerHead cls_head.fc_cls.bias - torch.Size([600]): Initialized by user-defined `init_weights` in UniFormerHead 2023/02/15 20:48:13 - mmengine - INFO - Checkpoints will be saved to /mnt/petrelfs/lilin/Repos/mmact_dev/mmaction2/work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb. 2023/02/15 20:51:10 - mmengine - INFO - Epoch(train) [1][ 100/1498] lr: 1.0661e-06 eta: 3:38:01 time: 1.3788 data_time: 0.0382 memory: 52498 grad_norm: 26.0674 loss: 0.9679 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.9679 2023/02/15 20:53:19 - mmengine - INFO - Epoch(train) [1][ 200/1498] lr: 1.1329e-06 eta: 3:05:35 time: 1.2221 data_time: 0.0309 memory: 52498 grad_norm: 27.1279 loss: 0.9841 top1_acc: 0.8125 top5_acc: 0.9062 loss_cls: 0.9841 2023/02/15 20:55:32 - mmengine - INFO - Epoch(train) [1][ 300/1498] lr: 1.1997e-06 eta: 2:55:15 time: 1.3474 data_time: 0.0339 memory: 52498 grad_norm: 25.4709 loss: 1.0079 top1_acc: 0.8750 top5_acc: 0.9688 loss_cls: 1.0079 2023/02/15 20:57:44 - mmengine - INFO - Epoch(train) [1][ 400/1498] lr: 1.2665e-06 eta: 2:48:30 time: 1.2771 data_time: 0.0319 memory: 52498 grad_norm: 25.3820 loss: 0.9760 top1_acc: 0.6562 top5_acc: 0.9375 loss_cls: 0.9760 2023/02/15 20:59:57 - mmengine - INFO - Epoch(train) [1][ 500/1498] lr: 1.3333e-06 eta: 2:44:01 time: 1.3740 data_time: 0.0788 memory: 52498 grad_norm: 26.3053 loss: 0.9945 top1_acc: 0.8438 top5_acc: 0.8750 loss_cls: 0.9945 2023/02/15 21:02:05 - mmengine - INFO - Epoch(train) [1][ 600/1498] lr: 1.4001e-06 eta: 2:39:15 time: 1.1691 data_time: 0.0316 memory: 52498 grad_norm: 25.5660 loss: 0.9266 top1_acc: 0.8125 top5_acc: 0.9688 loss_cls: 0.9266 2023/02/15 21:04:19 - mmengine - INFO - Epoch(train) [1][ 700/1498] lr: 1.4669e-06 eta: 2:36:03 time: 1.3692 data_time: 0.0281 memory: 52498 grad_norm: 24.4190 loss: 0.8860 top1_acc: 0.6875 top5_acc: 0.9375 loss_cls: 0.8860 2023/02/15 21:06:26 - mmengine - INFO - Epoch(train) [1][ 800/1498] lr: 1.5337e-06 eta: 2:32:14 time: 1.2527 data_time: 0.0385 memory: 52498 grad_norm: 25.5084 loss: 1.0856 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 1.0856 2023/02/15 21:08:32 - mmengine - INFO - Epoch(train) [1][ 900/1498] lr: 1.6005e-06 eta: 2:28:42 time: 1.2286 data_time: 0.0298 memory: 52498 grad_norm: 25.3636 loss: 0.8826 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 0.8826 2023/02/15 21:10:42 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 21:10:42 - mmengine - INFO - Epoch(train) [1][1000/1498] lr: 1.6673e-06 eta: 2:25:53 time: 1.3693 data_time: 0.0335 memory: 52498 grad_norm: 25.1346 loss: 1.0063 top1_acc: 0.8125 top5_acc: 0.8750 loss_cls: 1.0063 2023/02/15 21:12:50 - mmengine - INFO - Epoch(train) [1][1100/1498] lr: 1.7341e-06 eta: 2:22:56 time: 1.2609 data_time: 0.0622 memory: 52498 grad_norm: 25.3035 loss: 0.9710 top1_acc: 0.8438 top5_acc: 0.9062 loss_cls: 0.9710 2023/02/15 21:14:58 - mmengine - INFO - Epoch(train) [1][1200/1498] lr: 1.8009e-06 eta: 2:20:12 time: 1.2671 data_time: 0.0368 memory: 
52498 grad_norm: 26.3232 loss: 0.9816 top1_acc: 0.9062 top5_acc: 1.0000 loss_cls: 0.9816 2023/02/15 21:17:10 - mmengine - INFO - Epoch(train) [1][1300/1498] lr: 1.8677e-06 eta: 2:17:51 time: 1.3066 data_time: 0.0283 memory: 52498 grad_norm: 25.7904 loss: 1.1173 top1_acc: 0.7188 top5_acc: 0.8750 loss_cls: 1.1173 2023/02/15 21:19:22 - mmengine - INFO - Epoch(train) [1][1400/1498] lr: 1.9345e-06 eta: 2:15:27 time: 1.3386 data_time: 0.0709 memory: 52498 grad_norm: 25.0161 loss: 1.0069 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 1.0069 2023/02/15 21:21:23 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 21:21:23 - mmengine - INFO - Epoch(train) [1][1498/1498] lr: 2.0000e-06 eta: 2:12:39 time: 1.1490 data_time: 0.0671 memory: 52498 grad_norm: 25.7841 loss: 0.9545 top1_acc: 0.6667 top5_acc: 0.9048 loss_cls: 0.9545 2023/02/15 21:22:36 - mmengine - INFO - Epoch(val) [1][100/437] eta: 0:04:04 time: 0.1999 data_time: 0.1201 memory: 2893 2023/02/15 21:22:57 - mmengine - INFO - Epoch(val) [1][200/437] eta: 0:01:51 time: 0.2335 data_time: 0.1540 memory: 2893 2023/02/15 21:23:17 - mmengine - INFO - Epoch(val) [1][300/437] eta: 0:00:52 time: 0.1713 data_time: 0.0912 memory: 2893 2023/02/15 21:23:39 - mmengine - INFO - Epoch(val) [1][400/437] eta: 0:00:12 time: 0.2464 data_time: 0.1662 memory: 2893 2023/02/15 21:23:55 - mmengine - INFO - Epoch(val) [1][437/437] acc/top1: 0.8476 acc/top5: 0.9656 acc/mean1: 0.8467 2023/02/15 21:23:58 - mmengine - INFO - The best checkpoint with 0.8476 acc/top1 at 1 epoch is saved to best_acc/top1_epoch_1.pth. 2023/02/15 21:26:13 - mmengine - INFO - Epoch(train) [2][ 100/1498] lr: 1.9993e-06 eta: 2:10:35 time: 1.3137 data_time: 0.0431 memory: 52498 grad_norm: 24.6375 loss: 0.8846 top1_acc: 0.7812 top5_acc: 0.9062 loss_cls: 0.8846 2023/02/15 21:28:20 - mmengine - INFO - Epoch(train) [2][ 200/1498] lr: 1.9973e-06 eta: 2:08:00 time: 1.1885 data_time: 0.0775 memory: 52498 grad_norm: 26.3324 loss: 1.0095 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 1.0095 2023/02/15 21:30:28 - mmengine - INFO - Epoch(train) [2][ 300/1498] lr: 1.9939e-06 eta: 2:05:34 time: 1.3076 data_time: 0.0363 memory: 52498 grad_norm: 24.8118 loss: 0.9877 top1_acc: 0.7500 top5_acc: 0.9062 loss_cls: 0.9877 2023/02/15 21:32:35 - mmengine - INFO - Epoch(train) [2][ 400/1498] lr: 1.9891e-06 eta: 2:03:05 time: 1.2718 data_time: 0.0300 memory: 52498 grad_norm: 25.4828 loss: 0.8202 top1_acc: 0.7188 top5_acc: 0.8750 loss_cls: 0.8202 2023/02/15 21:34:42 - mmengine - INFO - Epoch(train) [2][ 500/1498] lr: 1.9830e-06 eta: 2:00:38 time: 1.2232 data_time: 0.0316 memory: 52498 grad_norm: 25.3653 loss: 0.9327 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 0.9327 2023/02/15 21:34:44 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 21:36:50 - mmengine - INFO - Epoch(train) [2][ 600/1498] lr: 1.9755e-06 eta: 1:58:17 time: 1.3006 data_time: 0.0311 memory: 52498 grad_norm: 25.7091 loss: 0.9204 top1_acc: 0.7812 top5_acc: 0.9375 loss_cls: 0.9204 2023/02/15 21:38:56 - mmengine - INFO - Epoch(train) [2][ 700/1498] lr: 1.9668e-06 eta: 1:55:52 time: 1.2421 data_time: 0.0335 memory: 52498 grad_norm: 25.4202 loss: 0.9464 top1_acc: 0.8125 top5_acc: 0.8750 loss_cls: 0.9464 2023/02/15 21:41:01 - mmengine - INFO - Epoch(train) [2][ 800/1498] lr: 1.9568e-06 eta: 1:53:28 time: 1.2445 data_time: 0.0351 memory: 52498 grad_norm: 25.3887 loss: 0.8094 top1_acc: 0.8438 top5_acc: 0.9375 loss_cls: 0.8094 
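Every Epoch(train) record in this log carries the same fixed set of fields (lr, eta, time, data_time, memory, grad_norm, loss, top1_acc, top5_acc, loss_cls), so the training curves can be scraped directly from the file. The parser below is only a sketch: the log file name train.log and the choice to keep just the epoch, iteration, lr and loss fields are assumptions for illustration, not part of this run.

import re

# Matches records like:
# 2023/02/15 20:51:10 - mmengine - INFO - Epoch(train) [1][ 100/1498] lr: 1.0661e-06 ... loss: 0.9679 ...
TRAIN_RE = re.compile(
    r"Epoch\(train\)\s+\[(?P<epoch>\d+)\]\[\s*(?P<iter>\d+)/(?P<total>\d+)\]"
    r".*?lr:\s+(?P<lr>[\d.e+-]+)"
    r".*?loss:\s+(?P<loss>[\d.]+)"
)

def parse_train_log(path="train.log"):
    """Collect (epoch, iter, lr, loss) tuples from an mmengine training log."""
    records = []
    with open(path) as f:
        for line in f:
            match = TRAIN_RE.search(line)
            if match:
                records.append((
                    int(match.group("epoch")),
                    int(match.group("iter")),
                    float(match.group("lr")),
                    float(match.group("loss")),
                ))
    return records

if __name__ == "__main__":
    for epoch, it, lr, loss in parse_train_log():
        print(f"epoch {epoch} iter {it}: lr={lr:.4e} loss={loss:.4f}")

Applied to this run it would, for instance, show the learning rate warming up from 1.0661e-06 to 2.0000e-06 over epoch 1 and then decaying toward 1.0000e-06 by the end of epoch 5.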
2023/02/15 21:43:12 - mmengine - INFO - Epoch(train) [2][ 900/1498] lr: 1.9455e-06 eta: 1:51:15 time: 1.4288 data_time: 0.0306 memory: 52498 grad_norm: 25.3068 loss: 1.0944 top1_acc: 0.7500 top5_acc: 0.8438 loss_cls: 1.0944 2023/02/15 21:45:17 - mmengine - INFO - Epoch(train) [2][1000/1498] lr: 1.9330e-06 eta: 1:48:52 time: 1.2608 data_time: 0.0332 memory: 52498 grad_norm: 24.9893 loss: 0.9162 top1_acc: 0.9062 top5_acc: 0.9688 loss_cls: 0.9162 2023/02/15 21:47:24 - mmengine - INFO - Epoch(train) [2][1100/1498] lr: 1.9193e-06 eta: 1:46:33 time: 1.2765 data_time: 0.0308 memory: 52498 grad_norm: 24.5301 loss: 0.8993 top1_acc: 0.6250 top5_acc: 0.8438 loss_cls: 0.8993 2023/02/15 21:49:29 - mmengine - INFO - Epoch(train) [2][1200/1498] lr: 1.9044e-06 eta: 1:44:12 time: 1.1866 data_time: 0.0462 memory: 52498 grad_norm: 26.1523 loss: 0.8875 top1_acc: 0.8750 top5_acc: 1.0000 loss_cls: 0.8875 2023/02/15 21:51:37 - mmengine - INFO - Epoch(train) [2][1300/1498] lr: 1.8885e-06 eta: 1:41:58 time: 1.2873 data_time: 0.0343 memory: 52498 grad_norm: 25.2841 loss: 0.8934 top1_acc: 0.7812 top5_acc: 0.9062 loss_cls: 0.8934 2023/02/15 21:53:44 - mmengine - INFO - Epoch(train) [2][1400/1498] lr: 1.8714e-06 eta: 1:39:42 time: 1.2485 data_time: 0.0325 memory: 52498 grad_norm: 25.3868 loss: 1.0016 top1_acc: 0.6562 top5_acc: 0.9062 loss_cls: 1.0016 2023/02/15 21:55:46 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 21:55:46 - mmengine - INFO - Epoch(train) [2][1498/1498] lr: 1.8537e-06 eta: 1:37:27 time: 1.1381 data_time: 0.0222 memory: 52498 grad_norm: 26.4935 loss: 0.9294 top1_acc: 0.7619 top5_acc: 0.9048 loss_cls: 0.9294 2023/02/15 21:56:07 - mmengine - INFO - Epoch(val) [2][100/437] eta: 0:01:11 time: 0.1885 data_time: 0.1071 memory: 2893 2023/02/15 21:56:29 - mmengine - INFO - Epoch(val) [2][200/437] eta: 0:00:50 time: 0.2313 data_time: 0.1507 memory: 2893 2023/02/15 21:56:48 - mmengine - INFO - Epoch(val) [2][300/437] eta: 0:00:28 time: 0.1592 data_time: 0.0791 memory: 2893 2023/02/15 21:57:10 - mmengine - INFO - Epoch(val) [2][400/437] eta: 0:00:07 time: 0.2389 data_time: 0.1592 memory: 2893 2023/02/15 21:57:21 - mmengine - INFO - Epoch(val) [2][437/437] acc/top1: 0.8461 acc/top5: 0.9668 acc/mean1: 0.8453 2023/02/15 21:57:36 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 21:59:41 - mmengine - INFO - Epoch(train) [3][ 100/1498] lr: 1.8347e-06 eta: 1:35:31 time: 1.4014 data_time: 0.2774 memory: 52498 grad_norm: 24.5932 loss: 1.0528 top1_acc: 0.7188 top5_acc: 0.9062 loss_cls: 1.0528 2023/02/15 22:01:49 - mmengine - INFO - Epoch(train) [3][ 200/1498] lr: 1.8148e-06 eta: 1:33:17 time: 1.2876 data_time: 0.0698 memory: 52498 grad_norm: 25.6968 loss: 0.9143 top1_acc: 0.6875 top5_acc: 0.8438 loss_cls: 0.9143 2023/02/15 22:03:55 - mmengine - INFO - Epoch(train) [3][ 300/1498] lr: 1.7940e-06 eta: 1:31:01 time: 1.2648 data_time: 0.0331 memory: 52498 grad_norm: 25.1622 loss: 0.8790 top1_acc: 0.8125 top5_acc: 0.9062 loss_cls: 0.8790 2023/02/15 22:06:05 - mmengine - INFO - Epoch(train) [3][ 400/1498] lr: 1.7724e-06 eta: 1:28:50 time: 1.2126 data_time: 0.0311 memory: 52498 grad_norm: 24.8912 loss: 0.7798 top1_acc: 0.9062 top5_acc: 0.9375 loss_cls: 0.7798 2023/02/15 22:08:14 - mmengine - INFO - Epoch(train) [3][ 500/1498] lr: 1.7501e-06 eta: 1:26:39 time: 1.2900 data_time: 0.0300 memory: 52498 grad_norm: 25.4937 loss: 0.9939 top1_acc: 0.7812 top5_acc: 0.8438 loss_cls: 
0.9939 2023/02/15 22:10:19 - mmengine - INFO - Epoch(train) [3][ 600/1498] lr: 1.7270e-06 eta: 1:24:23 time: 1.2788 data_time: 0.0331 memory: 52498 grad_norm: 25.5371 loss: 0.9500 top1_acc: 0.8125 top5_acc: 0.8750 loss_cls: 0.9500 2023/02/15 22:12:28 - mmengine - INFO - Epoch(train) [3][ 700/1498] lr: 1.7034e-06 eta: 1:22:12 time: 1.2480 data_time: 0.0391 memory: 52498 grad_norm: 24.9075 loss: 0.9630 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.9630 2023/02/15 22:14:33 - mmengine - INFO - Epoch(train) [3][ 800/1498] lr: 1.6792e-06 eta: 1:19:57 time: 1.2144 data_time: 0.0324 memory: 52498 grad_norm: 25.8904 loss: 0.9680 top1_acc: 0.8438 top5_acc: 0.9375 loss_cls: 0.9680 2023/02/15 22:16:43 - mmengine - INFO - Epoch(train) [3][ 900/1498] lr: 1.6545e-06 eta: 1:17:47 time: 1.3824 data_time: 0.0338 memory: 52498 grad_norm: 24.6447 loss: 1.0268 top1_acc: 0.7188 top5_acc: 0.8125 loss_cls: 1.0268 2023/02/15 22:18:49 - mmengine - INFO - Epoch(train) [3][1000/1498] lr: 1.6293e-06 eta: 1:15:34 time: 1.2618 data_time: 0.0327 memory: 52498 grad_norm: 25.4518 loss: 0.9613 top1_acc: 0.8438 top5_acc: 0.8750 loss_cls: 0.9613 2023/02/15 22:18:54 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 22:20:56 - mmengine - INFO - Epoch(train) [3][1100/1498] lr: 1.6038e-06 eta: 1:13:22 time: 1.3728 data_time: 0.0354 memory: 52498 grad_norm: 25.6478 loss: 0.8342 top1_acc: 0.8438 top5_acc: 0.9062 loss_cls: 0.8342 2023/02/15 22:23:01 - mmengine - INFO - Epoch(train) [3][1200/1498] lr: 1.5781e-06 eta: 1:11:08 time: 1.2241 data_time: 0.0317 memory: 52498 grad_norm: 25.8889 loss: 0.9128 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.9128 2023/02/15 22:25:08 - mmengine - INFO - Epoch(train) [3][1300/1498] lr: 1.5521e-06 eta: 1:08:57 time: 1.3054 data_time: 0.0375 memory: 52498 grad_norm: 25.1134 loss: 0.9479 top1_acc: 0.7812 top5_acc: 0.8750 loss_cls: 0.9479 2023/02/15 22:27:13 - mmengine - INFO - Epoch(train) [3][1400/1498] lr: 1.5259e-06 eta: 1:06:44 time: 1.2313 data_time: 0.0311 memory: 52498 grad_norm: 25.8618 loss: 0.8606 top1_acc: 0.8125 top5_acc: 0.9062 loss_cls: 0.8606 2023/02/15 22:29:17 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 22:29:17 - mmengine - INFO - Epoch(train) [3][1498/1498] lr: 1.5003e-06 eta: 1:04:36 time: 1.1650 data_time: 0.0983 memory: 52498 grad_norm: 25.7349 loss: 1.0085 top1_acc: 0.8095 top5_acc: 0.9524 loss_cls: 1.0085 2023/02/15 22:29:17 - mmengine - INFO - Saving checkpoint at 3 epochs 2023/02/15 22:29:43 - mmengine - INFO - Epoch(val) [3][100/437] eta: 0:01:11 time: 0.1680 data_time: 0.0646 memory: 2893 2023/02/15 22:30:04 - mmengine - INFO - Epoch(val) [3][200/437] eta: 0:00:49 time: 0.2140 data_time: 0.1336 memory: 2893 2023/02/15 22:30:24 - mmengine - INFO - Epoch(val) [3][300/437] eta: 0:00:28 time: 0.1609 data_time: 0.0808 memory: 2893 2023/02/15 22:30:46 - mmengine - INFO - Epoch(val) [3][400/437] eta: 0:00:07 time: 0.2259 data_time: 0.1460 memory: 2893 2023/02/15 22:30:54 - mmengine - INFO - Epoch(val) [3][437/437] acc/top1: 0.8474 acc/top5: 0.9663 acc/mean1: 0.8465 2023/02/15 22:33:12 - mmengine - INFO - Epoch(train) [4][ 100/1498] lr: 1.4741e-06 eta: 1:02:31 time: 1.3150 data_time: 0.0361 memory: 52498 grad_norm: 25.2891 loss: 0.8865 top1_acc: 0.6875 top5_acc: 0.8125 loss_cls: 0.8865 2023/02/15 22:35:18 - mmengine - INFO - Epoch(train) [4][ 200/1498] lr: 1.4479e-06 eta: 1:00:20 time: 1.2242 data_time: 0.0311 memory: 52498 
grad_norm: 24.8601 loss: 0.7787 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.7787 2023/02/15 22:37:29 - mmengine - INFO - Epoch(train) [4][ 300/1498] lr: 1.4219e-06 eta: 0:58:11 time: 1.4257 data_time: 0.0312 memory: 52498 grad_norm: 24.4257 loss: 0.8879 top1_acc: 0.6250 top5_acc: 0.9062 loss_cls: 0.8879 2023/02/15 22:39:35 - mmengine - INFO - Epoch(train) [4][ 400/1498] lr: 1.3962e-06 eta: 0:56:00 time: 1.3152 data_time: 0.0370 memory: 52498 grad_norm: 24.4756 loss: 0.9520 top1_acc: 0.7500 top5_acc: 0.9375 loss_cls: 0.9520 2023/02/15 22:41:42 - mmengine - INFO - Epoch(train) [4][ 500/1498] lr: 1.3707e-06 eta: 0:53:49 time: 1.3278 data_time: 0.0333 memory: 52498 grad_norm: 25.3366 loss: 0.9362 top1_acc: 0.6250 top5_acc: 0.8750 loss_cls: 0.9362 2023/02/15 22:41:49 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 22:43:48 - mmengine - INFO - Epoch(train) [4][ 600/1498] lr: 1.3455e-06 eta: 0:51:38 time: 1.1810 data_time: 0.0319 memory: 52498 grad_norm: 25.7143 loss: 0.8693 top1_acc: 0.8438 top5_acc: 0.9688 loss_cls: 0.8693 2023/02/15 22:45:55 - mmengine - INFO - Epoch(train) [4][ 700/1498] lr: 1.3208e-06 eta: 0:49:28 time: 1.2432 data_time: 0.0340 memory: 52498 grad_norm: 25.4116 loss: 0.9284 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.9284 2023/02/15 22:48:00 - mmengine - INFO - Epoch(train) [4][ 800/1498] lr: 1.2966e-06 eta: 0:47:17 time: 1.2328 data_time: 0.0379 memory: 52498 grad_norm: 25.4700 loss: 0.9925 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.9925 2023/02/15 22:50:09 - mmengine - INFO - Epoch(train) [4][ 900/1498] lr: 1.2730e-06 eta: 0:45:07 time: 1.3494 data_time: 0.0451 memory: 52498 grad_norm: 25.5591 loss: 0.8261 top1_acc: 0.8438 top5_acc: 0.9062 loss_cls: 0.8261 2023/02/15 22:52:15 - mmengine - INFO - Epoch(train) [4][1000/1498] lr: 1.2499e-06 eta: 0:42:57 time: 1.2589 data_time: 0.0305 memory: 52498 grad_norm: 24.6891 loss: 0.7929 top1_acc: 0.7812 top5_acc: 0.8125 loss_cls: 0.7929 2023/02/15 22:54:23 - mmengine - INFO - Epoch(train) [4][1100/1498] lr: 1.2276e-06 eta: 0:40:48 time: 1.3154 data_time: 0.0325 memory: 52498 grad_norm: 25.3575 loss: 0.8368 top1_acc: 0.7188 top5_acc: 0.9375 loss_cls: 0.8368 2023/02/15 22:56:29 - mmengine - INFO - Epoch(train) [4][1200/1498] lr: 1.2060e-06 eta: 0:38:37 time: 1.2436 data_time: 0.0337 memory: 52498 grad_norm: 24.9923 loss: 0.8822 top1_acc: 0.8125 top5_acc: 0.9062 loss_cls: 0.8822 2023/02/15 22:58:36 - mmengine - INFO - Epoch(train) [4][1300/1498] lr: 1.1852e-06 eta: 0:36:28 time: 1.2838 data_time: 0.0345 memory: 52498 grad_norm: 25.8287 loss: 0.9438 top1_acc: 0.8438 top5_acc: 0.9375 loss_cls: 0.9438 2023/02/15 23:00:45 - mmengine - INFO - Epoch(train) [4][1400/1498] lr: 1.1653e-06 eta: 0:34:19 time: 1.3346 data_time: 0.0333 memory: 52498 grad_norm: 25.1952 loss: 0.7888 top1_acc: 0.8438 top5_acc: 0.9062 loss_cls: 0.7888 2023/02/15 23:02:46 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 23:02:46 - mmengine - INFO - Epoch(train) [4][1498/1498] lr: 1.1466e-06 eta: 0:32:11 time: 1.1011 data_time: 0.0240 memory: 52498 grad_norm: 25.7186 loss: 0.7925 top1_acc: 0.8095 top5_acc: 0.9048 loss_cls: 0.7925 2023/02/15 23:03:08 - mmengine - INFO - Epoch(val) [4][100/437] eta: 0:01:11 time: 0.1915 data_time: 0.1117 memory: 2893 2023/02/15 23:03:29 - mmengine - INFO - Epoch(val) [4][200/437] eta: 0:00:50 time: 0.2185 data_time: 0.1384 memory: 2893 2023/02/15 23:03:48 - mmengine - INFO - 
Epoch(val) [4][300/437] eta: 0:00:28 time: 0.1645 data_time: 0.0847 memory: 2893 2023/02/15 23:04:10 - mmengine - INFO - Epoch(val) [4][400/437] eta: 0:00:07 time: 0.2321 data_time: 0.1519 memory: 2893 2023/02/15 23:04:21 - mmengine - INFO - Epoch(val) [4][437/437] acc/top1: 0.8485 acc/top5: 0.9666 acc/mean1: 0.8476 2023/02/15 23:04:21 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/lilin/Repos/mmact_dev/mmaction2/work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb/best_acc/top1_epoch_1.pth is removed 2023/02/15 23:04:24 - mmengine - INFO - The best checkpoint with 0.8485 acc/top1 at 4 epoch is saved to best_acc/top1_epoch_4.pth. 2023/02/15 23:04:42 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 23:06:42 - mmengine - INFO - Epoch(train) [5][ 100/1498] lr: 1.1286e-06 eta: 0:30:04 time: 1.3416 data_time: 0.2203 memory: 52498 grad_norm: 24.7119 loss: 0.9050 top1_acc: 0.6562 top5_acc: 0.8438 loss_cls: 0.9050 2023/02/15 23:08:47 - mmengine - INFO - Epoch(train) [5][ 200/1498] lr: 1.1115e-06 eta: 0:27:54 time: 1.3029 data_time: 0.1907 memory: 52498 grad_norm: 25.6444 loss: 0.9195 top1_acc: 0.7500 top5_acc: 0.8438 loss_cls: 0.9195 2023/02/15 23:10:57 - mmengine - INFO - Epoch(train) [5][ 300/1498] lr: 1.0956e-06 eta: 0:25:45 time: 1.3963 data_time: 0.0343 memory: 52498 grad_norm: 24.0374 loss: 0.9098 top1_acc: 0.8438 top5_acc: 0.8750 loss_cls: 0.9098 2023/02/15 23:13:10 - mmengine - INFO - Epoch(train) [5][ 400/1498] lr: 1.0807e-06 eta: 0:23:37 time: 1.2699 data_time: 0.0332 memory: 52498 grad_norm: 24.3221 loss: 0.8909 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.8909 2023/02/15 23:15:18 - mmengine - INFO - Epoch(train) [5][ 500/1498] lr: 1.0670e-06 eta: 0:21:28 time: 1.3323 data_time: 0.0394 memory: 52498 grad_norm: 25.3498 loss: 0.8717 top1_acc: 0.6875 top5_acc: 0.8750 loss_cls: 0.8717 2023/02/15 23:17:26 - mmengine - INFO - Epoch(train) [5][ 600/1498] lr: 1.0545e-06 eta: 0:19:19 time: 1.2901 data_time: 0.0320 memory: 52498 grad_norm: 24.6594 loss: 0.8588 top1_acc: 0.8438 top5_acc: 0.9062 loss_cls: 0.8588 2023/02/15 23:19:32 - mmengine - INFO - Epoch(train) [5][ 700/1498] lr: 1.0432e-06 eta: 0:17:09 time: 1.2425 data_time: 0.0349 memory: 52498 grad_norm: 23.5364 loss: 0.8721 top1_acc: 0.8125 top5_acc: 0.9375 loss_cls: 0.8721 2023/02/15 23:21:37 - mmengine - INFO - Epoch(train) [5][ 800/1498] lr: 1.0332e-06 eta: 0:15:00 time: 1.2374 data_time: 0.0423 memory: 52498 grad_norm: 25.8217 loss: 0.7623 top1_acc: 0.8750 top5_acc: 0.9375 loss_cls: 0.7623 2023/02/15 23:23:44 - mmengine - INFO - Epoch(train) [5][ 900/1498] lr: 1.0245e-06 eta: 0:12:51 time: 1.2403 data_time: 0.0345 memory: 52498 grad_norm: 24.8545 loss: 0.9107 top1_acc: 0.7500 top5_acc: 0.8750 loss_cls: 0.9107 2023/02/15 23:25:51 - mmengine - INFO - Epoch(train) [5][1000/1498] lr: 1.0170e-06 eta: 0:10:41 time: 1.2856 data_time: 0.0574 memory: 52498 grad_norm: 25.4496 loss: 1.0252 top1_acc: 0.7812 top5_acc: 0.8750 loss_cls: 1.0252 2023/02/15 23:26:02 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 23:28:00 - mmengine - INFO - Epoch(train) [5][1100/1498] lr: 1.0109e-06 eta: 0:08:33 time: 1.3227 data_time: 0.1057 memory: 52498 grad_norm: 25.4033 loss: 0.9093 top1_acc: 0.7188 top5_acc: 0.8750 loss_cls: 0.9093 2023/02/15 23:30:06 - mmengine - INFO - Epoch(train) [5][1200/1498] lr: 1.0061e-06 eta: 0:06:24 time: 1.2588 data_time: 0.1409 
memory: 52498 grad_norm: 24.7623 loss: 0.9607 top1_acc: 0.7188 top5_acc: 1.0000 loss_cls: 0.9607 2023/02/15 23:32:12 - mmengine - INFO - Epoch(train) [5][1300/1498] lr: 1.0027e-06 eta: 0:04:15 time: 1.2494 data_time: 0.1326 memory: 52498 grad_norm: 24.9403 loss: 1.0374 top1_acc: 0.7500 top5_acc: 0.9062 loss_cls: 1.0374 2023/02/15 23:34:19 - mmengine - INFO - Epoch(train) [5][1400/1498] lr: 1.0007e-06 eta: 0:02:06 time: 1.3172 data_time: 0.0340 memory: 52498 grad_norm: 25.1936 loss: 1.0981 top1_acc: 0.6562 top5_acc: 0.8750 loss_cls: 1.0981 2023/02/15 23:36:27 - mmengine - INFO - Exp name: uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb_20230215_204752 2023/02/15 23:36:27 - mmengine - INFO - Epoch(train) [5][1498/1498] lr: 1.0000e-06 eta: 0:00:00 time: 1.1234 data_time: 0.0221 memory: 52498 grad_norm: 25.2580 loss: 0.8245 top1_acc: 0.9048 top5_acc: 0.9524 loss_cls: 0.8245 2023/02/15 23:36:28 - mmengine - INFO - Saving checkpoint at 5 epochs 2023/02/15 23:36:54 - mmengine - INFO - Epoch(val) [5][100/437] eta: 0:01:12 time: 0.1737 data_time: 0.0939 memory: 2893 2023/02/15 23:37:15 - mmengine - INFO - Epoch(val) [5][200/437] eta: 0:00:50 time: 0.2224 data_time: 0.1428 memory: 2893 2023/02/15 23:37:35 - mmengine - INFO - Epoch(val) [5][300/437] eta: 0:00:28 time: 0.1596 data_time: 0.0783 memory: 2893 2023/02/15 23:37:56 - mmengine - INFO - Epoch(val) [5][400/437] eta: 0:00:07 time: 0.2268 data_time: 0.1471 memory: 2893 2023/02/15 23:38:05 - mmengine - INFO - Epoch(val) [5][437/437] acc/top1: 0.8487 acc/top5: 0.9668 acc/mean1: 0.8477 2023/02/15 23:38:05 - mmengine - INFO - The previous best checkpoint /mnt/petrelfs/lilin/Repos/mmact_dev/mmaction2/work_dirs/uniformer_train/uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb/best_acc/top1_epoch_4.pth is removed 2023/02/15 23:38:08 - mmengine - INFO - The best checkpoint with 0.8487 acc/top1 at 5 epoch is saved to best_acc/top1_epoch_5.pth.
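The run ends with acc/top1 0.8487 at epoch 5, and only the most recent best checkpoint is kept under best_acc/ in the work dir reported earlier in the log. One way to sanity-check what that file contains is to open it with plain PyTorch; the snippet below is a sketch that assumes the work dir is reachable from the current directory, and the exact set of top-level keys mmengine writes may vary between versions.

import torch

# Best checkpoint path as reported by the log; adjust to the actual work dir.
ckpt_path = (
    "work_dirs/uniformer_train/"
    "uniformerv2-base-p16-res224_clip-kinetics710-pre_u8_kinetics600-rgb/"
    "best_acc/top1_epoch_5.pth"
)

ckpt = torch.load(ckpt_path, map_location="cpu")

# mmengine checkpoints are plain dicts; 'state_dict' holds the model weights,
# and 'meta' usually records epoch/iteration bookkeeping.
print(sorted(ckpt.keys()))

state_dict = ckpt["state_dict"]
# The classification head initialized in the log above: 600 output classes.
print(state_dict["cls_head.fc_cls.weight"].shape)  # expected: torch.Size([600, 768])
print(state_dict["cls_head.fc_cls.bias"].shape)    # expected: torch.Size([600])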