2021-12-10 09:58:36,024 - mmocr - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: Tesla V100-SXM2-32GB
CUDA_HOME: /mnt/lustre/share/cuda-10.2
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GCC: gcc (GCC) 5.4.0
PyTorch: 1.7.1
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2021.3-Product Build 20210617 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  - CuDNN 7.6.5
  - Magma 2.5.2
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON
TorchVision: 0.8.2
OpenCV: 4.5.3
MMCV: 1.3.14
MMCV Compiler: GCC 5.4
MMCV CUDA Compiler: 10.2
MMOCR: 0.3.0+191e497
------------------------------------------------------------
2021-12-10 09:58:37,934 - mmocr - INFO - Distributed training: True
2021-12-10 09:58:39,760 - mmocr - INFO - Config:
checkpoint_config = dict(interval=1)
log_config = dict(interval=1000, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = 'abinet_pretrained-fb7ce2e4.pth'
resume_from = None
workflow = [('train', 1)]
num_chars = 37
max_seq_len = 26
label_convertor = dict(
    type='ABIConvertor',
    dict_type='DICT36',
    with_unknown=False,
    with_padding=False,
    lower=True)
model = dict(
    type='ABINet',
    backbone=dict(
        type='ResTransformer',
        n_layers=3,
        n_head=8,
        d_model=512,
        d_inner=2048,
        dropout=0.1,
        max_len=256),
    encoder=dict(
        type='ABIVisionEncoder',
        in_channels=512,
        num_channels=64,
        attn_height=8,
        attn_width=32,
        attn_mode='nearest',
        use_result='feature',
        num_chars=37,
        max_seq_len=26,
        init_cfg=dict(type='Xavier', layer='Conv2d')),
    decoder=dict(
        type='ABILanguageDecoder',
        d_model=512,
        n_head=8,
        d_inner=2048,
        n_layers=4,
        dropout=0.1,
        detach_tokens=True,
        use_self_attn=False,
        pad_idx=36,
        num_chars=37,
        max_seq_len=26,
        init_cfg=None),
    fuser=dict(
        type='BaseAlignment',
        d_model=512,
        num_chars=37,
        init_cfg=None,
        max_seq_len=26),
    loss=dict(
        type='ABILoss',
        enc_weight=1.0,
        dec_weight=1.0,
        fusion_weight=1.0),
    label_convertor=dict(
        type='ABIConvertor',
        dict_type='DICT36',
        with_unknown=False,
        with_padding=False,
        lower=True),
    max_seq_len=26,
    iter_size=3)
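The iter_size=3 setting above drives ABINet's iterative refinement: the language decoder re-reads the current prediction and the alignment fuser merges it with the vision branch, three times per image. A minimal sketch of that control flow, with placeholder callables standing in for ABIVisionEncoder, ABILanguageDecoder and BaseAlignment (not MMOCR's actual code):

    # Sketch of the refinement loop implied by iter_size=3; vision, language
    # and fuser are placeholder callables, not the real MMOCR modules.
    def abinet_decode(vision, language, fuser, feature, iter_size=3):
        vis_logits = vision(feature)                 # single vision pass
        logits = vis_logits
        for _ in range(iter_size):                   # three refinement rounds
            lang_logits = language(logits)           # cf. detach_tokens=True
            logits = fuser(vis_logits, lang_logits)  # BaseAlignment-style fusion
        return logits

    # Toy usage with trivial placeholders:
    print(abinet_decode(lambda f: f, lambda l: l, lambda v, l: 0.5 * (v + l), 1.0))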
optimizer = dict(type='Adam', lr=0.0002)
optimizer_config = dict(grad_clip=dict(max_norm=20))
lr_config = dict(
    policy='step',
    step=[10],
    warmup='linear',
    warmup_iters=1,
    warmup_ratio=0.001,
    warmup_by_epoch=True)
total_epochs = 20
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
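The warmup arithmetic in lr_config explains the lr column in the training entries further down: with warmup_by_epoch=True and warmup_iters=1, the linear warmup spans the first full epoch (10520 iterations in this run), after which lr stays at 2e-4 until the step decay at epoch 10. A sketch of that curve, assuming mmcv's linear-warmup formula:

    # LR curve implied by lr_config (assumes mmcv's linear warmup rule and
    # the 10520 iterations per epoch reported in the log below).
    BASE_LR, WARMUP_RATIO, GAMMA = 2e-4, 1e-3, 0.1
    ITERS_PER_EPOCH, STEP_EPOCHS = 10520, [10]

    def lr_at(epoch, it):
        """epoch is 1-based; it is the iteration within that epoch."""
        if epoch == 1:  # linear warmup over the first epoch
            k = it / ITERS_PER_EPOCH
            return BASE_LR * (1 - (1 - k) * (1 - WARMUP_RATIO))
        lr = BASE_LR
        for s in STEP_EPOCHS:  # x0.1 after each step epoch
            if epoch > s:
                lr *= GAMMA
        return lr

    print(f'{lr_at(1, 1000):.3e}')  # ~1.92e-05, close to the logged 1.917e-05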
train_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=dict(backend='petrel')),
    dict(
        type='ResizeOCR',
        height=32,
        min_width=128,
        max_width=128,
        keep_aspect_ratio=False,
        width_downsample_ratio=0.25),
    dict(
        type='RandomWrapper',
        p=0.5,
        transforms=[
            dict(
                type='OneOfWrapper',
                transforms=[
                    dict(type='RandomRotateTextDet', max_angle=15),
                    dict(
                        type='TorchVisionWrapper',
                        op='RandomAffine',
                        degrees=15,
                        translate=(0.3, 0.3),
                        scale=(0.5, 2.0),
                        shear=(-45, 45)),
                    dict(
                        type='TorchVisionWrapper',
                        op='RandomPerspective',
                        distortion_scale=0.5,
                        p=1)
                ])
        ]),
    dict(
        type='RandomWrapper',
        p=0.25,
        transforms=[
            dict(type='PyramidRescale'),
            dict(
                type='Albu',
                transforms=[
                    dict(type='GaussNoise', var_limit=(20, 20), p=1),
                    dict(type='MotionBlur', blur_limit=12, p=1)
                ])
        ]),
    dict(
        type='RandomWrapper',
        p=0.25,
        transforms=[
            dict(
                type='TorchVisionWrapper',
                op='ColorJitter',
                brightness=0.5,
                saturation=0.5,
                contrast=0.5,
                hue=0.1)
        ]),
    dict(type='ToTensorOCR'),
    dict(
        type='NormalizeOCR',
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]),
    dict(
        type='Collect',
        keys=['img'],
        meta_keys=[
            'filename', 'ori_shape', 'img_shape', 'text', 'valid_ratio',
            'resize_shape'
        ])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiRotateAugOCR',
        rotate_degrees=[0, 90, 270],
        transforms=[
            dict(
                type='ResizeOCR',
                height=32,
                min_width=128,
                max_width=128,
                keep_aspect_ratio=False,
                width_downsample_ratio=0.25),
            dict(type='ToTensorOCR'),
            dict(
                type='NormalizeOCR',
                mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
            dict(
                type='Collect',
                keys=['img'],
                meta_keys=[
                    'filename', 'ori_shape', 'img_shape', 'valid_ratio',
                    'resize_shape'
                ])
        ])
]
dataset_type = 'OCRDataset'
train_prefix = 's3://openmmlab/datasets/ocr/recog/'
train_ann_file1 = 'data/SynthText/labels/alphanumeric.lmdb'
train_img_prefix1 = 's3://openmmlab/datasets/ocr/recog/SynthText/synthtext/SynthText_patch_horizontal/'
train_img_prefix2 = 's3://openmmlab/datasets/ocr/recog/mnt/ramdisk/max/90kDICT32px/'
train_ann_file2 = 'data/mnt/ramdisk/max/90kDICT32px/full_labels.lmdb'
train1 = dict(
    type='OCRDataset',
    img_prefix='s3://openmmlab/datasets/ocr/recog/SynthText/synthtext/SynthText_patch_horizontal/',
    ann_file='data/SynthText/labels/alphanumeric.lmdb',
    loader=dict(
        type='LmdbLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=False)
train2 = dict(
    type='OCRDataset',
    img_prefix='s3://openmmlab/datasets/ocr/recog/mnt/ramdisk/max/90kDICT32px/',
    ann_file='data/mnt/ramdisk/max/90kDICT32px/full_labels.lmdb',
    loader=dict(
        type='LmdbLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=False)
test_prefix = 'data/mixture/testset/'
test_img_prefix1 = 'data/mixture/testset/IIIT5K/'
test_img_prefix2 = 'data/mixture/testset/svt/'
test_img_prefix3 = 'data/mixture/testset/icdar_2013/Challenge2_Test_Task3_Images/'
test_img_prefix4 = 'data/mixture/testset/icdar_2015/ch4_test_word_images_gt/'
test_img_prefix5 = 'data/mixture/testset/svtp/'
test_img_prefix6 = 'data/mixture/testset/ct80/'
test_ann_file1 = 'data/mixture/testset/IIIT5K/label.txt'
test_ann_file2 = 'data/mixture/testset/svt/test_list.txt'
test_ann_file3 = 'data/mixture/testset/icdar_2013/1015_test_label.txt'
test_ann_file4 = 'data/mixture/testset/icdar_2015/test_label.txt'
test_ann_file5 = 'data/mixture/testset/svtp/imagelist.txt'
test_ann_file6 = 'data/mixture/testset/ct80/imagelist.txt'
test1 = dict(
    type='OCRDataset',
    img_prefix='data/mixture/testset/IIIT5K/',
    ann_file='data/mixture/testset/IIIT5K/label.txt',
    loader=dict(
        type='HardDiskLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=True)
test2 = dict(
    type='OCRDataset',
    img_prefix='data/mixture/testset/svt/',
    ann_file='data/mixture/testset/svt/test_list.txt',
    loader=dict(
        type='HardDiskLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=True)
test3 = dict(
    type='OCRDataset',
    img_prefix='data/mixture/testset/icdar_2013/Challenge2_Test_Task3_Images/',
    ann_file='data/mixture/testset/icdar_2013/1015_test_label.txt',
    loader=dict(
        type='HardDiskLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=True)
test4 = dict(
    type='OCRDataset',
    img_prefix='data/mixture/testset/icdar_2015/ch4_test_word_images_gt/',
    ann_file='data/mixture/testset/icdar_2015/test_label.txt',
    loader=dict(
        type='HardDiskLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=True)
test5 = dict(
    type='OCRDataset',
    img_prefix='data/mixture/testset/svtp/',
    ann_file='data/mixture/testset/svtp/imagelist.txt',
    loader=dict(
        type='HardDiskLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=True)
test6 = dict(
    type='OCRDataset',
    img_prefix='data/mixture/testset/ct80/',
    ann_file='data/mixture/testset/ct80/imagelist.txt',
    loader=dict(
        type='HardDiskLoader',
        repeat=1,
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=True)
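All loaders above share one LineStrParser configuration: each annotation line is split on a single space and the first two fields become 'filename' and 'text'. A minimal sketch of that behaviour (parse_line is an illustrative helper, not the library API):

    # What the LineStrParser settings above amount to for one annotation line.
    def parse_line(line, keys=('filename', 'text'), keys_idx=(0, 1), separator=' '):
        parts = line.strip().split(separator)
        return {key: parts[idx] for key, idx in zip(keys, keys_idx)}

    print(parse_line('word_1.png hello'))  # {'filename': 'word_1.png', 'text': 'hello'}

Note that with separator=' ' and keys_idx=[0, 1], a label containing a space would be silently truncated to its first token.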
data = dict(
    samples_per_gpu=192,
    workers_per_gpu=8,
    val_dataloader=dict(samples_per_gpu=1),
    test_dataloader=dict(samples_per_gpu=1),
    train=dict(
        type='UniformConcatDataset',
        datasets=[
            dict(
                type='OCRDataset',
                img_prefix='s3://openmmlab/datasets/ocr/recog/SynthText/synthtext/SynthText_patch_horizontal/',
                ann_file='data/SynthText/labels/alphanumeric.lmdb',
                loader=dict(
                    type='LmdbLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=False),
            dict(
                type='OCRDataset',
                img_prefix='s3://openmmlab/datasets/ocr/recog/mnt/ramdisk/max/90kDICT32px/',
                ann_file='data/mnt/ramdisk/max/90kDICT32px/full_labels.lmdb',
                loader=dict(
                    type='LmdbLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=False)
        ],
        pipeline=[
            dict(
                type='LoadImageFromFile',
                file_client_args=dict(backend='petrel')),
            dict(
                type='ResizeOCR',
                height=32,
                min_width=128,
                max_width=128,
                keep_aspect_ratio=False,
                width_downsample_ratio=0.25),
            dict(
                type='RandomWrapper',
                p=0.5,
                transforms=[
                    dict(
                        type='OneOfWrapper',
                        transforms=[
                            dict(type='RandomRotateTextDet', max_angle=15),
                            dict(
                                type='TorchVisionWrapper',
                                op='RandomAffine',
                                degrees=15,
                                translate=(0.3, 0.3),
                                scale=(0.5, 2.0),
                                shear=(-45, 45)),
                            dict(
                                type='TorchVisionWrapper',
                                op='RandomPerspective',
                                distortion_scale=0.5,
                                p=1)
                        ])
                ]),
            dict(
                type='RandomWrapper',
                p=0.25,
                transforms=[
                    dict(type='PyramidRescale'),
                    dict(
                        type='Albu',
                        transforms=[
                            dict(type='GaussNoise', var_limit=(20, 20), p=1),
                            dict(type='MotionBlur', blur_limit=12, p=1)
                        ])
                ]),
            dict(
                type='RandomWrapper',
                p=0.25,
                transforms=[
                    dict(
                        type='TorchVisionWrapper',
                        op='ColorJitter',
                        brightness=0.5,
                        saturation=0.5,
                        contrast=0.5,
                        hue=0.1)
                ]),
            dict(type='ToTensorOCR'),
            dict(
                type='NormalizeOCR',
                mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
            dict(
                type='Collect',
                keys=['img'],
                meta_keys=[
                    'filename', 'ori_shape', 'img_shape', 'text',
                    'valid_ratio', 'resize_shape'
                ])
        ]),
    val=dict(
        type='UniformConcatDataset',
        datasets=[
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/IIIT5K/',
                ann_file='data/mixture/testset/IIIT5K/label.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/svt/',
                ann_file='data/mixture/testset/svt/test_list.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/icdar_2013/Challenge2_Test_Task3_Images/',
                ann_file='data/mixture/testset/icdar_2013/1015_test_label.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/icdar_2015/ch4_test_word_images_gt/',
                ann_file='data/mixture/testset/icdar_2015/test_label.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/svtp/',
                ann_file='data/mixture/testset/svtp/imagelist.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/ct80/',
                ann_file='data/mixture/testset/ct80/imagelist.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True)
        ],
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiRotateAugOCR',
                rotate_degrees=[0, 90, 270],
                transforms=[
                    dict(
                        type='ResizeOCR',
                        height=32,
                        min_width=128,
                        max_width=128,
                        keep_aspect_ratio=False,
                        width_downsample_ratio=0.25),
                    dict(type='ToTensorOCR'),
                    dict(
                        type='NormalizeOCR',
                        mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225]),
                    dict(
                        type='Collect',
                        keys=['img'],
                        meta_keys=[
                            'filename', 'ori_shape', 'img_shape',
                            'valid_ratio', 'resize_shape'
                        ])
                ])
        ]),
    test=dict(
        type='UniformConcatDataset',
        datasets=[
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/IIIT5K/',
                ann_file='data/mixture/testset/IIIT5K/label.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/svt/',
                ann_file='data/mixture/testset/svt/test_list.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/icdar_2013/Challenge2_Test_Task3_Images/',
                ann_file='data/mixture/testset/icdar_2013/1015_test_label.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/icdar_2015/ch4_test_word_images_gt/',
                ann_file='data/mixture/testset/icdar_2015/test_label.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/svtp/',
                ann_file='data/mixture/testset/svtp/imagelist.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True),
            dict(
                type='OCRDataset',
                img_prefix='data/mixture/testset/ct80/',
                ann_file='data/mixture/testset/ct80/imagelist.txt',
                loader=dict(
                    type='HardDiskLoader',
                    repeat=1,
                    parser=dict(
                        type='LineStrParser',
                        keys=['filename', 'text'],
                        keys_idx=[0, 1],
                        separator=' ')),
                pipeline=None,
                test_mode=True)
        ],
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiRotateAugOCR',
                rotate_degrees=[0, 90, 270],
                transforms=[
                    dict(
                        type='ResizeOCR',
                        height=32,
                        min_width=128,
                        max_width=128,
                        keep_aspect_ratio=False,
                        width_downsample_ratio=0.25),
                    dict(type='ToTensorOCR'),
                    dict(
                        type='NormalizeOCR',
                        mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225]),
                    dict(
                        type='Collect',
                        keys=['img'],
                        meta_keys=[
                            'filename', 'ori_shape', 'img_shape',
                            'valid_ratio', 'resize_shape'
                        ])
                ])
        ]))
evaluation = dict(interval=1, metric='acc')
work_dir = 'abinet_train_reimplement_augv3'
gpu_ids = range(0, 8)
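Throughout the config, the ABIConvertor uses dict_type='DICT36' with lower=True, i.e. a 36-character lower-case alphanumeric alphabet; num_chars=37 and pad_idx=36 indicate one extra index beyond the visible characters. A rough sketch of the implied string-to-index mapping (str2idx is illustrative, not the library API):

    # DICT36 in MMOCR is digits plus lower-case letters; index 36 (pad_idx)
    # is the extra, non-visible symbol. str2idx is an illustrative helper.
    DICT36 = list('0123456789abcdefghijklmnopqrstuvwxyz')
    CHAR2IDX = {c: i for i, c in enumerate(DICT36)}
    PAD_IDX = len(DICT36)  # 36, matching pad_idx=36 in the decoder config

    def str2idx(text, max_seq_len=26):
        text = text.lower()[:max_seq_len]  # lower=True; sequences capped at 26
        return [CHAR2IDX[c] for c in text if c in CHAR2IDX]

    print(str2idx('Hello42'))  # [17, 14, 21, 21, 24, 4, 2]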
2021-12-10 09:58:40,360 - mmocr - INFO - initialize ResNetABI with init_cfg [{'type': 'Xavier', 'layer': 'Conv2d'}, {'type': 'Constant', 'val': 1, 'layer': 'BatchNorm2d'}]
2021-12-10 09:58:41,209 - mmocr - INFO - initialize ABIVisionEncoder with init_cfg {'type': 'Xavier', 'layer': 'Conv2d'}
Name of parameter - Initialization information
backbone.resnet.conv1.weight - torch.Size([32, 3, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.conv1.bias - torch.Size([32]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.bn1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.bn1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.0.conv1.weight - torch.Size([32, 32, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.0.0.bn1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.0.bn1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.0.conv2.weight - torch.Size([32, 32, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.0.0.bn2.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.0.bn2.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.0.downsample.0.weight - torch.Size([32, 32, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.0.0.downsample.1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.0.downsample.1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.1.conv1.weight - torch.Size([32, 32, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.0.1.bn1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.1.bn1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.1.conv2.weight - torch.Size([32, 32, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.0.1.bn2.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.1.bn2.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.2.conv1.weight - torch.Size([32, 32, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.0.2.bn1.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.2.bn1.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.2.conv2.weight - torch.Size([32, 32, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.0.2.bn2.weight - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.0.2.bn2.bias - torch.Size([32]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.0.conv1.weight - torch.Size([64, 32, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.0.bn1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.0.bn1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.0.conv2.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.0.bn2.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.0.bn2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.0.downsample.0.weight - torch.Size([64, 32, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.0.downsample.1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.0.downsample.1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.1.conv1.weight - torch.Size([64, 64, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.1.bn1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.1.bn1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.1.conv2.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.1.bn2.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.1.bn2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.2.conv1.weight - torch.Size([64, 64, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.2.bn1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.2.bn1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.2.conv2.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.2.bn2.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.2.bn2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.3.conv1.weight - torch.Size([64, 64, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.3.bn1.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.3.bn1.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.3.conv2.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.1.3.bn2.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.1.3.bn2.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.0.conv1.weight - torch.Size([128, 64, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.0.bn1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.0.bn1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.0.conv2.weight - torch.Size([128, 128, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.0.bn2.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.0.bn2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.0.downsample.0.weight - torch.Size([128, 64, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.0.downsample.1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.0.downsample.1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.1.conv1.weight - torch.Size([128, 128, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.1.bn1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.1.bn1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.1.conv2.weight - torch.Size([128, 128, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.1.bn2.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.1.bn2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.2.conv1.weight - torch.Size([128, 128, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.2.bn1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.2.bn1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.2.conv2.weight - torch.Size([128, 128, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.2.bn2.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.2.bn2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.3.conv1.weight - torch.Size([128, 128, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.3.bn1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.3.bn1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.3.conv2.weight - torch.Size([128, 128, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.3.bn2.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.3.bn2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.4.conv1.weight - torch.Size([128, 128, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.4.bn1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.4.bn1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.4.conv2.weight - torch.Size([128, 128, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.4.bn2.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.4.bn2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.5.conv1.weight - torch.Size([128, 128, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.5.bn1.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.5.bn1.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.5.conv2.weight - torch.Size([128, 128, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.2.5.bn2.weight - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.2.5.bn2.bias - torch.Size([128]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.0.conv1.weight - torch.Size([256, 128, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.0.bn1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.0.bn1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.0.conv2.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.0.bn2.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.0.bn2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.0.downsample.0.weight - torch.Size([256, 128, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.0.downsample.1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.0.downsample.1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.1.conv1.weight - torch.Size([256, 256, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.1.bn1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.1.bn1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.1.conv2.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.1.bn2.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.1.bn2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.2.conv1.weight - torch.Size([256, 256, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.2.bn1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.2.bn1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.2.conv2.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.2.bn2.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.2.bn2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.3.conv1.weight - torch.Size([256, 256, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.3.bn1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.3.bn1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.3.conv2.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.3.bn2.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.3.bn2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.4.conv1.weight - torch.Size([256, 256, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.4.bn1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.4.bn1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.4.conv2.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.4.bn2.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.4.bn2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.5.conv1.weight - torch.Size([256, 256, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.5.bn1.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.5.bn1.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.5.conv2.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.3.5.bn2.weight - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.3.5.bn2.bias - torch.Size([256]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.0.conv1.weight - torch.Size([512, 256, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.4.0.bn1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.0.bn1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.0.conv2.weight - torch.Size([512, 512, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.4.0.bn2.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.0.bn2.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.0.downsample.0.weight - torch.Size([512, 256, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.4.0.downsample.1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.0.downsample.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.1.conv1.weight - torch.Size([512, 512, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.4.1.bn1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.1.bn1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.1.conv2.weight - torch.Size([512, 512, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.4.1.bn2.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.1.bn2.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.2.conv1.weight - torch.Size([512, 512, 1, 1]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.4.2.bn1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.2.bn1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.2.conv2.weight - torch.Size([512, 512, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
backbone.resnet.layers.4.2.bn2.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.resnet.layers.4.2.bn2.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.attentions.0.attn.in_proj_weight - torch.Size([1536, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.attentions.0.attn.in_proj_bias - torch.Size([1536]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.attentions.0.attn.out_proj.weight - torch.Size([512, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.attentions.0.attn.out_proj.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.ffns.0.layers.0.0.weight - torch.Size([2048, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.ffns.0.layers.0.0.bias - torch.Size([2048]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.ffns.0.layers.1.weight - torch.Size([512, 2048]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.ffns.0.layers.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.norms.0.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.norms.0.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.norms.1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.0.norms.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.attentions.0.attn.in_proj_weight - torch.Size([1536, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.attentions.0.attn.in_proj_bias - torch.Size([1536]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.attentions.0.attn.out_proj.weight - torch.Size([512, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.attentions.0.attn.out_proj.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.ffns.0.layers.0.0.weight - torch.Size([2048, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.ffns.0.layers.0.0.bias - torch.Size([2048]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.ffns.0.layers.1.weight - torch.Size([512, 2048]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.ffns.0.layers.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.norms.0.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.norms.0.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.norms.1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.1.norms.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.attentions.0.attn.in_proj_weight - torch.Size([1536, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.attentions.0.attn.in_proj_bias - torch.Size([1536]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.attentions.0.attn.out_proj.weight - torch.Size([512, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.attentions.0.attn.out_proj.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.ffns.0.layers.0.0.weight - torch.Size([2048, 512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.ffns.0.layers.0.0.bias - torch.Size([2048]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.ffns.0.layers.1.weight - torch.Size([512, 2048]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.ffns.0.layers.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.norms.0.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.norms.0.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.norms.1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
backbone.transformer.2.norms.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_encoder.0.conv.weight - torch.Size([64, 512, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
encoder.attention.k_encoder.0.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_encoder.0.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_encoder.1.conv.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
encoder.attention.k_encoder.1.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_encoder.1.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_encoder.2.conv.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
encoder.attention.k_encoder.2.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_encoder.2.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_encoder.3.conv.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
encoder.attention.k_encoder.3.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_encoder.3.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_decoder.0.1.conv.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
encoder.attention.k_decoder.0.1.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_decoder.0.1.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_decoder.1.1.conv.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
encoder.attention.k_decoder.1.1.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_decoder.1.1.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_decoder.2.1.conv.weight - torch.Size([64, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
encoder.attention.k_decoder.2.1.bn.weight - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_decoder.2.1.bn.bias - torch.Size([64]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_decoder.3.1.conv.weight - torch.Size([512, 64, 3, 3]): XavierInit: gain=1, distribution=normal, bias=0
encoder.attention.k_decoder.3.1.bn.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.k_decoder.3.1.bn.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.project.weight - torch.Size([512, 512]): The value is the same before and after calling `init_weights` of ABINet
encoder.attention.project.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
encoder.cls.weight - torch.Size([37, 512]): The value is the same before and after calling `init_weights` of ABINet
encoder.cls.bias - torch.Size([37]): The value is the same before and after calling `init_weights` of ABINet
decoder.proj.weight - torch.Size([512, 37]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.attentions.0.attn.in_proj_weight - torch.Size([1536, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.attentions.0.attn.in_proj_bias - torch.Size([1536]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.attentions.0.attn.out_proj.weight - torch.Size([512, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.attentions.0.attn.out_proj.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.ffns.0.layers.0.0.weight - torch.Size([2048, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.ffns.0.layers.0.0.bias - torch.Size([2048]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.ffns.0.layers.1.weight - torch.Size([512, 2048]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.ffns.0.layers.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.norms.0.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.norms.0.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.norms.1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.0.norms.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.attentions.0.attn.in_proj_weight - torch.Size([1536, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.attentions.0.attn.in_proj_bias - torch.Size([1536]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.attentions.0.attn.out_proj.weight - torch.Size([512, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.attentions.0.attn.out_proj.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.ffns.0.layers.0.0.weight - torch.Size([2048, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.ffns.0.layers.0.0.bias - torch.Size([2048]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.ffns.0.layers.1.weight - torch.Size([512, 2048]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.ffns.0.layers.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.norms.0.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.norms.0.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.norms.1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.1.norms.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.attentions.0.attn.in_proj_weight - torch.Size([1536, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.attentions.0.attn.in_proj_bias - torch.Size([1536]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.attentions.0.attn.out_proj.weight - torch.Size([512, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.attentions.0.attn.out_proj.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.ffns.0.layers.0.0.weight - torch.Size([2048, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.ffns.0.layers.0.0.bias - torch.Size([2048]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.ffns.0.layers.1.weight - torch.Size([512, 2048]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.ffns.0.layers.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.norms.0.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.norms.0.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.norms.1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.2.norms.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.attentions.0.attn.in_proj_weight - torch.Size([1536, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.attentions.0.attn.in_proj_bias - torch.Size([1536]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.attentions.0.attn.out_proj.weight - torch.Size([512, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.attentions.0.attn.out_proj.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.ffns.0.layers.0.0.weight - torch.Size([2048, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.ffns.0.layers.0.0.bias - torch.Size([2048]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.ffns.0.layers.1.weight - torch.Size([512, 2048]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.ffns.0.layers.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.norms.0.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.norms.0.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.norms.1.weight - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.decoder_layers.3.norms.1.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
decoder.cls.weight - torch.Size([37, 512]): The value is the same before and after calling `init_weights` of ABINet
decoder.cls.bias - torch.Size([37]): The value is the same before and after calling `init_weights` of ABINet
fuser.w_att.weight - torch.Size([512, 1024]): The value is the same before and after calling `init_weights` of ABINet
fuser.w_att.bias - torch.Size([512]): The value is the same before and after calling `init_weights` of ABINet
fuser.cls.weight - torch.Size([37, 512]): The value is the same before and after calling `init_weights` of ABINet
fuser.cls.bias - torch.Size([37]): The value is the same before and after calling `init_weights` of ABINet
2021-12-10 09:58:43,448 - mmocr - INFO - Use load_from_local loader
2021-12-10 09:58:49,631 - mmocr - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH   ) StepLrUpdaterHook
(NORMAL      ) CheckpointHook
(NORMAL      ) DistEvalHook
(VERY_LOW    ) TextLoggerHook
 --------------------
before_train_epoch:
(VERY_HIGH   ) StepLrUpdaterHook
(NORMAL      ) DistSamplerSeedHook
(NORMAL      ) DistEvalHook
(LOW         ) IterTimerHook
(VERY_LOW    ) TextLoggerHook
 --------------------
before_train_iter:
(VERY_HIGH   ) StepLrUpdaterHook
(NORMAL      ) DistEvalHook
(LOW         ) IterTimerHook
 --------------------
after_train_iter:
(ABOVE_NORMAL) OptimizerHook
(NORMAL      ) CheckpointHook
(NORMAL      ) DistEvalHook
(LOW         ) IterTimerHook
(VERY_LOW    ) TextLoggerHook
 --------------------
after_train_epoch:
(NORMAL      ) CheckpointHook
(NORMAL      ) DistEvalHook
(VERY_LOW    ) TextLoggerHook
 --------------------
before_val_epoch:
(NORMAL      ) DistSamplerSeedHook
(LOW         ) IterTimerHook
(VERY_LOW    ) TextLoggerHook
 --------------------
before_val_iter:
(LOW         ) IterTimerHook
 --------------------
after_val_iter:
(LOW         ) IterTimerHook
 --------------------
after_val_epoch:
(VERY_LOW    ) TextLoggerHook
 --------------------
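One sanity check before the training entries: the log below reports 10520 iterations per epoch. With samples_per_gpu=192 on the 8 GPUs listed in the environment info, that matches the rough combined size of the two LMDB training sets (the image count below is an assumed, approximate figure):

    # Rough consistency check; approx_images is an assumption, only
    # samples_per_gpu and the GPU count come from the log above.
    samples_per_gpu, num_gpus = 192, 8
    effective_batch = samples_per_gpu * num_gpus   # 1536 images per iteration
    approx_images = 16_160_000                     # ~SynthText + Synth90k, assumed
    print(effective_batch, approx_images // effective_batch)  # 1536 10520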
2021-12-10 09:58:49,631 - mmocr - INFO - workflow: [('train', 1)], max: 20 epochs
2021-12-10 10:30:23,084 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 10:30:23,191 - mmocr - INFO - Epoch [1][1000/10520] lr: 1.917e-05, eta: 4 days, 14:07:59, time: 1.893, data_time: 0.650, memory: 18328, loss_visual: 0.3797, loss_lang: 4.0659, loss_fusion: 1.6251, loss: 6.0708, grad_norm: 6.4533
2021-12-10 10:51:07,862 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 10:51:07,961 - mmocr - INFO - Epoch [1][2000/10520] lr: 3.817e-05, eta: 3 days, 18:49:59, time: 1.245, data_time: 0.007, memory: 18328, loss_visual: 0.3547, loss_lang: 1.8447, loss_fusion: 0.4353, loss: 2.6347, grad_norm: 2.9573
2021-12-10 11:11:53,173 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 11:11:53,189 - mmocr - INFO - Epoch [1][3000/10520] lr: 5.716e-05, eta: 3 days, 12:10:44, time: 1.245, data_time: 0.006, memory: 18328, loss_visual: 0.3422, loss_lang: 1.5199, loss_fusion: 0.3788, loss: 2.2409, grad_norm: 2.8979
2021-12-10 11:32:49,463 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 11:32:49,477 - mmocr - INFO - Epoch [1][4000/10520] lr: 7.615e-05, eta: 3 days, 8:50:09, time: 1.256, data_time: 0.006, memory: 18328, loss_visual: 0.3402, loss_lang: 1.3525, loss_fusion: 0.3597, loss: 2.0524, grad_norm: 3.0340
2021-12-10 11:53:32,765 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 11:53:32,977 - mmocr - INFO - Epoch [1][5000/10520] lr: 9.514e-05, eta: 3 days, 6:32:36, time: 1.243, data_time: 0.006, memory: 18328, loss_visual: 0.3423, loss_lang: 1.2328, loss_fusion: 0.3527, loss: 1.9278, grad_norm: 2.9404
2021-12-10 12:14:17,891 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 12:14:17,905 - mmocr - INFO - Epoch [1][6000/10520] lr: 1.141e-04, eta: 3 days, 4:54:59, time: 1.245, data_time: 0.006, memory: 18328, loss_visual: 0.3456, loss_lang: 1.1383, loss_fusion: 0.3505, loss: 1.8343, grad_norm: 2.9622
2021-12-10 12:35:04,712 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 12:35:04,747 - mmocr - INFO - Epoch [1][7000/10520] lr: 1.331e-04, eta: 3 days, 3:40:10, time: 1.247, data_time: 0.006, memory: 18328, loss_visual: 0.3506, loss_lang: 1.0619, loss_fusion: 0.3519, loss: 1.7643, grad_norm: 2.8706
2021-12-10 12:55:52,541 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 12:55:52,556 - mmocr - INFO - Epoch [1][8000/10520] lr: 1.521e-04, eta: 3 days, 2:39:17, time: 1.248, data_time: 0.010, memory: 18328, loss_visual: 0.3555, loss_lang: 0.9981, loss_fusion: 0.3540, loss: 1.7076, grad_norm: 2.7988
2021-12-10 13:16:38,335 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 13:16:38,346 - mmocr - INFO - Epoch [1][9000/10520] lr: 1.711e-04, eta: 3 days, 1:46:32, time: 1.246, data_time: 0.006, memory: 18328, loss_visual: 0.3602, loss_lang: 0.9445, loss_fusion: 0.3562, loss: 1.6609, grad_norm: 2.7604
2021-12-10 13:37:24,524 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 13:37:24,550 - mmocr - INFO - Epoch [1][10000/10520] lr: 1.901e-04, eta: 3 days, 1:00:20, time: 1.246, data_time: 0.006, memory: 18328, loss_visual: 0.3670, loss_lang: 0.9043, loss_fusion: 0.3609, loss: 1.6322, grad_norm: 2.6108
2021-12-10 13:48:17,823 - mmocr - INFO - Saving checkpoint at 1 epochs
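A reading aid for the loss columns above: ABILoss was configured with enc_weight, dec_weight and fusion_weight all 1.0, so the reported loss should be the plain sum of loss_visual, loss_lang and loss_fusion, which the first entry confirms up to rounding:

    # Check against the Epoch [1][1000] entry above.
    enc_w = dec_w = fus_w = 1.0  # ABILoss weights from the config
    loss_visual, loss_lang, loss_fusion = 0.3797, 4.0659, 1.6251
    print(round(enc_w * loss_visual + dec_w * loss_lang + fus_w * loss_fusion, 4))
    # 6.0707, vs. the logged loss: 6.0708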
2021-12-10 13:58:03,182 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-10 13:58:03,675 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-10 13:58:03,685 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-10 13:58:03,699 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-10 13:58:03,729 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-10 13:58:03,739 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-10 13:58:03,744 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 13:58:03,744 - mmocr - INFO - Epoch(val) [1][959]
    0_word_acc: 0.0740, 0_word_acc_ignore_case: 0.9450, 0_word_acc_ignore_case_symbol: 0.9450, 0_char_recall: 0.9858, 0_char_precision: 0.9845, 0_1-N.E.D: 0.9827,
    1_word_acc: 0.9150, 1_word_acc_ignore_case: 0.9150, 1_word_acc_ignore_case_symbol: 0.9150, 1_char_recall: 0.9742, 1_char_precision: 0.9775, 1_1-N.E.D: 0.9680,
    2_word_acc: 0.2867, 2_word_acc_ignore_case: 0.9291, 2_word_acc_ignore_case_symbol: 0.9291, 2_char_recall: 0.9848, 2_char_precision: 0.9806, 2_1-N.E.D: 0.9630,
    3_word_acc: 0.1281, 3_word_acc_ignore_case: 0.7780, 3_word_acc_ignore_case_symbol: 0.8252, 3_char_recall: 0.9499, 3_char_precision: 0.9378, 3_1-N.E.D: 0.9288,
    4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8496, 4_word_acc_ignore_case_symbol: 0.8496, 4_char_recall: 0.9387, 4_char_precision: 0.9490, 4_1-N.E.D: 0.9323,
    5_word_acc: 0.1528, 5_word_acc_ignore_case: 0.8785, 5_word_acc_ignore_case_symbol: 0.8819, 5_char_recall: 0.9486, 5_char_precision: 0.9492, 5_1-N.E.D: 0.9485
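In these validation entries, the 1-N.E.D columns are one minus the normalized edit distance, averaged over each test set (index 0 = IIIT5K through 5 = CT80, in the order evaluated above). A sketch for a single prediction/ground-truth pair, assuming the common convention of normalizing the Levenshtein distance by the longer string:

    # Hedged sketch of 1-N.E.D for one pair (normalization convention assumed).
    def one_minus_ned(pred, gt):
        m, n = len(pred), len(gt)
        d = list(range(n + 1))                     # DP row for edit distance
        for i in range(1, m + 1):
            prev, d[0] = d[0], i
            for j in range(1, n + 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                       prev + (pred[i - 1] != gt[j - 1]))
        return 1 - d[n] / max(m, n, 1)

    print(one_minus_ned('ab1net', 'abinet'))  # ~0.8333: one substitution in six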
2021-12-10 14:29:49,248 - mmocr - INFO - Epoch [2][1000/10520] lr: 2.000e-04, eta: 3 days, 0:01:45, time: 1.905, data_time: 0.662, memory: 18328, loss_visual: 0.3732, loss_lang: 0.8502, loss_fusion: 0.3646, loss: 1.5880, grad_norm: 2.5335
2021-12-10 14:50:35,900 - mmocr - INFO - Epoch [2][2000/10520] lr: 2.000e-04, eta: 2 days, 23:24:56, time: 1.247, data_time: 0.006, memory: 18328, loss_visual: 0.3734, loss_lang: 0.8206, loss_fusion: 0.3633, loss: 1.5573, grad_norm: 2.4784
2021-12-10 15:11:21,579 - mmocr - INFO - Epoch [2][3000/10520] lr: 2.000e-04, eta: 2 days, 22:50:18, time: 1.246, data_time: 0.005, memory: 18328, loss_visual: 0.3709, loss_lang: 0.7951, loss_fusion: 0.3598, loss: 1.5258, grad_norm: 2.3982
2021-12-10 15:32:09,508 - mmocr - INFO - Epoch [2][4000/10520] lr: 2.000e-04, eta: 2 days, 22:18:03, time: 1.248, data_time: 0.005, memory: 18328, loss_visual: 0.3729, loss_lang: 0.7758, loss_fusion: 0.3608, loss: 1.5095, grad_norm: 2.3479
2021-12-10 15:52:53,930 - mmocr - INFO - Epoch [2][5000/10520] lr: 2.000e-04, eta: 2 days, 21:46:34, time: 1.244, data_time: 0.007, memory: 18328, loss_visual: 0.3704, loss_lang: 0.7567, loss_fusion: 0.3574, loss: 1.4846, grad_norm: 2.3010
2021-12-10 16:13:47,961 - mmocr - INFO - Epoch [2][6000/10520] lr: 2.000e-04, eta: 2 days, 21:18:14, time: 1.254, data_time: 0.005, memory: 18328, loss_visual: 0.3705, loss_lang: 0.7419, loss_fusion: 0.3570, loss: 1.4694, grad_norm: 2.2357
2021-12-10 16:34:33,748 - mmocr - INFO - Epoch [2][7000/10520] lr: 2.000e-04, eta: 2 days, 20:49:15, time: 1.246, data_time: 0.006, memory: 18328, loss_visual: 0.3708, loss_lang: 0.7308, loss_fusion: 0.3565, loss: 1.4580, grad_norm: 2.2535
2021-12-10 16:55:29,384 - mmocr - INFO - Epoch [2][8000/10520] lr: 2.000e-04, eta: 2 days, 20:22:51, time: 1.256, data_time: 0.008, memory: 18328, loss_visual: 0.3682, loss_lang: 0.7180, loss_fusion: 0.3534, loss: 1.4395, grad_norm: 2.2066
2021-12-10 17:16:20,007 - mmocr - INFO - Epoch [2][9000/10520] lr: 2.000e-04, eta: 2 days, 19:56:12, time: 1.251, data_time: 0.006, memory: 18328, loss_visual: 0.3684, loss_lang: 0.7093, loss_fusion: 0.3531, loss: 1.4308, grad_norm: 2.1910
2021-12-10 17:38:46,155 - mmocr - INFO - Epoch [2][10000/10520] lr: 2.000e-04, eta: 2 days, 19:44:51, time: 1.346, data_time: 0.106, memory: 18328, loss_visual: 0.3693, loss_lang: 0.7021, loss_fusion: 0.3535, loss: 1.4249, grad_norm: 2.2110
2021-12-10 17:49:38,205 - mmocr - INFO - Saving checkpoint at 2 epochs
2021-12-10 17:59:23,573 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-10 17:59:23,743 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-10 17:59:23,753 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-10 17:59:23,767 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-10 17:59:23,797 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-10 17:59:23,807 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-10 17:59:23,813 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 17:59:23,813 - mmocr - INFO - Epoch(val) [2][959] 0_word_acc: 0.0740, 0_word_acc_ignore_case: 0.9427, 0_word_acc_ignore_case_symbol: 0.9427, 0_char_recall: 0.9835, 0_char_precision: 0.9812, 0_1-N.E.D: 0.9793, 1_word_acc: 0.9165, 1_word_acc_ignore_case: 0.9165, 1_word_acc_ignore_case_symbol: 0.9165, 1_char_recall: 0.9737, 1_char_precision: 0.9786, 1_1-N.E.D: 0.9675, 2_word_acc: 0.2818, 2_word_acc_ignore_case: 0.9261, 2_word_acc_ignore_case_symbol: 0.9261, 2_char_recall: 0.9835, 2_char_precision: 0.9819, 2_1-N.E.D: 0.9597, 3_word_acc: 0.1257, 3_word_acc_ignore_case: 0.7833, 3_word_acc_ignore_case_symbol: 0.8339, 3_char_recall: 0.9497, 3_char_precision: 0.9401, 3_1-N.E.D: 0.9297, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8651, 4_word_acc_ignore_case_symbol: 0.8651, 4_char_recall: 0.9435, 4_char_precision: 0.9561, 4_1-N.E.D: 0.9367, 5_word_acc: 0.1493, 5_word_acc_ignore_case: 0.8715, 5_word_acc_ignore_case_symbol: 0.8750, 5_char_recall: 0.9461, 5_char_precision: 0.9605, 5_1-N.E.D: 0.9468
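Editor's note: the 1-N.E.D columns report one minus the normalized edit distance, commonly computed per sample as 1 - levenshtein(pred, gt) / max(len(pred), len(gt)) and then averaged over the set. The sketch below uses that common definition; it is assumed to match the log's column, not read from the MMOCR source:

```python
# Sketch of the 1-N.E.D metric under the usual normalization by the longer
# of the two strings (assumption).
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def one_minus_ned(preds, gts):
    scores = [1 - levenshtein(p, g) / max(len(p), len(g), 1)
              for p, g in zip(preds, gts)]
    return sum(scores) / len(scores)

print(one_minus_ned(["hello", "w0rld"], ["hello", "world"]))  # 0.9 = mean(1.0, 0.8)
```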
2021-12-10 18:30:56,255 - mmocr - INFO - Epoch [3][1000/10520] lr: 2.000e-04, eta: 2 days, 19:03:46, time: 1.892, data_time: 0.644, memory: 18328, loss_visual: 0.3656, loss_lang: 0.6864, loss_fusion: 0.3489, loss: 1.4009, grad_norm: 2.1045
2021-12-10 18:51:43,831 - mmocr - INFO - Epoch [3][2000/10520] lr: 2.000e-04, eta: 2 days, 18:37:47, time: 1.248, data_time: 0.006, memory: 18328, loss_visual: 0.3635, loss_lang: 0.6791, loss_fusion: 0.3466, loss: 1.3893, grad_norm: 2.0513
2021-12-10 19:12:29,898 - mmocr - INFO - Epoch [3][3000/10520] lr: 2.000e-04, eta: 2 days, 18:12:01, time: 1.246, data_time: 0.006, memory: 18328, loss_visual: 0.3636, loss_lang: 0.6737, loss_fusion: 0.3462, loss: 1.3835, grad_norm: 2.0313
2021-12-10 19:33:19,532 - mmocr - INFO - Epoch [3][4000/10520] lr: 2.000e-04, eta: 2 days, 17:47:06, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3623, loss_lang: 0.6678, loss_fusion: 0.3447, loss: 1.3748, grad_norm: 2.0397
2021-12-10 19:54:05,719 - mmocr - INFO - Epoch [3][5000/10520] lr: 2.000e-04, eta: 2 days, 17:22:05, time: 1.246, data_time: 0.005, memory: 18328, loss_visual: 0.3616, loss_lang: 0.6619, loss_fusion: 0.3435, loss: 1.3670, grad_norm: 2.0219
2021-12-10 20:14:54,872 - mmocr - INFO - Epoch [3][6000/10520] lr: 2.000e-04, eta: 2 days, 16:57:44, time: 1.249, data_time: 0.007, memory: 18328, loss_visual: 0.3612, loss_lang: 0.6577, loss_fusion: 0.3430, loss: 1.3618, grad_norm: 1.9596
2021-12-10 20:35:39,675 - mmocr - INFO - Epoch [3][7000/10520] lr: 2.000e-04, eta: 2 days, 16:33:09, time: 1.245, data_time: 0.005, memory: 18328, loss_visual: 0.3601, loss_lang: 0.6530, loss_fusion: 0.3415, loss: 1.3546, grad_norm: 1.9156
2021-12-10 20:56:34,671 - mmocr - INFO - Epoch [3][8000/10520] lr: 2.000e-04, eta: 2 days, 16:09:54, time: 1.255, data_time: 0.011, memory: 18328, loss_visual: 0.3600, loss_lang: 0.6495, loss_fusion: 0.3410, loss: 1.3505, grad_norm: 1.9397
2021-12-10 21:19:42,750 - mmocr - INFO - Epoch [3][9000/10520] lr: 2.000e-04, eta: 2 days, 16:00:06, time: 1.388, data_time: 0.009, memory: 18328, loss_visual: 0.3591, loss_lang: 0.6444, loss_fusion: 0.3400, loss: 1.3434, grad_norm: 1.9289
2021-12-10 21:41:23,219 - mmocr - INFO - Epoch [3][10000/10520] lr: 2.000e-04, eta: 2 days, 15:41:03, time: 1.301, data_time: 0.015, memory: 18328, loss_visual: 0.3579, loss_lang: 0.6408, loss_fusion: 0.3386, loss: 1.3374, grad_norm: 1.8893
2021-12-10 21:53:52,739 - mmocr - INFO - Saving checkpoint at 3 epochs
2021-12-10 22:03:32,831 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-10 22:03:32,900 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-10 22:03:32,911 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-10 22:03:32,925 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-10 22:03:32,956 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-10 22:03:32,967 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-10 22:03:32,973 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-10 22:03:32,974 - mmocr - INFO - Epoch(val) [3][959] 0_word_acc: 0.0753, 0_word_acc_ignore_case: 0.9517, 0_word_acc_ignore_case_symbol: 0.9517, 0_char_recall: 0.9867, 0_char_precision: 0.9856, 0_1-N.E.D: 0.9845, 1_word_acc: 0.9274, 1_word_acc_ignore_case: 0.9274, 1_word_acc_ignore_case_symbol: 0.9274, 1_char_recall: 0.9760, 1_char_precision: 0.9812, 1_1-N.E.D: 0.9734, 2_word_acc: 0.2857, 2_word_acc_ignore_case: 0.9330, 2_word_acc_ignore_case_symbol: 0.9330, 2_char_recall: 0.9852, 2_char_precision: 0.9833, 2_1-N.E.D: 0.9656, 3_word_acc: 0.1271, 3_word_acc_ignore_case: 0.7752, 3_word_acc_ignore_case_symbol: 0.8247, 3_char_recall: 0.9469, 3_char_precision: 0.9351, 3_1-N.E.D: 0.9261, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8636, 4_word_acc_ignore_case_symbol: 0.8636, 4_char_recall: 0.9406, 4_char_precision: 0.9537, 4_1-N.E.D: 0.9336, 5_word_acc: 0.1493, 5_word_acc_ignore_case: 0.8750, 5_word_acc_ignore_case_symbol: 0.8785, 5_char_recall: 0.9392, 5_char_precision: 0.9529, 5_1-N.E.D: 0.9454
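Editor's note: the eta field shrinks as the per-iteration timing stabilises; it is essentially the remaining iteration count times a smoothed time per iteration. A back-of-the-envelope reconstruction under that assumption:

```python
import datetime

# Rough reconstruction (assumption) of the logger's eta: remaining iterations
# times a smoothed per-iteration time.
def eta(cur_epoch, cur_iter, iters_per_epoch=10520, max_epochs=20, sec_per_iter=1.25):
    done = (cur_epoch - 1) * iters_per_epoch + cur_iter
    remaining = max_epochs * iters_per_epoch - done
    return datetime.timedelta(seconds=round(remaining * sec_per_iter))

# Epoch [3][10000/10520] at ~1.25 s/iter gives about 2 days 14 h; the logged
# "2 days, 15:41:03" is somewhat higher, presumably because checkpoint and
# evaluation time is folded into the running average.
print(eta(3, 10000))
```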
2021-12-10 22:36:56,929 - mmocr - INFO - Epoch [4][1000/10520] lr: 2.000e-04, eta: 2 days, 15:14:13, time: 2.004, data_time: 0.655, memory: 18328, loss_visual: 0.3564, loss_lang: 0.6352, loss_fusion: 0.3367, loss: 1.3283, grad_norm: 1.8409
2021-12-10 22:57:42,336 - mmocr - INFO - Epoch [4][2000/10520] lr: 2.000e-04, eta: 2 days, 14:49:50, time: 1.245, data_time: 0.006, memory: 18328, loss_visual: 0.3551, loss_lang: 0.6312, loss_fusion: 0.3353, loss: 1.3217, grad_norm: 1.8107
2021-12-10 23:18:27,616 - mmocr - INFO - Epoch [4][3000/10520] lr: 2.000e-04, eta: 2 days, 14:25:39, time: 1.245, data_time: 0.006, memory: 18328, loss_visual: 0.3545, loss_lang: 0.6277, loss_fusion: 0.3345, loss: 1.3167, grad_norm: 1.8622
2021-12-10 23:39:15,654 - mmocr - INFO - Epoch [4][4000/10520] lr: 2.000e-04, eta: 2 days, 14:01:52, time: 1.248, data_time: 0.006, memory: 18328, loss_visual: 0.3539, loss_lang: 0.6254, loss_fusion: 0.3338, loss: 1.3131, grad_norm: 1.8337
2021-12-11 00:00:01,914 - mmocr - INFO - Epoch [4][5000/10520] lr: 2.000e-04, eta: 2 days, 13:38:08, time: 1.246, data_time: 0.006, memory: 18328, loss_visual: 0.3527, loss_lang: 0.6219, loss_fusion: 0.3323, loss: 1.3069, grad_norm: 1.8037
2021-12-11 00:20:46,683 - mmocr - INFO - Epoch [4][6000/10520] lr: 2.000e-04, eta: 2 days, 13:14:26, time: 1.245, data_time: 0.006, memory: 18328, loss_visual: 0.3525, loss_lang: 0.6197, loss_fusion: 0.3320, loss: 1.3042, grad_norm: 1.8276
2021-12-11 00:41:30,956 - mmocr - INFO - Epoch [4][7000/10520] lr: 2.000e-04, eta: 2 days, 12:50:51, time: 1.244, data_time: 0.006, memory: 18328, loss_visual: 0.3510, loss_lang: 0.6166, loss_fusion: 0.3304, loss: 1.2981, grad_norm: 1.7821
2021-12-11 01:02:20,577 - mmocr - INFO - Epoch [4][8000/10520] lr: 2.000e-04, eta: 2 days, 12:27:48, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3507, loss_lang: 0.6146, loss_fusion: 0.3298, loss: 1.2951, grad_norm: 1.7217
2021-12-11 01:23:11,520 - mmocr - INFO - Epoch [4][9000/10520] lr: 2.000e-04, eta: 2 days, 12:04:57, time: 1.251, data_time: 0.006, memory: 18328, loss_visual: 0.3499, loss_lang: 0.6118, loss_fusion: 0.3289, loss: 1.2906, grad_norm: 1.7194
2021-12-11 01:43:56,038 - mmocr - INFO - Epoch [4][10000/10520] lr: 2.000e-04, eta: 2 days, 11:41:45, time: 1.245, data_time: 0.006, memory: 18328, loss_visual: 0.3500, loss_lang: 0.6097, loss_fusion: 0.3289, loss: 1.2885, grad_norm: 1.7398
2021-12-11 01:54:47,169 - mmocr - INFO - Saving checkpoint at 4 epochs
2021-12-11 02:04:21,800 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-11 02:04:21,870 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-11 02:04:21,880 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-11 02:04:21,895 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-11 02:04:21,925 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-11 02:04:21,935 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-11 02:04:21,940 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-11 02:04:21,940 - mmocr - INFO - Epoch(val) [4][959] 0_word_acc: 0.0740, 0_word_acc_ignore_case: 0.9433, 0_word_acc_ignore_case_symbol: 0.9433, 0_char_recall: 0.9843, 0_char_precision: 0.9826, 0_1-N.E.D: 0.9805, 1_word_acc: 0.9366, 1_word_acc_ignore_case: 0.9366, 1_word_acc_ignore_case_symbol: 0.9366, 1_char_recall: 0.9779, 1_char_precision: 0.9838, 1_1-N.E.D: 0.9732, 2_word_acc: 0.2857, 2_word_acc_ignore_case: 0.9360, 2_word_acc_ignore_case_symbol: 0.9360, 2_char_recall: 0.9865, 2_char_precision: 0.9830, 2_1-N.E.D: 0.9633, 3_word_acc: 0.1266, 3_word_acc_ignore_case: 0.7790, 3_word_acc_ignore_case_symbol: 0.8286, 3_char_recall: 0.9497, 3_char_precision: 0.9404, 3_1-N.E.D: 0.9295, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8946, 4_word_acc_ignore_case_symbol: 0.8946, 4_char_recall: 0.9509, 4_char_precision: 0.9631, 4_1-N.E.D: 0.9480, 5_word_acc: 0.1458, 5_word_acc_ignore_case: 0.8681, 5_word_acc_ignore_case_symbol: 0.8715, 5_char_recall: 0.9461, 5_char_precision: 0.9461, 5_1-N.E.D: 0.9442
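Editor's note: char_recall and char_precision relate matched characters to the ground-truth and predicted character totals respectively. A sketch under a simple bag-of-characters counting scheme (an assumption; MMOCR's exact matching may differ):

```python
from collections import Counter

# Character-level recall/precision sketch (assumed counting scheme):
# matched = per-character multiset intersection of prediction and ground truth.
def char_recall_precision(preds, gts):
    matched = gt_total = pred_total = 0
    for p, g in zip(preds, gts):
        inter = Counter(p) & Counter(g)
        matched += sum(inter.values())
        gt_total += len(g)
        pred_total += len(p)
    return matched / gt_total, matched / pred_total

recall, precision = char_recall_precision(["h3llo"], ["hello"])
print(recall, precision)  # 0.8 0.8 -- 4 of 5 characters match
```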
2021-12-11 02:35:51,049 - mmocr - INFO - Epoch [5][1000/10520] lr: 2.000e-04, eta: 2 days, 11:06:33, time: 1.889, data_time: 0.642, memory: 18328, loss_visual: 0.3469, loss_lang: 0.6040, loss_fusion: 0.3255, loss: 1.2764, grad_norm: 1.6560
2021-12-11 02:56:39,954 - mmocr - INFO - Epoch [5][2000/10520] lr: 2.000e-04, eta: 2 days, 10:43:55, time: 1.249, data_time: 0.006, memory: 18328, loss_visual: 0.3467, loss_lang: 0.6032, loss_fusion: 0.3253, loss: 1.2751, grad_norm: 1.6842
2021-12-11 03:17:25,907 - mmocr - INFO - Epoch [5][3000/10520] lr: 2.000e-04, eta: 2 days, 10:21:11, time: 1.246, data_time: 0.005, memory: 18328, loss_visual: 0.3457, loss_lang: 0.6009, loss_fusion: 0.3241, loss: 1.2707, grad_norm: 1.6470
2021-12-11 03:38:13,230 - mmocr - INFO - Epoch [5][4000/10520] lr: 2.000e-04, eta: 2 days, 9:58:37, time: 1.247, data_time: 0.007, memory: 18328, loss_visual: 0.3455, loss_lang: 0.5993, loss_fusion: 0.3239, loss: 1.2686, grad_norm: 1.6279
2021-12-11 03:59:01,359 - mmocr - INFO - Epoch [5][5000/10520] lr: 2.000e-04, eta: 2 days, 9:36:10, time: 1.248, data_time: 0.006, memory: 18328, loss_visual: 0.3442, loss_lang: 0.5956, loss_fusion: 0.3225, loss: 1.2623, grad_norm: 1.6792
2021-12-11 04:19:46,825 - mmocr - INFO - Epoch [5][6000/10520] lr: 2.000e-04, eta: 2 days, 9:13:39, time: 1.245, data_time: 0.006, memory: 18328, loss_visual: 0.3435, loss_lang: 0.5948, loss_fusion: 0.3219, loss: 1.2602, grad_norm: 1.6799
2021-12-11 04:40:30,910 - mmocr - INFO - Epoch [5][7000/10520] lr: 2.000e-04, eta: 2 days, 8:51:07, time: 1.244, data_time: 0.006, memory: 18328, loss_visual: 0.3425, loss_lang: 0.5924, loss_fusion: 0.3206, loss: 1.2555, grad_norm: 1.6095
2021-12-11 05:01:20,753 - mmocr - INFO - Epoch [5][8000/10520] lr: 2.000e-04, eta: 2 days, 8:28:58, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3428, loss_lang: 0.5910, loss_fusion: 0.3209, loss: 1.2547, grad_norm: 1.6192
2021-12-11 05:22:10,627 - mmocr - INFO - Epoch [5][9000/10520] lr: 2.000e-04, eta: 2 days, 8:06:52, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3416, loss_lang: 0.5887, loss_fusion: 0.3196, loss: 1.2499, grad_norm: 1.5847
2021-12-11 05:42:56,249 - mmocr - INFO - Epoch [5][10000/10520] lr: 2.000e-04, eta: 2 days, 7:44:36, time: 1.246, data_time: 0.006, memory: 18328, loss_visual: 0.3405, loss_lang: 0.5868, loss_fusion: 0.3183, loss: 1.2456, grad_norm: 1.5810
2021-12-11 05:53:48,130 - mmocr - INFO - Saving checkpoint at 5 epochs
2021-12-11 06:03:21,498 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-11 06:03:21,571 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-11 06:03:21,581 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-11 06:03:21,595 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-11 06:03:21,625 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-11 06:03:21,635 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-11 06:03:21,641 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-11 06:03:21,641 - mmocr - INFO - Epoch(val) [5][959] 0_word_acc: 0.0760, 0_word_acc_ignore_case: 0.9480, 0_word_acc_ignore_case_symbol: 0.9480, 0_char_recall: 0.9851, 0_char_precision: 0.9864, 0_1-N.E.D: 0.9828, 1_word_acc: 0.9382, 1_word_acc_ignore_case: 0.9382, 1_word_acc_ignore_case_symbol: 0.9382, 1_char_recall: 0.9802, 1_char_precision: 0.9836, 1_1-N.E.D: 0.9767, 2_word_acc: 0.2867, 2_word_acc_ignore_case: 0.9379, 2_word_acc_ignore_case_symbol: 0.9379, 2_char_recall: 0.9863, 2_char_precision: 0.9866, 2_1-N.E.D: 0.9671, 3_word_acc: 0.1257, 3_word_acc_ignore_case: 0.7857, 3_word_acc_ignore_case_symbol: 0.8344, 3_char_recall: 0.9515, 3_char_precision: 0.9396, 3_1-N.E.D: 0.9312, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8744, 4_word_acc_ignore_case_symbol: 0.8744, 4_char_recall: 0.9451, 4_char_precision: 0.9552, 4_1-N.E.D: 0.9376, 5_word_acc: 0.1667, 5_word_acc_ignore_case: 0.8819, 5_word_acc_ignore_case_symbol: 0.8854, 5_char_recall: 0.9461, 5_char_precision: 0.9593, 5_1-N.E.D: 0.9590
2021-12-11 06:34:57,642 - mmocr - INFO - Epoch [6][1000/10520] lr: 2.000e-04, eta: 2 days, 7:10:59, time: 1.896, data_time: 0.651, memory: 18328, loss_visual: 0.3397, loss_lang: 0.5841, loss_fusion: 0.3175, loss: 1.2414, grad_norm: 1.5586
2021-12-11 06:55:49,152 - mmocr - INFO - Epoch [6][2000/10520] lr: 2.000e-04, eta: 2 days, 6:49:08, time: 1.251, data_time: 0.005, memory: 18328, loss_visual: 0.3404, loss_lang: 0.5839, loss_fusion: 0.3181, loss: 1.2424, grad_norm: 1.5160
2021-12-11 07:16:35,152 - mmocr - INFO - Epoch [6][3000/10520] lr: 2.000e-04, eta: 2 days, 6:27:04, time: 1.246, data_time: 0.006, memory: 18328, loss_visual: 0.3384, loss_lang: 0.5810, loss_fusion: 0.3160, loss: 1.2355, grad_norm: 1.5728
2021-12-11 07:37:21,076 - mmocr - INFO - Epoch [6][4000/10520] lr: 2.000e-04, eta: 2 days, 6:05:02, time: 1.246, data_time: 0.006, memory: 18328, loss_visual: 0.3381, loss_lang: 0.5803, loss_fusion: 0.3157, loss: 1.2342, grad_norm: 1.5195
2021-12-11 07:58:06,512 - mmocr - INFO - Epoch [6][5000/10520] lr: 2.000e-04, eta: 2 days, 5:43:02, time: 1.245, data_time: 0.006, memory: 18328, loss_visual: 0.3368, loss_lang: 0.5776, loss_fusion: 0.3142, loss: 1.2286, grad_norm: 1.5296
2021-12-11 08:18:54,248 - mmocr - INFO - Epoch [6][6000/10520] lr: 2.000e-04, eta: 2 days, 5:21:10, time: 1.248, data_time: 0.008, memory: 18328, loss_visual: 0.3374, loss_lang: 0.5768, loss_fusion: 0.3147, loss: 1.2288, grad_norm: 1.5011
2021-12-11 08:39:40,851 - mmocr - INFO - Epoch [6][7000/10520] lr: 2.000e-04, eta: 2 days, 4:59:17, time: 1.247, data_time: 0.006, memory: 18328, loss_visual: 0.3369, loss_lang: 0.5764, loss_fusion: 0.3144, loss: 1.2277, grad_norm: 1.5345
2021-12-11 09:00:29,803 - mmocr - INFO - Epoch [6][8000/10520] lr: 2.000e-04, eta: 2 days, 4:37:33, time: 1.249, data_time: 0.006, memory: 18328, loss_visual: 0.3355, loss_lang: 0.5740, loss_fusion: 0.3129, loss: 1.2225, grad_norm: 1.4931
2021-12-11 09:21:21,123 - mmocr - INFO - Epoch [6][9000/10520] lr: 2.000e-04, eta: 2 days, 4:15:56, time: 1.251, data_time: 0.006, memory: 18328, loss_visual: 0.3347, loss_lang: 0.5725, loss_fusion: 0.3120, loss: 1.2191, grad_norm: 1.4611
2021-12-11 09:42:09,636 - mmocr - INFO - Epoch [6][10000/10520] lr: 2.000e-04, eta: 2 days, 3:54:13, time: 1.248, data_time: 0.007, memory: 18328, loss_visual: 0.3345, loss_lang: 0.5717, loss_fusion: 0.3117, loss: 1.2179, grad_norm: 1.4642
2021-12-11 09:53:02,710 - mmocr - INFO - Saving checkpoint at 6 epochs
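Editor's note: grad_norm is the global gradient norm measured at the clipping step (the config caps it at max_norm 20); the values here fall from ~6.45 at the very first log entry to ~1.46 by the end of epoch 6, so the clip threshold is essentially never hit. A minimal sketch of how such a value is obtained, assuming the standard PyTorch clipping semantics where the returned total norm is the pre-clip value:

```python
import torch

# Sketch: the logged grad_norm is taken to be the total norm returned by
# gradient clipping, measured before any rescaling (assumption based on
# torch.nn.utils.clip_grad_norm_ semantics).
model = torch.nn.Linear(512, 37)
loss = model(torch.randn(4, 512)).sum()
loss.backward()
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=20)
print(float(grad_norm))  # norms well under max_norm (as in this log) are left untouched
```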
2021-12-11 10:02:43,665 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-11 10:02:43,728 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-11 10:02:43,742 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-11 10:02:43,756 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-11 10:02:43,786 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-11 10:02:43,796 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-11 10:02:43,802 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-11 10:02:43,802 - mmocr - INFO - Epoch(val) [6][959] 0_word_acc: 0.0753, 0_word_acc_ignore_case: 0.9483, 0_word_acc_ignore_case_symbol: 0.9483, 0_char_recall: 0.9865, 0_char_precision: 0.9841, 0_1-N.E.D: 0.9825, 1_word_acc: 0.9335, 1_word_acc_ignore_case: 0.9335, 1_word_acc_ignore_case_symbol: 0.9335, 1_char_recall: 0.9787, 1_char_precision: 0.9836, 1_1-N.E.D: 0.9734, 2_word_acc: 0.2847, 2_word_acc_ignore_case: 0.9369, 2_word_acc_ignore_case_symbol: 0.9369, 2_char_recall: 0.9863, 2_char_precision: 0.9843, 2_1-N.E.D: 0.9643, 3_word_acc: 0.1261, 3_word_acc_ignore_case: 0.7790, 3_word_acc_ignore_case_symbol: 0.8296, 3_char_recall: 0.9503, 3_char_precision: 0.9416, 3_1-N.E.D: 0.9305, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8775, 4_word_acc_ignore_case_symbol: 0.8775, 4_char_recall: 0.9485, 4_char_precision: 0.9625, 4_1-N.E.D: 0.9437, 5_word_acc: 0.1528, 5_word_acc_ignore_case: 0.8854, 5_word_acc_ignore_case_symbol: 0.8889, 5_char_recall: 0.9473, 5_char_precision: 0.9509, 5_1-N.E.D: 0.9546
2021-12-11 10:34:21,273 - mmocr - INFO - Epoch [7][1000/10520] lr: 2.000e-04, eta: 2 days, 3:21:16, time: 1.897, data_time: 0.651, memory: 18328, loss_visual: 0.3337, loss_lang: 0.5692, loss_fusion: 0.3108, loss: 1.2137, grad_norm: 1.4635
2021-12-11 10:55:09,203 - mmocr - INFO - Epoch [7][2000/10520] lr: 2.000e-04, eta: 2 days, 2:59:37, time: 1.248, data_time: 0.005, memory: 18328, loss_visual: 0.3314, loss_lang: 0.5663, loss_fusion: 0.3085, loss: 1.2061, grad_norm: 1.4516
2021-12-11 11:15:54,688 - mmocr - INFO - Epoch [7][3000/10520] lr: 2.000e-04, eta: 2 days, 2:37:54, time: 1.246, data_time: 0.005, memory: 18328, loss_visual: 0.3325, loss_lang: 0.5669, loss_fusion: 0.3096, loss: 1.2090, grad_norm: 1.4628
2021-12-11 11:36:42,030 - mmocr - INFO - Epoch [7][4000/10520] lr: 2.000e-04, eta: 2 days, 2:16:16, time: 1.247, data_time: 0.006, memory: 18328, loss_visual: 0.3323, loss_lang: 0.5664, loss_fusion: 0.3092, loss: 1.2079, grad_norm: 1.4409
2021-12-11 11:57:29,047 - mmocr - INFO - Epoch [7][5000/10520] lr: 2.000e-04, eta: 2 days, 1:54:40, time: 1.247, data_time: 0.006, memory: 18328, loss_visual: 0.3316, loss_lang: 0.5641, loss_fusion: 0.3086, loss: 1.2042, grad_norm: 1.4247
2021-12-11 12:18:24,732 - mmocr - INFO - Epoch [7][6000/10520] lr: 2.000e-04, eta: 2 days, 1:33:22, time: 1.256, data_time: 0.012, memory: 18328, loss_visual: 0.3300, loss_lang: 0.5626, loss_fusion: 0.3070, loss: 1.1996, grad_norm: 1.4050
2021-12-11 12:39:11,424 - mmocr - INFO - Epoch [7][7000/10520] lr: 2.000e-04, eta: 2 days, 1:11:47, time: 1.247, data_time: 0.006, memory: 18328, loss_visual: 0.3304, loss_lang: 0.5619, loss_fusion: 0.3074, loss: 1.1996, grad_norm: 1.4103
2021-12-11 12:59:58,658 - mmocr - INFO - Epoch [7][8000/10520] lr: 2.000e-04, eta: 2 days, 0:50:15, time: 1.247, data_time: 0.006, memory: 18328, loss_visual: 0.3308, loss_lang: 0.5625, loss_fusion: 0.3076, loss: 1.2010, grad_norm: 1.3887
2021-12-11 13:20:43,441 - mmocr - INFO - Epoch [7][9000/10520] lr: 2.000e-04, eta: 2 days, 0:28:39, time: 1.245, data_time: 0.007, memory: 18328, loss_visual: 0.3292, loss_lang: 0.5599, loss_fusion: 0.3061, loss: 1.1953, grad_norm: 1.3802
2021-12-11 13:41:36,164 - mmocr - INFO - Epoch [7][10000/10520] lr: 2.000e-04, eta: 2 days, 0:07:19, time: 1.253, data_time: 0.006, memory: 18328, loss_visual: 0.3275, loss_lang: 0.5576, loss_fusion: 0.3044, loss: 1.1895, grad_norm: 1.3743
2021-12-11 13:52:33,843 - mmocr - INFO - Saving checkpoint at 7 epochs
2021-12-11 14:02:35,143 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-11 14:02:35,213 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-11 14:02:35,223 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-11 14:02:35,237 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-11 14:02:35,268 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-11 14:02:35,278 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-11 14:02:35,284 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-11 14:02:35,284 - mmocr - INFO - Epoch(val) [7][959] 0_word_acc: 0.0750, 0_word_acc_ignore_case: 0.9417, 0_word_acc_ignore_case_symbol: 0.9417, 0_char_recall: 0.9838, 0_char_precision: 0.9819, 0_1-N.E.D: 0.9806, 1_word_acc: 0.9413, 1_word_acc_ignore_case: 0.9413, 1_word_acc_ignore_case_symbol: 0.9413, 1_char_recall: 0.9784, 1_char_precision: 0.9828, 1_1-N.E.D: 0.9744, 2_word_acc: 0.2867, 2_word_acc_ignore_case: 0.9340, 2_word_acc_ignore_case_symbol: 0.9340, 2_char_recall: 0.9857, 2_char_precision: 0.9828, 2_1-N.E.D: 0.9616, 3_word_acc: 0.1261, 3_word_acc_ignore_case: 0.7771, 3_word_acc_ignore_case_symbol: 0.8272, 3_char_recall: 0.9494, 3_char_precision: 0.9403, 3_1-N.E.D: 0.9285, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8806, 4_word_acc_ignore_case_symbol: 0.8806, 4_char_recall: 0.9496, 4_char_precision: 0.9618, 4_1-N.E.D: 0.9454, 5_word_acc: 0.1562, 5_word_acc_ignore_case: 0.8681, 5_word_acc_ignore_case_symbol: 0.8715, 5_char_recall: 0.9473, 5_char_precision: 0.9456, 5_1-N.E.D: 0.9477
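Editor's note: the timestamps give a consistent wall-clock budget per epoch: ~1.25 s/iteration over 10520 iterations is about 3 h 39 min of training, and checkpoint saving plus the six-set evaluation add roughly 20 more minutes, for about 4 h per full cycle (compare the 10:34 start of epoch 7 against the 14:34 start of epoch 8). The arithmetic:

```python
# Back-of-the-envelope epoch budget from the logged numbers.
sec_per_iter = 1.25          # "time:" column once the data pipeline is warm
iters = 10520                # iterations per epoch
train_h = sec_per_iter * iters / 3600
print(f"train: {train_h:.2f} h")   # ~3.65 h
# Checkpoint (~11 min) + evaluation (~10 min) bring one full cycle to ~4 h,
# matching the gap between consecutive "Epoch [k][1000/10520]" timestamps.
```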
2021-12-11 14:34:25,993 - mmocr - INFO - Epoch [8][1000/10520] lr: 2.000e-04, eta: 1 day, 23:35:07, time: 1.911, data_time: 0.666, memory: 18328, loss_visual: 0.3280, loss_lang: 0.5563, loss_fusion: 0.3046, loss: 1.1888, grad_norm: 1.3775
2021-12-11 14:55:16,200 - mmocr - INFO - Epoch [8][2000/10520] lr: 2.000e-04, eta: 1 day, 23:13:45, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3280, loss_lang: 0.5564, loss_fusion: 0.3047, loss: 1.1891, grad_norm: 1.3355
2021-12-11 15:16:05,441 - mmocr - INFO - Epoch [8][3000/10520] lr: 2.000e-04, eta: 1 day, 22:52:21, time: 1.249, data_time: 0.007, memory: 18328, loss_visual: 0.3264, loss_lang: 0.5549, loss_fusion: 0.3031, loss: 1.1843, grad_norm: 1.3657
2021-12-11 15:36:56,583 - mmocr - INFO - Epoch [8][4000/10520] lr: 2.000e-04, eta: 1 day, 22:31:02, time: 1.251, data_time: 0.006, memory: 18328, loss_visual: 0.3250, loss_lang: 0.5530, loss_fusion: 0.3016, loss: 1.1796, grad_norm: 1.3267
2021-12-11 15:57:46,660 - mmocr - INFO - Epoch [8][5000/10520] lr: 2.000e-04, eta: 1 day, 22:09:42, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3256, loss_lang: 0.5528, loss_fusion: 0.3022, loss: 1.1806, grad_norm: 1.3309
2021-12-11 16:18:36,421 - mmocr - INFO - Epoch [8][6000/10520] lr: 2.000e-04, eta: 1 day, 21:48:21, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3245, loss_lang: 0.5511, loss_fusion: 0.3008, loss: 1.1764, grad_norm: 1.3367
2021-12-11 16:39:29,410 - mmocr - INFO - Epoch [8][7000/10520] lr: 2.000e-04, eta: 1 day, 21:27:07, time: 1.253, data_time: 0.006, memory: 18328, loss_visual: 0.3243, loss_lang: 0.5508, loss_fusion: 0.3008, loss: 1.1759, grad_norm: 1.3331
2021-12-11 17:00:19,282 - mmocr - INFO - Epoch [8][8000/10520] lr: 2.000e-04, eta: 1 day, 21:05:49, time: 1.250, data_time: 0.007, memory: 18328, loss_visual: 0.3240, loss_lang: 0.5499, loss_fusion: 0.3004, loss: 1.1743, grad_norm: 1.2953
2021-12-11 17:21:08,882 - mmocr - INFO - Epoch [8][9000/10520] lr: 2.000e-04, eta: 1 day, 20:44:30, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3238, loss_lang: 0.5496, loss_fusion: 0.3004, loss: 1.1738, grad_norm: 1.2883
2021-12-11 17:42:00,146 - mmocr - INFO - Epoch [8][10000/10520] lr: 2.000e-04, eta: 1 day, 20:23:15, time: 1.251, data_time: 0.006, memory: 18328, loss_visual: 0.3232, loss_lang: 0.5480, loss_fusion: 0.2998, loss: 1.1710, grad_norm: 1.2756
2021-12-11 17:52:54,111 - mmocr - INFO - Saving checkpoint at 8 epochs
2021-12-11 18:02:34,792 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-11 18:02:34,857 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-11 18:02:34,867 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-11 18:02:34,881 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-11 18:02:34,920 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-11 18:02:34,930 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-11 18:02:34,935 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-11 18:02:34,936 - mmocr - INFO - Epoch(val) [8][959] 0_word_acc: 0.0747, 0_word_acc_ignore_case: 0.9473, 0_word_acc_ignore_case_symbol: 0.9473, 0_char_recall: 0.9857, 0_char_precision: 0.9848, 0_1-N.E.D: 0.9822, 1_word_acc: 0.9366, 1_word_acc_ignore_case: 0.9366, 1_word_acc_ignore_case_symbol: 0.9366, 1_char_recall: 0.9800, 1_char_precision: 0.9852, 1_1-N.E.D: 0.9755, 2_word_acc: 0.2867, 2_word_acc_ignore_case: 0.9340, 2_word_acc_ignore_case_symbol: 0.9340, 2_char_recall: 0.9861, 2_char_precision: 0.9843, 2_1-N.E.D: 0.9632, 3_word_acc: 0.1281, 3_word_acc_ignore_case: 0.7833, 3_word_acc_ignore_case_symbol: 0.8329, 3_char_recall: 0.9500, 3_char_precision: 0.9402, 3_1-N.E.D: 0.9307, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8915, 4_word_acc_ignore_case_symbol: 0.8915, 4_char_recall: 0.9498, 4_char_precision: 0.9625, 4_1-N.E.D: 0.9462, 5_word_acc: 0.1562, 5_word_acc_ignore_case: 0.8854, 5_word_acc_ignore_case_symbol: 0.8889, 5_char_recall: 0.9555, 5_char_precision: 0.9603, 5_1-N.E.D: 0.9587
2021-12-11 18:34:48,877 - mmocr - INFO - Epoch [9][1000/10520] lr: 2.000e-04, eta: 1 day, 19:51:45, time: 1.934, data_time: 0.683, memory: 18328, loss_visual: 0.3215, loss_lang: 0.5455, loss_fusion: 0.2979, loss: 1.1649, grad_norm: 1.2731
2021-12-11 18:55:42,570 - mmocr - INFO - Epoch [9][2000/10520] lr: 2.000e-04, eta: 1 day, 19:30:34, time: 1.254, data_time: 0.006, memory: 18328, loss_visual: 0.3224, loss_lang: 0.5463, loss_fusion: 0.2989, loss: 1.1676, grad_norm: 1.2751
2021-12-11 19:16:35,496 - mmocr - INFO - Epoch [9][3000/10520] lr: 2.000e-04, eta: 1 day, 19:09:22, time: 1.253, data_time: 0.006, memory: 18328, loss_visual: 0.3210, loss_lang: 0.5440, loss_fusion: 0.2975, loss: 1.1625, grad_norm: 1.2661
2021-12-11 19:37:30,344 - mmocr - INFO - Epoch [9][4000/10520] lr: 2.000e-04, eta: 1 day, 18:48:14, time: 1.255, data_time: 0.008, memory: 18328, loss_visual: 0.3208, loss_lang: 0.5445, loss_fusion: 0.2972, loss: 1.1625, grad_norm: 1.2806
2021-12-11 19:58:21,949 - mmocr - INFO - Epoch [9][5000/10520] lr: 2.000e-04, eta: 1 day, 18:27:01, time: 1.252, data_time: 0.006, memory: 18328, loss_visual: 0.3208, loss_lang: 0.5435, loss_fusion: 0.2972, loss: 1.1615, grad_norm: 1.2743
2021-12-11 20:19:15,802 - mmocr - INFO - Epoch [9][6000/10520] lr: 2.000e-04, eta: 1 day, 18:05:51, time: 1.254, data_time: 0.006, memory: 18328, loss_visual: 0.3197, loss_lang: 0.5416, loss_fusion: 0.2961, loss: 1.1575, grad_norm: 1.2593
2021-12-11 20:40:09,084 - mmocr - INFO - Epoch [9][7000/10520] lr: 2.000e-04, eta: 1 day, 17:44:42, time: 1.253, data_time: 0.005, memory: 18328, loss_visual: 0.3189, loss_lang: 0.5405, loss_fusion: 0.2952, loss: 1.1546, grad_norm: 1.2380
2021-12-11 21:01:00,641 - mmocr - INFO - Epoch [9][8000/10520] lr: 2.000e-04, eta: 1 day, 17:23:30, time: 1.252, data_time: 0.005, memory: 18328, loss_visual: 0.3202, loss_lang: 0.5420, loss_fusion: 0.2965, loss: 1.1586, grad_norm: 1.2133
2021-12-11 21:21:50,147 - mmocr - INFO - Epoch [9][9000/10520] lr: 2.000e-04, eta: 1 day, 17:02:16, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3198, loss_lang: 0.5407, loss_fusion: 0.2960, loss: 1.1565, grad_norm: 1.2016
2021-12-11 21:42:44,231 - mmocr - INFO - Epoch [9][10000/10520] lr: 2.000e-04, eta: 1 day, 16:41:09, time: 1.254, data_time: 0.006, memory: 18328, loss_visual: 0.3179, loss_lang: 0.5382, loss_fusion: 0.2941, loss: 1.1502, grad_norm: 1.2452
2021-12-11 21:53:37,284 - mmocr - INFO - Saving checkpoint at 9 epochs
2021-12-11 22:03:45,368 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-11 22:03:45,432 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-11 22:03:45,442 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-11 22:03:45,456 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-11 22:03:45,486 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-11 22:03:45,496 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-11 22:03:45,501 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-11 22:03:45,502 - mmocr - INFO - Epoch(val) [9][959] 0_word_acc: 0.0753, 0_word_acc_ignore_case: 0.9497, 0_word_acc_ignore_case_symbol: 0.9497, 0_char_recall: 0.9852, 0_char_precision: 0.9837, 0_1-N.E.D: 0.9831, 1_word_acc: 0.9274, 1_word_acc_ignore_case: 0.9274, 1_word_acc_ignore_case_symbol: 0.9274, 1_char_recall: 0.9771, 1_char_precision: 0.9836, 1_1-N.E.D: 0.9713, 2_word_acc: 0.2887, 2_word_acc_ignore_case: 0.9448, 2_word_acc_ignore_case_symbol: 0.9448, 2_char_recall: 0.9874, 2_char_precision: 0.9856, 2_1-N.E.D: 0.9680, 3_word_acc: 0.1286, 3_word_acc_ignore_case: 0.7843, 3_word_acc_ignore_case_symbol: 0.8344, 3_char_recall: 0.9500, 3_char_precision: 0.9448, 3_1-N.E.D: 0.9314, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8822, 4_word_acc_ignore_case_symbol: 0.8822, 4_char_recall: 0.9490, 4_char_precision: 0.9622, 4_1-N.E.D: 0.9426, 5_word_acc: 0.1667, 5_word_acc_ignore_case: 0.8993, 5_word_acc_ignore_case_symbol: 0.9028, 5_char_recall: 0.9555, 5_char_precision: 0.9573, 5_1-N.E.D: 0.9653
2021-12-11 22:35:38,895 - mmocr - INFO - Epoch [10][1000/10520] lr: 2.000e-04, eta: 1 day, 16:09:11, time: 1.913, data_time: 0.664, memory: 18328, loss_visual: 0.3173, loss_lang: 0.5369, loss_fusion: 0.2935, loss: 1.1477, grad_norm: 1.2228
2021-12-11 22:56:30,206 - mmocr - INFO - Epoch [10][2000/10520] lr: 2.000e-04, eta: 1 day, 15:48:01, time: 1.251, data_time: 0.006, memory: 18328, loss_visual: 0.3171, loss_lang: 0.5362, loss_fusion: 0.2933, loss: 1.1465, grad_norm: 1.2164
2021-12-11 23:17:19,201 - mmocr - INFO - Epoch [10][3000/10520] lr: 2.000e-04, eta: 1 day, 15:26:48, time: 1.249, data_time: 0.005, memory: 18328, loss_visual: 0.3164, loss_lang: 0.5360, loss_fusion: 0.2926, loss: 1.1450, grad_norm: 1.2261
2021-12-11 23:38:12,942 - mmocr - INFO - Epoch [10][4000/10520] lr: 2.000e-04, eta: 1 day, 15:05:41, time: 1.254, data_time: 0.006, memory: 18328, loss_visual: 0.3160, loss_lang: 0.5353, loss_fusion: 0.2922, loss: 1.1436, grad_norm: 1.2055
2021-12-11 23:59:04,675 - mmocr - INFO - Epoch [10][5000/10520] lr: 2.000e-04, eta: 1 day, 14:44:33, time: 1.252, data_time: 0.005, memory: 18328, loss_visual: 0.3151, loss_lang: 0.5338, loss_fusion: 0.2913, loss: 1.1402, grad_norm: 1.2048
2021-12-12 00:19:54,823 - mmocr - INFO - Epoch [10][6000/10520] lr: 2.000e-04, eta: 1 day, 14:23:23, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.3156, loss_lang: 0.5341, loss_fusion: 0.2916, loss: 1.1413, grad_norm: 1.2105
2021-12-12 00:40:48,246 - mmocr - INFO - Epoch [10][7000/10520] lr: 2.000e-04, eta: 1 day, 14:02:16, time: 1.253, data_time: 0.006, memory: 18328, loss_visual: 0.3168, loss_lang: 0.5353, loss_fusion: 0.2929, loss: 1.1450, grad_norm: 1.2120
2021-12-12 01:01:39,781 - mmocr - INFO - Epoch [10][8000/10520] lr: 2.000e-04, eta: 1 day, 13:41:09, time: 1.252, data_time: 0.006, memory: 18328, loss_visual: 0.3138, loss_lang: 0.5311, loss_fusion: 0.2899, loss: 1.1349, grad_norm: 1.1898
2021-12-12 01:22:32,248 - mmocr - INFO - Epoch [10][9000/10520] lr: 2.000e-04, eta: 1 day, 13:20:02, time: 1.252, data_time: 0.007, memory: 18328, loss_visual: 0.3153, loss_lang: 0.5328, loss_fusion: 0.2914, loss: 1.1395, grad_norm: 1.1782
2021-12-12 01:43:20,578 - mmocr - INFO - Epoch [10][10000/10520] lr: 2.000e-04, eta: 1 day, 12:58:51, time: 1.248, data_time: 0.005, memory: 18328, loss_visual: 0.3144, loss_lang: 0.5316, loss_fusion: 0.2905, loss: 1.1365, grad_norm: 1.1802
2021-12-12 01:54:21,812 - mmocr - INFO - Saving checkpoint at 10 epochs
2021-12-12 02:04:11,310 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-12 02:04:11,377 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-12 02:04:11,387 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-12 02:04:11,401 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-12 02:04:11,431 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-12 02:04:11,441 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-12 02:04:11,446 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-12 02:04:11,447 - mmocr - INFO - Epoch(val) [10][959] 0_word_acc: 0.0753, 0_word_acc_ignore_case: 0.9480, 0_word_acc_ignore_case_symbol: 0.9480, 0_char_recall: 0.9859, 0_char_precision: 0.9865, 0_1-N.E.D: 0.9846, 1_word_acc: 0.9351, 1_word_acc_ignore_case: 0.9351, 1_word_acc_ignore_case_symbol: 0.9351, 1_char_recall: 0.9805, 1_char_precision: 0.9870, 1_1-N.E.D: 0.9764, 2_word_acc: 0.2867, 2_word_acc_ignore_case: 0.9498, 2_word_acc_ignore_case_symbol: 0.9498, 2_char_recall: 0.9892, 2_char_precision: 0.9878, 2_1-N.E.D: 0.9701, 3_word_acc: 0.1266, 3_word_acc_ignore_case: 0.7910, 3_word_acc_ignore_case_symbol: 0.8411, 3_char_recall: 0.9521, 3_char_precision: 0.9434, 3_1-N.E.D: 0.9332, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8822, 4_word_acc_ignore_case_symbol: 0.8822, 4_char_recall: 0.9469, 4_char_precision: 0.9635, 4_1-N.E.D: 0.9431, 5_word_acc: 0.1528, 5_word_acc_ignore_case: 0.8646, 5_word_acc_ignore_case_symbol: 0.8681, 5_char_recall: 0.9473, 5_char_precision: 0.9569, 5_1-N.E.D: 0.9528
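Editor's note: from here the step policy takes effect. After the linear warmup across epoch 1 (lr climbing from 1.917e-05 to the base 2.000e-04), the rate stays flat through epoch 10 and then drops tenfold, so epoch 11 onwards trains at 2.000e-05; the losses respond immediately (loss 1.1365 at the end of epoch 10 falls to 1.0697 within the first 1000 iterations of epoch 11). A sketch of the schedule, using MMCV's linear-warmup expression as an assumption:

```python
# Sketch of the LR schedule observed in the log: one epoch of linear warmup
# to base_lr, constant afterwards, then a 10x step drop after epoch 10.
# The warmup expression follows MMCV's linear rule
#   lr = base_lr * (1 - (1 - progress) * (1 - warmup_ratio))
# which is an assumption here, not read from the MMCV source.
def lr_at(epoch, it, base_lr=2e-4, warmup_ratio=1e-3,
          iters_per_epoch=10520, step_epoch=10, gamma=0.1):
    if epoch == 1:  # warmup spans exactly one epoch (warmup_by_epoch=True)
        progress = it / iters_per_epoch
        return base_lr * (1 - (1 - progress) * (1 - warmup_ratio))
    return base_lr * (gamma if epoch > step_epoch else 1.0)

print(lr_at(1, 1000))   # ~1.92e-05, close to the logged "lr: 1.917e-05"
print(lr_at(11, 1000))  # 2e-05, matching epoch 11 onwards
```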
2021-12-12 02:36:10,725 - mmocr - INFO - Epoch [11][1000/10520] lr: 2.000e-05, eta: 1 day, 12:27:02, time: 1.919, data_time: 0.670, memory: 18328, loss_visual: 0.2943, loss_lang: 0.5044, loss_fusion: 0.2709, loss: 1.0697, grad_norm: 1.0432
2021-12-12 02:57:00,201 - mmocr - INFO - Epoch [11][2000/10520] lr: 2.000e-05, eta: 1 day, 12:05:53, time: 1.249, data_time: 0.005, memory: 18328, loss_visual: 0.2874, loss_lang: 0.4950, loss_fusion: 0.2643, loss: 1.0467, grad_norm: 1.0615
2021-12-12 03:17:50,573 - mmocr - INFO - Epoch [11][3000/10520] lr: 2.000e-05, eta: 1 day, 11:44:46, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.2868, loss_lang: 0.4935, loss_fusion: 0.2640, loss: 1.0443, grad_norm: 1.0397
2021-12-12 03:38:42,947 - mmocr - INFO - Epoch [11][4000/10520] lr: 2.000e-05, eta: 1 day, 11:23:40, time: 1.252, data_time: 0.005, memory: 18328, loss_visual: 0.2843, loss_lang: 0.4907, loss_fusion: 0.2617, loss: 1.0367, grad_norm: 1.0453
2021-12-12 03:59:38,422 - mmocr - INFO - Epoch [11][5000/10520] lr: 2.000e-05, eta: 1 day, 11:02:38, time: 1.255, data_time: 0.011, memory: 18328, loss_visual: 0.2823, loss_lang: 0.4890, loss_fusion: 0.2598, loss: 1.0311, grad_norm: 1.0900
2021-12-12 04:20:30,276 - mmocr - INFO - Epoch [11][6000/10520] lr: 2.000e-05, eta: 1 day, 10:41:32, time: 1.252, data_time: 0.006, memory: 18328, loss_visual: 0.2805, loss_lang: 0.4863, loss_fusion: 0.2581, loss: 1.0250, grad_norm: 1.0772
2021-12-12 04:41:21,160 - mmocr - INFO - Epoch [11][7000/10520] lr: 2.000e-05, eta: 1 day, 10:20:26, time: 1.251, data_time: 0.006, memory: 18328, loss_visual: 0.2812, loss_lang: 0.4868, loss_fusion: 0.2588, loss: 1.0267, grad_norm: 1.0744
2021-12-12 05:02:15,705 - mmocr - INFO - Epoch [11][8000/10520] lr: 2.000e-05, eta: 1 day, 9:59:24, time: 1.255, data_time: 0.007, memory: 18328, loss_visual: 0.2801, loss_lang: 0.4854, loss_fusion: 0.2578, loss: 1.0233, grad_norm: 1.0930
2021-12-12 05:23:05,571 - mmocr - INFO - Epoch [11][9000/10520] lr: 2.000e-05, eta: 1 day, 9:38:17, time: 1.250, data_time: 0.005, memory: 18328, loss_visual: 0.2793, loss_lang: 0.4840, loss_fusion: 0.2570, loss: 1.0204, grad_norm: 1.0830
2021-12-12 05:43:58,294 - mmocr - INFO - Epoch [11][10000/10520] lr: 2.000e-05, eta: 1 day, 9:17:13, time: 1.253, data_time: 0.007, memory: 18328, loss_visual: 0.2782, loss_lang: 0.4831, loss_fusion: 0.2559, loss: 1.0172, grad_norm: 1.0910
2021-12-12 05:54:52,340 - mmocr - INFO - Saving checkpoint at 11 epochs
2021-12-12 06:04:21,785 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-12 06:04:21,887 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-12 06:04:21,897 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-12 06:04:21,911 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-12 06:04:21,942 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-12 06:04:21,952 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-12 06:04:21,958 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-12 06:04:21,958 - mmocr - INFO - Epoch(val) [11][959] 0_word_acc: 0.0767, 0_word_acc_ignore_case: 0.9547, 0_word_acc_ignore_case_symbol: 0.9547, 0_char_recall: 0.9887, 0_char_precision: 0.9865, 0_1-N.E.D: 0.9860, 1_word_acc: 0.9444, 1_word_acc_ignore_case: 0.9444, 1_word_acc_ignore_case_symbol: 0.9444, 1_char_recall: 0.9831, 1_char_precision: 0.9891, 1_1-N.E.D: 0.9785, 2_word_acc: 0.2916, 2_word_acc_ignore_case: 0.9507, 2_word_acc_ignore_case_symbol: 0.9507, 2_char_recall: 0.9889, 2_char_precision: 0.9883, 2_1-N.E.D: 0.9706, 3_word_acc: 0.1286, 3_word_acc_ignore_case: 0.8012, 3_word_acc_ignore_case_symbol: 0.8522, 3_char_recall: 0.9581, 3_char_precision: 0.9472, 3_1-N.E.D: 0.9399, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.9008, 4_word_acc_ignore_case_symbol: 0.9008, 4_char_recall: 0.9519, 4_char_precision: 0.9657, 4_1-N.E.D: 0.9481, 5_word_acc: 0.1597, 5_word_acc_ignore_case: 0.8958, 5_word_acc_ignore_case_symbol: 0.8993, 5_char_recall: 0.9561, 5_char_precision: 0.9567, 5_1-N.E.D: 0.9635
2021-12-12 06:35:40,148 - mmocr - INFO - Epoch [12][1000/10520] lr: 2.000e-05, eta: 1 day, 8:44:52, time: 1.878, data_time: 0.635, memory: 18328, loss_visual: 0.2768, loss_lang: 0.4812, loss_fusion: 0.2546, loss: 1.0125, grad_norm: 1.0893
2021-12-12 06:56:25,807 - mmocr - INFO - Epoch [12][2000/10520] lr: 2.000e-05, eta: 1 day, 8:23:43, time: 1.246, data_time: 0.005, memory: 18328, loss_visual: 0.2773, loss_lang: 0.4809, loss_fusion: 0.2551, loss: 1.0134, grad_norm: 1.0956
2021-12-12 07:17:15,069 - mmocr - INFO - Epoch [12][3000/10520] lr: 2.000e-05, eta: 1 day, 8:02:38, time: 1.249, data_time: 0.006, memory: 18328, loss_visual: 0.2768, loss_lang: 0.4807, loss_fusion: 0.2547, loss: 1.0122, grad_norm: 1.1073
2021-12-12 07:38:04,913 - mmocr - INFO - Epoch [12][4000/10520] lr: 2.000e-05, eta: 1 day, 7:41:33, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.2762, loss_lang: 0.4799, loss_fusion: 0.2540, loss: 1.0101, grad_norm: 1.1065
2021-12-12 08:00:53,279 - mmocr - INFO - Epoch [12][5000/10520] lr: 2.000e-05, eta: 1 day, 7:21:57, time: 1.368, data_time: 0.006, memory: 18328, loss_visual: 0.2748, loss_lang: 0.4786, loss_fusion: 0.2528, loss: 1.0062, grad_norm: 1.1027
2021-12-12 08:21:43,384 - mmocr - INFO - Epoch [12][6000/10520] lr: 2.000e-05, eta: 1 day, 7:00:51, time: 1.250, data_time: 0.006, memory: 18328, loss_visual: 0.2747, loss_lang: 0.4782, loss_fusion: 0.2527, loss: 1.0056, grad_norm: 1.0991
2021-12-12 08:42:35,950 - mmocr - INFO - Epoch [12][7000/10520] lr: 2.000e-05, eta: 1 day, 6:39:47, time: 1.253, data_time: 0.005, memory: 18328, loss_visual: 0.2755, loss_lang: 0.4791, loss_fusion: 0.2536, loss: 1.0082, grad_norm: 1.1179
2021-12-12 09:03:26,581 - mmocr - INFO - Epoch [12][8000/10520] lr: 2.000e-05, eta: 1 day, 6:18:42, time: 1.251, data_time: 0.007, memory: 18328, loss_visual: 0.2740, loss_lang: 0.4774, loss_fusion: 0.2520, loss: 1.0034, grad_norm: 1.1326
2021-12-12 09:24:13,648 - mmocr - INFO - Epoch [12][9000/10520] lr: 2.000e-05, eta: 1 day, 5:57:35, time: 1.247, data_time: 0.005, memory: 18328, loss_visual: 0.2741, loss_lang: 0.4778, loss_fusion: 0.2522, loss: 1.0041, grad_norm: 1.1075
2021-12-12 09:45:07,576 - mmocr - INFO - Epoch [12][10000/10520] lr: 2.000e-05, eta: 1 day, 5:36:33, time: 1.254, data_time: 0.009, memory: 18328, loss_visual: 0.2729, loss_lang: 0.4755, loss_fusion: 0.2510, loss: 0.9994, grad_norm: 1.1410
2021-12-12 09:56:01,375 - mmocr - INFO - Saving checkpoint at 12 epochs
2021-12-12 10:05:21,793 - mmocr - INFO - Evaluating data/mixture/testset/IIIT5K/label.txt with 3000 images now
2021-12-12 10:05:21,867 - mmocr - INFO - Evaluating data/mixture/testset/svt/test_list.txt with 647 images now
2021-12-12 10:05:21,877 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2013/1015_test_label.txt with 1015 images now
2021-12-12 10:05:21,895 - mmocr - INFO - Evaluating data/mixture/testset/icdar_2015/test_label.txt with 2077 images now
2021-12-12 10:05:21,926 - mmocr - INFO - Evaluating data/mixture/testset/svtp/imagelist.txt with 645 images now
2021-12-12 10:05:21,936 - mmocr - INFO - Evaluating data/mixture/testset/ct80/imagelist.txt with 288 images now
2021-12-12 10:05:21,942 - mmocr - INFO - Exp name: abinet_train_reimplement_augv3.py
2021-12-12 10:05:21,942 - mmocr - INFO - Epoch(val) [12][959] 0_word_acc: 0.0767, 0_word_acc_ignore_case: 0.9563, 0_word_acc_ignore_case_symbol: 0.9563, 0_char_recall: 0.9891, 0_char_precision: 0.9862, 0_1-N.E.D: 0.9872, 1_word_acc: 0.9428, 1_word_acc_ignore_case: 0.9428, 1_word_acc_ignore_case_symbol: 0.9428, 1_char_recall: 0.9829, 1_char_precision: 0.9889, 1_1-N.E.D: 0.9786, 2_word_acc: 0.2916, 2_word_acc_ignore_case: 0.9468, 2_word_acc_ignore_case_symbol: 0.9468, 2_char_recall: 0.9879, 2_char_precision: 0.9868, 2_1-N.E.D: 0.9692, 3_word_acc: 0.1276, 3_word_acc_ignore_case: 0.7959, 3_word_acc_ignore_case_symbol: 0.8459, 3_char_recall: 0.9562, 3_char_precision: 0.9451, 3_1-N.E.D: 0.9375, 4_word_acc: 0.0000, 4_word_acc_ignore_case: 0.8961, 4_word_acc_ignore_case_symbol: 0.8961, 4_char_recall: 0.9548, 4_char_precision: 0.9653, 4_1-N.E.D: 0.9506, 5_word_acc: 0.1528, 5_word_acc_ignore_case: 0.8750, 5_word_acc_ignore_case_symbol: 0.8785, 5_char_recall: 0.9505, 5_char_precision: 0.9469, 5_1-N.E.D: 0.9519
2021-12-12 10:36:42,241 - mmocr - INFO - Epoch [13][1000/10520] lr: 2.000e-05, eta: 1 day, 5:04:18, time: 1.880, data_time: 0.632, memory: 18328, loss_visual: 0.2724, loss_lang: 0.4751, loss_fusion: 0.2505, loss: 0.9980, grad_norm: 1.1207
2021-12-12 10:57:34,553 - mmocr - INFO - Epoch [13][2000/10520] lr: 2.000e-05, eta: 1 day, 4:43:16, time: 1.252, data_time: 0.006, memory: 18328, loss_visual: 0.2717, loss_lang: 0.4749, loss_fusion: 0.2498, loss: 0.9964, grad_norm: 1.1321
2021-12-12 11:18:33,231 - mmocr - INFO - Epoch [13][3000/10520] lr: 2.000e-05, eta: 1 day, 4:22:17, time: 1.259, data_time: 0.006, memory: 18328, loss_visual: 0.2725, loss_lang: 0.4754, loss_fusion: 0.2508, loss: 0.9986, grad_norm: 1.1324
2021-12-12 11:39:28,107 - mmocr - INFO - Epoch [13][4000/10520] lr: 2.000e-05, eta: 1 day, 4:01:17, time: 1.255, data_time: 0.007, memory: 18328, loss_visual: 0.2716, loss_lang: 0.4743, loss_fusion: 0.2498, loss: 0.9957, grad_norm: 1.1215
2021-12-12 12:00:22,425 - mmocr - INFO - Epoch [13][5000/10520] lr: 2.000e-05, eta: 1 day, 3:40:16, time: 1.254, data_time: 0.005, memory: 18328, loss_visual: 0.2715, loss_lang: 0.4745, loss_fusion: 0.2499, loss: 0.9959, grad_norm: 1.1363
2021-12-12 12:21:14,435 - mmocr - INFO - Epoch [13][6000/10520] lr: 2.000e-05, eta: 1 day, 3:19:14, time: 1.252, data_time: 0.006, memory: 18328, loss_visual: 0.2718, loss_lang: 0.4743, loss_fusion: 0.2499, loss: 0.9960, grad_norm: 1.1318
2021-12-12 12:42:05,072 - mmocr - INFO - Epoch [13][7000/10520] lr: 2.000e-05, eta: 1 day, 2:58:11, time: 1.251, data_time: 0.006, memory: 18328, loss_visual: 0.2717, loss_lang: 0.4742, loss_fusion: 0.2498, loss: 0.9956, grad_norm: 1.1259
2021-12-12 13:02:55,404 - mmocr - INFO - Epoch [13][8000/10520] lr: 2.000e-05, eta: 1 day, 2:37:08, time: 1.250, data_time: 0.007, memory: 18328, loss_visual: 0.2705, loss_lang: 0.4734, loss_fusion: 0.2487, loss: 0.9926, grad_norm: 1.1363
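Editor's note: the captured log ends during epoch 13, but the per-epoch checkpoints already allow a comparison. One common single-number summary is the dataset-size-weighted mean of word_acc_ignore_case_symbol over the six benchmarks; for the epoch-12 checkpoint that comes to roughly 0.916. A sketch of that aggregation (sizes from the Evaluating lines; the weighting itself is my convention, not something the logger prints):

```python
# Size-weighted average recognition accuracy over the six benchmarks,
# using the epoch-12 word_acc_ignore_case_symbol values from the log above.
sizes = [3000, 647, 1015, 2077, 645, 288]          # IIIT5K, SVT, IC13, IC15, SVTP, CT80
acc = [0.9563, 0.9428, 0.9468, 0.8459, 0.8961, 0.8785]
weighted = sum(n * a for n, a in zip(sizes, acc)) / sum(sizes)
print(f"{weighted:.4f}")  # ~0.9160
```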