
0. Project Introduction

Background:

With the rapid growth of the internet, web-scraping technology has become increasingly sophisticated. To keep bots from illegally harvesting website data, CAPTCHAs have long been an indispensable part of online services. However, as artificial intelligence advances, CAPTCHAs are becoming easier for machines to recognize and break, so protecting them has become more important than ever.

Gradient attacks are a widely used adversarial-attack technique. For CAPTCHA protection, gradient attacks can generate adversarial samples that substantially increase the complexity and difficulty of machine recognition, making it hard for bots to pass the verification step and carry out illegal access or network attacks.

Technical principle:

PGD is a common gradient-attack method that produces adversarial samples through multiple iterations. The basic idea is to use the network's backpropagation to obtain gradient information with respect to the input, then apply small perturbations to the input based on that gradient so as to change the classification result.
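Concretely, a single PGD step in its standard form updates the adversarial input $x_t$ with step size $\alpha$ and perturbation budget $\epsilon$:

$$x_{t+1} = \mathrm{clip}_{x,\,\epsilon}\left(x_t + \alpha \cdot \mathrm{sign}\big(\nabla_{x} L(f(x_t), y)\big)\right)$$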

This project uses a PGD attack to add noise perturbations to normal CAPTCHA images. By iterating the perturbation, we obtain adversarial samples that are more complex and harder to recognize, which improves the CAPTCHA's protective effect. Both the noise strength and the number of iterations are adjustable, making it easy to reach the desired effect.

  • The figure below illustrates how attack samples are generated: a specific noise perturbation is added to the original image so that, without affecting human readability, the model misrecognizes the text, which serves as an anti-scraping measure:

Project contents:

  1. Training the CAPTCHA recognition model

  2. Adversarial sample generation and testing (PGD)

  • For the recognition model, defenses include adversarial training, robust model architectures, and destroying the structure of adversarial perturbations. This project uses adversarial training: adversarial samples are injected during training, and by continually learning their features the model becomes more robust.

  • Although the trained model reaches high accuracy on the test set, its accuracy still drops sharply when tested with stronger attack samples.

1. CAPTCHA Recognition

Because the CAPTCHAs in this project are rather unusual and open-source OCR models recognize them poorly, we train our own model.

Testing the open-source project ddddocr on this project's test set gives accuracies of 0.25 and 0.32 (its two models, case-insensitive), which is quite low.
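For reference, here is a hedged sketch of how such a baseline test can be run with ddddocr (its `classification` API takes raw image bytes; the exact test script is not part of this project, so details here are assumptions):

# Hedged baseline sketch: score ddddocr on the labeled test images,
# where each image's file name is its label (compared case-insensitively).
import os
import ddddocr

ocr = ddddocr.DdddOcr()  # or DdddOcr(beta=True) for the second model
test_dir = 'captcha_img/test'
total = correct = 0
for name in os.listdir(test_dir):
    if not name.endswith('.png'):
        continue
    with open(os.path.join(test_dir, name), 'rb') as f:
        pred = ocr.classification(f.read())
    total += 1
    correct += int(pred.lower() == os.path.splitext(name)[0].lower())
print(f'ddddocr accuracy: {correct / total:.2f}')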

1.1 Target CAPTCHA

The CAPTCHAs look like this:


The characters in these CAPTCHAs are spaced very tightly and even stick together, so general-purpose OCR does not perform well on them.

I collected and labeled 100 CAPTCHAs as a test set: captcha_img/00_data.zip (each image's file name is its label).

The training and validation sets are generated with code. The model uses a CRNN architecture, which allows convenient end-to-end training.

# Unzip the test set
!unzip -q captcha_img/00_data.zip -d captcha_img/test

1.2 Dataset Construction

For end-to-end training, we directly generate labeled CAPTCHA string images to build the training set.

Looking at the target CAPTCHA, the font is unusual and the characters clearly stick together.

The simplest approach is to find an existing font close to the CAPTCHA's and use it as the raw material.

However, I could not find a font close enough to this project's CAPTCHA, so I used the test images as material and built a matching custom font myself. This is fairly time-consuming, but the project only needs the 26 uppercase letters, so the time cost was acceptable. The workflow follows the FontCreator tutorials available online. The custom font I made is shared in the project as create_captcha/captcha.ttf.

With the font ready, things become much easier. First, observe the characteristics of the target CAPTCHA:


  1. A CAPTCHA contains 6 or 7 characters

  2. The characters vary in size

  3. The spacing between characters varies

  4. Each character has a different rotation angle

The code below implements these four characteristics. The main idea is to build each single character on its own image, giving each a different size, rotation angle, and starting coordinate, and finally merge the single-character images into one string image.



The CAPTCHA generator is in: create_captcha/generate_img.py

The core idea of the generation code: for a CAPTCHA of L characters, draw each character at its position on one of L canvases, apply rotation and other transforms, and then stack the L images to obtain the final CAPTCHA image. See the source-code comments for full details, which are not repeated here.
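For intuition, here is a minimal sketch of the compositing idea using Pillow and the custom font. It is illustrative only: the parameter ranges are assumptions, and the real, tuned logic lives in create_captcha/generate_img.py.

# Minimal per-character compositing sketch (illustrative assumptions;
# see generate_img.py for the real implementation, which also randomizes color)
import random
from PIL import Image, ImageDraw, ImageFont

def make_captcha(text, size=(120, 50), font_path='create_captcha/captcha.ttf'):
    canvas = Image.new('RGB', size, 'white')
    x = random.randint(2, 8)                                # random start position
    for ch in text:
        font = ImageFont.truetype(font_path, random.randint(28, 36))  # varied size
        layer = Image.new('RGBA', size, (0, 0, 0, 0))       # one canvas per character
        ImageDraw.Draw(layer).text((x, random.randint(2, 10)), ch,
                                   font=font, fill=(0, 0, 0, 255))
        layer = layer.rotate(random.uniform(-25, 25),
                             center=(x + 14, 25))           # varied rotation
        canvas.paste(layer, (0, 0), layer)                  # stack the layers
        x += random.randint(12, 16)                         # uneven, possibly touching
    return canvas

make_captcha('ABCDEF').save('demo.png')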

1.2.1 Dataset Distribution Statistics

In earlier experiments on generated data, test accuracy turned out to be strongly affected by differences between the training and test sets in the horizontal position of the characters within the image, so when building the dataset we try to make the training distribution cover the test distribution.

Here we collect the string's start position, end position, and length within the image, and use them to compare the training and test sets.

The generation parameters in this project have already been tuned; running the code below shows that the generated CAPTCHAs match the test set fairly well.

Run the code that follows to inspect the distributions; the before/after comparison is shown below:

(Figures: start-x / end-x / width distributions, before ==> after parameter tuning)

*Test-set accuracy was 10% before tuning and 75% after tuning (without adversarial training).

# First generate a small batch of images to inspect the distribution of image features
%cd 
%cd create_captcha
!python3 generate_img.py --total_number 100 --save_dir '/home/aistudio/test_captcha' --img_wh 120 50
%cd 
"""外接矩形位置统计"""
# 先在test_captcha文件夹生成少量样本,看下字符在图片中的位置,长度分布是否与测试集相似
%matplotlib inline
import cv2
import numpy as np
import os
import matplotlib.pyplot as plt
import json

start_x = []
end_x = []
width = []

def parse_a_img(img_file):
    """Record the bounding box of the character region in one image."""
    img = cv2.imread(img_file, 0)
    img = 255 - img  # invert so the characters become foreground

    x, y, w, h = cv2.boundingRect(img)
    start_x.append(x)
    end_x.append(x + w)
    width.append(w)

path_list = ['test_captcha', 'captcha_img/test']  # image folders to compare
for path in path_list:
    for filename in os.listdir(path):
        if '.png' not in filename:
            continue
        img_path = os.path.join(path, filename)
        parse_a_img(img_path)

    plt.subplot(131)
    plt.title("start-x")
    plt.hist(start_x, alpha=0.5, label=f"{path}")
    plt.subplot(132)
    plt.title("end-x")
    plt.hist(end_x, alpha=0.5, label=f"{path}")
    plt.subplot(133)
    plt.title("width")
    plt.hist(width, alpha=0.5, label=f"{path}")
    start_x = []
    end_x = []
    width = []
plt.legend()
plt.show()

1.2.2 CAPTCHA Generation

Having confirmed that generation works as intended, we now generate the real training data.

To improve model robustness, the generated training data uses random character colors rather than the colors seen in the test set.



# Generate 30,000 CAPTCHA images
%cd 
%cd create_captcha
!python3 generate_img.py --total_number 30000 --save_dir '/home/aistudio/train_data/img' --img_wh 120 50
%cd
# Count the files; duplicate names may leave the dataset a few images short
%cd
!cd train_data/img && ls -l | grep "^-" | wc -l

1.2.3 Dataset Split

Split the generated images into a training set and a validation set.

# Split the dataset 9:1
import os
import random

image_dir = "train_data/img" 
train_file = 'train_data/train.txt'
eval_file = 'train_data/valid.txt'

for file in [train_file, eval_file]:
    if os.path.exists(file):
        os.remove(file)

dataset_list = os.listdir(image_dir)

train_num = 0
valid_num = 0
for img_name in dataset_list:
    if '.png' not in img_name:
        print(img_name)
        continue
    probo = random.randint(1, 100)
    if(probo <= 90): # train
        with open(train_file, 'a') as f_train:
            f_train.write(img_name+'\n')
        train_num+=1
    else: #valid
        with open(eval_file, 'a') as f_eval:
            f_eval.write(img_name+'\n')
        valid_num+=1
print(f'train: {train_num}, val:{valid_num}')
train: 27002, val:2997

1.3 Model

The model is a CRNN; the RNN part uses a bidirectional LSTM (BLSTM).

The CAPTCHA images are 120x50 (width x height). Since the height 50 is not a power of 2, preprocessing rescales the height to a power of 2 for easier CNN handling.

Preprocessing also converts the images to grayscale, so the first convolution layer has 1 input channel.

In the original open-source CRNN, input images are 32 pixels high; the height dimension is reduced to 1 and the sequence length is 1/4 of the image width. To preserve more image detail, here the network input height is 64, the height is likewise reduced to 1, and the CNN part is built by modifying resnet18.

The final scheme rescales images to height 64, giving a final size of (width, 64), so the network input is (batch, 1, 64, width). This is fed into the resnet18-based CNN, which outputs (batch, channel, 1, T); the middle dimension of size 1 is squeezed out, and after the LSTM and a linear layer the output has shape (T, batch, 27).
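A minimal preprocessing sketch matching those shapes (one assumption: pixels are normalized to [-1, 1], which is consistent with the clip range used later by the attack code):

# Hedged preprocessing sketch: grayscale, height rescaled to 64, normalized
import cv2

def preprocess(img_path, target_h=64):
    img = cv2.imread(img_path, 0)                        # grayscale, e.g. (50, 120)
    new_w = int(img.shape[1] * target_h / img.shape[0])
    img = cv2.resize(img, (new_w, target_h))             # (64, 153) for a 120x50 input
    img = img.astype('float32') / 127.5 - 1.0            # map [0, 255] -> [-1, 1]
    return img[None, None]                               # (batch=1, channel=1, 64, width)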

Part of the modified model code is shown below. The most important changes: the width-direction stride of the last two layers is set to 1 to preserve some width (the sequence length), and an LSTM is added that treats the feature-map width as the time dimension.

class Model(nn.Layer):
    """Modified from resnet18"""
    def __init__(self, block=BasicBlock, width=64, groups=1, vocabulary=27):
        ...
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=(2, 1))  # stride changed to (2, 1)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=(2, 1))  # stride changed to (2, 1)
        # shrink h to 1, then add the LSTM
        self.conv2 = nn.Conv2D(in_channels=512, out_channels=512, kernel_size=2, stride=1, padding=0)
        self.bn2 = self._norm_layer(512)
        self.lstm1 = nn.LSTM(input_size=512, hidden_size=256, direction='bidirect')
        # linear layer outputs 26 character classes + 1 blank
        self.fc1 = nn.Linear(in_features=512, out_features=vocabulary)
     ...
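For orientation, a hedged sketch of the forward pass those shapes imply. `backbone` is a stand-in name for the conv stack above, and the LSTM is assumed to consume time-major input, matching the summary below; the real forward lives in model.py.

# Hedged forward-pass sketch (shape bookkeeping only; real code is in model.py)
def forward(self, x):                 # x: (batch, 1, 64, width)
    x = self.backbone(x)              # conv stack + conv2/bn2 -> (batch, 512, 1, T)
    x = x.squeeze(axis=2)             # (batch, 512, T)
    x = x.transpose([2, 0, 1])        # (T, batch, 512): width is the time axis
    x, _ = self.lstm1(x)              # BLSTM -> (T, batch, 512)
    return self.fc1(x)                # (T, batch, 27), consumed by CTC loss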

A quick model I/O test: with an image rescaled to height 64 and width 153, the output sequence length T is 17:

# Model input/output test; image height fixed at 64
from model import Model
import paddle

net = Model(vocabulary=27)
paddle.summary(net, (8,1,64,153))

data = paddle.randn((8,1,64,153), dtype='float32')
result = net(data)
print(result.shape)
---------------------------------------------------------------------------------------------------
 Layer (type)       Input Shape                      Output Shape                     Param #    
===================================================================================================
   Conv2D-22     [[8, 1, 64, 153]]                 [8, 64, 29, 74]                     3,136     
BatchNorm2D-22   [[8, 64, 29, 74]]                 [8, 64, 29, 74]                      256      
    ReLU-10      [[8, 512, 1, 17]]                 [8, 512, 1, 17]                       0       
  MaxPool2D-2    [[8, 64, 29, 74]]                 [8, 64, 14, 36]                       0       
   Conv2D-23     [[8, 64, 14, 36]]                 [8, 64, 14, 36]                    36,864     
BatchNorm2D-23   [[8, 64, 14, 36]]                 [8, 64, 14, 36]                      256      
    ReLU-11      [[8, 64, 14, 36]]                 [8, 64, 14, 36]                       0       
   Conv2D-24     [[8, 64, 14, 36]]                 [8, 64, 14, 36]                    36,864     
BatchNorm2D-24   [[8, 64, 14, 36]]                 [8, 64, 14, 36]                      256      
 BasicBlock-9    [[8, 64, 14, 36]]                 [8, 64, 14, 36]                       0       
   Conv2D-25     [[8, 64, 14, 36]]                 [8, 64, 14, 36]                    36,864     
BatchNorm2D-25   [[8, 64, 14, 36]]                 [8, 64, 14, 36]                      256      
    ReLU-12      [[8, 64, 14, 36]]                 [8, 64, 14, 36]                       0       
   Conv2D-26     [[8, 64, 14, 36]]                 [8, 64, 14, 36]                    36,864     
BatchNorm2D-26   [[8, 64, 14, 36]]                 [8, 64, 14, 36]                      256      
 BasicBlock-10   [[8, 64, 14, 36]]                 [8, 64, 14, 36]                       0       
   Conv2D-28     [[8, 64, 14, 36]]                 [8, 128, 7, 18]                    73,728     
BatchNorm2D-28   [[8, 128, 7, 18]]                 [8, 128, 7, 18]                      512      
    ReLU-13      [[8, 128, 7, 18]]                 [8, 128, 7, 18]                       0       
   Conv2D-29     [[8, 128, 7, 18]]                 [8, 128, 7, 18]                    147,456    
BatchNorm2D-29   [[8, 128, 7, 18]]                 [8, 128, 7, 18]                      512      
   Conv2D-27     [[8, 64, 14, 36]]                 [8, 128, 7, 18]                     8,192     
BatchNorm2D-27   [[8, 128, 7, 18]]                 [8, 128, 7, 18]                      512      
 BasicBlock-11   [[8, 64, 14, 36]]                 [8, 128, 7, 18]                       0       
   Conv2D-30     [[8, 128, 7, 18]]                 [8, 128, 7, 18]                    147,456    
BatchNorm2D-30   [[8, 128, 7, 18]]                 [8, 128, 7, 18]                      512      
    ReLU-14      [[8, 128, 7, 18]]                 [8, 128, 7, 18]                       0       
   Conv2D-31     [[8, 128, 7, 18]]                 [8, 128, 7, 18]                    147,456    
BatchNorm2D-31   [[8, 128, 7, 18]]                 [8, 128, 7, 18]                      512      
 BasicBlock-12   [[8, 128, 7, 18]]                 [8, 128, 7, 18]                       0       
   Conv2D-33     [[8, 128, 7, 18]]                 [8, 256, 4, 18]                    294,912    
BatchNorm2D-33   [[8, 256, 4, 18]]                 [8, 256, 4, 18]                     1,024     
    ReLU-15      [[8, 256, 4, 18]]                 [8, 256, 4, 18]                       0       
   Conv2D-34     [[8, 256, 4, 18]]                 [8, 256, 4, 18]                    589,824    
BatchNorm2D-34   [[8, 256, 4, 18]]                 [8, 256, 4, 18]                     1,024     
   Conv2D-32     [[8, 128, 7, 18]]                 [8, 256, 4, 18]                    32,768     
BatchNorm2D-32   [[8, 256, 4, 18]]                 [8, 256, 4, 18]                     1,024     
 BasicBlock-13   [[8, 128, 7, 18]]                 [8, 256, 4, 18]                       0       
   Conv2D-35     [[8, 256, 4, 18]]                 [8, 256, 4, 18]                    589,824    
BatchNorm2D-35   [[8, 256, 4, 18]]                 [8, 256, 4, 18]                     1,024     
    ReLU-16      [[8, 256, 4, 18]]                 [8, 256, 4, 18]                       0       
   Conv2D-36     [[8, 256, 4, 18]]                 [8, 256, 4, 18]                    589,824    
BatchNorm2D-36   [[8, 256, 4, 18]]                 [8, 256, 4, 18]                     1,024     
 BasicBlock-14   [[8, 256, 4, 18]]                 [8, 256, 4, 18]                       0       
   Conv2D-38     [[8, 256, 4, 18]]                 [8, 512, 2, 18]                   1,179,648   
BatchNorm2D-38   [[8, 512, 2, 18]]                 [8, 512, 2, 18]                     2,048     
    ReLU-17      [[8, 512, 2, 18]]                 [8, 512, 2, 18]                       0       
   Conv2D-39     [[8, 512, 2, 18]]                 [8, 512, 2, 18]                   2,359,296   
BatchNorm2D-39   [[8, 512, 2, 18]]                 [8, 512, 2, 18]                     2,048     
   Conv2D-37     [[8, 256, 4, 18]]                 [8, 512, 2, 18]                    131,072    
BatchNorm2D-37   [[8, 512, 2, 18]]                 [8, 512, 2, 18]                     2,048     
 BasicBlock-15   [[8, 256, 4, 18]]                 [8, 512, 2, 18]                       0       
   Conv2D-40     [[8, 512, 2, 18]]                 [8, 512, 2, 18]                   2,359,296   
BatchNorm2D-40   [[8, 512, 2, 18]]                 [8, 512, 2, 18]                     2,048     
    ReLU-18      [[8, 512, 2, 18]]                 [8, 512, 2, 18]                       0       
   Conv2D-41     [[8, 512, 2, 18]]                 [8, 512, 2, 18]                   2,359,296   
BatchNorm2D-41   [[8, 512, 2, 18]]                 [8, 512, 2, 18]                     2,048     
 BasicBlock-16   [[8, 512, 2, 18]]                 [8, 512, 2, 18]                       0       
   Conv2D-42     [[8, 512, 2, 18]]                 [8, 512, 1, 17]                   1,049,088   
BatchNorm2D-42   [[8, 512, 1, 17]]                 [8, 512, 1, 17]                     2,048     
    LSTM-2         [[17, 8, 512]]    [[17, 8, 512], [[2, 17, 256], [2, 17, 256]]]    1,576,960   
   Linear-2         [[136, 512]]                      [136, 27]                       13,851     
===================================================================================================
Total params: 13,821,787
Trainable params: 13,811,163
Non-trainable params: 10,624
---------------------------------------------------------------------------------------------------
Input size (MB): 0.30
Forward/backward pass size (MB): 89.93
Params size (MB): 52.73
Estimated Total Size (MB): 142.95
---------------------------------------------------------------------------------------------------

[17, 8, 27]


/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/numpy/core/fromnumeric.py:87: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/norm.py:712: UserWarning: When training, we now always track global mean and variance.
  "When training, we now always track global mean and variance."

1.4 Training

Training here uses adversarial training based on the PGD gradient attack.

Training data is preprocessed into grayscale images. For convenience, the adversarial noise is added directly onto the preprocessed grayscale image; each incoming batch first goes through several rounds of attack iterations before being fed to the model as the final training samples.

The noise iteration algorithm: first randomly initialize the noise; add it to the training data and run a forward pass to obtain the sign of the noise gradient; then update the noise by a fixed step and clip it. Repeat for several rounds.

The noise-iteration logic is:

  1. Forward pass
  2. Backpropagate the gradient
  3. Update the noise

Code walkthrough:

    # Pseudocode of the update method
    def attack_ctc(self, model, input_batch, clip_base, labels, label_lengths):
        """
        Gradient attack driven by CTC loss
        :param model: the model under attack
        :param input_batch: batch of model inputs
        :param clip_base: base data used for clipping
        :param labels: labels, needed by ctc_loss
        :param label_lengths: label lengths, needed by ctc_loss
        :return: attack noise
        """
        # loss function
        loss_func = paddle.nn.functional.ctc_loss
        # initialize the noise perturbation
        self.delta = self.init_noise(clip_base)
        batch_size = input_batch.shape[0]
        # iterate the noise
        for _ in range(self.iter_num):
            # forward pass (trans_func is assumed to shape the noise to match the grayscale input)
            delta_gray = trans_func(self.delta)
            outputs = model(input_batch + delta_gray)
            # loss computation
            input_length = outputs.shape[0]
            input_lengths = paddle.full([batch_size], input_length, dtype='int64')
            loss = loss_func(outputs, labels, input_lengths, label_lengths)
            # backpropagate
            loss.backward(retain_graph=False)
            # fixed-size noise update (sign of the gradient times the step size)
            delta_new = self.delta + self.delta.grad.sign() * self.eps_iter
            # bound the noise: 1. stay within the configured noise budget
            delta_new = paddle.clip(delta_new, -self.eps, self.eps)
            self.delta.clear_grad()
            # print(loss)  # the loss grows as the attack iterates
            # bound the noise: 2. the noisy CAPTCHA must stay within the valid pixel range
            delta_new = paddle.clip(clip_base + delta_new, -1.0, 1.0) - clip_base
            # computation happens after backward, so block gradient flow
            delta_new.stop_gradient = True
            # write the new noise value back
            paddle.assign(delta_new, self.delta)
        return self.delta

More iterations take more time, and a larger noise budget makes the task harder. Here the gradient attack runs for 5 iterations, each noise update step is 30/255, and the maximum perturbation is limited to 30/255.

The update steps during training are:

  1. Iterate the noise
  2. Add the noise and train normally
    ...
    for batch_id, batch_data in enumerate(train_loader):
        # load a batch
        img_data, label_data, label_lens = batch_data

        # generate the adversarial noise
        delta = pgd.attack_ctc(model, img_data, img_data, label_data, label_lens)

        # add the noise and train as usual
        predict = model(img_data + delta)
        ...

A backup of my trained model weights is provided at: copy/last.pdparams

%cd
!python3 train.py
/home/aistudio
W0413 11:20:55.578990  1169 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W0413 11:20:55.582762  1169 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
Epoch 0: LinearWarmup set learning rate to 0.0002.
2023-04-13 11:20:57 || Epoch 0 start:
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/norm.py:712: UserWarning: When training, we now always track global mean and variance.
  "When training, we now always track global mean and variance."
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:277: UserWarning: The dtype of left and right variables are not the same, left dtype is paddle.float32, but right dtype is paddle.int64, the right dtype will convert to paddle.float32
  .format(lhs_dtype, rhs_dtype, lhs_dtype))
epoch:0, batch_id:0, loss:7.3665,             acc:0.0000 Tp/Tn_1/Tn_2: 0/127/1
epoch:0, batch_id:50, loss:3.5335,             acc:0.0000 Tp/Tn_1/Tn_2: 0/6521/7
epoch:0, batch_id:100, loss:3.5321,             acc:0.0000 Tp/Tn_1/Tn_2: 0/12921/7
epoch:0, batch_id:150, loss:3.5383,             acc:0.0000 Tp/Tn_1/Tn_2: 0/19321/7
epoch:0, batch_id:200, loss:3.4638,             acc:0.0000 Tp/Tn_1/Tn_2: 0/25721/7
Eval of epoch 0 => acc:0.0000, loss:3.2312
Epoch 1: LinearWarmup set learning rate to 0.00036.
2023-04-13 11:23:50 || Epoch 1 start:
epoch:1, batch_id:0, loss:3.4310,             acc:0.0000 Tp/Tn_1/Tn_2: 0/128/0
epoch:1, batch_id:50, loss:1.9896,             acc:0.0011 Tp/Tn_1/Tn_2: 7/5809/712
epoch:1, batch_id:100, loss:1.1958,             acc:0.0207 Tp/Tn_1/Tn_2: 268/8476/4184
epoch:1, batch_id:150, loss:0.9234,             acc:0.0651 Tp/Tn_1/Tn_2: 1258/10193/7877
epoch:1, batch_id:200, loss:0.8228,             acc:0.1103 Tp/Tn_1/Tn_2: 2837/11494/11397
Eval of epoch 1 => acc:0.9273, loss:0.1065
Saved best model of epoch1, acc 0.9273, save path "runs"
Epoch 2: LinearWarmup set learning rate to 0.0005200000000000001.
2023-04-13 11:26:55 || Epoch 2 start:
epoch:2, batch_id:0, loss:0.7033,             acc:0.3281 Tp/Tn_1/Tn_2: 42/25/61
epoch:2, batch_id:50, loss:0.5577,             acc:0.3396 Tp/Tn_1/Tn_2: 2217/1149/3162
epoch:2, batch_id:100, loss:0.5571,             acc:0.3612 Tp/Tn_1/Tn_2: 4669/2152/6107
epoch:2, batch_id:150, loss:0.6713,             acc:0.3850 Tp/Tn_1/Tn_2: 7441/3015/8872
epoch:2, batch_id:200, loss:0.4419,             acc:0.4071 Tp/Tn_1/Tn_2: 10473/3810/11445
Eval of epoch 2 => acc:0.9586, loss:0.0667
Saved best model of epoch2, acc 0.9586, save path "runs"
Epoch 3: LinearWarmup set learning rate to 0.00068.
2023-04-13 11:29:52 || Epoch 3 start:
epoch:3, batch_id:0, loss:0.6534,             acc:0.4844 Tp/Tn_1/Tn_2: 62/16/50
epoch:3, batch_id:50, loss:0.5135,             acc:0.5026 Tp/Tn_1/Tn_2: 3281/681/2566
epoch:3, batch_id:100, loss:0.5163,             acc:0.5111 Tp/Tn_1/Tn_2: 6608/1327/4993
epoch:3, batch_id:150, loss:0.3255,             acc:0.5249 Tp/Tn_1/Tn_2: 10146/1933/7249
epoch:3, batch_id:200, loss:0.4662,             acc:0.5372 Tp/Tn_1/Tn_2: 13821/2526/9381
Eval of epoch 3 => acc:0.9683, loss:0.0633
Saved best model of epoch3, acc 0.9683, save path "runs"
Epoch 4: LinearWarmup set learning rate to 0.00084.
2023-04-13 11:32:47 || Epoch 4 start:
epoch:4, batch_id:0, loss:0.4096,             acc:0.6094 Tp/Tn_1/Tn_2: 78/6/44
epoch:4, batch_id:50, loss:0.4133,             acc:0.5933 Tp/Tn_1/Tn_2: 3873/536/2119
epoch:4, batch_id:100, loss:0.4424,             acc:0.5999 Tp/Tn_1/Tn_2: 7755/1044/4129
epoch:4, batch_id:150, loss:0.3470,             acc:0.6113 Tp/Tn_1/Tn_2: 11815/1533/5980
epoch:4, batch_id:200, loss:0.4199,             acc:0.6163 Tp/Tn_1/Tn_2: 15855/1976/7897
Eval of epoch 4 => acc:0.9743, loss:0.0419
Saved best model of epoch4, acc 0.9743, save path "runs"
Epoch 5: LinearWarmup set learning rate to 0.001.
2023-04-13 11:35:46 || Epoch 5 start:
epoch:5, batch_id:0, loss:0.2778,             acc:0.6484 Tp/Tn_1/Tn_2: 83/7/38
epoch:5, batch_id:50, loss:0.3065,             acc:0.6595 Tp/Tn_1/Tn_2: 4305/373/1850
epoch:5, batch_id:100, loss:0.3511,             acc:0.6612 Tp/Tn_1/Tn_2: 8548/764/3616
epoch:5, batch_id:150, loss:0.3471,             acc:0.6633 Tp/Tn_1/Tn_2: 12820/1197/5311
epoch:5, batch_id:200, loss:0.3950,             acc:0.6646 Tp/Tn_1/Tn_2: 17099/1671/6958
Eval of epoch 5 => acc:0.9796, loss:0.0282
Saved best model of epoch5, acc 0.9796, save path "runs"
Epoch 6: LinearWarmup set learning rate to 0.0009993370449424153.
2023-04-13 11:38:39 || Epoch 6 start:
epoch:6, batch_id:0, loss:0.2566,             acc:0.7031 Tp/Tn_1/Tn_2: 90/6/32
epoch:6, batch_id:50, loss:0.3140,             acc:0.7088 Tp/Tn_1/Tn_2: 4627/344/1557
epoch:6, batch_id:100, loss:0.1539,             acc:0.7153 Tp/Tn_1/Tn_2: 9248/667/3013
epoch:6, batch_id:150, loss:0.2506,             acc:0.7180 Tp/Tn_1/Tn_2: 13878/964/4486
epoch:6, batch_id:200, loss:0.4347,             acc:0.7184 Tp/Tn_1/Tn_2: 18482/1284/5962
Eval of epoch 6 => acc:0.9800, loss:0.0299
Saved best model of epoch6, acc 0.9800, save path "runs"
Epoch 7: LinearWarmup set learning rate to 0.0009973499378072945.
2023-04-13 11:42:07 || Epoch 7 start:
epoch:7, batch_id:0, loss:0.3357,             acc:0.7500 Tp/Tn_1/Tn_2: 96/10/22
epoch:7, batch_id:50, loss:0.2417,             acc:0.7417 Tp/Tn_1/Tn_2: 4842/249/1437
epoch:7, batch_id:100, loss:0.2500,             acc:0.7375 Tp/Tn_1/Tn_2: 9535/519/2874
epoch:7, batch_id:150, loss:0.2991,             acc:0.7365 Tp/Tn_1/Tn_2: 14236/767/4325
epoch:7, batch_id:200, loss:0.2911,             acc:0.7422 Tp/Tn_1/Tn_2: 19095/1034/5599
Eval of epoch 7 => acc:0.9847, loss:0.0283
Saved best model of epoch7, acc 0.9847, save path "runs"
Epoch 8: LinearWarmup set learning rate to 0.0009940439480455386.
2023-04-13 11:44:56 || Epoch 8 start:
epoch:8, batch_id:0, loss:0.1723,             acc:0.7891 Tp/Tn_1/Tn_2: 101/3/24
epoch:8, batch_id:50, loss:0.2620,             acc:0.7638 Tp/Tn_1/Tn_2: 4986/239/1303
epoch:8, batch_id:100, loss:0.2110,             acc:0.7710 Tp/Tn_1/Tn_2: 9968/469/2491
epoch:8, batch_id:150, loss:0.1694,             acc:0.7706 Tp/Tn_1/Tn_2: 14895/664/3769
epoch:8, batch_id:200, loss:0.2717,             acc:0.7697 Tp/Tn_1/Tn_2: 19804/896/5028
Eval of epoch 8 => acc:0.9837, loss:0.0266
Epoch 9: LinearWarmup set learning rate to 0.000989427842547679.
2023-04-13 11:47:49 || Epoch 9 start:
epoch:9, batch_id:0, loss:0.2811,             acc:0.7891 Tp/Tn_1/Tn_2: 101/6/21
epoch:9, batch_id:50, loss:0.1166,             acc:0.7921 Tp/Tn_1/Tn_2: 5171/212/1145
epoch:9, batch_id:100, loss:0.1899,             acc:0.7956 Tp/Tn_1/Tn_2: 10286/415/2227
epoch:9, batch_id:150, loss:0.1540,             acc:0.7916 Tp/Tn_1/Tn_2: 15301/649/3378
epoch:9, batch_id:200, loss:0.1625,             acc:0.7900 Tp/Tn_1/Tn_2: 20326/843/4559
Eval of epoch 9 => acc:0.9823, loss:0.0223
Epoch 10: LinearWarmup set learning rate to 0.0009835138623956602.
2023-04-13 11:50:35 || Epoch 10 start:
epoch:10, batch_id:0, loss:0.1810,             acc:0.8359 Tp/Tn_1/Tn_2: 107/6/15
epoch:10, batch_id:50, loss:0.1534,             acc:0.8065 Tp/Tn_1/Tn_2: 5265/183/1080
epoch:10, batch_id:100, loss:0.2162,             acc:0.7982 Tp/Tn_1/Tn_2: 10319/371/2238
epoch:10, batch_id:150, loss:0.1808,             acc:0.7934 Tp/Tn_1/Tn_2: 15335/555/3438
epoch:10, batch_id:200, loss:0.2063,             acc:0.7958 Tp/Tn_1/Tn_2: 20475/744/4509
Eval of epoch 10 => acc:0.9847, loss:0.0256
Epoch 11: LinearWarmup set learning rate to 0.0009763176904016913.
2023-04-13 11:53:35 || Epoch 11 start:
epoch:11, batch_id:0, loss:0.1569,             acc:0.8125 Tp/Tn_1/Tn_2: 104/3/21
epoch:11, batch_id:50, loss:0.2098,             acc:0.8088 Tp/Tn_1/Tn_2: 5280/174/1074
epoch:11, batch_id:100, loss:0.1478,             acc:0.8045 Tp/Tn_1/Tn_2: 10401/404/2123
epoch:11, batch_id:150, loss:0.1815,             acc:0.8012 Tp/Tn_1/Tn_2: 15485/604/3239
epoch:11, batch_id:200, loss:0.2305,             acc:0.7997 Tp/Tn_1/Tn_2: 20574/805/4349
Eval of epoch 11 => acc:0.9860, loss:0.0233
Saved best model of epoch11, acc 0.9860, save path "runs"
Epoch 12: LinearWarmup set learning rate to 0.0009678584095202469.
2023-04-13 11:56:35 || Epoch 12 start:
epoch:12, batch_id:0, loss:0.1031,             acc:0.8516 Tp/Tn_1/Tn_2: 109/3/16
epoch:12, batch_id:50, loss:0.1208,             acc:0.8100 Tp/Tn_1/Tn_2: 5288/158/1082
epoch:12, batch_id:100, loss:0.1326,             acc:0.8126 Tp/Tn_1/Tn_2: 10505/311/2112
epoch:12, batch_id:150, loss:0.1800,             acc:0.8148 Tp/Tn_1/Tn_2: 15748/467/3113
epoch:12, batch_id:200, loss:0.1838,             acc:0.8130 Tp/Tn_1/Tn_2: 20917/635/4176
Eval of epoch 12 => acc:0.9867, loss:0.0187
Saved best model of epoch12, acc 0.9867, save path "runs"
Epoch 13: LinearWarmup set learning rate to 0.0009581584522435024.
2023-04-13 11:59:25 || Epoch 13 start:
epoch:13, batch_id:0, loss:0.1977,             acc:0.8516 Tp/Tn_1/Tn_2: 109/3/16
epoch:13, batch_id:50, loss:0.0991,             acc:0.8333 Tp/Tn_1/Tn_2: 5440/142/946
epoch:13, batch_id:100, loss:0.2358,             acc:0.8334 Tp/Tn_1/Tn_2: 10774/303/1851
epoch:13, batch_id:150, loss:0.1643,             acc:0.8284 Tp/Tn_1/Tn_2: 16011/456/2861
epoch:13, batch_id:200, loss:0.1971,             acc:0.8267 Tp/Tn_1/Tn_2: 21269/628/3831
Eval of epoch 13 => acc:0.9857, loss:0.0234
Epoch 14: LinearWarmup set learning rate to 0.0009472435411143978.
2023-04-13 12:02:12 || Epoch 14 start:
epoch:14, batch_id:0, loss:0.1824,             acc:0.7812 Tp/Tn_1/Tn_2: 100/2/26
epoch:14, batch_id:50, loss:0.1750,             acc:0.8263 Tp/Tn_1/Tn_2: 5394/154/980
epoch:14, batch_id:100, loss:0.2069,             acc:0.8273 Tp/Tn_1/Tn_2: 10695/311/1922
epoch:14, batch_id:150, loss:0.1700,             acc:0.8235 Tp/Tn_1/Tn_2: 15917/476/2935
epoch:14, batch_id:200, loss:0.1804,             acc:0.8239 Tp/Tn_1/Tn_2: 21197/640/3891
Eval of epoch 14 => acc:0.9810, loss:0.0244
Epoch 15: LinearWarmup set learning rate to 0.0009351426205150777.
2023-04-13 12:04:58 || Epoch 15 start:
epoch:15, batch_id:0, loss:0.1283,             acc:0.8828 Tp/Tn_1/Tn_2: 113/2/13
epoch:15, batch_id:50, loss:0.1279,             acc:0.8428 Tp/Tn_1/Tn_2: 5502/133/893
epoch:15, batch_id:100, loss:0.1607,             acc:0.8423 Tp/Tn_1/Tn_2: 10889/286/1753
epoch:15, batch_id:150, loss:0.1362,             acc:0.8399 Tp/Tn_1/Tn_2: 16234/426/2668
epoch:15, batch_id:200, loss:0.0839,             acc:0.8396 Tp/Tn_1/Tn_2: 21601/548/3579
Eval of epoch 15 => acc:0.9873, loss:0.0186
Saved best model of epoch15, acc 0.9873, save path "runs"
Epoch 16: LinearWarmup set learning rate to 0.0009218877799115928.
2023-04-13 12:07:47 || Epoch 16 start:
epoch:16, batch_id:0, loss:0.1587,             acc:0.8594 Tp/Tn_1/Tn_2: 110/4/14
epoch:16, batch_id:50, loss:0.1172,             acc:0.8508 Tp/Tn_1/Tn_2: 5554/158/816
epoch:16, batch_id:100, loss:0.1998,             acc:0.8463 Tp/Tn_1/Tn_2: 10941/279/1708
epoch:16, batch_id:150, loss:0.1428,             acc:0.8454 Tp/Tn_1/Tn_2: 16340/423/2565
epoch:16, batch_id:200, loss:0.2176,             acc:0.8412 Tp/Tn_1/Tn_2: 21643/557/3528
Eval of epoch 16 => acc:0.9857, loss:0.0250
Epoch 17: LinearWarmup set learning rate to 0.0009075141687584057.
2023-04-13 12:10:34 || Epoch 17 start:
epoch:17, batch_id:0, loss:0.1990,             acc:0.8281 Tp/Tn_1/Tn_2: 106/4/18
epoch:17, batch_id:50, loss:0.1452,             acc:0.8467 Tp/Tn_1/Tn_2: 5527/145/856
epoch:17, batch_id:100, loss:0.1586,             acc:0.8495 Tp/Tn_1/Tn_2: 10982/270/1676
epoch:17, batch_id:150, loss:0.1126,             acc:0.8488 Tp/Tn_1/Tn_2: 16406/417/2505
epoch:17, batch_id:200, loss:0.1238,             acc:0.8478 Tp/Tn_1/Tn_2: 21812/559/3357
Eval of epoch 17 => acc:0.9847, loss:0.0191
Epoch 18: LinearWarmup set learning rate to 0.0008920599032883552.
2023-04-13 12:13:28 || Epoch 18 start:
epoch:18, batch_id:0, loss:0.1730,             acc:0.8438 Tp/Tn_1/Tn_2: 108/2/18
epoch:18, batch_id:50, loss:0.1482,             acc:0.8574 Tp/Tn_1/Tn_2: 5597/133/798
epoch:18, batch_id:100, loss:0.1541,             acc:0.8615 Tp/Tn_1/Tn_2: 11138/237/1553
epoch:18, batch_id:150, loss:0.1317,             acc:0.8619 Tp/Tn_1/Tn_2: 16658/355/2315
epoch:18, batch_id:200, loss:0.1744,             acc:0.8582 Tp/Tn_1/Tn_2: 22079/492/3157
Eval of epoch 18 => acc:0.9810, loss:0.0253
Epoch 19: LinearWarmup set learning rate to 0.0008755659654352599.
2023-04-13 12:16:18 || Epoch 19 start:
epoch:19, batch_id:0, loss:0.1023,             acc:0.8516 Tp/Tn_1/Tn_2: 109/1/18
epoch:19, batch_id:50, loss:0.1296,             acc:0.8490 Tp/Tn_1/Tn_2: 5542/131/855
epoch:19, batch_id:100, loss:0.3056,             acc:0.8465 Tp/Tn_1/Tn_2: 10943/267/1718
epoch:19, batch_id:150, loss:0.1251,             acc:0.8491 Tp/Tn_1/Tn_2: 16412/383/2533
epoch:19, batch_id:200, loss:0.1424,             acc:0.8483 Tp/Tn_1/Tn_2: 21826/479/3423
Eval of epoch 19 => acc:0.9847, loss:0.0194
Epoch 20: LinearWarmup set learning rate to 0.0008580760941571966.
2023-04-13 12:19:08 || Epoch 20 start:
epoch:20, batch_id:0, loss:0.1063,             acc:0.8750 Tp/Tn_1/Tn_2: 112/1/15
epoch:20, batch_id:50, loss:0.0935,             acc:0.8689 Tp/Tn_1/Tn_2: 5672/115/741
epoch:20, batch_id:100, loss:0.2303,             acc:0.8704 Tp/Tn_1/Tn_2: 11252/225/1451
epoch:20, batch_id:150, loss:0.1469,             acc:0.8656 Tp/Tn_1/Tn_2: 16730/385/2213
epoch:20, batch_id:200, loss:0.1468,             acc:0.8607 Tp/Tn_1/Tn_2: 22144/514/3070
Eval of epoch 20 => acc:0.9830, loss:0.0217
Epoch 21: LinearWarmup set learning rate to 0.0008396366694486466.
2023-04-13 12:21:54 || Epoch 21 start:
epoch:21, batch_id:0, loss:0.0807,             acc:0.8594 Tp/Tn_1/Tn_2: 110/1/17
epoch:21, batch_id:50, loss:0.0879,             acc:0.8548 Tp/Tn_1/Tn_2: 5580/107/841
epoch:21, batch_id:100, loss:0.1644,             acc:0.8621 Tp/Tn_1/Tn_2: 11145/210/1573
epoch:21, batch_id:150, loss:0.1465,             acc:0.8661 Tp/Tn_1/Tn_2: 16740/321/2267
epoch:21, batch_id:200, loss:0.1764,             acc:0.8643 Tp/Tn_1/Tn_2: 22237/437/3054
Eval of epoch 21 => acc:0.9837, loss:0.0215
Epoch 22: LinearWarmup set learning rate to 0.0008202965893490875.
2023-04-13 12:24:39 || Epoch 22 start:
epoch:22, batch_id:0, loss:0.1479,             acc:0.8516 Tp/Tn_1/Tn_2: 109/2/17
epoch:22, batch_id:50, loss:0.0924,             acc:0.8730 Tp/Tn_1/Tn_2: 5699/124/705
epoch:22, batch_id:100, loss:0.1454,             acc:0.8777 Tp/Tn_1/Tn_2: 11347/223/1358
epoch:22, batch_id:150, loss:0.2939,             acc:0.8775 Tp/Tn_1/Tn_2: 16960/336/2032
epoch:22, batch_id:200, loss:0.1497,             acc:0.8772 Tp/Tn_1/Tn_2: 22569/459/2700
Eval of epoch 22 => acc:0.9853, loss:0.0215
Epoch 23: LinearWarmup set learning rate to 0.0008001071402741842.
2023-04-13 12:27:25 || Epoch 23 start:
epoch:23, batch_id:0, loss:0.1337,             acc:0.8438 Tp/Tn_1/Tn_2: 108/2/18
epoch:23, batch_id:50, loss:0.1799,             acc:0.8744 Tp/Tn_1/Tn_2: 5708/110/710
epoch:23, batch_id:100, loss:0.1647,             acc:0.8784 Tp/Tn_1/Tn_2: 11356/221/1351
epoch:23, batch_id:150, loss:0.1682,             acc:0.8749 Tp/Tn_1/Tn_2: 16911/343/2074
epoch:23, batch_id:200, loss:0.1850,             acc:0.8712 Tp/Tn_1/Tn_2: 22413/486/2829
Eval of epoch 23 => acc:0.9880, loss:0.0150
Saved best model of epoch23, acc 0.9880, save path "runs"
Epoch 24: LinearWarmup set learning rate to 0.0007791218610134325.
2023-04-13 12:30:11 || Epoch 24 start:
epoch:24, batch_id:0, loss:0.2006,             acc:0.8125 Tp/Tn_1/Tn_2: 104/2/22
epoch:24, batch_id:50, loss:0.0586,             acc:0.8741 Tp/Tn_1/Tn_2: 5706/116/706
epoch:24, batch_id:100, loss:0.0807,             acc:0.8791 Tp/Tn_1/Tn_2: 11365/218/1345
epoch:24, batch_id:150, loss:0.1632,             acc:0.8788 Tp/Tn_1/Tn_2: 16986/326/2016
epoch:24, batch_id:200, loss:0.1321,             acc:0.8788 Tp/Tn_1/Tn_2: 22610/427/2691
Eval of epoch 24 => acc:0.9833, loss:0.0254
Epoch 25: LinearWarmup set learning rate to 0.0007573964007549155.
2023-04-13 12:33:01 || Epoch 25 start:
epoch:25, batch_id:0, loss:0.2095,             acc:0.8359 Tp/Tn_1/Tn_2: 107/5/16
epoch:25, batch_id:50, loss:0.1130,             acc:0.8842 Tp/Tn_1/Tn_2: 5772/104/652
epoch:25, batch_id:100, loss:0.0905,             acc:0.8853 Tp/Tn_1/Tn_2: 11445/194/1289
epoch:25, batch_id:150, loss:0.1382,             acc:0.8852 Tp/Tn_1/Tn_2: 17109/272/1947
epoch:25, batch_id:200, loss:0.0755,             acc:0.8854 Tp/Tn_1/Tn_2: 22779/352/2597
Eval of epoch 25 => acc:0.9823, loss:0.0187
Epoch 26: LinearWarmup set learning rate to 0.00073498837151366.
2023-04-13 12:35:47 || Epoch 26 start:
epoch:26, batch_id:0, loss:0.1842,             acc:0.8828 Tp/Tn_1/Tn_2: 113/2/13
epoch:26, batch_id:50, loss:0.1109,             acc:0.8934 Tp/Tn_1/Tn_2: 5832/99/597
epoch:26, batch_id:100, loss:0.0985,             acc:0.8876 Tp/Tn_1/Tn_2: 11475/210/1243
epoch:26, batch_id:150, loss:0.0547,             acc:0.8852 Tp/Tn_1/Tn_2: 17109/299/1920
epoch:26, batch_id:200, loss:0.0938,             acc:0.8850 Tp/Tn_1/Tn_2: 22770/409/2549
Eval of epoch 26 => acc:0.9847, loss:0.0171
Epoch 27: LinearWarmup set learning rate to 0.0007119571953549304.
2023-04-13 12:38:32 || Epoch 27 start:
epoch:27, batch_id:0, loss:0.1279,             acc:0.8750 Tp/Tn_1/Tn_2: 112/1/15
epoch:27, batch_id:50, loss:0.1665,             acc:0.9044 Tp/Tn_1/Tn_2: 5904/100/524
epoch:27, batch_id:100, loss:0.0896,             acc:0.9011 Tp/Tn_1/Tn_2: 11649/183/1096
epoch:27, batch_id:150, loss:0.1479,             acc:0.8999 Tp/Tn_1/Tn_2: 17394/285/1649
epoch:27, batch_id:200, loss:0.1380,             acc:0.8975 Tp/Tn_1/Tn_2: 23092/368/2268
Eval of epoch 27 => acc:0.9890, loss:0.0158
Saved best model of epoch27, acc 0.9890, save path "runs"
Epoch 28: LinearWarmup set learning rate to 0.0006883639468175926.
2023-04-13 12:41:18 || Epoch 28 start:
epoch:28, batch_id:0, loss:0.1200,             acc:0.9297 Tp/Tn_1/Tn_2: 119/2/7
epoch:28, batch_id:50, loss:0.0998,             acc:0.8994 Tp/Tn_1/Tn_2: 5871/79/578
epoch:28, batch_id:100, loss:0.1259,             acc:0.8970 Tp/Tn_1/Tn_2: 11596/159/1173
epoch:28, batch_id:150, loss:0.1907,             acc:0.8959 Tp/Tn_1/Tn_2: 17316/256/1756
epoch:28, batch_id:200, loss:0.1449,             acc:0.8968 Tp/Tn_1/Tn_2: 23073/353/2302
Eval of epoch 28 => acc:0.9903, loss:0.0129
Saved best model of epoch28, acc 0.9903, save path "runs"
Epoch 29: LinearWarmup set learning rate to 0.0006642711909554174.
2023-04-13 12:44:05 || Epoch 29 start:
epoch:29, batch_id:0, loss:0.0927,             acc:0.8906 Tp/Tn_1/Tn_2: 114/1/13
epoch:29, batch_id:50, loss:0.1169,             acc:0.9070 Tp/Tn_1/Tn_2: 5921/77/530
epoch:29, batch_id:100, loss:0.1664,             acc:0.9050 Tp/Tn_1/Tn_2: 11700/157/1071
epoch:29, batch_id:150, loss:0.0842,             acc:0.9010 Tp/Tn_1/Tn_2: 17414/248/1666
epoch:29, batch_id:200, loss:0.1445,             acc:0.9010 Tp/Tn_1/Tn_2: 23180/328/2220
Eval of epoch 29 => acc:0.9850, loss:0.0204
Epoch 30: LinearWarmup set learning rate to 0.0006397428174258048.
2023-04-13 12:46:48 || Epoch 30 start:
epoch:30, batch_id:0, loss:0.1201,             acc:0.8750 Tp/Tn_1/Tn_2: 112/2/14
epoch:30, batch_id:50, loss:0.0818,             acc:0.9079 Tp/Tn_1/Tn_2: 5927/86/515
epoch:30, batch_id:100, loss:0.1017,             acc:0.9063 Tp/Tn_1/Tn_2: 11717/163/1048
epoch:30, batch_id:150, loss:0.1018,             acc:0.9066 Tp/Tn_1/Tn_2: 17523/248/1557
epoch:30, batch_id:200, loss:0.1117,             acc:0.9061 Tp/Tn_1/Tn_2: 23311/336/2081
Eval of epoch 30 => acc:0.9863, loss:0.0173
Epoch 31: LinearWarmup set learning rate to 0.0006148438710658978.
2023-04-13 12:49:31 || Epoch 31 start:
epoch:31, batch_id:0, loss:0.1805,             acc:0.8984 Tp/Tn_1/Tn_2: 115/4/9
epoch:31, batch_id:50, loss:0.2042,             acc:0.9046 Tp/Tn_1/Tn_2: 5905/94/529
epoch:31, batch_id:100, loss:0.3057,             acc:0.9028 Tp/Tn_1/Tn_2: 11671/177/1080
epoch:31, batch_id:150, loss:0.0962,             acc:0.9040 Tp/Tn_1/Tn_2: 17472/259/1597
epoch:31, batch_id:200, loss:0.0891,             acc:0.9047 Tp/Tn_1/Tn_2: 23277/332/2119
Eval of epoch 31 => acc:0.9880, loss:0.0143
Epoch 32: LinearWarmup set learning rate to 0.0005896403794053679.
2023-04-13 12:52:16 || Epoch 32 start:
epoch:32, batch_id:0, loss:0.1007,             acc:0.9375 Tp/Tn_1/Tn_2: 120/3/5
epoch:32, batch_id:50, loss:0.1161,             acc:0.9127 Tp/Tn_1/Tn_2: 5958/70/500
epoch:32, batch_id:100, loss:0.0395,             acc:0.9107 Tp/Tn_1/Tn_2: 11774/136/1018
epoch:32, batch_id:150, loss:0.2024,             acc:0.9071 Tp/Tn_1/Tn_2: 17532/207/1589
epoch:32, batch_id:200, loss:0.0789,             acc:0.9087 Tp/Tn_1/Tn_2: 23380/295/2053
Eval of epoch 32 => acc:0.9887, loss:0.0125
Epoch 33: LinearWarmup set learning rate to 0.0005641991775732756.
2023-04-13 12:55:04 || Epoch 33 start:
epoch:33, batch_id:0, loss:0.1997,             acc:0.8828 Tp/Tn_1/Tn_2: 113/6/9
epoch:33, batch_id:50, loss:0.0865,             acc:0.9147 Tp/Tn_1/Tn_2: 5971/80/477
epoch:33, batch_id:100, loss:0.1476,             acc:0.9130 Tp/Tn_1/Tn_2: 11803/148/977
epoch:33, batch_id:150, loss:0.1031,             acc:0.9130 Tp/Tn_1/Tn_2: 17647/236/1445
epoch:33, batch_id:200, loss:0.1405,             acc:0.9114 Tp/Tn_1/Tn_2: 23449/322/1957
Eval of epoch 33 => acc:0.9830, loss:0.0188
Epoch 34: LinearWarmup set learning rate to 0.0005385877310633231.
2023-04-13 12:57:58 || Epoch 34 start:
epoch:34, batch_id:0, loss:0.0955,             acc:0.9219 Tp/Tn_1/Tn_2: 118/0/10
epoch:34, batch_id:50, loss:0.1789,             acc:0.9197 Tp/Tn_1/Tn_2: 6004/81/443
epoch:34, batch_id:100, loss:0.1056,             acc:0.9144 Tp/Tn_1/Tn_2: 11821/167/940
epoch:34, batch_id:150, loss:0.1184,             acc:0.9134 Tp/Tn_1/Tn_2: 17655/246/1427
epoch:34, batch_id:200, loss:0.1063,             acc:0.9129 Tp/Tn_1/Tn_2: 23487/320/1921
Eval of epoch 34 => acc:0.9843, loss:0.0134
Epoch 35: LinearWarmup set learning rate to 0.0005128739568274944.
2023-04-13 13:01:44 || Epoch 35 start:
epoch:35, batch_id:0, loss:0.1681,             acc:0.9297 Tp/Tn_1/Tn_2: 119/2/7
epoch:35, batch_id:50, loss:0.1110,             acc:0.9211 Tp/Tn_1/Tn_2: 6013/80/435
epoch:35, batch_id:100, loss:0.0480,             acc:0.9211 Tp/Tn_1/Tn_2: 11908/138/882
epoch:35, batch_id:150, loss:0.1006,             acc:0.9208 Tp/Tn_1/Tn_2: 17797/200/1331
epoch:35, batch_id:200, loss:0.0904,             acc:0.9211 Tp/Tn_1/Tn_2: 23699/264/1765
Eval of epoch 35 => acc:0.9853, loss:0.0159
Epoch 36: LinearWarmup set learning rate to 0.00048712604317250577.
2023-04-13 13:05:29 || Epoch 36 start:
epoch:36, batch_id:0, loss:0.0383,             acc:0.9219 Tp/Tn_1/Tn_2: 118/1/9
epoch:36, batch_id:50, loss:0.1093,             acc:0.9266 Tp/Tn_1/Tn_2: 6049/51/428
epoch:36, batch_id:100, loss:0.0765,             acc:0.9233 Tp/Tn_1/Tn_2: 11936/111/881
epoch:36, batch_id:150, loss:0.1108,             acc:0.9249 Tp/Tn_1/Tn_2: 17877/183/1268
epoch:36, batch_id:200, loss:0.0769,             acc:0.9229 Tp/Tn_1/Tn_2: 23745/258/1725
Eval of epoch 36 => acc:0.9870, loss:0.0129
Epoch 37: LinearWarmup set learning rate to 0.00046141226893667693.
2023-04-13 13:08:14 || Epoch 37 start:
epoch:37, batch_id:0, loss:0.1236,             acc:0.9062 Tp/Tn_1/Tn_2: 116/2/10
epoch:37, batch_id:50, loss:0.1446,             acc:0.9344 Tp/Tn_1/Tn_2: 6100/61/367
epoch:37, batch_id:100, loss:0.1227,             acc:0.9293 Tp/Tn_1/Tn_2: 12014/120/794
epoch:37, batch_id:150, loss:0.0806,             acc:0.9281 Tp/Tn_1/Tn_2: 17938/174/1216
epoch:37, batch_id:200, loss:0.0828,             acc:0.9265 Tp/Tn_1/Tn_2: 23838/242/1648
Eval of epoch 37 => acc:0.9847, loss:0.0183
Epoch 38: LinearWarmup set learning rate to 0.00043580082242672456.
2023-04-13 13:10:59 || Epoch 38 start:
epoch:38, batch_id:0, loss:0.1299,             acc:0.9219 Tp/Tn_1/Tn_2: 118/0/10
epoch:38, batch_id:50, loss:0.2125,             acc:0.9306 Tp/Tn_1/Tn_2: 6075/50/403
epoch:38, batch_id:100, loss:0.1448,             acc:0.9333 Tp/Tn_1/Tn_2: 12066/115/747
epoch:38, batch_id:150, loss:0.0961,             acc:0.9312 Tp/Tn_1/Tn_2: 17999/181/1148
epoch:38, batch_id:200, loss:0.0754,             acc:0.9299 Tp/Tn_1/Tn_2: 23924/247/1557
Eval of epoch 38 => acc:0.9810, loss:0.0210
Epoch 39: LinearWarmup set learning rate to 0.00041035962059463217.
2023-04-13 13:13:43 || Epoch 39 start:
epoch:39, batch_id:0, loss:0.0451,             acc:0.9531 Tp/Tn_1/Tn_2: 122/1/5
epoch:39, batch_id:50, loss:0.1071,             acc:0.9187 Tp/Tn_1/Tn_2: 5997/65/466
epoch:39, batch_id:100, loss:0.1282,             acc:0.9230 Tp/Tn_1/Tn_2: 11933/119/876
epoch:39, batch_id:150, loss:0.0518,             acc:0.9281 Tp/Tn_1/Tn_2: 17939/177/1212
epoch:39, batch_id:200, loss:0.0690,             acc:0.9289 Tp/Tn_1/Tn_2: 23900/230/1598
Eval of epoch 39 => acc:0.9837, loss:0.0169
Epoch 40: LinearWarmup set learning rate to 0.00038515612893410227.
2023-04-13 13:16:28 || Epoch 40 start:
epoch:40, batch_id:0, loss:0.0560,             acc:0.9219 Tp/Tn_1/Tn_2: 118/1/9
epoch:40, batch_id:50, loss:0.0604,             acc:0.9312 Tp/Tn_1/Tn_2: 6079/68/381
epoch:40, batch_id:100, loss:0.0696,             acc:0.9305 Tp/Tn_1/Tn_2: 12029/123/776
epoch:40, batch_id:150, loss:0.1847,             acc:0.9292 Tp/Tn_1/Tn_2: 17959/198/1171
epoch:40, batch_id:200, loss:0.0239,             acc:0.9311 Tp/Tn_1/Tn_2: 23956/266/1506
Eval of epoch 40 => acc:0.9826, loss:0.0191
Epoch 41: LinearWarmup set learning rate to 0.0003602571825741953.
2023-04-13 13:19:11 || Epoch 41 start:
epoch:41, batch_id:0, loss:0.1073,             acc:0.9453 Tp/Tn_1/Tn_2: 121/0/7
epoch:41, batch_id:50, loss:0.1301,             acc:0.9406 Tp/Tn_1/Tn_2: 6140/57/331
epoch:41, batch_id:100, loss:0.1370,             acc:0.9400 Tp/Tn_1/Tn_2: 12152/115/661
epoch:41, batch_id:150, loss:0.0615,             acc:0.9389 Tp/Tn_1/Tn_2: 18148/164/1016
epoch:41, batch_id:200, loss:0.1564,             acc:0.9380 Tp/Tn_1/Tn_2: 24132/230/1366
Eval of epoch 41 => acc:0.9823, loss:0.0167
Epoch 42: LinearWarmup set learning rate to 0.00033572880904458267.
2023-04-13 13:21:53 || Epoch 42 start:
epoch:42, batch_id:0, loss:0.0469,             acc:0.8906 Tp/Tn_1/Tn_2: 114/0/14
epoch:42, batch_id:50, loss:0.0521,             acc:0.9396 Tp/Tn_1/Tn_2: 6134/54/340
epoch:42, batch_id:100, loss:0.0889,             acc:0.9394 Tp/Tn_1/Tn_2: 12144/113/671
epoch:42, batch_id:150, loss:0.0696,             acc:0.9393 Tp/Tn_1/Tn_2: 18155/162/1011
epoch:42, batch_id:200, loss:0.1327,             acc:0.9386 Tp/Tn_1/Tn_2: 24148/205/1375
Eval of epoch 42 => acc:0.9850, loss:0.0175
Epoch 43: LinearWarmup set learning rate to 0.0003116360531824075.
2023-04-13 13:24:36 || Epoch 43 start:
epoch:43, batch_id:0, loss:0.0971,             acc:0.9531 Tp/Tn_1/Tn_2: 122/1/5
epoch:43, batch_id:50, loss:0.1427,             acc:0.9415 Tp/Tn_1/Tn_2: 6146/66/316
epoch:43, batch_id:100, loss:0.0774,             acc:0.9418 Tp/Tn_1/Tn_2: 12176/119/633
epoch:43, batch_id:150, loss:0.0946,             acc:0.9421 Tp/Tn_1/Tn_2: 18209/174/945
epoch:43, batch_id:200, loss:0.0813,             acc:0.9436 Tp/Tn_1/Tn_2: 24278/224/1226
Eval of epoch 43 => acc:0.9837, loss:0.0171
Epoch 44: LinearWarmup set learning rate to 0.0002880428046450697.
2023-04-13 13:27:17 || Epoch 44 start:
epoch:44, batch_id:0, loss:0.0991,             acc:0.9375 Tp/Tn_1/Tn_2: 120/2/6
epoch:44, batch_id:50, loss:0.0619,             acc:0.9430 Tp/Tn_1/Tn_2: 6156/66/306
epoch:44, batch_id:100, loss:0.0414,             acc:0.9442 Tp/Tn_1/Tn_2: 12207/116/605
epoch:44, batch_id:150, loss:0.0691,             acc:0.9443 Tp/Tn_1/Tn_2: 18252/158/918
epoch:44, batch_id:200, loss:0.1740,             acc:0.9450 Tp/Tn_1/Tn_2: 24312/226/1190
Eval of epoch 44 => acc:0.9803, loss:0.0191
Epoch 45: LinearWarmup set learning rate to 0.00026501162848633996.
2023-04-13 13:30:00 || Epoch 45 start:
epoch:45, batch_id:0, loss:0.1803,             acc:0.9453 Tp/Tn_1/Tn_2: 121/3/4
epoch:45, batch_id:50, loss:0.0932,             acc:0.9444 Tp/Tn_1/Tn_2: 6165/62/301
epoch:45, batch_id:100, loss:0.0978,             acc:0.9466 Tp/Tn_1/Tn_2: 12238/114/576
epoch:45, batch_id:150, loss:0.0795,             acc:0.9465 Tp/Tn_1/Tn_2: 18293/183/852
epoch:45, batch_id:200, loss:0.0386,             acc:0.9464 Tp/Tn_1/Tn_2: 24349/247/1132
Eval of epoch 45 => acc:0.9800, loss:0.0187
Epoch 46: LinearWarmup set learning rate to 0.0002426035992450848.
2023-04-13 13:32:42 || Epoch 46 start:
epoch:46, batch_id:0, loss:0.0499,             acc:0.9453 Tp/Tn_1/Tn_2: 121/3/4
epoch:46, batch_id:50, loss:0.0299,             acc:0.9487 Tp/Tn_1/Tn_2: 6193/57/278
epoch:46, batch_id:100, loss:0.0724,             acc:0.9502 Tp/Tn_1/Tn_2: 12284/105/539
epoch:46, batch_id:150, loss:0.0453,             acc:0.9490 Tp/Tn_1/Tn_2: 18342/161/825
epoch:46, batch_id:200, loss:0.0850,             acc:0.9493 Tp/Tn_1/Tn_2: 24423/227/1078
Eval of epoch 46 => acc:0.9823, loss:0.0171
Epoch 47: LinearWarmup set learning rate to 0.00022087813898656754.
2023-04-13 13:35:28 || Epoch 47 start:
epoch:47, batch_id:0, loss:0.1089,             acc:0.9531 Tp/Tn_1/Tn_2: 122/3/3
epoch:47, batch_id:50, loss:0.0382,             acc:0.9517 Tp/Tn_1/Tn_2: 6213/54/261
epoch:47, batch_id:100, loss:0.1043,             acc:0.9527 Tp/Tn_1/Tn_2: 12317/111/500
epoch:47, batch_id:150, loss:0.0517,             acc:0.9504 Tp/Tn_1/Tn_2: 18369/169/790
epoch:47, batch_id:200, loss:0.0715,             acc:0.9511 Tp/Tn_1/Tn_2: 24469/205/1054
Eval of epoch 47 => acc:0.9800, loss:0.0221
Epoch 48: LinearWarmup set learning rate to 0.000199892859725816.
2023-04-13 13:38:16 || Epoch 48 start:
epoch:48, batch_id:0, loss:0.0330,             acc:0.9297 Tp/Tn_1/Tn_2: 119/0/9
epoch:48, batch_id:50, loss:0.0980,             acc:0.9557 Tp/Tn_1/Tn_2: 6239/57/232
epoch:48, batch_id:100, loss:0.0361,             acc:0.9517 Tp/Tn_1/Tn_2: 12304/104/520
epoch:48, batch_id:150, loss:0.0826,             acc:0.9513 Tp/Tn_1/Tn_2: 18387/139/802
epoch:48, batch_id:200, loss:0.0463,             acc:0.9517 Tp/Tn_1/Tn_2: 24485/189/1054
Eval of epoch 48 => acc:0.9813, loss:0.0175
Epoch 49: LinearWarmup set learning rate to 0.00017970341065091244.
2023-04-13 13:41:04 || Epoch 49 start:
epoch:49, batch_id:0, loss:0.0603,             acc:0.9688 Tp/Tn_1/Tn_2: 124/1/3
epoch:49, batch_id:50, loss:0.1243,             acc:0.9544 Tp/Tn_1/Tn_2: 6230/55/243
epoch:49, batch_id:100, loss:0.1252,             acc:0.9560 Tp/Tn_1/Tn_2: 12359/105/464
epoch:49, batch_id:150, loss:0.1489,             acc:0.9559 Tp/Tn_1/Tn_2: 18476/149/703
epoch:49, batch_id:200, loss:0.1104,             acc:0.9565 Tp/Tn_1/Tn_2: 24609/197/922
Eval of epoch 49 => acc:0.9813, loss:0.0170
Epoch 50: LinearWarmup set learning rate to 0.00016036333055135344.
2023-04-13 13:43:49 || Epoch 50 start:
epoch:50, batch_id:0, loss:0.0849,             acc:0.9766 Tp/Tn_1/Tn_2: 125/1/2
epoch:50, batch_id:50, loss:0.1497,             acc:0.9591 Tp/Tn_1/Tn_2: 6261/62/205
epoch:50, batch_id:100, loss:0.0328,             acc:0.9571 Tp/Tn_1/Tn_2: 12373/117/438
epoch:50, batch_id:150, loss:0.0561,             acc:0.9554 Tp/Tn_1/Tn_2: 18466/163/699
epoch:50, batch_id:200, loss:0.0406,             acc:0.9551 Tp/Tn_1/Tn_2: 24574/218/936
Eval of epoch 50 => acc:0.9783, loss:0.0228
Epoch 51: LinearWarmup set learning rate to 0.00014192390584280345.
2023-04-13 13:46:37 || Epoch 51 start:
epoch:51, batch_id:0, loss:0.0657,             acc:0.9766 Tp/Tn_1/Tn_2: 125/2/1
epoch:51, batch_id:50, loss:0.0603,             acc:0.9593 Tp/Tn_1/Tn_2: 6262/52/214
epoch:51, batch_id:100, loss:0.0685,             acc:0.9599 Tp/Tn_1/Tn_2: 12410/101/417
epoch:51, batch_id:150, loss:0.0658,             acc:0.9605 Tp/Tn_1/Tn_2: 18564/141/623
epoch:51, batch_id:200, loss:0.1815,             acc:0.9598 Tp/Tn_1/Tn_2: 24693/190/845
Eval of epoch 51 => acc:0.9783, loss:0.0193
Epoch 52: LinearWarmup set learning rate to 0.00012443403456474.
2023-04-13 13:49:23 || Epoch 52 start:
epoch:52, batch_id:0, loss:0.0292,             acc:0.9688 Tp/Tn_1/Tn_2: 124/0/4
epoch:52, batch_id:50, loss:0.0559,             acc:0.9586 Tp/Tn_1/Tn_2: 6258/47/223
epoch:52, batch_id:100, loss:0.0276,             acc:0.9597 Tp/Tn_1/Tn_2: 12407/95/426
epoch:52, batch_id:150, loss:0.0339,             acc:0.9609 Tp/Tn_1/Tn_2: 18572/138/618
epoch:52, batch_id:200, loss:0.0367,             acc:0.9616 Tp/Tn_1/Tn_2: 24739/190/799
Eval of epoch 52 => acc:0.9830, loss:0.0135
Epoch 53: LinearWarmup set learning rate to 0.00010794009671164484.
2023-04-13 13:52:12 || Epoch 53 start:
epoch:53, batch_id:0, loss:0.1342,             acc:0.9609 Tp/Tn_1/Tn_2: 123/2/3
epoch:53, batch_id:50, loss:0.1328,             acc:0.9634 Tp/Tn_1/Tn_2: 6289/58/181
epoch:53, batch_id:100, loss:0.0456,             acc:0.9643 Tp/Tn_1/Tn_2: 12466/110/352
epoch:53, batch_id:150, loss:0.0384,             acc:0.9635 Tp/Tn_1/Tn_2: 18622/167/539
epoch:53, batch_id:200, loss:0.1430,             acc:0.9627 Tp/Tn_1/Tn_2: 24768/218/742
Eval of epoch 53 => acc:0.9813, loss:0.0150
Epoch 54: LinearWarmup set learning rate to 9.248583124159438e-05.
2023-04-13 13:55:00 || Epoch 54 start:
epoch:54, batch_id:0, loss:0.0796,             acc:0.9688 Tp/Tn_1/Tn_2: 124/2/2
epoch:54, batch_id:50, loss:0.0554,             acc:0.9635 Tp/Tn_1/Tn_2: 6290/60/178
epoch:54, batch_id:100, loss:0.0500,             acc:0.9641 Tp/Tn_1/Tn_2: 12464/103/361
epoch:54, batch_id:150, loss:0.0562,             acc:0.9641 Tp/Tn_1/Tn_2: 18634/146/548
epoch:54, batch_id:200, loss:0.0797,             acc:0.9641 Tp/Tn_1/Tn_2: 24804/197/727
Eval of epoch 54 => acc:0.9766, loss:0.0218
Epoch 55: LinearWarmup set learning rate to 7.811222008840718e-05.
2023-04-13 13:57:45 || Epoch 55 start:
epoch:55, batch_id:0, loss:0.1431,             acc:0.9609 Tp/Tn_1/Tn_2: 123/1/4
epoch:55, batch_id:50, loss:0.0162,             acc:0.9660 Tp/Tn_1/Tn_2: 6306/47/175
epoch:55, batch_id:100, loss:0.0765,             acc:0.9660 Tp/Tn_1/Tn_2: 12489/97/342
epoch:55, batch_id:150, loss:0.1531,             acc:0.9651 Tp/Tn_1/Tn_2: 18654/152/522
epoch:55, batch_id:200, loss:0.0420,             acc:0.9652 Tp/Tn_1/Tn_2: 24832/192/704
Eval of epoch 55 => acc:0.9816, loss:0.0156
Epoch 56: LinearWarmup set learning rate to 6.485737948492237e-05.
2023-04-13 14:00:29 || Epoch 56 start:
epoch:56, batch_id:0, loss:0.0480,             acc:0.9531 Tp/Tn_1/Tn_2: 122/0/6
epoch:56, batch_id:50, loss:0.0705,             acc:0.9655 Tp/Tn_1/Tn_2: 6303/53/172
epoch:56, batch_id:100, loss:0.0207,             acc:0.9639 Tp/Tn_1/Tn_2: 12461/98/369
epoch:56, batch_id:150, loss:0.0511,             acc:0.9644 Tp/Tn_1/Tn_2: 18640/151/537
epoch:56, batch_id:200, loss:0.0410,             acc:0.9647 Tp/Tn_1/Tn_2: 24821/188/719
Eval of epoch 56 => acc:0.9800, loss:0.0145
Epoch 57: LinearWarmup set learning rate to 5.275645888560221e-05.
2023-04-13 14:03:13 || Epoch 57 start:
epoch:57, batch_id:0, loss:0.0331,             acc:0.9844 Tp/Tn_1/Tn_2: 126/2/0
epoch:57, batch_id:50, loss:0.0475,             acc:0.9660 Tp/Tn_1/Tn_2: 6306/41/181
epoch:57, batch_id:100, loss:0.0339,             acc:0.9657 Tp/Tn_1/Tn_2: 12484/79/365
epoch:57, batch_id:150, loss:0.0652,             acc:0.9657 Tp/Tn_1/Tn_2: 18665/115/548
epoch:57, batch_id:200, loss:0.0584,             acc:0.9664 Tp/Tn_1/Tn_2: 24863/170/695
Eval of epoch 57 => acc:0.9823, loss:0.0159
Epoch 58: LinearWarmup set learning rate to 4.1841547756497625e-05.
2023-04-13 14:05:58 || Epoch 58 start:
epoch:58, batch_id:0, loss:0.0751,             acc:0.9766 Tp/Tn_1/Tn_2: 125/1/2
epoch:58, batch_id:50, loss:0.0879,             acc:0.9671 Tp/Tn_1/Tn_2: 6313/55/160
epoch:58, batch_id:100, loss:0.0369,             acc:0.9663 Tp/Tn_1/Tn_2: 12492/102/334
epoch:58, batch_id:150, loss:0.0193,             acc:0.9668 Tp/Tn_1/Tn_2: 18686/144/498
epoch:58, batch_id:200, loss:0.0755,             acc:0.9673 Tp/Tn_1/Tn_2: 24887/200/641
Eval of epoch 58 => acc:0.9796, loss:0.0172
Epoch 59: LinearWarmup set learning rate to 3.214159047975324e-05.
2023-04-13 14:08:46 || Epoch 59 start:
epoch:59, batch_id:0, loss:0.0665,             acc:0.9688 Tp/Tn_1/Tn_2: 124/1/3
epoch:59, batch_id:50, loss:0.0474,             acc:0.9686 Tp/Tn_1/Tn_2: 6323/42/163
epoch:59, batch_id:100, loss:0.0684,             acc:0.9691 Tp/Tn_1/Tn_2: 12529/94/305
epoch:59, batch_id:150, loss:0.0375,             acc:0.9695 Tp/Tn_1/Tn_2: 18738/134/456
epoch:59, batch_id:200, loss:0.1157,             acc:0.9692 Tp/Tn_1/Tn_2: 24935/179/614
Eval of epoch 59 => acc:0.9826, loss:0.0152
Epoch 60: LinearWarmup set learning rate to 2.368230959830875e-05.
2023-04-13 14:11:31 || Epoch 60 start:
epoch:60, batch_id:0, loss:0.0774,             acc:0.9688 Tp/Tn_1/Tn_2: 124/0/4
epoch:60, batch_id:50, loss:0.0975,             acc:0.9669 Tp/Tn_1/Tn_2: 6312/40/176
epoch:60, batch_id:100, loss:0.0308,             acc:0.9684 Tp/Tn_1/Tn_2: 12520/83/325
epoch:60, batch_id:150, loss:0.0738,             acc:0.9694 Tp/Tn_1/Tn_2: 18736/124/468
epoch:60, batch_id:200, loss:0.0497,             acc:0.9695 Tp/Tn_1/Tn_2: 24943/167/618
Eval of epoch 60 => acc:0.9820, loss:0.0140
Epoch 61: LinearWarmup set learning rate to 1.6486137604339813e-05.


1.5 Testing

In the training above, the final training accuracy is 0.97 and the validation accuracy is 0.98. Although the loss fluctuates a bit late in training, training accuracy is still trending upward, so the model from the last epoch is chosen as the final model for testing.

Running the script below shows 97% accuracy on the test set, which is quite good.

%cd
!python3 test_model.py --img_path 'captcha_img/test' --param_path 'runs/last'

2. Gradient Attack

In the previous sections we used adversarial training to build a CAPTCHA recognizer with 97% accuracy. Since adversarial training is itself a defense, will a gradient attack still be effective?

Unlike the attack used during training, the attack noise here spans all three RGB channels, and the noise is optimized with an Adam optimizer, which yields a stronger attack.

    # Code walkthrough; the main difference is that Adam optimizes the noise iterations
    def attack_ctc(self, model, input_batch, clip_base, labels, label_lengths):
        """
        Gradient attack driven by CTC loss
        :param model: the model under attack
        :param input_batch: batch of model inputs
        :param clip_base: base data used for clipping
        :param labels: labels, needed by ctc_loss
        :param label_lengths: label lengths, needed by ctc_loss
        :return: attack noise
        """
        loss_func = paddle.nn.functional.ctc_loss
        # initialize the noise
        self.delta = self.init_noise(clip_base)
        # define the optimizer
        optimizer = paddle.optimizer.Adam(
            learning_rate=self.iter_ratio,
            parameters=[self.delta]
        )
        # optimize in the reverse direction (maximize the loss)
        loss_func = self.negative_loss(loss_func)
        batch_size = input_batch.shape[0]
        for _ in range(self.iter_num):  # iteration count
            # forward pass (trans_func converts the RGB noise to match the grayscale input)
            delta_gray = trans_func(self.delta)
            outputs = model(input_batch + delta_gray)
            # loss computation
            input_length = outputs.shape[0]
            input_lengths = paddle.full([batch_size], input_length, dtype='int64')
            loss = loss_func(outputs, labels, input_lengths, label_lengths)
            # backpropagate
            loss.backward(retain_graph=False)
            # take an optimizer step on the noise
            optimizer.step()
            optimizer.clear_grad()
            # bound the noise: stay within the configured budget
            delta_new = paddle.clip(self.delta, -self.eps, self.eps)
            self.delta.clear_grad()
            # print(loss)
            # bound the noise: stay within the valid pixel range
            delta_new = paddle.clip(clip_base + delta_new, -1.0, 1.0) - clip_base
            delta_new.stop_gradient = True  # block gradient flow after backward
            paddle.assign(delta_new, self.delta)
        return self.delta
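`negative_loss` is not shown above; presumably it simply flips the sign of the loss so that Adam's minimization maximizes the CTC loss. A minimal sketch under that assumption:

    # Hedged sketch: minimizing -loss with Adam maximizes the CTC loss,
    # driving the prediction away from the true label.
    def negative_loss(self, loss_func):
        def wrapped(*args, **kwargs):
            return -loss_func(*args, **kwargs)
        return wrapped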

Note that this is a white-box attack: the model's gradient information is used directly to update the attack noise. There is also a black-box approach, the transfer attack: when the target model is unavailable, run a white-box attack on a known model for a similar task; the generated attack samples transfer to some extent and retain some effectiveness against the black-box model. Transfer attacks are not tested here; if interested, try them on other models yourself, or see my other project: https://aistudio.baidu.com/aistudio/projectdetail/5843946

2.1 Generating Attack Samples

You can try several different parameter settings to compare attack effectiveness.

Set the parameters in the config:

attack_sample_config = {
    # PGD-related
    'iter_num': 60,  # number of iterations
    'eps_iter': 4/255,  # fixed step size when no optimizer is used; ignored when use_opt is True
    'eps': 80/255,  # maximum noise limit
    'use_opt': True,  # whether to use the Adam optimizer
    'iter_ratio': 0.1,  # optimizer learning rate; ignored when use_opt is False
...
}
%cd
!python3 generate_attack_sample.py --model_path 'runs/last' --save_path 'captcha_img/attack_sample'

2.2 Attack Effectiveness Test

  • With the same attack settings as adversarial training (5 iterations, maximum noise 30/255), accuracy stays at 94%; the attack is ineffective, as expected, since the model was trained on data attacked at exactly this strength.


Predicted: AICISI

  • Increasing to 20 iterations with maximum noise 60/255 and Adam optimization drops accuracy to 46%.


Predicted: AICISI

  • Increasing to 60 iterations with maximum noise 80/255, again optimized with Adam, drops accuracy to 5%.


Predicted: RICBH

The larger the attack noise, the lower the model's recognition rate, but larger noise is also more likely to disturb humans. So if this method is used for anti-scraping, the noise must stay moderate: if humans can no longer read the CAPTCHA, it loses its purpose.

%cd
!python3 test_model.py --img_path 'captcha_img/attack_sample' --param_path 'runs/last'
