PaddleNLP Text Classification with ERNIE 3.0, Using Chinese Medical Search Query Intent Classification (KUAKE-QIC) as an Example [Multi-Class (Single-Label)]

0. Preface: An Introduction to Text Classification

Text classification is one of the most common tasks in natural language processing: given a sentence or a passage of text, a classifier assigns it to a category. It is applied across a wide range of everyday and specialist scenarios, including long- and short-text classification, sentiment analysis, news classification, event categorization, government-data classification, product-information classification, product-category prediction, article classification, paper and patent classification, case-description classification, charge classification, intent classification, automatic email tagging, positive/negative review detection, adverse-drug-reaction classification, dialogue classification, tax-type identification, automatic triage of incoming calls, complaint classification, ad detection, detection of sensitive or illegal content, content-safety screening, public-opinion analysis, topic tagging, and more.

By label type, text classification tasks fall into three kinds: multi-class (single label), multi-label, and hierarchical classification. Below we use the news text classification examples in the following figure to illustrate the differences between the three.

[Figure: news-text examples of multi-class, multi-label, and hierarchical classification]

PaddleNLP offers convenient, easy-to-use interfaces through AutoModelForSequenceClassification and AutoTokenizer: the from_pretrained() method loads a pretrained model of any supported architecture by model name or by the path to its parameter files, and stacks a linear layer on top of the output; the corresponding pretrained weights download quickly and reliably. The Transformer pretrained-model summary covers more than 40 mainstream pretrained models such as ERNIE, BERT, and RoBERTa, with more than 500 sets of model weights. Below, we use the Chinese ERNIE 3.0 base model to show how to load a pretrained model and its tokenizer:

from paddlenlp.transformers import AutoModelForSequenceClassification, AutoTokenizer

num_classes = 10  # number of target classes; the classification head is sized accordingly
model_name = "ernie-3.0-base-zh"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_classes=num_classes)
tokenizer = AutoTokenizer.from_pretrained(model_name)
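
As a quick sanity check, you can tokenize one query and run a single forward pass. This is a minimal sketch assuming the model and tokenizer loaded above (the sample sentence is made up):

import paddle

# encode a single query; PaddleNLP tokenizers return input_ids and token_type_ids
encoded = tokenizer("心肌缺血如何治疗?", max_seq_len=128)
input_ids = paddle.to_tensor([encoded["input_ids"]])
token_type_ids = paddle.to_tensor([encoded["token_type_ids"]])

# the classification head returns one logit per class
logits = model(input_ids, token_type_ids)
print(logits.shape)  # [1, 10]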

1. Data Preparation

1.1 Loading Built-in Datasets and Custom Datasets

With the load_dataset(), MapDataset, and IterDataset utilities provided by PaddleNLP, you can easily build a dataset of your own.

The general PaddleNLP data processing pipeline is:

  1. Load the dataset (built-in or custom; the dataset returns the raw data).

  2. Define a trans_func() that tokenizes the text, converts tokens to ids, and so on, and pass it to the dataset's map() method to turn the raw data into features.

  3. Based on the result of the previous step, define the batchify method and a BatchSampler.

  4. Define a DataLoader and pass it the BatchSampler and batchify_fn() (a sketch of steps 2-4 follows).
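
A minimal sketch of steps 2-4, assuming a MapDataset whose examples look like {'text': ..., 'label': ...} (the names here are illustrative, not taken from any particular script):

from functools import partial

import paddle
from paddlenlp.data import Pad, Stack, Tuple
from paddlenlp.transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ernie-3.0-base-zh")

# step 2: turn one raw example into model features
def trans_func(example, tokenizer, max_seq_len=128):
    encoded = tokenizer(example["text"], max_seq_len=max_seq_len)
    return encoded["input_ids"], encoded["token_type_ids"], example["label"]

# train_ds.map(partial(trans_func, tokenizer=tokenizer))  # train_ds comes from step 1

# step 3: pad variable-length fields and stack labels into batches
batchify_fn = lambda samples, fn=Tuple(
    Pad(axis=0, pad_val=tokenizer.pad_token_id),       # input_ids
    Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # token_type_ids
    Stack(dtype="int64"),                              # labels
): fn(samples)

# step 4: wire the sampler and the collate function into a DataLoader
# batch_sampler = paddle.io.BatchSampler(train_ds, batch_size=32, shuffle=True)
# data_loader = paddle.io.DataLoader(train_ds, batch_sampler=batch_sampler, collate_fn=batchify_fn)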


PaddleNLP Datasets API (for reference):

PaddleNLP provides quick-load APIs for a number of datasets; in actual use, add the appropriate splits information as needed:

[Table: datasets supported by the quick-load API]

Loading datasets

Quickly loading built-in datasets
PaddleNLP currently ships more than 20 built-in NLP datasets, covering reading comprehension, text classification, sequence labeling, machine translation, and other tasks. The currently available datasets can be found in the dataset list.

Take the msra_ner dataset as an example:

The load_dataset() method looks up the data-reading script for msra_ner under paddlenlp.datasets (default path: paddlenlp/datasets/msra_ner.py) and calls the relevant methods of the DatasetBuilder class in that script to build the dataset.

The generated dataset can be returned as either a MapDataset or an IterDataset, extensions of paddle.io.Dataset and paddle.io.IterableDataset respectively; simply set the lazy argument of load_dataset() to choose the type. False returns a MapDataset, True returns an IterDataset, and the default None returns the DatasetBuilder's default type, which in most cases is a MapDataset.

!pip install --upgrade paddlenlp
from paddlenlp.datasets import load_dataset
train_ds, test_ds = load_dataset("msra_ner", splits=("train", "test"))
for i in range(3):
    print(train_ds[i])
{'tokens': ['当', '希', '望', '工', '程', '救', '助', '的', '百', '万', '儿', '童', '成', '长', '起', '来', ',', '科', '教', '兴', '国', '蔚', '然', '成', '风', '时', ',', '今', '天', '有', '收', '藏', '价', '值', '的', '书', '你', '没', '买', ',', '明', '日', '就', '叫', '你', '悔', '不', '当', '初', '!'], 'labels': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]}
{'tokens': ['藏', '书', '本', '来', '就', '是', '所', '有', '传', '统', '收', '藏', '门', '类', '中', '的', '第', '一', '大', '户', ',', '只', '是', '我', '们', '结', '束', '温', '饱', '的', '时', '间', '太', '短', '而', '已', '。'], 'labels': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]}
{'tokens': ['因', '有', '关', '日', '寇', '在', '京', '掠', '夺', '文', '物', '详', '情', ',', '藏', '界', '较', '为', '重', '视', ',', '也', '是', '我', '们', '收', '藏', '北', '京', '史', '料', '中', '的', '要', '件', '之', '一', '。'], 'labels': [6, 6, 6, 4, 6, 6, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6]}

Selecting a sub-dataset

Some datasets are collections of many sub-datasets, each of which is an independent dataset. The GLUE benchmark, for example, contains ten sub-datasets such as COLA, SST2, MRPC, and QQP.

load_dataset() provides a name argument for specifying the sub-dataset you want. Usage:

from paddlenlp.datasets import load_dataset
train_ds, dev_ds = load_dataset("glue", name="cola", splits=("train", "dev"))
for i in range(3):
    print(train_ds[i])
{'sentence': "Our friends won't buy this analysis, let alone the next one we propose.", 'labels': 1}
{'sentence': "One more pseudo generalization and I'm giving up.", 'labels': 1}
{'sentence': "One more pseudo generalization or I'm giving up.", 'labels': 1}

1.1.1 Reading Local Data in a Built-in Dataset's Format

Sometimes we want to replace the data of a built-in dataset with local files in the same format (for example, training data augmented for a SQuAD competition). The data_files argument of load_dataset() provides exactly this. Take SQuAD as an example:

from paddlenlp.datasets import load_dataset
train_ds, dev_ds = load_dataset("squad", data_files=("my_train_file.json", "my_dev_file.json"))
test_ds = load_dataset("squad", data_files="my_test_file.json")

Note

For some datasets, different splits are read in different ways. In that case you must also pass splits information that corresponds one-to-one with data_files.

Here splits no longer selects a built-in dataset; it specifies the format in which the local files are read.

Take the COLA dataset as an example:

from paddlenlp.datasets import load_dataset
train_ds, test_ds = load_dataset("glue", "cola", splits=["train", "test"], data_files=["my_train_file.csv", "my_test_file.csv"])

Also note that there is no default loading option here: at least one of splits and data_files must be specified.

This method is fairly straightforward. The one thing to watch out for is that the file format must match the built-in dataset exactly; if it does not, a small conversion script will do, as sketched below.
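
For example, a few lines suffice to turn a comma-separated file into a tab-separated one (the filenames here are hypothetical):

# rewrite "text,label" lines as "text<TAB>label" lines
with open("raw.csv", encoding="utf-8") as fin, \
        open("my_train_file.csv", "w", encoding="utf-8") as fout:
    for line in fin:
        text, label = line.rstrip("\n").rsplit(",", 1)  # the label is the last field
        fout.write(f"{text}\t{label}\n")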

1.1.2 Custom Datasets

With load_dataset(), MapDataset, and IterDataset, anyone can easily define a dataset of their own.

Creating a dataset from local files
When creating a dataset from local files, we recommend writing a read function that matches the local data format and passing it into load_dataset() to create the dataset.

Take the data of the waybill_ie waybill information extraction task as an example:

from paddlenlp.datasets import load_dataset

def read(data_path):
    with open(data_path, 'r', encoding='utf-8') as f:
        # skip the header line
        next(f)
        for line in f:
            words, labels = line.strip('\n').split('\t')
            words = words.split('\002')
            labels = labels.split('\002')
            yield {'tokens': words, 'labels': labels}

# data_path is the argument passed through to read()
map_ds = load_dataset(read, data_path='数据集/data1/dev.txt', lazy=False)
iter_ds = load_dataset(read, data_path='数据集/data1/dev.txt', lazy=True)

for i in range(3):
    print(map_ds[i])

{'tokens': ['喻', '晓', '刚', '云', '南', '省', '楚', '雄', '彝', '族', '自', '治', '州', '南', '华', '县', '东', '街', '古', '城', '路', '3', '7', '号', '1', '8', '5', '1', '3', '3', '8', '6', '1', '6', '3'], 'labels': ['P-B', 'P-I', 'P-I', 'A1-B', 'A1-I', 'A1-I', 'A2-B', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A3-B', 'A3-I', 'A3-I', 'A4-B', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'T-B', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I']}
{'tokens': ['1', '3', '4', '2', '6', '3', '3', '8', '1', '3', '5', '寇', '铭', '哲', '黑', '龙', '江', '省', '七', '台', '河', '市', '桃', '山', '区', '风', '采', '路', '朝', '阳', '广', '场'], 'labels': ['T-B', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'P-B', 'P-I', 'P-I', 'A1-B', 'A1-I', 'A1-I', 'A1-I', 'A2-B', 'A2-I', 'A2-I', 'A2-I', 'A3-B', 'A3-I', 'A3-I', 'A4-B', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I']}
{'tokens': ['湖', '南', '省', '长', '沙', '市', '岳', '麓', '区', '银', '杉', '路', '3', '1', '号', '绿', '地', '中', '央', '广', '场', '7', '栋', '2', '1', '楼', '须', '平', '盛', '1', '3', '6', '0', '1', '2', '6', '9', '5', '3', '8'], 'labels': ['A1-B', 'A1-I', 'A1-I', 'A2-B', 'A2-I', 'A2-I', 'A3-B', 'A3-I', 'A3-I', 'A4-B', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'P-B', 'P-I', 'P-I', 'T-B', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I']}
from paddlenlp.datasets import load_dataset

def read(data_path):
    with open(data_path, 'r', encoding='utf-8') as f:
        # skip the header line
        next(f)
        for line in f:
            words, labels = line.strip('\n').split(' ')
            # words = words.split('\002')
            # labels = labels.split('')  # for classification, the text and the label usually need no further splitting
            yield {'connect': words, 'labels': labels}

# data_path is the argument passed through to read()
map_ds = load_dataset(read, data_path='数据集/input.txt', lazy=False)
# iter_ds = load_dataset(read, data_path='数据集/dev.txt', lazy=True)

# train= load_dataset(read, data_path='数据集/input_train.txt', lazy=False)
# dev= load_dataset(read, data_path='数据集/input_dev.txt', lazy=False) # predefine your own training and dev sets
for i in range(3):
    print(map_ds[i])
{'connect': '出栏一头猪亏损300元,究竟谁能笑到最后!', 'labels': '金融'}
{'connect': '区块链投资心得,能做到就不会亏钱', 'labels': '金融'}
{'connect': '你家拆迁,要钱还是要房?答案一目了然。', 'labels': '房产'}

Error:
not enough values to unpack (expected 2, got 1): this is a separator problem; check that the separator in the input file matches the one used in read(). A quick check is sketched below.
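
A small scan like the following quickly locates the offending lines (the path and the separator are assumptions; adjust them to your file):

# report every line that does not split into exactly two fields
with open("数据集/input.txt", encoding="utf-8") as f:
    next(f)  # skip the header line, as read() does
    for i, line in enumerate(f, start=2):
        fields = line.rstrip("\n").split(" ")
        if len(fields) != 2:
            print(f"line {i}: {len(fields)} fields: {line!r}")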

API reference:
https://paddlenlp.readthedocs.io/zh/latest/source/paddlenlp.datasets.dataset.html


We recommend writing the data-reading code as a generator, which makes it easier to build both MapDataset and IterDataset, and yielding each example as a dict, which makes it easier to monitor the data flowing through.

In practice, MapDataset meets the requirements in the vast majority of cases; IterDataset is generally only worth considering when the dataset is too large to load into memory at once.

Notes:

  1. Note that only PaddleNLP's built-in datasets can automatically convert the labels in the data to ids (for the exact conditions, see Creating a DatasetBuilder).

  2. A custom dataset like the one above must add the label-to-id conversion itself, in its convert-to-feature function (a sketch follows this list).

  3. Arguments of the custom data-reading function can be passed into load_dataset() directly as keyword arguments. Also, for custom datasets the lazy argument must be passed explicitly.
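
A sketch of such a convert-to-feature function for the classification data above (label_list is an assumption; in practice, build it from label.txt or from the training data):

from functools import partial

label_list = ["金融", "房产"]  # assumed label set; extend as needed
label2id = {label: i for i, label in enumerate(label_list)}

def convert_example(example, tokenizer, max_seq_length=128):
    # tokenize the text and attach the numeric label id
    encoded = tokenizer(example["connect"], max_seq_len=max_seq_length)
    encoded["labels"] = label2id[example["labels"]]
    return encoded

# map_ds.map(partial(convert_example, tokenizer=tokenizer))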

2. Fine-Tuning a Text Classification Model Based on ERNIE 3.0

2.1 Loading an Existing Dataset: Medical Search Query Intent Classification from CBLUE (Training)

Dataset definition:
Using the medical search query intent classification (KUAKE-QIC) task from the public CBLUE benchmark as the example, we fine-tune the model on the training set and evaluate it on the dev set with accuracy.

The training script's main arguments are:

save_dir: directory where model checkpoints are saved; defaults to the checkpoint folder under the current directory.

dataset: training dataset; defaults to "cblue".

dataset_dir: path to a local dataset, whose directory should contain train.txt, dev.txt, and label.txt; defaults to None.

task_name: name of the task within the training dataset; defaults to "KUAKE-QIC".

max_seq_length: maximum sequence length used by the ERNIE model, at most 512; lower it if you run out of GPU memory; defaults to 128.

model_name: pretrained model to use; defaults to "ernie-3.0-base-zh".

device: device to train on; one of cpu, gpu, xpu, npu. When training on GPU, the gpus argument can be used to select specific cards.

batch_size: batch size; adjust it to your GPU memory and lower it if memory runs out; defaults to 32.

learning_rate: maximum learning rate for fine-tuning; defaults to 6e-5.

weight_decay: strength of the regularization term, used to prevent overfitting; defaults to 0.01.

early_stop: whether to use early stopping; defaults to False.

early_stop_nums: training stops if performance on the dev set has not improved within this many epochs; defaults to 4.

epochs: number of training epochs; defaults to 100.

warmup: whether to use a learning-rate warmup strategy; defaults to False.

warmup_proportion: proportion of steps used for warmup; if set to 0.1, the learning rate grows from 0 to learning_rate over the first 10% of steps and then slowly decays; defaults to 0.1 (see the sketch after this list).

logging_steps: number of steps between log lines; defaults to 5.

init_from_ckpt: path of an initial model checkpoint to load; defaults to None.

seed: random seed; defaults to 3.
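
For reference, here is how warmup and warmup_proportion are typically wired up with PaddleNLP's LinearDecayWithWarmup scheduler; this is a sketch using the defaults above, not necessarily the exact code in train.py:

import paddle
from paddlenlp.transformers import LinearDecayWithWarmup

learning_rate, warmup_proportion = 6e-5, 0.1  # the defaults listed above
num_training_steps = 1000                     # = len(train_data_loader) * epochs in a real script

# the LR rises from 0 to learning_rate over the first 10% of steps, then decays linearly
lr_scheduler = LinearDecayWithWarmup(learning_rate, num_training_steps, warmup_proportion)
# optimizer = paddle.optimizer.AdamW(learning_rate=lr_scheduler,
#                                    parameters=model.parameters(),
#                                    weight_decay=0.01)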

parser.add_argument("--save_dir",
                    default="./checkpoint",
                    type=str,
                    help="The output directory where the model "
                    "checkpoints will be written.")
parser.add_argument("--dataset",
                    default="cblue",
                    type=str,
                    help="Dataset for text classfication.")
parser.add_argument("--dataset_dir",
                    default=None,
                    type=str,
                    help="Local dataset directory should include"
                    "train.txt, dev.txt and label.txt")
parser.add_argument("--task_name",
                    default="KUAKE-QIC",
                    type=str,
                    help="Task name for text classfication dataset.")
parser.add_argument("--max_seq_length",
                    default=128,
                    type=int,
                    help="The maximum total input sequence length"
                    "after tokenization. Sequences longer than this "
                    "will be truncated, sequences shorter will be padded.")
parser.add_argument('--model_name',
                    default="ernie-3.0-base-zh",
                    help="Select model to train, defaults "
                    "to ernie-3.0-base-zh.")
parser.add_argument('--device',
                    choices=['cpu', 'gpu', 'xpu', 'npu'],
                    default="gpu",
                    help="Select which device to train model, defaults to gpu.")
parser.add_argument("--batch_size",
                    default=32,
                    type=int,
                    help="Batch size per GPU/CPU for training.")
parser.add_argument("--learning_rate",
                    default=6e-5,
                    type=float,
                    help="The initial learning rate for Adam.")
parser.add_argument("--weight_decay",
                    default=0.01,
                    type=float,
                    help="Weight decay if we apply some.")
parser.add_argument('--early_stop',
                    action='store_true',
                    help='Epoch before early stop.')
parser.add_argument('--early_stop_nums',
                    type=int,
                    default=4,
                    help='Number of epoch before early stop.')
parser.add_argument("--epochs",
                    default=100,
                    type=int,
                    help="Total number of training epochs to perform.")
parser.add_argument('--warmup',
                    action='store_true',
                    help="whether use warmup strategy")
parser.add_argument('--warmup_proportion',
                    default=0.1,
                    type=float,
                    help="Linear warmup proportion of learning "
                    "rate over the training process.")
parser.add_argument("--logging_steps",
                    default=5,
                    type=int,
                    help="The interval steps to logging.")
parser.add_argument("--init_from_ckpt",
                    type=str,
                    default=None,
                    help="The path of checkpoint to be loaded.")
parser.add_argument("--seed",
                    type=int,
                    default=3,
                    help="random seed for initialization")
!pip install --upgrade paddlenlp

2.1.1 Modifying the Performance Metrics

On modifying or adding the performance metrics, see the reference manual (I have already added them):
https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/metric/Overview_cn.html#about-metric-class


import numpy as np
import paddle

x = np.array([0.1, 0.5, 0.6, 0.7])
y = np.array([0, 1, 1, 1])

m = paddle.metric.Precision()
m.update(x, y)
res = m.accumulate()
print(res) # 1.0

import numpy as np
import paddle

x = np.array([0.1, 0.5, 0.6, 0.7])
y = np.array([1, 0, 1, 1])

m = paddle.metric.Recall()
m.update(x, y)
res = m.accumulate()
print(res) # 2.0 / 3.0

# combining the two results above: precision = 1.0, recall = 2.0 / 3.0
precision, recall = 1.0, 2.0 / 3.0
f1_score = float(2 * precision * recall /
                 (precision + recall))
print(f1_score)  # 0.8

The approach above behaves normally for accuracy and precision but abnormally for recall, so I looked at the source:
https://github.com/PaddlePaddle/Paddle/blob/release%2F2.3/python/paddle/metric/metrics.py

My modified file: new

This is not resolved yet.

A gripe: in the latest PaddleNLP release, the corresponding .py file is gone!

What I actually use instead: AccuracyAndF1 (usage sketched below)

https://paddlenlp.readthedocs.io/zh/latest/source/paddlenlp.metrics.glue.html
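
Its usage mirrors the paddle.metric classes shown earlier; the following sketch follows the documented example, to the best of my reading of the docs:

import paddle
from paddlenlp.metrics import AccuracyAndF1

x = paddle.to_tensor([[0.1, 0.9], [0.5, 0.5], [0.6, 0.4], [0.7, 0.3]])
y = paddle.to_tensor([[1], [0], [1], [1]])

m = AccuracyAndF1()
correct = m.compute(x, y)  # per-sample correctness from logits and labels
m.update(correct)
res = m.accumulate()       # (acc, precision, recall, f1, (acc + f1) / 2)
print(res)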


My modified file: new2

# !python train.py --warmup --early_stop --epochs 10 --model_name "ernie-3.0-base-zh" --max_seq_length 128 --batch_size 32 --logging_steps 10 --learning_rate 6e-5 
!python train.py --warmup --early_stop --epochs 5 --model_name ernie-3.0-medium-zh
[2022-07-26 14:17:50,691] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'> to load 'ernie-3.0-medium-zh'.
[2022-07-26 14:17:50,691] [    INFO] - Downloading https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh_vocab.txt and saved to /home/aistudio/.paddlenlp/models/ernie-3.0-medium-zh
[2022-07-26 14:17:50,691] [    INFO] - Downloading ernie_3.0_medium_zh_vocab.txt from https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh_vocab.txt
100%|████████████████████████████████████████| 182k/182k [00:00<00:00, 1.48MB/s]
[2022-07-26 14:17:50,958] [    INFO] - tokenizer config file saved in /home/aistudio/.paddlenlp/models/ernie-3.0-medium-zh/tokenizer_config.json
[2022-07-26 14:17:50,958] [    INFO] - Special tokens file saved in /home/aistudio/.paddlenlp/models/ernie-3.0-medium-zh/special_tokens_map.json
[2022-07-26 14:17:50,959] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.modeling.ErnieForSequenceClassification'> to load 'ernie-3.0-medium-zh'.
[2022-07-26 14:17:50,959] [    INFO] - Downloading https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh.pdparams and saved to /home/aistudio/.paddlenlp/models/ernie-3.0-medium-zh
[2022-07-26 14:17:50,959] [    INFO] - Downloading ernie_3.0_medium_zh.pdparams from https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh.pdparams
100%|████████████████████████████████████████| 313M/313M [00:21<00:00, 15.0MB/s]
W0726 14:18:13.007437  1797 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0726 14:18:13.010838  1797 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
[2022-07-26 14:18:17,263] [    INFO] - global step 10, epoch: 1, batch: 10, loss: 2.45565, acc: 0.01562, speed: 4.76 step/s
[2022-07-26 14:18:17,743] [    INFO] - global step 20, epoch: 1, batch: 20, loss: 2.01481, acc: 0.13906, speed: 21.14 step/s
[2022-07-26 14:18:18,230] [    INFO] - global step 30, epoch: 1, batch: 30, loss: 1.98736, acc: 0.20417, speed: 20.77 step/s
[2022-07-26 14:18:18,746] [    INFO] - global step 40, epoch: 1, batch: 40, loss: 1.56398, acc: 0.26094, speed: 19.73 step/s
[2022-07-26 14:18:19,387] [    INFO] - global step 50, epoch: 1, batch: 50, loss: 1.32730, acc: 0.31438, speed: 16.06 step/s
[2022-07-26 14:18:19,935] [    INFO] - global step 60, epoch: 1, batch: 60, loss: 1.25555, acc: 0.36406, speed: 18.46 step/s
[2022-07-26 14:18:20,466] [    INFO] - global step 70, epoch: 1, batch: 70, loss: 1.03789, acc: 0.41741, speed: 19.04 step/s
[2022-07-26 14:18:20,943] [    INFO] - global step 80, epoch: 1, batch: 80, loss: 0.78225, acc: 0.45664, speed: 21.57 step/s
[2022-07-26 14:18:21,566] [    INFO] - global step 90, epoch: 1, batch: 90, loss: 0.58045, acc: 0.48611, speed: 16.06 step/s
[2022-07-26 14:18:22,315] [    INFO] - global step 100, epoch: 1, batch: 100, loss: 0.52578, acc: 0.51250, speed: 13.82 step/s
[2022-07-26 14:18:23,081] [    INFO] - global step 110, epoch: 1, batch: 110, loss: 0.57547, acc: 0.53551, speed: 13.07 step/s
[2022-07-26 14:18:23,694] [    INFO] - global step 120, epoch: 1, batch: 120, loss: 0.57023, acc: 0.55521, speed: 17.66 step/s
[2022-07-26 14:18:24,294] [    INFO] - global step 130, epoch: 1, batch: 130, loss: 0.53647, acc: 0.57163, speed: 16.98 step/s
[2022-07-26 14:18:24,777] [    INFO] - global step 140, epoch: 1, batch: 140, loss: 0.37499, acc: 0.58705, speed: 20.91 step/s
[2022-07-26 14:18:25,331] [    INFO] - global step 150, epoch: 1, batch: 150, loss: 0.68366, acc: 0.59854, speed: 18.31 step/s
[2022-07-26 14:18:25,880] [    INFO] - global step 160, epoch: 1, batch: 160, loss: 0.84607, acc: 0.61055, speed: 18.44 step/s
[2022-07-26 14:18:26,433] [    INFO] - global step 170, epoch: 1, batch: 170, loss: 0.37839, acc: 0.62206, speed: 18.31 step/s
[2022-07-26 14:18:26,911] [    INFO] - global step 180, epoch: 1, batch: 180, loss: 0.84740, acc: 0.63125, speed: 21.06 step/s
[2022-07-26 14:18:27,418] [    INFO] - global step 190, epoch: 1, batch: 190, loss: 0.60020, acc: 0.63898, speed: 20.70 step/s
[2022-07-26 14:18:27,916] [    INFO] - global step 200, epoch: 1, batch: 200, loss: 0.42389, acc: 0.64844, speed: 20.16 step/s
[2022-07-26 14:18:28,485] [    INFO] - global step 210, epoch: 1, batch: 210, loss: 0.52830, acc: 0.65580, speed: 17.76 step/s
[2022-07-26 14:18:30,705] [    INFO] - eval loss: 0.56197, acc: 0.79795
[2022-07-26 14:18:30,706] [    INFO] - Current best accuracy: 0.79795
[2022-07-26 14:18:33,104] [    INFO] - tokenizer config file saved in ./checkpoint/tokenizer_config.json
[2022-07-26 14:18:33,104] [    INFO] - Special tokens file saved in ./checkpoint/special_tokens_map.json
[2022-07-26 14:18:33,349] [    INFO] - global step 220, epoch: 2, batch: 3, loss: 0.29689, acc: 0.83333, speed: 2.06 step/s
[2022-07-26 14:18:33,841] [    INFO] - global step 230, epoch: 2, batch: 13, loss: 0.59831, acc: 0.84375, speed: 20.46 step/s
[2022-07-26 14:18:34,338] [    INFO] - global step 240, epoch: 2, batch: 23, loss: 0.52675, acc: 0.84511, speed: 21.09 step/s
[2022-07-26 14:18:34,897] [    INFO] - global step 250, epoch: 2, batch: 33, loss: 0.45023, acc: 0.83807, speed: 18.67 step/s
[2022-07-26 14:18:35,378] [    INFO] - global step 260, epoch: 2, batch: 43, loss: 0.27924, acc: 0.84230, speed: 20.96 step/s
[2022-07-26 14:18:35,861] [    INFO] - global step 270, epoch: 2, batch: 53, loss: 0.57852, acc: 0.84080, speed: 21.06 step/s
[2022-07-26 14:18:36,390] [    INFO] - global step 280, epoch: 2, batch: 63, loss: 0.54282, acc: 0.84077, speed: 18.91 step/s
[2022-07-26 14:18:36,891] [    INFO] - global step 290, epoch: 2, batch: 73, loss: 0.45873, acc: 0.84375, speed: 20.18 step/s
[2022-07-26 14:18:37,479] [    INFO] - global step 300, epoch: 2, batch: 83, loss: 0.55707, acc: 0.84074, speed: 18.64 step/s
[2022-07-26 14:18:38,008] [    INFO] - global step 310, epoch: 2, batch: 93, loss: 0.44380, acc: 0.84073, speed: 20.08 step/s
[2022-07-26 14:18:38,600] [    INFO] - global step 320, epoch: 2, batch: 103, loss: 0.61454, acc: 0.84375, speed: 17.02 step/s
[2022-07-26 14:18:39,205] [    INFO] - global step 330, epoch: 2, batch: 113, loss: 0.38427, acc: 0.84569, speed: 16.74 step/s
[2022-07-26 14:18:39,906] [    INFO] - global step 340, epoch: 2, batch: 123, loss: 0.07552, acc: 0.84782, speed: 14.27 step/s
[2022-07-26 14:18:40,623] [    INFO] - global step 350, epoch: 2, batch: 133, loss: 0.28944, acc: 0.85103, speed: 13.96 step/s
[2022-07-26 14:18:41,180] [    INFO] - global step 360, epoch: 2, batch: 143, loss: 0.42489, acc: 0.85162, speed: 18.22 step/s
[2022-07-26 14:18:41,755] [    INFO] - global step 370, epoch: 2, batch: 153, loss: 0.29524, acc: 0.85233, speed: 17.86 step/s
[2022-07-26 14:18:42,227] [    INFO] - global step 380, epoch: 2, batch: 163, loss: 0.26282, acc: 0.85257, speed: 21.90 step/s
[2022-07-26 14:18:42,702] [    INFO] - global step 390, epoch: 2, batch: 173, loss: 0.44737, acc: 0.85296, speed: 21.78 step/s
[2022-07-26 14:18:43,321] [    INFO] - global step 400, epoch: 2, batch: 183, loss: 0.32747, acc: 0.85348, speed: 17.65 step/s
[2022-07-26 14:18:43,777] [    INFO] - global step 410, epoch: 2, batch: 193, loss: 0.43133, acc: 0.85379, speed: 22.59 step/s
[2022-07-26 14:18:44,300] [    INFO] - global step 420, epoch: 2, batch: 203, loss: 0.21463, acc: 0.85514, speed: 19.45 step/s
[2022-07-26 14:18:44,968] [    INFO] - global step 430, epoch: 2, batch: 213, loss: 0.37544, acc: 0.85549, speed: 14.99 step/s
[2022-07-26 14:18:47,109] [    INFO] - eval loss: 0.56914, acc: 0.80614
[2022-07-26 14:18:47,110] [    INFO] - Current best accuracy: 0.80614
[2022-07-26 14:18:49,552] [    INFO] - tokenizer config file saved in ./checkpoint/tokenizer_config.json
[2022-07-26 14:18:49,553] [    INFO] - Special tokens file saved in ./checkpoint/special_tokens_map.json
[2022-07-26 14:18:50,050] [    INFO] - global step 440, epoch: 3, batch: 6, loss: 0.22512, acc: 0.92188, speed: 1.97 step/s
[2022-07-26 14:18:50,807] [    INFO] - global step 450, epoch: 3, batch: 16, loss: 0.09957, acc: 0.92773, speed: 13.23 step/s
[2022-07-26 14:18:51,551] [    INFO] - global step 460, epoch: 3, batch: 26, loss: 0.14142, acc: 0.92788, speed: 13.44 step/s
[2022-07-26 14:18:52,296] [    INFO] - global step 470, epoch: 3, batch: 36, loss: 0.23036, acc: 0.92708, speed: 13.45 step/s
[2022-07-26 14:18:52,868] [    INFO] - global step 480, epoch: 3, batch: 46, loss: 0.25976, acc: 0.92527, speed: 18.41 step/s
[2022-07-26 14:18:53,452] [    INFO] - global step 490, epoch: 3, batch: 56, loss: 0.15924, acc: 0.92355, speed: 18.67 step/s
[2022-07-26 14:18:53,974] [    INFO] - global step 500, epoch: 3, batch: 66, loss: 0.22317, acc: 0.92614, speed: 19.50 step/s
[2022-07-26 14:18:54,469] [    INFO] - global step 510, epoch: 3, batch: 76, loss: 0.28616, acc: 0.92311, speed: 20.77 step/s
[2022-07-26 14:18:55,009] [    INFO] - global step 520, epoch: 3, batch: 86, loss: 0.13148, acc: 0.92151, speed: 18.74 step/s
[2022-07-26 14:18:55,555] [    INFO] - global step 530, epoch: 3, batch: 96, loss: 0.12343, acc: 0.92057, speed: 19.57 step/s
[2022-07-26 14:18:56,160] [    INFO] - global step 540, epoch: 3, batch: 106, loss: 0.37899, acc: 0.92070, speed: 16.65 step/s
[2022-07-26 14:18:56,567] [    INFO] - global step 550, epoch: 3, batch: 116, loss: 0.35276, acc: 0.92161, speed: 25.27 step/s
[2022-07-26 14:18:57,057] [    INFO] - global step 560, epoch: 3, batch: 126, loss: 0.50528, acc: 0.92163, speed: 21.36 step/s
[2022-07-26 14:18:57,510] [    INFO] - global step 570, epoch: 3, batch: 136, loss: 0.29590, acc: 0.91797, speed: 23.24 step/s
[2022-07-26 14:18:58,094] [    INFO] - global step 580, epoch: 3, batch: 146, loss: 0.24058, acc: 0.91652, speed: 17.44 step/s
[2022-07-26 14:18:58,524] [    INFO] - global step 590, epoch: 3, batch: 156, loss: 0.32639, acc: 0.91546, speed: 23.53 step/s
[2022-07-26 14:18:58,966] [    INFO] - global step 600, epoch: 3, batch: 166, loss: 0.26907, acc: 0.91566, speed: 22.98 step/s
[2022-07-26 14:18:59,438] [    INFO] - global step 610, epoch: 3, batch: 176, loss: 0.42968, acc: 0.91460, speed: 22.04 step/s
[2022-07-26 14:18:59,925] [    INFO] - global step 620, epoch: 3, batch: 186, loss: 0.33246, acc: 0.91482, speed: 20.94 step/s
[2022-07-26 14:19:00,583] [    INFO] - global step 630, epoch: 3, batch: 196, loss: 0.21074, acc: 0.91342, speed: 15.94 step/s
[2022-07-26 14:19:01,084] [    INFO] - global step 640, epoch: 3, batch: 206, loss: 0.22968, acc: 0.91338, speed: 20.70 step/s
[2022-07-26 14:19:01,471] [    INFO] - global step 650, epoch: 3, batch: 216, loss: 0.24368, acc: 0.91377, speed: 26.48 step/s
[2022-07-26 14:19:03,037] [    INFO] - eval loss: 0.59721, acc: 0.81023
[2022-07-26 14:19:03,038] [    INFO] - Current best accuracy: 0.81023
[2022-07-26 14:19:05,474] [    INFO] - tokenizer config file saved in ./checkpoint/tokenizer_config.json
[2022-07-26 14:19:05,475] [    INFO] - Special tokens file saved in ./checkpoint/special_tokens_map.json
[2022-07-26 14:19:05,891] [    INFO] - global step 660, epoch: 4, batch: 9, loss: 0.05673, acc: 0.97222, speed: 2.27 step/s
[2022-07-26 14:19:06,400] [    INFO] - global step 670, epoch: 4, batch: 19, loss: 0.26895, acc: 0.95395, speed: 19.88 step/s
[2022-07-26 14:19:06,952] [    INFO] - global step 680, epoch: 4, batch: 29, loss: 0.05619, acc: 0.95905, speed: 18.19 step/s
[2022-07-26 14:19:07,426] [    INFO] - global step 690, epoch: 4, batch: 39, loss: 0.19571, acc: 0.95913, speed: 21.24 step/s
[2022-07-26 14:19:07,947] [    INFO] - global step 700, epoch: 4, batch: 49, loss: 0.22622, acc: 0.95217, speed: 19.38 step/s
[2022-07-26 14:19:08,508] [    INFO] - global step 710, epoch: 4, batch: 59, loss: 0.03632, acc: 0.95710, speed: 18.39 step/s
[2022-07-26 14:19:09,032] [    INFO] - global step 720, epoch: 4, batch: 69, loss: 0.20891, acc: 0.95743, speed: 19.44 step/s
[2022-07-26 14:19:09,560] [    INFO] - global step 730, epoch: 4, batch: 79, loss: 0.09024, acc: 0.95847, speed: 19.18 step/s
[2022-07-26 14:19:10,099] [    INFO] - global step 740, epoch: 4, batch: 89, loss: 0.16542, acc: 0.96103, speed: 20.52 step/s
[2022-07-26 14:19:10,566] [    INFO] - global step 750, epoch: 4, batch: 99, loss: 0.26839, acc: 0.96117, speed: 21.76 step/s
[2022-07-26 14:19:11,190] [    INFO] - global step 760, epoch: 4, batch: 109, loss: 0.27351, acc: 0.96072, speed: 16.30 step/s
[2022-07-26 14:19:11,825] [    INFO] - global step 770, epoch: 4, batch: 119, loss: 0.13546, acc: 0.96061, speed: 17.06 step/s
[2022-07-26 14:19:12,283] [    INFO] - global step 780, epoch: 4, batch: 129, loss: 0.09454, acc: 0.96172, speed: 22.15 step/s
[2022-07-26 14:19:12,893] [    INFO] - global step 790, epoch: 4, batch: 139, loss: 0.12009, acc: 0.96201, speed: 17.89 step/s
[2022-07-26 14:19:13,467] [    INFO] - global step 800, epoch: 4, batch: 149, loss: 0.10994, acc: 0.96141, speed: 17.69 step/s
[2022-07-26 14:19:14,001] [    INFO] - global step 810, epoch: 4, batch: 159, loss: 0.26249, acc: 0.96128, speed: 19.51 step/s
[2022-07-26 14:19:14,541] [    INFO] - global step 820, epoch: 4, batch: 169, loss: 0.13344, acc: 0.95987, speed: 18.83 step/s
[2022-07-26 14:19:15,022] [    INFO] - global step 830, epoch: 4, batch: 179, loss: 0.13372, acc: 0.96002, speed: 20.79 step/s
[2022-07-26 14:19:15,487] [    INFO] - global step 840, epoch: 4, batch: 189, loss: 0.38304, acc: 0.95982, speed: 21.71 step/s
[2022-07-26 14:19:15,972] [    INFO] - global step 850, epoch: 4, batch: 199, loss: 0.37115, acc: 0.95948, speed: 23.01 step/s
[2022-07-26 14:19:16,439] [    INFO] - global step 860, epoch: 4, batch: 209, loss: 0.10606, acc: 0.96008, speed: 21.73 step/s
[2022-07-26 14:19:18,375] [    INFO] - eval loss: 0.70016, acc: 0.81586
[2022-07-26 14:19:18,376] [    INFO] - Current best accuracy: 0.81586
[2022-07-26 14:19:21,102] [    INFO] - tokenizer config file saved in ./checkpoint/tokenizer_config.json
[2022-07-26 14:19:21,106] [    INFO] - Special tokens file saved in ./checkpoint/special_tokens_map.json
[2022-07-26 14:19:21,262] [    INFO] - global step 870, epoch: 5, batch: 2, loss: 0.06552, acc: 0.98438, speed: 2.08 step/s
[2022-07-26 14:19:21,955] [    INFO] - global step 880, epoch: 5, batch: 12, loss: 0.02434, acc: 0.98698, speed: 15.24 step/s
[2022-07-26 14:19:22,481] [    INFO] - global step 890, epoch: 5, batch: 22, loss: 0.02208, acc: 0.98153, speed: 19.77 step/s
[2022-07-26 14:19:23,029] [    INFO] - global step 900, epoch: 5, batch: 32, loss: 0.06341, acc: 0.98047, speed: 18.80 step/s
[2022-07-26 14:19:23,543] [    INFO] - global step 910, epoch: 5, batch: 42, loss: 0.03933, acc: 0.98289, speed: 20.93 step/s
[2022-07-26 14:19:24,067] [    INFO] - global step 920, epoch: 5, batch: 52, loss: 0.06578, acc: 0.98077, speed: 19.38 step/s
[2022-07-26 14:19:24,568] [    INFO] - global step 930, epoch: 5, batch: 62, loss: 0.09988, acc: 0.97933, speed: 21.20 step/s
[2022-07-26 14:19:25,070] [    INFO] - global step 940, epoch: 5, batch: 72, loss: 0.03971, acc: 0.97917, speed: 20.60 step/s
[2022-07-26 14:19:25,543] [    INFO] - global step 950, epoch: 5, batch: 82, loss: 0.10622, acc: 0.97904, speed: 22.25 step/s
[2022-07-26 14:19:25,968] [    INFO] - global step 960, epoch: 5, batch: 92, loss: 0.05229, acc: 0.97928, speed: 23.65 step/s
[2022-07-26 14:19:26,600] [    INFO] - global step 970, epoch: 5, batch: 102, loss: 0.07278, acc: 0.97917, speed: 16.66 step/s
[2022-07-26 14:19:27,110] [    INFO] - global step 980, epoch: 5, batch: 112, loss: 0.01466, acc: 0.97907, speed: 19.81 step/s
[2022-07-26 14:19:27,605] [    INFO] - global step 990, epoch: 5, batch: 122, loss: 0.05983, acc: 0.97874, speed: 20.58 step/s
[2022-07-26 14:19:28,125] [    INFO] - global step 1000, epoch: 5, batch: 132, loss: 0.02756, acc: 0.97893, speed: 20.13 step/s
[2022-07-26 14:19:28,538] [    INFO] - global step 1010, epoch: 5, batch: 142, loss: 0.10091, acc: 0.97799, speed: 24.70 step/s
[2022-07-26 14:19:29,121] [    INFO] - global step 1020, epoch: 5, batch: 152, loss: 0.12691, acc: 0.97800, speed: 17.62 step/s
[2022-07-26 14:19:29,545] [    INFO] - global step 1030, epoch: 5, batch: 162, loss: 0.01848, acc: 0.97917, speed: 24.10 step/s
[2022-07-26 14:19:30,038] [    INFO] - global step 1040, epoch: 5, batch: 172, loss: 0.02475, acc: 0.97929, speed: 22.46 step/s
[2022-07-26 14:19:30,590] [    INFO] - global step 1050, epoch: 5, batch: 182, loss: 0.10955, acc: 0.97957, speed: 18.38 step/s
[2022-07-26 14:19:31,143] [    INFO] - global step 1060, epoch: 5, batch: 192, loss: 0.02241, acc: 0.97982, speed: 18.29 step/s
[2022-07-26 14:19:31,653] [    INFO] - global step 1070, epoch: 5, batch: 202, loss: 0.01980, acc: 0.98004, speed: 20.48 step/s
[2022-07-26 14:19:32,145] [    INFO] - global step 1080, epoch: 5, batch: 212, loss: 0.04292, acc: 0.98040, speed: 20.79 step/s
[2022-07-26 14:19:33,865] [    INFO] - eval loss: 0.72816, acc: 0.80767
[2022-07-26 14:19:33,866] [    INFO] - Final best accuracy: 0.81586
[2022-07-26 14:19:33,866] [    INFO] - Save best accuracy text classification model in ./checkpoint
# The modified training file train_new2.py mainly uses AccuracyAndF1 from paddlenlp.metrics.glue (accuracy plus F1-score, usable for the GLUE MRPC and QQP tasks)
# One gripe: in return (acc, precision, recall, f1, (acc + f1) / 2,) the last value is simply the plain average of acc and f1...
!python train_new2.py --warmup --early_stop --epochs 5 --save_dir "./checkpoint2" --batch_size 16
[2022-07-26 20:13:34,942] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'> to load 'ernie-3.0-base-zh'.
[2022-07-26 20:13:34,943] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-3.0-base-zh/ernie_3.0_base_zh_vocab.txt
[2022-07-26 20:13:34,965] [    INFO] - tokenizer config file saved in /home/aistudio/.paddlenlp/models/ernie-3.0-base-zh/tokenizer_config.json
[2022-07-26 20:13:34,966] [    INFO] - Special tokens file saved in /home/aistudio/.paddlenlp/models/ernie-3.0-base-zh/special_tokens_map.json
[2022-07-26 20:13:34,967] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.modeling.ErnieForSequenceClassification'> to load 'ernie-3.0-base-zh'.
[2022-07-26 20:13:34,967] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-3.0-base-zh/ernie_3.0_base_zh.pdparams
W0726 20:13:34.968411 27390 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0726 20:13:34.971930 27390 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
[2022-07-26 20:13:39,957] [    INFO] - global step 10, epoch: 1, batch: 10, loss: 2.40898, acc: 0.08750, speed: 4.52 step/s
[2022-07-26 20:13:40,519] [    INFO] - global step 20, epoch: 1, batch: 20, loss: 2.24876, acc: 0.11250, speed: 18.26 step/s
[2022-07-26 20:13:41,067] [    INFO] - global step 30, epoch: 1, batch: 30, loss: 2.11567, acc: 0.13542, speed: 18.29 step/s
[2022-07-26 20:13:41,602] [    INFO] - global step 40, epoch: 1, batch: 40, loss: 2.16998, acc: 0.16406, speed: 18.76 step/s
[2022-07-26 20:13:42,132] [    INFO] - global step 50, epoch: 1, batch: 50, loss: 2.15460, acc: 0.18125, speed: 18.95 step/s
[2022-07-26 20:13:42,660] [    INFO] - global step 60, epoch: 1, batch: 60, loss: 2.24910, acc: 0.20625, speed: 19.00 step/s
[2022-07-26 20:13:43,229] [    INFO] - global step 70, epoch: 1, batch: 70, loss: 1.96933, acc: 0.22500, speed: 17.64 step/s
[2022-07-26 20:13:43,790] [    INFO] - global step 80, epoch: 1, batch: 80, loss: 1.73790, acc: 0.24297, speed: 17.86 step/s
[2022-07-26 20:13:44,381] [    INFO] - global step 90, epoch: 1, batch: 90, loss: 1.72513, acc: 0.25486, speed: 16.94 step/s
[2022-07-26 20:13:45,088] [    INFO] - global step 100, epoch: 1, batch: 100, loss: 1.76564, acc: 0.28000, speed: 14.24 step/s
[2022-07-26 20:13:45,735] [    INFO] - global step 110, epoch: 1, batch: 110, loss: 1.40768, acc: 0.30057, speed: 15.51 step/s
[2022-07-26 20:13:46,282] [    INFO] - global step 120, epoch: 1, batch: 120, loss: 1.74826, acc: 0.32500, speed: 18.35 step/s
[2022-07-26 20:13:46,875] [    INFO] - global step 130, epoch: 1, batch: 130, loss: 1.48208, acc: 0.35337, speed: 16.90 step/s
[2022-07-26 20:13:47,469] [    INFO] - global step 140, epoch: 1, batch: 140, loss: 1.08619, acc: 0.37812, speed: 16.87 step/s
[2022-07-26 20:13:48,017] [    INFO] - global step 150, epoch: 1, batch: 150, loss: 1.03628, acc: 0.39625, speed: 18.72 step/s
[2022-07-26 20:13:48,569] [    INFO] - global step 160, epoch: 1, batch: 160, loss: 1.16593, acc: 0.41797, speed: 18.11 step/s
[2022-07-26 20:13:49,174] [    INFO] - global step 170, epoch: 1, batch: 170, loss: 1.27777, acc: 0.43309, speed: 16.81 step/s
[2022-07-26 20:13:49,755] [    INFO] - global step 180, epoch: 1, batch: 180, loss: 0.65828, acc: 0.44653, speed: 17.22 step/s
[2022-07-26 20:13:50,316] [    INFO] - global step 190, epoch: 1, batch: 190, loss: 0.68629, acc: 0.46020, speed: 17.84 step/s
[2022-07-26 20:13:50,922] [    INFO] - global step 200, epoch: 1, batch: 200, loss: 0.46445, acc: 0.47563, speed: 17.46 step/s
[2022-07-26 20:13:51,567] [    INFO] - global step 210, epoch: 1, batch: 210, loss: 1.44085, acc: 0.48839, speed: 15.52 step/s
[2022-07-26 20:13:52,092] [    INFO] - global step 220, epoch: 1, batch: 220, loss: 0.62452, acc: 0.50028, speed: 19.17 step/s
[2022-07-26 20:13:52,683] [    INFO] - global step 230, epoch: 1, batch: 230, loss: 0.56271, acc: 0.51168, speed: 16.97 step/s
[2022-07-26 20:13:53,259] [    INFO] - global step 240, epoch: 1, batch: 240, loss: 0.43529, acc: 0.52344, speed: 18.54 step/s
[2022-07-26 20:13:53,854] [    INFO] - global step 250, epoch: 1, batch: 250, loss: 1.05362, acc: 0.53200, speed: 16.99 step/s
[2022-07-26 20:13:54,528] [    INFO] - global step 260, epoch: 1, batch: 260, loss: 1.21929, acc: 0.53918, speed: 15.25 step/s
[2022-07-26 20:13:55,163] [    INFO] - global step 270, epoch: 1, batch: 270, loss: 0.72062, acc: 0.54630, speed: 15.80 step/s
[2022-07-26 20:13:55,760] [    INFO] - global step 280, epoch: 1, batch: 280, loss: 0.44124, acc: 0.55446, speed: 16.77 step/s
[2022-07-26 20:13:56,496] [    INFO] - global step 290, epoch: 1, batch: 290, loss: 1.25100, acc: 0.55905, speed: 13.62 step/s
[2022-07-26 20:13:57,252] [    INFO] - global step 300, epoch: 1, batch: 300, loss: 0.59415, acc: 0.56563, speed: 13.24 step/s
[2022-07-26 20:13:57,960] [    INFO] - global step 310, epoch: 1, batch: 310, loss: 0.42418, acc: 0.57319, speed: 14.15 step/s
[2022-07-26 20:13:58,638] [    INFO] - global step 320, epoch: 1, batch: 320, loss: 0.86126, acc: 0.57910, speed: 14.78 step/s
[2022-07-26 20:13:59,285] [    INFO] - global step 330, epoch: 1, batch: 330, loss: 0.61564, acc: 0.58523, speed: 15.47 step/s
[2022-07-26 20:13:59,994] [    INFO] - global step 340, epoch: 1, batch: 340, loss: 0.33949, acc: 0.59118, speed: 14.14 step/s
[2022-07-26 20:14:00,572] [    INFO] - global step 350, epoch: 1, batch: 350, loss: 1.10109, acc: 0.59696, speed: 17.32 step/s
[2022-07-26 20:14:01,120] [    INFO] - global step 360, epoch: 1, batch: 360, loss: 0.91603, acc: 0.60174, speed: 18.26 step/s
[2022-07-26 20:14:01,760] [    INFO] - global step 370, epoch: 1, batch: 370, loss: 1.03005, acc: 0.60557, speed: 15.73 step/s
[2022-07-26 20:14:02,334] [    INFO] - global step 380, epoch: 1, batch: 380, loss: 1.09632, acc: 0.61118, speed: 17.59 step/s
[2022-07-26 20:14:02,922] [    INFO] - global step 390, epoch: 1, batch: 390, loss: 0.96567, acc: 0.61603, speed: 18.03 step/s
[2022-07-26 20:14:03,504] [    INFO] - global step 400, epoch: 1, batch: 400, loss: 0.92750, acc: 0.62062, speed: 17.20 step/s
[2022-07-26 20:14:04,122] [    INFO] - global step 410, epoch: 1, batch: 410, loss: 0.50256, acc: 0.62591, speed: 16.19 step/s
[2022-07-26 20:14:04,770] [    INFO] - global step 420, epoch: 1, batch: 420, loss: 0.43833, acc: 0.63051, speed: 15.45 step/s
[2022-07-26 20:14:05,676] [    INFO] - global step 430, epoch: 1, batch: 430, loss: 0.51918, acc: 0.63445, speed: 11.04 step/s
[2022-07-26 20:14:09,131] [    INFO] - eval loss: 0.60346, acc: 0.80665
(acc, precision, recall, f1, average_of_acc_and_f1):(0.8066496163682865, 0.8859154929577465, 0.9304733727810651, 0.9076479076479077, 0.8571487620080971)
[2022-07-26 20:14:16,914] [    INFO] - tokenizer config file saved in ./checkpoint2/tokenizer_config.json
[2022-07-26 20:14:16,914] [    INFO] - Special tokens file saved in ./checkpoint2/special_tokens_map.json
[2022-07-26 20:14:17,355] [    INFO] - global step 440, epoch: 2, batch: 6, loss: 0.23575, acc: 0.85417, speed: 0.86 step/s
[2022-07-26 20:14:17,912] [    INFO] - global step 450, epoch: 2, batch: 16, loss: 0.42002, acc: 0.83203, speed: 18.30 step/s
[2022-07-26 20:14:18,528] [    INFO] - global step 460, epoch: 2, batch: 26, loss: 0.38867, acc: 0.84135, speed: 16.26 step/s
[2022-07-26 20:14:19,100] [    INFO] - global step 470, epoch: 2, batch: 36, loss: 0.16732, acc: 0.83854, speed: 17.60 step/s
[2022-07-26 20:14:19,686] [    INFO] - global step 480, epoch: 2, batch: 46, loss: 0.68551, acc: 0.83288, speed: 17.67 step/s
[2022-07-26 20:14:20,351] [    INFO] - global step 490, epoch: 2, batch: 56, loss: 0.32540, acc: 0.83482, speed: 15.98 step/s
[2022-07-26 20:14:20,929] [    INFO] - global step 500, epoch: 2, batch: 66, loss: 0.88764, acc: 0.83239, speed: 17.85 step/s
[2022-07-26 20:14:21,485] [    INFO] - global step 510, epoch: 2, batch: 76, loss: 0.43310, acc: 0.83964, speed: 18.01 step/s
[2022-07-26 20:14:22,018] [    INFO] - global step 520, epoch: 2, batch: 86, loss: 0.33023, acc: 0.84084, speed: 18.79 step/s
[2022-07-26 20:14:22,597] [    INFO] - global step 530, epoch: 2, batch: 96, loss: 0.64421, acc: 0.84375, speed: 17.35 step/s
[2022-07-26 20:14:23,134] [    INFO] - global step 540, epoch: 2, batch: 106, loss: 0.73746, acc: 0.84257, speed: 18.70 step/s
[2022-07-26 20:14:23,776] [    INFO] - global step 550, epoch: 2, batch: 116, loss: 0.70969, acc: 0.83890, speed: 15.82 step/s
[2022-07-26 20:14:24,370] [    INFO] - global step 560, epoch: 2, batch: 126, loss: 0.77572, acc: 0.83978, speed: 16.84 step/s
[2022-07-26 20:14:24,992] [    INFO] - global step 570, epoch: 2, batch: 136, loss: 0.66655, acc: 0.84007, speed: 16.43 step/s
[2022-07-26 20:14:25,528] [    INFO] - global step 580, epoch: 2, batch: 146, loss: 0.34812, acc: 0.84204, speed: 18.68 step/s
[2022-07-26 20:14:26,102] [    INFO] - global step 590, epoch: 2, batch: 156, loss: 0.54772, acc: 0.83694, speed: 17.45 step/s
[2022-07-26 20:14:26,712] [    INFO] - global step 600, epoch: 2, batch: 166, loss: 0.33386, acc: 0.83773, speed: 16.55 step/s
[2022-07-26 20:14:27,246] [    INFO] - global step 610, epoch: 2, batch: 176, loss: 0.58542, acc: 0.83629, speed: 18.80 step/s
[2022-07-26 20:14:27,850] [    INFO] - global step 620, epoch: 2, batch: 186, loss: 0.43648, acc: 0.83602, speed: 16.97 step/s
[2022-07-26 20:14:28,492] [    INFO] - global step 630, epoch: 2, batch: 196, loss: 0.31360, acc: 0.83418, speed: 16.14 step/s
[2022-07-26 20:14:29,101] [    INFO] - global step 640, epoch: 2, batch: 206, loss: 0.12677, acc: 0.83525, speed: 16.49 step/s
[2022-07-26 20:14:29,687] [    INFO] - global step 650, epoch: 2, batch: 216, loss: 1.00025, acc: 0.83507, speed: 17.12 step/s
[2022-07-26 20:14:30,351] [    INFO] - global step 660, epoch: 2, batch: 226, loss: 0.52062, acc: 0.83407, speed: 15.07 step/s
[2022-07-26 20:14:30,900] [    INFO] - global step 670, epoch: 2, batch: 236, loss: 0.39284, acc: 0.83342, speed: 18.22 step/s
[2022-07-26 20:14:31,429] [    INFO] - global step 680, epoch: 2, batch: 246, loss: 0.30129, acc: 0.83613, speed: 19.08 step/s
[2022-07-26 20:14:32,037] [    INFO] - global step 690, epoch: 2, batch: 256, loss: 0.18863, acc: 0.83569, speed: 17.31 step/s
[2022-07-26 20:14:32,559] [    INFO] - global step 700, epoch: 2, batch: 266, loss: 0.35285, acc: 0.83576, speed: 19.16 step/s
[2022-07-26 20:14:33,102] [    INFO] - global step 710, epoch: 2, batch: 276, loss: 0.39410, acc: 0.83628, speed: 18.46 step/s
[2022-07-26 20:14:33,700] [    INFO] - global step 720, epoch: 2, batch: 286, loss: 0.60941, acc: 0.83588, speed: 16.76 step/s
[2022-07-26 20:14:34,357] [    INFO] - global step 730, epoch: 2, batch: 296, loss: 0.53442, acc: 0.83488, speed: 15.30 step/s
[2022-07-26 20:14:34,910] [    INFO] - global step 740, epoch: 2, batch: 306, loss: 0.53619, acc: 0.83599, speed: 18.33 step/s
[2022-07-26 20:14:35,504] [    INFO] - global step 750, epoch: 2, batch: 316, loss: 0.63134, acc: 0.83544, speed: 16.86 step/s
[2022-07-26 20:14:36,104] [    INFO] - global step 760, epoch: 2, batch: 326, loss: 0.21748, acc: 0.83551, speed: 16.84 step/s
[2022-07-26 20:14:36,741] [    INFO] - global step 770, epoch: 2, batch: 336, loss: 0.36691, acc: 0.83538, speed: 15.71 step/s
[2022-07-26 20:14:37,380] [    INFO] - global step 780, epoch: 2, batch: 346, loss: 0.33786, acc: 0.83671, speed: 15.93 step/s
[2022-07-26 20:14:37,959] [    INFO] - global step 790, epoch: 2, batch: 356, loss: 0.23148, acc: 0.83655, speed: 17.30 step/s
[2022-07-26 20:14:38,669] [    INFO] - global step 800, epoch: 2, batch: 366, loss: 0.24920, acc: 0.83675, speed: 14.99 step/s
[2022-07-26 20:14:39,276] [    INFO] - global step 810, epoch: 2, batch: 376, loss: 0.57411, acc: 0.83727, speed: 16.50 step/s
[2022-07-26 20:14:39,861] [    INFO] - global step 820, epoch: 2, batch: 386, loss: 0.64060, acc: 0.83824, speed: 17.10 step/s
[2022-07-26 20:14:40,516] [    INFO] - global step 830, epoch: 2, batch: 396, loss: 0.35713, acc: 0.83823, speed: 16.09 step/s
[2022-07-26 20:14:41,132] [    INFO] - global step 840, epoch: 2, batch: 406, loss: 0.26443, acc: 0.83852, speed: 16.32 step/s
[2022-07-26 20:14:41,786] [    INFO] - global step 850, epoch: 2, batch: 416, loss: 0.36587, acc: 0.83849, speed: 15.39 step/s
[2022-07-26 20:14:42,406] [    INFO] - global step 860, epoch: 2, batch: 426, loss: 0.34736, acc: 0.83862, speed: 16.15 step/s
[2022-07-26 20:14:45,887] [    INFO] - eval loss: 0.59024, acc: 0.80767
(acc, precision, recall, f1, average_of_acc_and_f1):(0.8076726342710997, 0.9362363919129082, 0.8905325443786982, 0.9128127369219106, 0.8602426855965051)
[2022-07-26 20:14:53,828] [    INFO] - tokenizer config file saved in ./checkpoint2/tokenizer_config.json
[2022-07-26 20:14:53,829] [    INFO] - Special tokens file saved in ./checkpoint2/special_tokens_map.json
[2022-07-26 20:14:53,993] [    INFO] - global step 870, epoch: 3, batch: 2, loss: 0.20408, acc: 0.81250, speed: 0.86 step/s
[2022-07-26 20:14:54,636] [    INFO] - global step 880, epoch: 3, batch: 12, loss: 0.13691, acc: 0.89062, speed: 16.23 step/s
[2022-07-26 20:14:55,190] [    INFO] - global step 890, epoch: 3, batch: 22, loss: 0.20633, acc: 0.86932, speed: 18.06 step/s
[2022-07-26 20:14:55,865] [    INFO] - global step 900, epoch: 3, batch: 32, loss: 0.22725, acc: 0.88477, speed: 14.83 step/s
[2022-07-26 20:14:56,480] [    INFO] - global step 910, epoch: 3, batch: 42, loss: 0.50567, acc: 0.87054, speed: 16.36 step/s
[2022-07-26 20:14:57,005] [    INFO] - global step 920, epoch: 3, batch: 52, loss: 0.17533, acc: 0.87981, speed: 19.08 step/s
[2022-07-26 20:14:57,622] [    INFO] - global step 930, epoch: 3, batch: 62, loss: 0.47600, acc: 0.87802, speed: 16.21 step/s
[2022-07-26 20:14:58,146] [    INFO] - global step 940, epoch: 3, batch: 72, loss: 0.34135, acc: 0.88281, speed: 19.13 step/s
[2022-07-26 20:14:58,729] [    INFO] - global step 950, epoch: 3, batch: 82, loss: 0.12486, acc: 0.88643, speed: 17.76 step/s
[2022-07-26 20:14:59,324] [    INFO] - global step 960, epoch: 3, batch: 92, loss: 0.33004, acc: 0.88451, speed: 17.58 step/s
[2022-07-26 20:14:59,907] [    INFO] - global step 970, epoch: 3, batch: 102, loss: 0.25796, acc: 0.88787, speed: 17.17 step/s
[2022-07-26 20:15:00,621] [    INFO] - global step 980, epoch: 3, batch: 112, loss: 0.28512, acc: 0.88951, speed: 14.01 step/s
[2022-07-26 20:15:01,203] [    INFO] - global step 990, epoch: 3, batch: 122, loss: 0.23326, acc: 0.89191, speed: 17.21 step/s
[2022-07-26 20:15:01,755] [    INFO] - global step 1000, epoch: 3, batch: 132, loss: 0.16778, acc: 0.89347, speed: 18.16 step/s
[2022-07-26 20:15:02,315] [    INFO] - global step 1010, epoch: 3, batch: 142, loss: 0.52319, acc: 0.89393, speed: 18.03 step/s
[2022-07-26 20:15:02,830] [    INFO] - global step 1020, epoch: 3, batch: 152, loss: 0.06413, acc: 0.89227, speed: 19.46 step/s
[2022-07-26 20:15:03,502] [    INFO] - global step 1030, epoch: 3, batch: 162, loss: 0.28263, acc: 0.89120, speed: 14.90 step/s
[2022-07-26 20:15:04,065] [    INFO] - global step 1040, epoch: 3, batch: 172, loss: 0.16983, acc: 0.89317, speed: 17.79 step/s
[2022-07-26 20:15:04,669] [    INFO] - global step 1050, epoch: 3, batch: 182, loss: 0.19173, acc: 0.89217, speed: 17.00 step/s
[2022-07-26 20:15:05,272] [    INFO] - global step 1060, epoch: 3, batch: 192, loss: 0.28616, acc: 0.89160, speed: 17.47 step/s
[2022-07-26 20:15:06,001] [    INFO] - global step 1070, epoch: 3, batch: 202, loss: 0.32866, acc: 0.89078, speed: 13.73 step/s
[2022-07-26 20:15:06,537] [    INFO] - global step 1080, epoch: 3, batch: 212, loss: 0.42913, acc: 0.89062, speed: 18.69 step/s
[2022-07-26 20:15:07,153] [    INFO] - global step 1090, epoch: 3, batch: 222, loss: 0.55892, acc: 0.89105, speed: 16.25 step/s
[2022-07-26 20:15:07,693] [    INFO] - global step 1100, epoch: 3, batch: 232, loss: 0.54133, acc: 0.89089, speed: 18.63 step/s
[2022-07-26 20:15:08,266] [    INFO] - global step 1110, epoch: 3, batch: 242, loss: 0.49333, acc: 0.89101, speed: 17.98 step/s
[2022-07-26 20:15:08,833] [    INFO] - global step 1120, epoch: 3, batch: 252, loss: 0.69313, acc: 0.89211, speed: 18.12 step/s
[2022-07-26 20:15:09,409] [    INFO] - global step 1130, epoch: 3, batch: 262, loss: 0.26772, acc: 0.89146, speed: 17.39 step/s
[2022-07-26 20:15:10,019] [    INFO] - global step 1140, epoch: 3, batch: 272, loss: 0.65566, acc: 0.88971, speed: 16.91 step/s
[2022-07-26 20:15:10,628] [    INFO] - global step 1150, epoch: 3, batch: 282, loss: 0.34439, acc: 0.88830, speed: 17.67 step/s
[2022-07-26 20:15:11,279] [    INFO] - global step 1160, epoch: 3, batch: 292, loss: 0.17968, acc: 0.88827, speed: 15.46 step/s
[2022-07-26 20:15:11,833] [    INFO] - global step 1170, epoch: 3, batch: 302, loss: 1.01590, acc: 0.88907, speed: 18.10 step/s
[2022-07-26 20:15:12,374] [    INFO] - global step 1180, epoch: 3, batch: 312, loss: 0.63331, acc: 0.88862, speed: 18.49 step/s
[2022-07-26 20:15:13,001] [    INFO] - global step 1190, epoch: 3, batch: 322, loss: 0.41135, acc: 0.88878, speed: 16.03 step/s
[2022-07-26 20:15:13,611] [    INFO] - global step 1200, epoch: 3, batch: 332, loss: 0.34342, acc: 0.89006, speed: 16.41 step/s
[2022-07-26 20:15:14,258] [    INFO] - global step 1210, epoch: 3, batch: 342, loss: 0.31269, acc: 0.89072, speed: 15.48 step/s
[2022-07-26 20:15:14,853] [    INFO] - global step 1220, epoch: 3, batch: 352, loss: 0.22478, acc: 0.89151, speed: 17.11 step/s
[2022-07-26 20:15:15,490] [    INFO] - global step 1230, epoch: 3, batch: 362, loss: 0.09001, acc: 0.89140, speed: 15.75 step/s
[2022-07-26 20:15:16,157] [    INFO] - global step 1240, epoch: 3, batch: 372, loss: 0.04564, acc: 0.89180, speed: 15.02 step/s
[2022-07-26 20:15:16,805] [    INFO] - global step 1250, epoch: 3, batch: 382, loss: 0.69286, acc: 0.89038, speed: 15.51 step/s
[2022-07-26 20:15:17,479] [    INFO] - global step 1260, epoch: 3, batch: 392, loss: 0.52506, acc: 0.89031, speed: 14.85 step/s
[2022-07-26 20:15:18,081] [    INFO] - global step 1270, epoch: 3, batch: 402, loss: 0.11165, acc: 0.89055, speed: 16.61 step/s
[2022-07-26 20:15:18,696] [    INFO] - global step 1280, epoch: 3, batch: 412, loss: 0.25420, acc: 0.89078, speed: 16.49 step/s
[2022-07-26 20:15:19,255] [    INFO] - global step 1290, epoch: 3, batch: 422, loss: 0.16422, acc: 0.89114, speed: 17.91 step/s
[2022-07-26 20:15:19,814] [    INFO] - global step 1300, epoch: 3, batch: 432, loss: 0.11314, acc: 0.89193, speed: 17.93 step/s
[2022-07-26 20:15:23,079] [    INFO] - eval loss: 0.61136, acc: 0.81483
(acc, precision, recall, f1, average_of_acc_and_f1):(0.8148337595907928, 0.9305135951661632, 0.9112426035502958, 0.9207772795216741, 0.8678055195562335)
[2022-07-26 20:15:31,210] [    INFO] - tokenizer config file saved in ./checkpoint2/tokenizer_config.json
[2022-07-26 20:15:31,210] [    INFO] - Special tokens file saved in ./checkpoint2/special_tokens_map.json
[2022-07-26 20:15:31,689] [    INFO] - global step 1310, epoch: 4, batch: 8, loss: 0.09207, acc: 0.94531, speed: 0.84 step/s
[2022-07-26 20:15:32,241] [    INFO] - global step 1320, epoch: 4, batch: 18, loss: 0.20974, acc: 0.93403, speed: 18.14 step/s
[2022-07-26 20:15:32,793] [    INFO] - global step 1330, epoch: 4, batch: 28, loss: 0.21955, acc: 0.93750, speed: 18.15 step/s
[2022-07-26 20:15:33,441] [    INFO] - global step 1340, epoch: 4, batch: 38, loss: 0.66919, acc: 0.93421, speed: 15.47 step/s
[2022-07-26 20:15:34,097] [    INFO] - global step 1350, epoch: 4, batch: 48, loss: 0.17049, acc: 0.93229, speed: 15.27 step/s
[2022-07-26 20:15:34,662] [    INFO] - global step 1360, epoch: 4, batch: 58, loss: 0.06049, acc: 0.93211, speed: 17.72 step/s
[2022-07-26 20:15:35,216] [    INFO] - global step 1370, epoch: 4, batch: 68, loss: 0.22464, acc: 0.93658, speed: 18.07 step/s
[2022-07-26 20:15:35,799] [    INFO] - global step 1380, epoch: 4, batch: 78, loss: 0.10731, acc: 0.93750, speed: 17.18 step/s
[2022-07-26 20:15:36,405] [    INFO] - global step 1390, epoch: 4, batch: 88, loss: 0.07752, acc: 0.93111, speed: 16.91 step/s
[2022-07-26 20:15:36,969] [    INFO] - global step 1400, epoch: 4, batch: 98, loss: 0.51876, acc: 0.92730, speed: 17.77 step/s
[2022-07-26 20:15:37,597] [    INFO] - global step 1410, epoch: 4, batch: 108, loss: 0.06277, acc: 0.92766, speed: 16.13 step/s
[2022-07-26 20:15:38,248] [    INFO] - global step 1420, epoch: 4, batch: 118, loss: 0.37938, acc: 0.92850, speed: 15.63 step/s
[2022-07-26 20:15:38,807] [    INFO] - global step 1430, epoch: 4, batch: 128, loss: 0.24524, acc: 0.93066, speed: 18.36 step/s
[2022-07-26 20:15:39,402] [    INFO] - global step 1440, epoch: 4, batch: 138, loss: 0.33070, acc: 0.93252, speed: 16.98 step/s
[2022-07-26 20:15:39,944] [    INFO] - global step 1450, epoch: 4, batch: 148, loss: 0.21366, acc: 0.93201, speed: 18.48 step/s
[2022-07-26 20:15:40,582] [    INFO] - global step 1460, epoch: 4, batch: 158, loss: 0.25311, acc: 0.93157, speed: 15.68 step/s
[2022-07-26 20:15:41,154] [    INFO] - global step 1470, epoch: 4, batch: 168, loss: 0.40399, acc: 0.93229, speed: 17.80 step/s
[2022-07-26 20:15:41,780] [    INFO] - global step 1480, epoch: 4, batch: 178, loss: 0.32950, acc: 0.93294, speed: 17.16 step/s
[2022-07-26 20:15:42,360] [    INFO] - global step 1490, epoch: 4, batch: 188, loss: 0.15351, acc: 0.93384, speed: 17.27 step/s
[2022-07-26 20:15:42,910] [    INFO] - global step 1500, epoch: 4, batch: 198, loss: 0.26419, acc: 0.93340, speed: 18.27 step/s
[2022-07-26 20:15:43,551] [    INFO] - global step 1510, epoch: 4, batch: 208, loss: 0.06240, acc: 0.93209, speed: 16.52 step/s
[2022-07-26 20:15:44,186] [    INFO] - global step 1520, epoch: 4, batch: 218, loss: 0.38750, acc: 0.93234, speed: 15.75 step/s
[2022-07-26 20:15:44,873] [    INFO] - global step 1530, epoch: 4, batch: 228, loss: 0.08455, acc: 0.93257, speed: 15.53 step/s
[2022-07-26 20:15:45,650] [    INFO] - global step 1540, epoch: 4, batch: 238, loss: 0.07565, acc: 0.93225, speed: 13.50 step/s
[2022-07-26 20:15:46,282] [    INFO] - global step 1550, epoch: 4, batch: 248, loss: 0.29345, acc: 0.93246, speed: 15.83 step/s
[2022-07-26 20:15:46,870] [    INFO] - global step 1560, epoch: 4, batch: 258, loss: 0.12958, acc: 0.93290, speed: 17.03 step/s
[2022-07-26 20:15:47,544] [    INFO] - global step 1570, epoch: 4, batch: 268, loss: 0.20767, acc: 0.93307, speed: 15.54 step/s
[2022-07-26 20:15:48,172] [    INFO] - global step 1580, epoch: 4, batch: 278, loss: 0.19042, acc: 0.93390, speed: 15.92 step/s
[2022-07-26 20:15:48,818] [    INFO] - global step 1590, epoch: 4, batch: 288, loss: 0.22258, acc: 0.93359, speed: 15.51 step/s
[2022-07-26 20:15:49,483] [    INFO] - global step 1600, epoch: 4, batch: 298, loss: 0.19055, acc: 0.93352, speed: 15.03 step/s
[2022-07-26 20:15:50,070] [    INFO] - global step 1610, epoch: 4, batch: 308, loss: 0.37185, acc: 0.93344, speed: 17.76 step/s
[2022-07-26 20:15:50,633] [    INFO] - global step 1620, epoch: 4, batch: 318, loss: 0.68668, acc: 0.93219, speed: 17.81 step/s
[2022-07-26 20:15:51,184] [    INFO] - global step 1630, epoch: 4, batch: 328, loss: 0.11824, acc: 0.93216, speed: 18.19 step/s
[2022-07-26 20:15:51,802] [    INFO] - global step 1640, epoch: 4, batch: 338, loss: 0.34601, acc: 0.93195, speed: 16.20 step/s
[2022-07-26 20:15:52,457] [    INFO] - global step 1650, epoch: 4, batch: 348, loss: 0.15648, acc: 0.93229, speed: 15.58 step/s
[2022-07-26 20:15:53,011] [    INFO] - global step 1660, epoch: 4, batch: 358, loss: 0.14079, acc: 0.93122, speed: 18.06 step/s
[2022-07-26 20:15:53,594] [    INFO] - global step 1670, epoch: 4, batch: 368, loss: 0.21114, acc: 0.93037, speed: 17.17 step/s
[2022-07-26 20:15:54,130] [    INFO] - global step 1680, epoch: 4, batch: 378, loss: 0.14871, acc: 0.93056, speed: 18.67 step/s
[2022-07-26 20:15:54,674] [    INFO] - global step 1690, epoch: 4, batch: 388, loss: 0.49369, acc: 0.93009, speed: 18.38 step/s
[2022-07-26 20:15:55,261] [    INFO] - global step 1700, epoch: 4, batch: 398, loss: 0.45490, acc: 0.93028, speed: 18.42 step/s
[2022-07-26 20:15:55,844] [    INFO] - global step 1710, epoch: 4, batch: 408, loss: 0.16591, acc: 0.93045, speed: 17.21 step/s
[2022-07-26 20:15:56,414] [    INFO] - global step 1720, epoch: 4, batch: 418, loss: 0.06319, acc: 0.93092, speed: 17.57 step/s
[2022-07-26 20:15:56,988] [    INFO] - global step 1730, epoch: 4, batch: 428, loss: 0.53140, acc: 0.93107, speed: 17.62 step/s
[2022-07-26 20:16:00,386] [    INFO] - eval loss: 0.65913, acc: 0.81125
[2022-07-26 20:16:04,332] [    INFO] - global step 1740, epoch: 5, batch: 4, loss: 0.11557, acc: 0.93750, speed: 1.36 step/s
[2022-07-26 20:16:05,008] [    INFO] - global step 1750, epoch: 5, batch: 14, loss: 0.06121, acc: 0.95982, speed: 15.11 step/s
[2022-07-26 20:16:05,659] [    INFO] - global step 1760, epoch: 5, batch: 24, loss: 0.27467, acc: 0.96094, speed: 16.16 step/s
[2022-07-26 20:16:06,244] [    INFO] - global step 1770, epoch: 5, batch: 34, loss: 0.06489, acc: 0.96140, speed: 17.10 step/s
[2022-07-26 20:16:06,805] [    INFO] - global step 1780, epoch: 5, batch: 44, loss: 0.11836, acc: 0.95739, speed: 17.86 step/s
[2022-07-26 20:16:07,450] [    INFO] - global step 1790, epoch: 5, batch: 54, loss: 0.24871, acc: 0.95370, speed: 15.55 step/s
[2022-07-26 20:16:08,057] [    INFO] - global step 1800, epoch: 5, batch: 64, loss: 0.09178, acc: 0.95410, speed: 16.74 step/s
[2022-07-26 20:16:08,685] [    INFO] - global step 1810, epoch: 5, batch: 74, loss: 0.09852, acc: 0.95693, speed: 15.94 step/s
[2022-07-26 20:16:09,335] [    INFO] - global step 1820, epoch: 5, batch: 84, loss: 0.16119, acc: 0.95238, speed: 15.43 step/s
[2022-07-26 20:16:09,982] [    INFO] - global step 1830, epoch: 5, batch: 94, loss: 0.08209, acc: 0.95080, speed: 15.45 step/s
[2022-07-26 20:16:10,521] [    INFO] - global step 1840, epoch: 5, batch: 104, loss: 0.22437, acc: 0.95312, speed: 18.58 step/s
[2022-07-26 20:16:11,104] [    INFO] - global step 1850, epoch: 5, batch: 114, loss: 0.02776, acc: 0.95285, speed: 17.18 step/s
[2022-07-26 20:16:11,699] [    INFO] - global step 1860, epoch: 5, batch: 124, loss: 0.28880, acc: 0.95262, speed: 16.99 step/s
[2022-07-26 20:16:12,290] [    INFO] - global step 1870, epoch: 5, batch: 134, loss: 0.01732, acc: 0.95196, speed: 16.91 step/s
[2022-07-26 20:16:12,848] [    INFO] - global step 1880, epoch: 5, batch: 144, loss: 0.21188, acc: 0.95182, speed: 18.39 step/s
[2022-07-26 20:16:13,368] [    INFO] - global step 1890, epoch: 5, batch: 154, loss: 0.57706, acc: 0.95049, speed: 19.25 step/s
[2022-07-26 20:16:13,981] [    INFO] - global step 1900, epoch: 5, batch: 164, loss: 0.15902, acc: 0.95084, speed: 16.34 step/s
[2022-07-26 20:16:14,534] [    INFO] - global step 1910, epoch: 5, batch: 174, loss: 0.31892, acc: 0.95223, speed: 18.16 step/s
[2022-07-26 20:16:15,147] [    INFO] - global step 1920, epoch: 5, batch: 184, loss: 0.07268, acc: 0.95177, speed: 16.33 step/s
[2022-07-26 20:16:15,754] [    INFO] - global step 1930, epoch: 5, batch: 194, loss: 0.07049, acc: 0.95329, speed: 17.36 step/s
[2022-07-26 20:16:16,382] [    INFO] - global step 1940, epoch: 5, batch: 204, loss: 0.02998, acc: 0.95466, speed: 15.94 step/s
[2022-07-26 20:16:16,992] [    INFO] - global step 1950, epoch: 5, batch: 214, loss: 0.04521, acc: 0.95532, speed: 16.40 step/s
[2022-07-26 20:16:17,575] [    INFO] - global step 1960, epoch: 5, batch: 224, loss: 0.03190, acc: 0.95619, speed: 17.17 step/s
[2022-07-26 20:16:18,193] [    INFO] - global step 1970, epoch: 5, batch: 234, loss: 0.05681, acc: 0.95566, speed: 16.20 step/s
[2022-07-26 20:16:18,854] [    INFO] - global step 1980, epoch: 5, batch: 244, loss: 0.01984, acc: 0.95569, speed: 15.14 step/s
[2022-07-26 20:16:19,537] [    INFO] - global step 1990, epoch: 5, batch: 254, loss: 0.21938, acc: 0.95669, speed: 14.71 step/s
[2022-07-26 20:16:20,259] [    INFO] - global step 2000, epoch: 5, batch: 264, loss: 0.03922, acc: 0.95620, speed: 13.87 step/s
[2022-07-26 20:16:20,877] [    INFO] - global step 2010, epoch: 5, batch: 274, loss: 0.40234, acc: 0.95575, speed: 16.33 step/s
[2022-07-26 20:16:21,499] [    INFO] - global step 2020, epoch: 5, batch: 284, loss: 0.31409, acc: 0.95533, speed: 16.13 step/s
[2022-07-26 20:16:22,232] [    INFO] - global step 2030, epoch: 5, batch: 294, loss: 0.12014, acc: 0.95493, speed: 13.66 step/s
[2022-07-26 20:16:22,864] [    INFO] - global step 2040, epoch: 5, batch: 304, loss: 0.03451, acc: 0.95539, speed: 15.81 step/s
[2022-07-26 20:16:23,481] [    INFO] - global step 2050, epoch: 5, batch: 314, loss: 0.05242, acc: 0.95621, speed: 16.24 step/s
[2022-07-26 20:16:24,087] [    INFO] - global step 2060, epoch: 5, batch: 324, loss: 0.27956, acc: 0.95602, speed: 16.50 step/s
[2022-07-26 20:16:24,645] [    INFO] - global step 2070, epoch: 5, batch: 334, loss: 0.07198, acc: 0.95640, speed: 17.96 step/s
[2022-07-26 20:16:25,270] [    INFO] - global step 2080, epoch: 5, batch: 344, loss: 0.05728, acc: 0.95603, speed: 16.96 step/s
[2022-07-26 20:16:25,834] [    INFO] - global step 2090, epoch: 5, batch: 354, loss: 0.11812, acc: 0.95586, speed: 17.73 step/s
[2022-07-26 20:16:26,438] [    INFO] - global step 2100, epoch: 5, batch: 364, loss: 0.04325, acc: 0.95656, speed: 16.62 step/s
[2022-07-26 20:16:26,998] [    INFO] - global step 2110, epoch: 5, batch: 374, loss: 0.04523, acc: 0.95622, speed: 17.92 step/s
[2022-07-26 20:16:27,688] [    INFO] - global step 2120, epoch: 5, batch: 384, loss: 0.28787, acc: 0.95687, speed: 14.51 step/s
[2022-07-26 20:16:28,327] [    INFO] - global step 2130, epoch: 5, batch: 394, loss: 0.20212, acc: 0.95638, speed: 16.22 step/s
[2022-07-26 20:16:28,924] [    INFO] - global step 2140, epoch: 5, batch: 404, loss: 0.01631, acc: 0.95668, speed: 16.76 step/s
[2022-07-26 20:16:29,528] [    INFO] - global step 2150, epoch: 5, batch: 414, loss: 0.16515, acc: 0.95682, speed: 16.61 step/s
[2022-07-26 20:16:30,148] [    INFO] - global step 2160, epoch: 5, batch: 424, loss: 0.04206, acc: 0.95696, speed: 16.21 step/s
[2022-07-26 20:16:30,674] [    INFO] - global step 2170, epoch: 5, batch: 434, loss: 0.17887, acc: 0.95729, speed: 19.04 step/s
[2022-07-26 20:16:33,697] [    INFO] - eval loss: 0.71180, acc: 0.80767
[2022-07-26 20:16:37,302] [    INFO] - Save best accuracy text classification model in ./checkpoint2

When the script runs, it automatically performs training, evaluation, and testing. During training, the model that performs best on the development set is saved to the specified save_dir; the saved model files are structured as follows:

checkpoint/
├── model_config.json
├── model_state.pdparams
├── tokenizer_config.json
└── vocab.txt
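
Since the directory contains both the model config/weights and the tokenizer files, it can be loaded back directly with from_pretrained. A minimal sketch (assuming the ./checkpoint path above):

from paddlenlp.transformers import AutoModelForSequenceClassification, AutoTokenizer

# from_pretrained accepts a local directory as well as a model name,
# so the fine-tuned weights and tokenizer can be restored from save_dir.
model = AutoModelForSequenceClassification.from_pretrained("./checkpoint")
tokenizer = AutoTokenizer.from_pretrained("./checkpoint")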

NOTE:

To resume model training, set init_from_ckpt to a saved parameter file, e.g. init_from_ckpt=checkpoint/model_state.pdparams.
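
A minimal sketch of what init_from_ckpt does under the hood (assuming the same ernie-3.0-base-zh backbone and the 11 KUAKE-QIC classes; the exact flag handling lives in train.py):

import os
import paddle
from paddlenlp.transformers import AutoModelForSequenceClassification

# Rebuild the same network, then overwrite its weights with the checkpoint.
model = AutoModelForSequenceClassification.from_pretrained(
    "ernie-3.0-base-zh", num_classes=11)
init_from_ckpt = "checkpoint/model_state.pdparams"
if os.path.isfile(init_from_ckpt):
    state_dict = paddle.load(init_from_ckpt)
    model.set_dict(state_dict)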

To train a Chinese text classification task, simply change the pretrained model name model_name. For Chinese tasks, "ernie-3.0-base-zh" is recommended; see the Transformer pretrained model list for more options.

2.1.2 Training with the latest ERNIE large models

The latest open-source ERNIE 3.0 series of pretrained models:

  • ERNIE 3.0 Base, a general-purpose model with 110M parameters
  • ERNIE 3.0 XBase, a heavyweight general-purpose model with 280M parameters
  • ERNIE 3.0 Medium, a lightweight model with 74M parameters


Documentation link:
https://github.com/PaddlePaddle/ERNIE

ERNIE model overview


The following models can currently be loaded by simply specifying their name.

Five ERNIE 3.0 models are currently open source: ERNIE 3.0 Base, ERNIE 3.0 Medium, ERNIE 3.0 Mini, ERNIE 3.0 Micro, and ERNIE 3.0 Nano:

ERNIE 3.0-Base (12-layer, 768-hidden, 12-heads)

ERNIE 3.0-Medium (6-layer, 768-hidden, 12-heads)

ERNIE 3.0-Mini (6-layer, 384-hidden, 12-heads)

ERNIE 3.0-Micro (4-layer, 384-hidden, 12-heads)

ERNIE 3.0-Nano (4-layer, 312-hidden, 12-heads)
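
For example, switching to a lighter variant only requires changing the name passed to from_pretrained. A hedged sketch (the name strings below are the PaddleNLP identifiers for these releases; num_classes=11 matches the KUAKE-QIC label set used later):

from paddlenlp.transformers import AutoModelForSequenceClassification, AutoTokenizer

# Any of: "ernie-3.0-medium-zh", "ernie-3.0-mini-zh",
#         "ernie-3.0-micro-zh", "ernie-3.0-nano-zh"
model_name = "ernie-3.0-medium-zh"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_classes=11)
tokenizer = AutoTokenizer.from_pretrained(model_name)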

#!git clone https://github.com/PaddlePaddle/ERNIE.git
# If the clone fails, it is a network issue: either use a VPN to reach GitHub,
# or (recommended) download the repo locally from GitHub and upload it here.

# Download the ernie_3.0 model.
# Note: each `!` command runs in its own subshell, so a bare `!cd ./models_hub`
# does not persist; chain the directory change and the script in one command:
!cd ./models_hub && sh download_ernie_3.0_x_base_ch.sh

# If the .sh script still cannot be run (e.g. "sh: 0: Can't open download_ernie_3.0_x_base_ch.sh"),
# open the script and run its commands one by one instead.

#get pretrained ernie3.0_x_base model params
# !wget -q --no-check-certificate http://bj.bcebos.com/wenxin-models/ernie_3.0_x_base_ch_open.tgz
# !cd ./models
# model_files_path="./ernie_3.0_x_base_ch_dir"
!mkdir "./ernie_3.0_x_base_ch_dir"
!tar xzf models/ernie_3.0_x_base_ch_open.tgz -C "./ernie_3.0_x_base_ch_dir"
# !rm ernie_3.0_x_base_ch_open.tgz


# The model is now downloaded; further details will be filled in later.

2.2 Loading a custom dataset

Creating a dataset from local files

This project supports training our text classification model with a local dataset in a fixed format.
If you need to annotate a local dataset, refer to the doccano data annotation guide for text classification tasks. [Annotation will be covered in the next project.]

This project uses the medical search query intent classification (KUAKE-QIC) task from the CBLUE dataset as an example of how to load a local fixed-format dataset for training:

The local dataset directory structure is as follows:

data/
├── train.txt # training set file
├── dev.txt # development set file
├── label.txt # classification label file
└── data.txt # optional, file with data to predict
!wget https://paddlenlp.bj.bcebos.com/datasets/KUAKE_QIC.tar.gz
!tar -zxvf KUAKE_QIC.tar.gz
!mv KUAKE_QIC data
--2022-07-25 19:14:33--  https://paddlenlp.bj.bcebos.com/datasets/KUAKE_QIC.tar.gz
Resolving paddlenlp.bj.bcebos.com (paddlenlp.bj.bcebos.com)... 182.61.200.229, 182.61.200.195, 2409:8c04:1001:1002:0:ff:b001:368a
Connecting to paddlenlp.bj.bcebos.com (paddlenlp.bj.bcebos.com)|182.61.200.229|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 176907 (173K) [application/octet-stream]
Saving to: 'KUAKE_QIC.tar.gz'

KUAKE_QIC.tar.gz    100%[===================>] 172.76K  --.-KB/s    in 0.07s   

2022-07-25 19:14:33 (2.31 MB/s) - 'KUAKE_QIC.tar.gz' saved [176907/176907]

KUAKE_QIC/
KUAKE_QIC/data.txt
KUAKE_QIC/train.txt
KUAKE_QIC/dev.txt
KUAKE_QIC/label.txt

train.txt (training set file) and dev.txt (development set file): each line pairs an input text sequence with its label name, separated by '\t'.
train.txt/dev.txt file format:

<输入序列1>'\t'<标签1>'\n'
<输入序列2>'\t'<标签2>'\n'

丙氨酸氨基转移酶和天门冬氨酸氨基转移酶高严重吗	其他
慢性肝炎早期症状有哪些表现	疾病表述
胃不好能吃南瓜吗	注意事项
为什么我的手到夏天就会脱皮而且很严重有什么办法翱4天...	病情诊断
脸上拆线后可以出去玩吗?可以流	其他
西宁青海治不孕不育专科医院	就医建议
冠状沟例外很多肉粒是什么	病情诊断
肛裂治疗用什么方法比较好	治疗方案
包皮过长应该怎么样治疗有效	治疗方案
请问白癜风是一种什么样的疾病	疾病表述
月经过了四天测出怀孕是否可以确定	其他
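
As a sketch of how such a tab-separated file can be turned into a dataset: load_dataset also accepts a custom reader function (read_local below is an illustrative helper, not part of the project scripts):

from paddlenlp.datasets import load_dataset

def read_local(data_path):
    # Each line: <text>\t<label name>
    with open(data_path, "r", encoding="utf-8") as f:
        for line in f:
            text, label = line.rstrip("\n").split("\t")
            yield {"text": text, "label": label}

# lazy=False returns a MapDataset
train_ds = load_dataset(read_local, data_path="data/train.txt", lazy=False)
dev_ds = load_dataset(read_local, data_path="data/dev.txt", lazy=False)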

label.txt (classification label file) lists the full set of labels in the dataset, one label name per line.
label.txt file format:

<标签名1>'\n'
<标签名2>'\n'
...
病情诊断
治疗方案
病因分析
指标解读
就医建议
疾病表述
后果表述
注意事项
功效作用
医疗费用
其他
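
A small sketch of how label.txt is typically consumed: read the label names in order and map each to its line index, which is the integer id the classifier predicts (the helper names here are illustrative):

# Build label <-> id mappings from label.txt (one label name per line).
with open("data/label.txt", "r", encoding="utf-8") as f:
    label_list = [line.strip() for line in f if line.strip()]

label2id = {label: i for i, label in enumerate(label_list)}
id2label = {i: label for label, i in label2id.items()}
print(label2id["病情诊断"])  # 0, since it is the first line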

data.txt (optional, file with data to predict):

黑苦荞茶的功效与作用及食用方法
交界痣会凸起吗
检查是否能怀孕挂什么科
鱼油怎么吃咬破吃还是直接咽下去
幼儿挑食的生理原因是

During training, specify the local dataset via the dataset_dir argument. Single-GPU training:

python train.py --warmup --dataset_dir data/KUAKE_QIC

dataset_dir: path to the local dataset, which should contain the train.txt, dev.txt, and label.txt files; defaults to None.

2.3 Multi-GPU training

Specifying GPU card IDs / multi-GPU training:

unset CUDA_VISIBLE_DEVICES
python -m paddle.distributed.launch --gpus "0" train.py --warmup --early_stop
unset CUDA_VISIBLE_DEVICES
python -m paddle.distributed.launch --gpus "0" train.py --warmup --dataset_dir data/KUAKE_QIC

For multi-GPU training, specify multiple GPU card IDs, e.g. --gpus "0,1":

unset CUDA_VISIBLE_DEVICES
python -m paddle.distributed.launch --gpus "0,1" train.py --warmup --dataset_dir data/KUAKE_QIC

2.4 Model prediction

Given the data to predict and the label mapping list, the model predicts the label for each input.

Predicting with the default data:

!python predict.py --params_path ./checkpoint/
[2022-07-25 19:24:38,583] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.modeling.ErnieForSequenceClassification'> to load './checkpoint/'.
W0725 19:24:38.585124  3317 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0725 19:24:38.590363  3317 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
[2022-07-25 19:24:41,614] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'> to load './checkpoint/'.
input data: 黑苦荞茶的功效与作用及食用方法
label: 功效作用
---------------------------------
input data: 交界痣会凸起吗
label: 疾病表述
---------------------------------
input data: 检查是否能怀孕挂什么科
label: 就医建议
---------------------------------
input data: 鱼油怎么吃咬破吃还是直接咽下去
label: 其他
---------------------------------
input data: 幼儿挑食的生理原因是
label: 病因分析
---------------------------------
# You can also predict with the local data file data/data.txt:
!python predict.py --params_path ./checkpoint/ --dataset_dir data/KUAKE_QIC
[2022-07-25 19:29:23,118] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.modeling.ErnieForSequenceClassification'> to load './checkpoint/'.
W0725 19:29:23.119428  3963 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0725 19:29:23.124732  3963 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
[2022-07-25 19:29:26,121] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'> to load './checkpoint/'.
input data: 黑苦荞茶的功效与作用及食用方法
label: 功效作用
---------------------------------
input data: 交界痣会凸起吗
label: 疾病表述
---------------------------------
input data: 检查是否能怀孕挂什么科
label: 就医建议
---------------------------------
input data: 鱼油怎么吃咬破吃还是直接咽下去
label: 其他
---------------------------------
input data: 幼儿挑食的生理原因是
label: 病因分析
---------------------------------

Configurable parameters:

params_path: directory of the model parameters to use for prediction; defaults to "./checkpoint/".

dataset_dir: path to the local dataset, which should contain data.txt and label.txt; defaults to None.

max_seq_length: maximum sequence length used by the ERNIE model, at most 512; lower this value if GPU memory runs out; defaults to 512.

batch_size: batch size; adjust according to available GPU memory, and lower it if memory runs out; defaults to 32.

device: device to use, one of cpu, gpu, xpu, npu; defaults to gpu.
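
A minimal sketch of the prediction loop along these lines (assuming the fine-tuned ./checkpoint/ directory and data/label.txt from above; this is an illustration, not the project's predict.py itself):

import paddle
import paddle.nn.functional as F
from paddlenlp.transformers import AutoModelForSequenceClassification, AutoTokenizer

params_path = "./checkpoint/"
model = AutoModelForSequenceClassification.from_pretrained(params_path)
tokenizer = AutoTokenizer.from_pretrained(params_path)
model.eval()

# Label names, in the same order the model was trained with.
with open("data/label.txt", "r", encoding="utf-8") as f:
    label_list = [line.strip() for line in f if line.strip()]

text = "交界痣会凸起吗"
encoded = tokenizer(text)  # returns input_ids and token_type_ids
input_ids = paddle.to_tensor([encoded["input_ids"]])
token_type_ids = paddle.to_tensor([encoded["token_type_ids"]])

with paddle.no_grad():
    logits = model(input_ids, token_type_ids)
probs = F.softmax(logits, axis=-1)
pred = paddle.argmax(probs, axis=-1).item()
print(text, "->", label_list[pred])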

3. Summary

The latest open-source ERNIE 3.0 series of pretrained models:

  • ERNIE 3.0 Base, a general-purpose model with 110M parameters

  • ERNIE 3.0 XBase, a heavyweight general-purpose model with 280M parameters

  • ERNIE 3.0 Medium, a lightweight model with 74M parameters

  • ERNIE-SAT, a newly open-sourced speech-language cross-modal model

  • ERNIE-Gen (Chinese), a new pretrained model supporting mainstream generation tasks, including summarization, question generation, dialogue, and question answering

The ERNIE development kit combines dynamic and static graphs: built on PaddlePaddle's dynamic-graph support, it enables dynamic-graph training of ERNIE models.

It packages the NLP development workflow, covering text preprocessing, pretrained models, network construction, model evaluation, and deployment.

It supports common NLP tasks: text classification, text matching, sequence labeling, information extraction, text generation, data distillation, and more.

It provides data preprocessing tools such as data cleaning, data augmentation, tokenization, format conversion, and case conversion.

ERNIE is Baidu's industrial-grade knowledge-enhanced large model family, covering NLP and cross-modal large models. In March 2019, Baidu open-sourced ERNIE 1.0, the first open-source pretrained model in China; since then it has achieved a series of technical breakthroughs in language and cross-modal understanding and generation, and has released a series of open models to support large-model research and industrial application.

A friendly reminder: when you run into problems, consult the documentation first.

Follow-up articles will cover multi-label classification and hierarchical classification, as well as data annotation for those datasets.

When time permits, I will also look into cross-validation and data augmentation.

My blog: https://blog.csdn.net/sinat_39620217?type=blog

This article is a repost; original: https://aistudio.baidu.com/aistudio/projectdetail/4362154
