【CLUE benchmark】C3: Chinese Multiple-Choice Reading Comprehension
C3 is a Chinese multiple-choice reading comprehension dataset containing a mix of dialogues and longer written texts. In the training and dev sets, the d and m file prefixes stand for dialogue and mixed written genres, respectively.
Resources
⭐ ⭐ ⭐ A small Star would be much appreciated! ⭐ ⭐ ⭐
Open source is not easy; thank you for your support!
1. Background
This is the Paddle version of the CLUE benchmark. It aims to give users a Paddle implementation of the benchmark for learning and exchange, and provides baselines for three models: bert, ernie, and roberta-wwm. The official CLUE website is:
https://www.cluebenchmarks.com/
2. Data Preprocessing
Before preprocessing the data, upgrade paddlenlp. If this is your first run, upgrade it and then restart the kernel so that the latest paddlenlp is loaded.
!pip install paddlenlp --upgrade
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: paddlenlp in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (2.2.3)
Requirement already satisfied: seqeval in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (1.2.2)
Requirement already satisfied: colorama in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.4.4)
Requirement already satisfied: multiprocess in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.70.11.1)
Requirement already satisfied: h5py in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (2.9.0)
Requirement already satisfied: colorlog in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (4.1.0)
Requirement already satisfied: jieba in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.42.1)
Requirement already satisfied: six in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp) (1.16.0)
Requirement already satisfied: numpy>=1.7 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp) (1.19.5)
Requirement already satisfied: dill>=0.3.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from multiprocess->paddlenlp) (0.3.3)
Requirement already satisfied: scikit-learn>=0.21.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from seqeval->paddlenlp) (0.24.2)
Requirement already satisfied: scipy>=0.19.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (1.6.3)
Requirement already satisfied: joblib>=0.11 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (0.14.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (2.1.0)
import json
import numpy as np
from tqdm import tqdm
import os
import pickle
import logging
import time
import random
import pandas as pd
from data import c3Processor,convert_examples_to_features
import paddle
from paddle.io import TensorDataset
import paddlenlp as ppnlp
from paddlenlp.transformers import BertTokenizer
from paddle.io import BatchSampler
from paddlenlp.data import Stack, Dict, Pad, Tuple
from paddlenlp.transformers import BertForMultipleChoice
from paddlenlp.transformers import LinearDecayWithWarmup
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlenlp/transformers/funnel/modeling.py:31: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Iterable
Unpack the C3 dataset. The layout of the data directory is as follows:
├── data119543
│ └── c3_public.zip
├── d-dev.json
├── dev_examples.pkl
├── dev_features512.pkl
├── d-train.json
├── m-dev.json
├── m-train.json
├── README.md
├── test1.0.json
└── test1.1.json
The C3 dataset contains 13,369 documents and 19,577 questions in total; 60% of them form the training set, 20% the dev set, and 20% the test set.
!unzip -o data/data119543/c3_public.zip -d data/
Archive: data/data119543/c3_public.zip
inflating: data/d-dev.json
inflating: data/d-train.json
inflating: data/m-dev.json
inflating: data/m-train.json
inflating: data/test1.0.json
inflating: data/test1.1.json
inflating: data/README.md
Files whose names start with m, such as m-train.json, contain formal written texts, while files whose names start with d, such as d-train.json, contain spoken-style dialogue texts. The written texts are generally longer than the dialogue texts.
Here is one sample from the dataset:
[
[
[
"男:你今天晚上有时间吗?我们一起去看电影吧?",
"女:你喜欢恐怖片和爱情片,但是我喜欢喜剧片,科幻片一般。所以……"
],
[
{
"question": "女的最喜欢哪种电影?",
"choice": [
"恐怖片",
"爱情片",
"喜剧片",
"科幻片"
],
"answer": "喜剧片"
}
],
"25-35"
]
Its context is:
男:你今天晚上有时间吗?我们一起去看电影吧?
女:你喜欢恐怖片和爱情片,但是我喜欢喜剧片,科幻片一般。所以……
The question is:
女的最喜欢哪种电影?
The options are:
"恐怖片",
"爱情片",
"喜剧片",
"科幻片"
The answer is:
"喜剧片"
The document id is:
"25-35"
Set the random seed to pin down the sources of randomness so that results can be reproduced reliably.
def set_seed(seed):
"""sets random seed"""
random.seed(seed)
np.random.seed(seed)
paddle.seed(seed)
set_seed(2022)
Initialize the logger.
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt='%m/%d/%Y %H:%M:%S',
level=logging.INFO)
logger = logging.getLogger(__name__)
Process the training data, convert it into id form, and then build the DataLoader for the training set.
def process_train_data(data_dir,processor,tokenizer,n_class,max_seq_length):
label_list = processor.get_labels()
train_examples = processor.get_train_examples()
feature_dir = os.path.join(data_dir, 'train_features{}.pkl'.format(max_seq_length))
if os.path.exists(feature_dir):
train_features = pickle.load(open(feature_dir, 'rb'))
else:
train_features = convert_examples_to_features(train_examples, label_list, max_seq_length, tokenizer)
with open(feature_dir, 'wb') as w:
pickle.dump(train_features, w)
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_examples))
input_ids = []
input_mask = []
segment_ids = []
label_id = []
for f in train_features:
input_ids.append([])
input_mask.append([])
segment_ids.append([])
for i in range(n_class):
input_ids[-1].append(f[i].input_ids)
input_mask[-1].append(f[i].input_mask)
segment_ids[-1].append(f[i].segment_ids)
label_id.append(f[0].label_id)
all_input_ids = paddle.to_tensor(input_ids, dtype='int64')
all_input_mask = paddle.to_tensor(input_mask, dtype='int64')
all_segment_ids = paddle.to_tensor(segment_ids, dtype='int64')
all_label_ids = paddle.to_tensor(label_id, dtype='int64')
train_data = TensorDataset([all_input_ids, all_input_mask, all_segment_ids, all_label_ids])
return train_data
Process the validation data, convert it into id form, and then build the DataLoader for the dev set.
def process_validation_data(data_dir,processor,tokenizer,n_class,max_seq_length):
label_list = processor.get_labels()
eval_examples = processor.get_dev_examples()
feature_dir = os.path.join(data_dir, 'dev_features{}.pkl'.format(max_seq_length))
if os.path.exists(feature_dir):
eval_features = pickle.load(open(feature_dir, 'rb'))
else:
eval_features = convert_examples_to_features(eval_examples, label_list, max_seq_length, tokenizer)
with open(feature_dir, 'wb') as w:
pickle.dump(eval_features, w)
input_ids = []
input_mask = []
segment_ids = []
label_id = []
for f in eval_features:
input_ids.append([])
input_mask.append([])
segment_ids.append([])
for i in range(n_class):
input_ids[-1].append(f[i].input_ids)
input_mask[-1].append(f[i].input_mask)
segment_ids[-1].append(f[i].segment_ids)
label_id.append(f[0].label_id)
all_input_ids = paddle.to_tensor(input_ids, dtype='int64')
all_input_mask = paddle.to_tensor(input_mask, dtype='int64')
all_segment_ids = paddle.to_tensor(segment_ids, dtype='int64')
all_label_ids = paddle.to_tensor(label_id, dtype='int64')
dev_data = TensorDataset([all_input_ids, all_input_mask, all_segment_ids, all_label_ids])
return dev_data
data_dir='data'
processor = c3Processor(data_dir)
MODEL_NAME = "bert-base-chinese"
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
max_seq_length=512
n_class=4
batch_size=4
output_dir='work'
train_data=process_train_data(output_dir,processor,tokenizer,n_class,max_seq_length)
train_data_loader = paddle.io.DataLoader(dataset=train_data,
batch_size=batch_size,
drop_last=True,
num_workers=0)
dev_data=process_validation_data(output_dir,processor,tokenizer,n_class,max_seq_length)
dev_data_loader = paddle.io.DataLoader(dataset=dev_data,
batch_size=batch_size,
drop_last=True,
num_workers=0)
[2022-01-24 18:31:25,863] [ INFO] - Already cached /home/aistudio/.paddlenlp/models/bert-base-chinese/bert-base-chinese-vocab.txt
01/24/2022 18:31:29 - INFO - __main__ - ***** Running training *****
01/24/2022 18:31:29 - INFO - __main__ - Num examples = 47476
3. Model Construction
Instantiate the BertForMultipleChoice model. Readers can also try models such as ErnieForMultipleChoice or RobertaForMultipleChoice and compare the results.
BertForMultipleChoice works as follows. The data is first arranged so that every option forms a separate sample; since a C3 question has at most four options, each question is expanded into 4 samples. All of these samples are fed through the same BERT, the output at the [CLS] position is passed through a fully connected (FC) layer to produce a score for each sample, and the option whose sample receives the highest probability is taken as the final answer.
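To make this concrete, below is a minimal, illustrative sketch of such a multiple-choice head. It is not PaddleNLP's internal implementation; it only assumes an encoder that, like paddlenlp's BertModel, returns a pooled [CLS] representation as its second output:
import paddle
import paddle.nn as nn

class MultipleChoiceSketch(nn.Layer):
    """Illustrative sketch of the multiple-choice scoring idea described above."""
    def __init__(self, encoder, hidden_size, num_choices=4):
        super().__init__()
        self.encoder = encoder                       # shared BERT-like encoder
        self.num_choices = num_choices
        self.classifier = nn.Linear(hidden_size, 1)  # one score per (question, option) pair

    def forward(self, input_ids, token_type_ids, attention_mask):
        # input_ids: [batch, num_choices, seq_len]; fold the options into the batch dimension
        batch_size = input_ids.shape[0]
        flat = lambda t: t.reshape([-1, t.shape[-1]])
        _, pooled = self.encoder(flat(input_ids),
                                 token_type_ids=flat(token_type_ids),
                                 attention_mask=flat(attention_mask))
        scores = self.classifier(pooled)             # [batch * num_choices, 1]
        # One logit per option; argmax over the last axis picks the predicted answer.
        return scores.reshape([batch_size, self.num_choices])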
max_num_choices=4
model = BertForMultipleChoice.from_pretrained(MODEL_NAME,
num_choices=max_num_choices)
[2022-01-24 18:31:40,784] [ INFO] - Already cached /home/aistudio/.paddlenlp/models/bert-base-chinese/bert-base-chinese.pdparams
W0124 18:31:40.787916 3868 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0124 18:31:40.792359 3868 device_context.cc:465] device: 0, cuDNN Version: 7.6.
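As noted above, the other *ForMultipleChoice models can be swapped in together with the matching tokenizer. A minimal sketch follows; the pretrained weight name "ernie-1.0" is an assumption, and any ERNIE weights available in your paddlenlp version should work. Note that the cached feature files were built with the BERT tokenizer and would have to be regenerated:
from paddlenlp.transformers import ErnieForMultipleChoice, ErnieTokenizer

# Swap BERT for ERNIE; the rest of the pipeline stays the same, but delete the cached
# *features512.pkl files first so the features are rebuilt with the new tokenizer.
ernie_name = "ernie-1.0"
ernie_tokenizer = ErnieTokenizer.from_pretrained(ernie_name)
ernie_model = ErnieForMultipleChoice.from_pretrained(ernie_name, num_choices=max_num_choices)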
4. Training Configuration
Configure the hyperparameters, optimizer, loss function, and evaluation metric needed for training.
EPOCH = 8
max_grad_norm = 1.0
num_training_steps = len(train_data_loader) * EPOCH
# Define the learning_rate_scheduler, which schedules the lr during training
lr_scheduler = LinearDecayWithWarmup(2e-5, num_training_steps, 0)
# Generate parameter names needed to perform weight decay.
# All bias and LayerNorm parameters are excluded.
decay_params = [
p.name for n, p in model.named_parameters()
if not any(nd in n for nd in ["bias", "norm"])
]
grad_clip = paddle.nn.ClipGradByGlobalNorm(max_grad_norm)
# Define the optimizer
optimizer = paddle.optimizer.AdamW(
learning_rate=lr_scheduler,
parameters=model.parameters(),
weight_decay=0.01,
apply_decay_param_fun=lambda x: x in decay_params,
grad_clip=grad_clip)
# Cross-entropy loss
criterion = paddle.nn.loss.CrossEntropyLoss()
# Use accuracy as the evaluation metric
metric = paddle.metric.Accuracy()
5. Model Training
Next we train the model. Since evaluation is needed during training, we first implement the evaluate function, which evaluates the model on the dev set as training proceeds.
@paddle.no_grad()
def evaluate(model, dev_data_loader, metric):
all_loss = []
metric.reset()
criterion = paddle.nn.loss.CrossEntropyLoss()
model.eval()
for step, batch in enumerate(dev_data_loader):
input_ids, input_mask, segment_ids, label_id=batch
logits = model(input_ids=input_ids, token_type_ids=segment_ids,attention_mask=input_mask)
loss = criterion(logits, label_id)
correct = metric.compute(logits, label_id)
metric.update(correct)
all_loss.append(loss.numpy())
acc = metric.accumulate()
model.train()
return np.mean(all_loss), acc
def do_train(model, train_data_loader, dev_data_loader):
    model.train()
    global_step = 0
    tic_train = time.time()
    log_step = 100
    for epoch in range(EPOCH):
        metric.reset()
        for step, batch in enumerate(train_data_loader):
            input_ids, input_mask, segment_ids, label_id = batch
            logits = model(input_ids=input_ids, token_type_ids=segment_ids, attention_mask=input_mask)
            loss = criterion(logits, label_id)
            correct = metric.compute(logits, label_id)
            metric.update(correct)
            acc = metric.accumulate()
            global_step += 1
            # Print training metrics every log_step steps
            if global_step % log_step == 0:
                print(
                    "global step %d, epoch: %d, batch: %d, loss: %.5f, accu: %.5f, speed: %.2f step/s"
                    % (global_step, epoch, step, loss, acc,
                       log_step / (time.time() - tic_train)))
                tic_train = time.time()
            loss.backward()
            optimizer.step()
            lr_scheduler.step()
            optimizer.clear_grad()
        loss, acc = evaluate(model, dev_data_loader, metric)
        print("epoch: %d, eval loss: %.5f, accu: %.5f" % (epoch, loss, acc))
        model.save_pretrained("./checkpoint")
        # tokenizer.save_pretrained("./checkpoint")
# Training takes quite a long time; a 32 GB V100 is recommended, or modify the code for multi-GPU training
do_train(model,train_data_loader,dev_data_loader)
global step 100, epoch: 0, batch: 99, loss: 0.56752, accu: 0.33750, speed: 0.22 step/s
global step 200, epoch: 0, batch: 199, loss: 1.33341, accu: 0.39750, speed: 0.22 step/s
global step 300, epoch: 0, batch: 299, loss: 1.06105, accu: 0.43667, speed: 0.22 step/s
global step 400, epoch: 0, batch: 399, loss: 1.33641, accu: 0.46000, speed: 0.22 step/s
global step 500, epoch: 0, batch: 499, loss: 0.49386, accu: 0.47550, speed: 0.22 step/s
global step 600, epoch: 0, batch: 599, loss: 0.82104, accu: 0.47417, speed: 0.22 step/s
...
From the training log we can see that the model is converging.
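The comment above do_train suggests multi-GPU training as an alternative to a single 32 GB V100. A minimal sketch of the changes this would involve is given below; it assumes the notebook is exported to a script (the file name train_c3.py is hypothetical) and launched with paddle.distributed.launch:
import paddle
import paddle.distributed as dist
from paddle.io import DataLoader, DistributedBatchSampler

# Launch with: python -m paddle.distributed.launch --gpus "0,1" train_c3.py
dist.init_parallel_env()               # one process per GPU
model_dp = paddle.DataParallel(model)  # synchronizes gradients across GPUs

# Each process reads a different shard of the training set.
train_sampler = DistributedBatchSampler(train_data, batch_size=batch_size,
                                        shuffle=True, drop_last=True)
train_data_loader = DataLoader(train_data, batch_sampler=train_sampler, num_workers=0)

# do_train(model_dp, train_data_loader, dev_data_loader)  # the training loop itself is unchanged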
6. Model Prediction
The prediction stage mainly processes the test set, runs it through the model, and writes the results in JSON format, which can then be submitted to the CLUE website for evaluation.
First delete the generated temporary files, so that stale features are not reused when the dataset is switched; this forces them to be regenerated every time. If you want to change the test set, modify the following code in data.py:
with open(self.data_dir + "/" + "test1.0.json",
"r", encoding="utf8") as f:
data += json.load(f)
Simply replace test1.0.json with your own test file.
!rm -rf data/test_examples.pkl
!rm -rf data/test_features512.pkl
def process_test_data(data_dir,processor, tokenizer,n_class,max_seq_length):
label_list = processor.get_labels()
test_examples = processor.get_test_examples()
feature_dir = os.path.join(data_dir,
'test_features{}.pkl'.format(max_seq_length))
if os.path.exists(feature_dir):
test_features = pickle.load(open(feature_dir, 'rb'))
else:
test_features = convert_examples_to_features(test_examples, label_list,
max_seq_length, tokenizer)
with open(feature_dir, 'wb') as w:
pickle.dump(test_features, w)
logger.info("***** Running testing *****")
logger.info(" Num examples = %d", len(test_examples))
input_ids = []
input_mask = []
segment_ids = []
label_id = []
for f in test_features:
input_ids.append([])
input_mask.append([])
segment_ids.append([])
for i in range(n_class):
input_ids[-1].append(f[i].input_ids)
input_mask[-1].append(f[i].input_mask)
segment_ids[-1].append(f[i].segment_ids)
label_id.append(f[0].label_id)
all_input_ids = paddle.to_tensor(input_ids, dtype='int64')
all_input_mask = paddle.to_tensor(input_mask, dtype='int64')
all_segment_ids = paddle.to_tensor(segment_ids, dtype='int64')
all_label_ids = paddle.to_tensor(label_id, dtype='int64')
test_data = TensorDataset(
[all_input_ids, all_input_mask, all_segment_ids, all_label_ids])
return test_data
test_batch_size = 4
test_data = process_test_data(output_dir,processor, tokenizer,n_class,max_seq_length)
test_dataloader = paddle.io.DataLoader(dataset=test_data,
batch_size=test_batch_size,
drop_last=True,
num_workers=0)
01/24/2022 18:31:50 - INFO - __main__ - ***** Running testing *****
01/24/2022 18:31:50 - INFO - __main__ - Num examples = 15568
Load the trained model and run prediction.
MODEL_NAME = 'checkpoint'
model = BertForMultipleChoice.from_pretrained(MODEL_NAME,
num_choices=max_num_choices)
logits_all = []
for input_ids, input_mask, segment_ids, label_ids in tqdm(test_dataloader):
with paddle.no_grad():
logits = model(input_ids=input_ids,
token_type_ids=segment_ids,
attention_mask=input_mask)
logits = logits.numpy()
for i in range(len(logits)):
logits_all += [logits[i]]
100%|██████████| 973/973 [02:42<00:00, 5.96it/s]
submission_test = os.path.join(output_dir, "submission_test.json")
test_preds = [int(np.argmax(logits_)) for logits_ in logits_all]
with open(submission_test, "w") as f:
json.dump(test_preds, f)
data=json.load(open(submission_test))
print(data[0])
# data=data[:len(data)//2]
print(len(data))
with open("data/test1.0.json","r",encoding="utf8") as f:
test_data = json.load(f)
# print(test_data[:5])
# print(len(test_data))
ids=[]
for item in test_data:
# print(item)
for sub_item in item[1]:
idx=sub_item['id']
# print(idx)
ids.append(idx)
print(len(ids))
label_map=[]
df=pd.DataFrame(label_map,columns=['id','label'])
df=df.to_json(orient='table')
print(df)
with open('c310_predict.json','w') as f:
for idx,item in zip(ids,data):
# label_map.append([idx,item])
f.write('{'+'"id":{},"label":{}'.format(idx,item)+'}\n')
3
3892
3892
{"schema":{"fields":[{"name":"index","type":"string"},{"name":"id","type":"string"},{"name":"label","type":"string"}],"primaryKey":["index"],"pandas_version":"0.20.0"},"data":[]}
This generates the c310_predict.json file; zip it, download it, and submit it to the CLUE website.
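One caveat about the hand-formatted write above: it only produces valid JSON when both id and label happen to be numbers. A minimal sketch of the same output written with json.dumps (using the same ids and data lists) would be:
import json

# Write one JSON object per line; string ids/labels would also be quoted correctly.
with open('c310_predict.json', 'w', encoding='utf8') as f:
    for idx, item in zip(ids, data):
        f.write(json.dumps({"id": idx, "label": item}, ensure_ascii=False) + '\n')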
7. More PaddleEdu Resources
1. PaddleEdu's one-stop deep learning online encyclopedia awesome-DeepLearning offers more to look forward to:
- Introductory deep learning courses
- Deep learning: 100 questions
- Featured courses
- Industry practice
If you run into any problems while using PaddleEdu, feel free to open an issue in awesome-DeepLearning; for more deep learning materials, please see the PaddlePaddle deep learning platform.
Remember to give it a Star ⭐ and bookmark it!
2. PaddlePaddle PaddleEdu technical discussion group (QQ)
The QQ group already has 2000+ members learning together; you are welcome to scan the QR code and join.