Machine Translation Domain Adaptation with PaddleNLP

Machine translation has made great strides in recent years. Its performance depends not only on large-scale bilingual data, but also on how well the training and test data match in domain. Translation quality keeps improving in domains with rich data resources, while in certain specialized domains it remains unsatisfactory because parallel data is hard to obtain. How to use data from resource-rich domains to improve translation in low-resource domains is therefore an active research topic.

This project first pretrains a model on the relatively large oral dataset (186,970 training sentence pairs) and then transfers it to the patent dataset (100,000 training sentence pairs) via fine-tuning, hoping to improve performance in the patent domain. Finally, the model trained on the oral dataset is tested directly on the medical dataset to gauge how well the pretrained model generalizes.

# Unzip the dataset
!unzip -q /home/aistudio/data/data147459/train_dataset.zip -d /home/aistudio/data
# Import the required libraries
import paddle
import paddlenlp
from paddlenlp.transformers import AutoTokenizer
from paddle.io import Dataset, DataLoader
from paddle import nn
from paddlenlp.transformers import PositionalEmbedding, CrossEntropyCriterion
import warnings
from paddlenlp.metrics import BLEU
import copy
import paddlehub as hub
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import re
import os
warnings.filterwarnings('ignore')
!hub install simnet_bow==1.2.1

1. Data Processing

The model consumes int64 tensors, so characters must first be converted to integer ids. PaddleNLP provides the AutoTokenizer class, which makes it easy to load different tokenizers; here we use the pretrained bert-base-chinese and bert-base-uncased tokenizers for Chinese and English tokenization respectively.
For data loading we subclass Dataset and set the maximum sentence length to 60. My first approach was to truncate sentences longer than 60 and pad the shorter ones to a fixed length, but this wastes computation when an entire batch consists of short sentences. Instead, we define a collate_fn that pads each batch on the fly, so every sentence in a batch is padded only up to that batch's maximum length, and finally we build the loader with DataLoader.

# Define the tokenizers
ch_tokenizer = AutoTokenizer.from_pretrained('bert-base-chinese', max_model_input_sizes=21128)
en_tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', max_model_input_sizes=30522)

en_dict = en_tokenizer.get_vocab()
ch_dict = ch_tokenizer.get_vocab()

# Quick test
print(en_tokenizer('hello! peppa.'), ch_tokenizer('你好,佩奇!'))

# Helpers used later during evaluation to convert ids back to tokens
def create_id_dict(tokenizer, save_path):
    dict_ = tokenizer.get_vocab()
    with open(save_path, 'w') as f:
        for k in dict_.keys():
            f.write(k+'\n')
    print(f'{save_path} saved!')

def build_id_dict(txt_path):
    id_dict = dict()
    with open(txt_path, 'r') as f:
        tokens = f.readlines()
        for idx, token in enumerate(tokens):
            id_dict[idx] = token.strip()
    return id_dict

create_id_dict(en_tokenizer, 'data/2ids.txt')

def ids2tokens(ids, id_dict):
    res = list()
    for id_ in ids[1:-1]:
        res.append(id_dict[id_])
    return res
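As a quick sanity check, the helpers above can be chained: tokenize a sentence, then map the ids back to word pieces. A minimal sketch (the sentence is arbitrary and reuses the id file written above):

# Round-trip check (minimal sketch): encode an arbitrary sentence with the
# English tokenizer, then map the ids back to tokens via the saved id file.
sample_ids = en_tokenizer('hello! peppa.')['input_ids']
en_id_dict = build_id_dict('data/2ids.txt')
print(ids2tokens(sample_ids, en_id_dict))  # word pieces between [CLS] and [SEP]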

# The square brackets in the medical data noticeably hurt the results, so we simply remove them here
def delete_brackets(path, save_path):
    f1 = open(path, 'r')
    f2 = open(save_path, 'w')
    lines = f1.readlines()
    size = len(lines)
    for i in range(size):
        lines[i] = re.sub(r'(\[|\]|【|】)', '', lines[i])
    for l in lines:
        f2.write(l)
    f1.close()
    f2.close()

delete_brackets(path='/home/aistudio/data/medical/dev/medical-dev.zh2en', 
                save_path='/home/aistudio/data/medical/dev/medical-dev-processed.zh2en')
class SenDataset(Dataset):
    def __init__(self, file_path, en_tokenizer, ch_tokenizer, mask_radio=0., max_len=0):
        super().__init__()
        self.en_tokenizer = en_tokenizer
        self.ch_tokenizer = ch_tokenizer
        self.max_len = max_len
        self.mask_radio = mask_radio
        with open(file_path, 'r') as f:
            self.data = f.readlines()
    
    def __getitem__(self, idx):
        pair = self.data[idx]
        if self.max_len:
            if '\t' in pair:
                ch, en = pair.split('\t')
                en = en[:-1]
                # The 510 and 1020 here are just cut-offs to avoid overly long raw strings
                ch = self.ch_tokenizer(ch[:510])['input_ids'][:60]
                en = self.en_tokenizer(en[:1020])['input_ids'][:60]
                ch += [0] * (self.max_len - len(ch))
                en += [0] * (self.max_len - len(en))
                return paddle.to_tensor(ch, 'int64'), paddle.to_tensor(en, 'int64')
            else:
                # Malformed line: fall back to a [CLS] [SEP] pair padded to max_len
                ch = [101] + [102] + [0] * (self.max_len - 2)
                en = [101] + [102] + [0] * (self.max_len - 2)
                return paddle.to_tensor(ch, 'int64'), paddle.to_tensor(en, 'int64')    
        else:
            if '\t' in pair:
                ch, en = pair.split('\t')
                en = en[:-1]
                # The 510 and 1020 here are just cut-offs to avoid overly long raw strings
                ch = self.ch_tokenizer(ch[:510])['input_ids'] 
                en = self.en_tokenizer(en[:1020])['input_ids']
                return ch, en
            else:
                # Malformed line: return just the [CLS] and [SEP] ids
                ch = [101] + [102]
                en = [101] + [102]
                return ch, en

    def __len__(self):
        return len(self.data)

def collate_fn(data):
    # First find the longest Chinese/English sentence in this batch (capped at 512)
    max_ch_len, max_en_len = 0, 0
    for (ch, en) in data:
        max_ch_len = min(max(max_ch_len, len(ch)), 512)
        max_en_len = min(max(max_en_len, len(en)), 512)
    # Pad the Chinese and English sentences separately and stack them into a batch
    chs, ens = [], []
    for (ch, en) in data:
        chs.append(ch + [0] * (max_ch_len - len(ch)))
        ens.append(en + [0] * (max_en_len - len(en)))
    return paddle.to_tensor(chs, 'int64'), paddle.to_tensor(ens, 'int64')
def build_dataloader(path, bs=64):
    dataset = SenDataset(path, en_tokenizer, ch_tokenizer)
    dataloader = DataLoader(dataset, shuffle=True, batch_size=bs, num_workers=0, collate_fn=collate_fn)
    return dataloader
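To verify the per-batch padding, we can build a loader on one of the training files and inspect a single batch; a minimal sketch (the oral training path matches the one used by the training code below):

# Sanity check (minimal sketch): within one batch both tensors are padded to that
# batch's longest sentence, so the shapes vary from batch to batch.
tmp_loader = build_dataloader('/home/aistudio/data/oral/train/oral-train.zh2en', bs=8)
for ch_batch, en_batch in tmp_loader:
    print(ch_batch.shape, en_batch.shape)
    break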

2. Model Construction

We use the Transformer model, first proposed in Google's paper "Attention Is All You Need". Built entirely on attention, it models sequences efficiently and is widely used in machine translation.

A Transformer consists of an encoder and a decoder. The encoder is a stack of N identical layers, each applying self-attention followed by a feed-forward sub-layer, with residual connections and layer normalization to keep training stable and efficient. The decoder is similar, but adds a cross-attention module that fuses in the encoder output; the decoder output then passes through a linear layer and a softmax.

paddle.nn provides the corresponding APIs, so a Transformer can be constructed with little effort.
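Before defining the full model, it is worth looking at the causal (look-ahead) mask the decoder relies on. The model below obtains it from paddle.nn.Transformer.generate_square_subsequent_mask; here is a minimal hand-built equivalent for illustration:

# Causal mask sketch: position i may only attend to positions <= i.
# 0 marks allowed positions, -inf marks future positions to be ignored,
# matching the mask returned by generate_square_subsequent_mask.
length = 4
causal_mask = paddle.triu(paddle.full([length, length], float('-inf')), diagonal=1)
print(causal_mask)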

class TransformerModel(nn.Layer):
    def __init__(self):
        super().__init__()
        self.en_embedding = nn.Embedding(
            num_embeddings=30522, 
            embedding_dim=512)

        self.ch_embedding = nn.Embedding(
            num_embeddings=21128, 
            embedding_dim=512)

        self.pos_embedding = PositionalEmbedding(emb_dim=512, max_length=512)

        self.transformer = nn.Transformer()
        self.fc = nn.Linear(512, 30522)

    def forward(self, src, tgt):
        # word embedding
        src_ = self.ch_embedding(src)
        tgt_ = self.en_embedding(tgt)

        # position embedding
        pos = paddle.arange(0, src.shape[1], dtype='int64')
        src = src_ + self.pos_embedding(pos)[None, :, :]
        pos = paddle.arange(0, tgt.shape[1], dtype='int64')
        tgt = tgt_ + self.pos_embedding(pos)[None, :, :]

        # mask
        mask = self.transformer.generate_square_subsequent_mask(tgt.shape[1])
        out = self.transformer(src, tgt, tgt_mask=mask)

        out = self.fc(out)
        return out

3. Training

The training stage mainly requires defining the following:

  • Loss function: cross-entropy with label smoothing (soft labels), which improves generalization
  • Optimizer: AdamW; for details see DECOUPLED WEIGHT DECAY REGULARIZATION
  • LR scheduler: Transformers generally need a warmup phase, which an lr_scheduler makes easy to implement. We use NoamDecay, as described in Attention Is All You Need (see the short sketch after this list)
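A quick way to see the warmup behaviour is to step a NoamDecay schedule on its own and plot the learning rate; a minimal sketch using the same d_model and warmup_steps as the training code below:

# NoamDecay sketch: the lr grows roughly linearly for the first 8000 steps,
# then decays proportionally to 1/sqrt(step).
demo_sched = paddle.optimizer.lr.NoamDecay(d_model=512, warmup_steps=8000, learning_rate=2.)
lrs = []
for _ in range(30000):
    demo_sched.step()
    lrs.append(demo_sched.get_lr())
plt.plot(lrs)
plt.xlabel('step')
plt.ylabel('learning rate')
plt.show()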

A training loop generally consists of:

  • Forward pass: the computation defined in the model's forward method
  • Loss computation: via the loss function
  • Backward pass: via the backward method
  • Parameter update: via the optimizer defined above
  • Gradient reset: clear the gradients of the current step
@paddle.no_grad()
def evaluate(model, eval_loader, loss_fn):
    model.eval()
    losses = []
    for src, tgt in eval_loader:
        # Shifted teacher forcing: predict tgt[:, 1:] from decoder input tgt[:, :-1]
        out = model(src, tgt[:, :-1])
        loss = loss_fn(out, tgt[:, 1:, None])[1]
        losses.append(loss.item())
    print(f"evaluation:")
    print(f"loss: {sum(losses)/len(losses)}")

def train(dataset='patent', init_from_ckpt='', save_=''):
    loss_fn = CrossEntropyCriterion(label_smooth_eps=0.1, pad_idx=0)

    model = TransformerModel()
    if init_from_ckpt:
        model.set_state_dict(paddle.load(init_from_ckpt))
    scheduler = paddle.optimizer.lr.NoamDecay(512, 8000, 2., last_epoch=0)
    optimizer = paddle.optimizer.AdamW(
        learning_rate=scheduler, 
        parameters=model.parameters(), 
        grad_clip=nn.ClipGradByValue(1.),
        beta1=0.9,
        beta2=0.997,
    )

    train_loader = build_dataloader(f"/home/aistudio/data/{dataset}/train/{dataset}-train.zh2en")
    eval_loader = build_dataloader(f"/home/aistudio/data/{dataset}/dev/{dataset}-dev.zh2en")

    epochs = 1
    verbose_steps = 1
    global_steps = 0
    total = len(train_loader) * epochs
    print(f"start training! there are {total // epochs} samples in train loader")
    model.train()
    for epoch in range(epochs):
        for step, (src, tgt) in enumerate(train_loader):
            global_steps += 1
            # Shifted teacher forcing: use tgt[:, :-1] as decoder input to predict tgt[:, 1:]
            out = model(src, tgt[:, :-1])
            # print(src.shape, tgt.shape)
            loss = loss_fn(out, tgt[:, 1:, None])[1]
            loss.backward()
            if global_steps % verbose_steps == 0:
                print(f"epoch:[{epoch+1}/{epochs}]\tstep:[{global_steps}/{total}]\tloss:{loss.item()}")
            optimizer.step()
            scheduler.step()
            optimizer.clear_grad()
        evaluate(model, eval_loader, loss_fn)
        paddle.save(model.state_dict(), f"{save_}_epoch_{epoch}.pdparams")

# train('oral')
# train('patent', 'epoch_9.pdparams', 'oral_patent')
train('patent', '', 'patent')

4. Inference and Prediction

We load the trained parameters and run prediction. To measure the effect of pretraining on oral, we evaluate the following setups on the corresponding test sets:

  • a model trained on oral, evaluated on the oral dev set
  • a model trained on oral, evaluated on the patent dev set
  • a model pretrained on oral and fine-tuned on patent, evaluated on the patent dev set
  • a model trained only on patent, evaluated on the patent dev set
  • a model trained on oral, evaluated directly on the medical dev set

@paddle.no_grad()
def infer(model, src, max_len=60):
    tgt = paddle.to_tensor([[101]], 'int64')
    while tgt.shape[1] < max_len:
        out = model(src, tgt)
        pred = paddle.argmax(out, axis=-1)
        tgt = paddle.concat([tgt, pred[:, -1:]], axis=-1)
        if tgt[0][-1] == 102:
            return tgt[0]
    return tgt[0]
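As a usage example, a single sentence can be translated with the greedy decoder above; a minimal sketch that assumes a trained checkpoint is available (the sentence and the commented-out checkpoint name are only placeholders):

# Minimal usage sketch: translate one arbitrary sentence with greedy decoding.
demo_model = TransformerModel()
# demo_model.set_dict(paddle.load('patent_epoch_0.pdparams'))  # load trained weights here
demo_model.eval()
demo_src = paddle.to_tensor(ch_tokenizer('本发明涉及一种装置。')['input_ids'], 'int64')[None, :]
demo_ids = np.array(infer(demo_model, demo_src))
print(en_tokenizer.convert_tokens_to_string(ids2tokens(demo_ids, build_id_dict('data/2ids.txt'))))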

@paddle.no_grad()
def test(model, ckpt, test_file, ch_tokenizer, en_tokenizer, tokens_file):
    if os.path.exists(ckpt):
        model.set_dict(paddle.load(ckpt))
    print('model loaded!')
    model.eval()
    id_dict = build_id_dict(tokens_file)
    recoder = list()
    tmp = dict()
    with open(test_file, 'r') as f:
        lines = f.readlines()
        for idx, line in enumerate(lines):
            src, tgt = line.strip().split('\t')
            tmp['zh'] = src
            tmp['en'] = tgt
            src = ch_tokenizer(src)['input_ids']
            src += [0] * (60 - len(src))
            src = paddle.to_tensor(src, 'int64')[None, :]
            try:
                pred = np.array(infer(model, src))
                pred = ids2tokens(pred, id_dict)
                tmp['pred'] = en_tokenizer.convert_tokens_to_string(pred)
                if idx % 100 == 0:
                    print(tmp)
                    print()
                recoder.append(copy.deepcopy(tmp))
            except Exception:
                # Decoding occasionally fails on malformed lines; just log and skip them
                print(tmp)
    return recoder

def cal_sim(sim, recoder):
    # Score each prediction against its reference with the simnet_bow similarity module
    size = len(recoder)
    sim_score_list = list()
    for i in range(size):
        tmp = recoder[i]
        score = sim([[tmp['pred']], [tmp['en']]], batch_size=1)[0]['similarity']
        sim_score_list.append(score)
        tmp['similarity'] = score
        recoder[i] = tmp
    print(f'average similarity:{sum(sim_score_list)/len(sim_score_list)}')
    return recoder


def save_predict(ckpt, data_type='oral', save_name='oral_patent'):
    tf = f'/home/aistudio/data/{data_type}/dev/{data_type}-dev.zh2en'
    if data_type == 'medical':
        tf = '/home/aistudio/data/medical/dev/medical-dev-processed.zh2en'
    recoder = test(
        model=TransformerModel(),  
        ckpt=ckpt, 
        test_file=tf,
        ch_tokenizer=ch_tokenizer, 
        en_tokenizer=en_tokenizer, 
        tokens_file='/home/aistudio/data/2ids.txt')

    recoder = cal_sim(hub.Module(name="simnet_bow").similarity, recoder)
    # Write each record as one "key:value" line per field (zh, en, pred, similarity)
    with open(f'work/{save_name}.txt', 'w') as f:
        for i in recoder:
            for j in i:
                f.write(f"{j}:{i[j]}\n")

# save_predict('epoch_9.pdparams', 'oral', 'oral')
# save_predict('epoch_9.pdparams', ' patent', 'oral_patent_patent')
# save_predict('epoch_9.pdparams', 'patent', 'only_oral_patent')
# save_predict('patent_epoch_8.pdparams', 'patent', '_patent_patent')
save_predict('epoch_9.pdparams', 'medical', 'oral_medical')
def get_mean_sim(path):
    # Each record in the saved file spans four lines (zh, en, pred, similarity),
    # so the similarity value sits on every 4th line, starting at index 3.
    sims = []
    with open(path, 'r') as f:
        lines = f.readlines()
        size = len(lines)
        for i in range(3, size, 4):
            word, sim = lines[i].strip().split(':')
            sim = float(sim)
            sims.append(sim)
    return sum(sims) / len(sims)


# print(f"oral上训练,oral上测试,similarity: {get_mean_sim('work/oral_trans.txt')}")
# print(f"patent上训练,patent上测试,similarity: {get_mean_sim('work/_patent_patent.txt')}")
# print(f"oral上训练,patent上测试,similarity: {get_mean_sim('work/only_oral_patent.txt')}")
# print(f"oral上预训练,patent上微调,patent上测试,similarity: {get_mean_sim('work/oral_patent.txt')}")

x = range(5)
y = [
    get_mean_sim('work/oral_trans.txt'), 
    get_mean_sim('work/_patent_patent.txt'),
    get_mean_sim('work/only_oral_patent.txt'),
    get_mean_sim('work/oral_patent.txt'), 
    get_mean_sim('work/oral_medical.txt')
    ]
tick_label = [
    'train on oral\ntest on oral', 
    'train on patent\ntest on patent',
    'train on oral\ntest on patent', 
    'finetune on patent\ntest on patent', 
    'train on oral\ntest on medical']
plt.figure(figsize=(8, 4))
plt.bar(x, y, label='similarity', tick_label=tick_label)
for x_, y_ in zip(x, y):
    plt.text(x_-0.1, y_+0.01, np.round(y_, 2))
plt.legend()
plt.show()

5. Summary

The final results show that training on the larger dataset and then transferring to another domain works quite well. Along the way I ran into a few issues:

  • At first I mixed up the decoder's input and output during training and fed the decoder input straight back as the target. The decoder should actually use the first k tokens to predict token k+1, i.e. use tgt[:-1] to predict tgt[1:].
  • The Transformer is not easy to train: the loss on oral remained fairly high at the end. I suspect the oral data itself is hard, and GPU memory limits kept the batch size well below the one in the original paper, so accuracy suffers somewhat. Gradient accumulation (see the sketch after this list) or adversarial training are options I would like to try next to improve the model.
  • Initially I truncated or padded every sentence to length 60, which ignores the differences between batches. Following my mentor's advice, I now compute the maximum length within each batch and pad the whole batch to that length (or to 512 if the maximum exceeds 512).
  • Due to time constraints only five experiments were run; more could be added later.
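As mentioned in the second point, gradient accumulation is one way to emulate a larger effective batch size under memory limits. Below is a minimal sketch of how the inner loop of train() could be adapted; the helper name and the accum_steps value are hypothetical and not part of the original code:

def train_step_with_accumulation(model, loss_fn, optimizer, scheduler, train_loader, accum_steps=4):
    # Sketch only: same shifted teacher forcing as train(), but the optimizer is
    # stepped every accum_steps batches, giving an effective batch size of
    # bs * accum_steps. The scheduler is stepped once per parameter update.
    model.train()
    for step, (src, tgt) in enumerate(train_loader):
        out = model(src, tgt[:, :-1])
        loss = loss_fn(out, tgt[:, 1:, None])[1] / accum_steps  # average over the window
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            scheduler.step()
            optimizer.clear_grad()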

🌹🌹🌹 Finally, many thanks to my mentor 胡雷 for the patient guidance that walked me through this project step by step.

6. References, Documentation & Projects:

[1] Vaswani A, Shazeer N, Parmar N, et al. Attention Is All You Need[J]. arXiv, 2017.

[2] Loshchilov I, Hutter F. Decoupled Weight Decay Regularization[J]. 2017.

[3] Miller D C, Thorpe J A. SIMNET: the advent of simulator networking[J]. Proceedings of the IEEE, 1995, 83(8): 1114-1123.

[4] PaddleNLP documentation. https://paddlenlp.readthedocs.io/

[5] Machine translation domain adaptation: https://aistudio.baidu.com/aistudio/projectdetail/4341148?sUid=855899&shared=1&ts=1670858419134


