The Past and Present of Simultaneous Interpretation

Simultaneous interpretation (SI) is a mode of interpreting in which the interpreter renders the speaker's words into the target language continuously, without interrupting the speech. Working from dedicated equipment, simultaneous interpreters deliver the translation in real time, which makes SI well suited to large seminars and international conferences: it is efficient and keeps a speech or meeting flowing. Simultaneous interpreters are generally well paid, but the bar to entering the profession is just as high. Today, 95% of the world's high-end international conferences use simultaneous interpretation. Its first use at a major international event came after World War II, when the International Military Tribunal at Nuremberg, Germany employed it in the trials of fascist war criminals.

Beyond international conferences, simultaneous interpretation is also widely used in diplomacy and foreign affairs, meetings and negotiations, business, news media, training and lectures, TV broadcasting, international arbitration, and many other settings. Machine simultaneous translation still falls well short of human experts, and important occasions such as diplomatic and business meetings must still rely on the high-quality, professional work of human interpreters.

Human simultaneous interpretation, however, has the following limitations:

1) Mental and physical strain: Unlike consecutive interpretation, simultaneous interpretation requires listening, memorizing, and translating all at the same time, which places extreme demands on the interpreter. Because the job requires such intense concentration, human interpreters usually work in pairs, switching every 20-odd minutes to rest; it is a severe test of stamina and focus.

2) Limited coverage: Statistics show that simultaneous interpreters typically render only about 60-70% of the source content. The main reason is that when a passage is hard to hear or hard to translate, interpreters deliberately skip some sentences to preserve overall accuracy and real-time delivery.

3) A global shortage of interpreters: Because the requirements are so demanding, there are only a few thousand qualified simultaneous interpreters worldwide, a severe shortage relative to the enormous market demand.

Machine simultaneous translation, by contrast, has clear strengths. A machine never tires, so its coverage does not drop: it translates every sentence it "hears", achieving 100% coverage versus roughly 60-70% for human interpreters. It also wins on price.

In this installment, the PaddleNLP team brings you a machine simultaneous translation demo. Let's take a quick look at its translation quality!

STACL: Machine Simultaneous Translation

Machine simultaneous translation means translating before the sentence is complete. The goal is to automate simultaneous interpretation: the system translates in step with the source speech, with a latency of only a few seconds.

The difficulty of simultaneous translation lies in the latency caused by word-order differences between the source and target languages. Consider translating from an SOV (subject-object-verb) language such as Japanese or German into an SVO (subject-verb-object) language such as English or Chinese: an accurate translation cannot be produced until the source verb appears. Conventional systems therefore fall back on full-sentence translation, which incurs a delay of at least one sentence.

This project is a PaddlePaddle implementation of STACL, a simultaneous translation model built on the Transformer, the dominant architecture in machine translation. It covers model training, prediction, and the use of custom data, so you can build your own simultaneous translation model from what is released here.

STACL is the translation architecture proposed for simultaneous translation in the paper STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework. It applies to all simultaneous translation scenarios and is built on the Transformer; see PaddleNLP's Transformer for reference.

STACL offers two main advantages:

  • Implicit Anticipation: the prefix-to-prefix architecture can anticipate, translating a target word before the corresponding source word has been seen, which overcomes word-order differences such as SOV→SVO;

  • Controllable Latency: the wait-k policy predicts target words directly without needing the full source sentence, enabling arbitrary word-level latency while maintaining high translation quality.

Implicit Anticipation


Figure 1: Implicit Anticipation

With only "他 还 说 现在 正在 为 这 一 会议" read on the source side, the first two policies have already produced "making preparations" and "making" on the target side, a clear demonstration of STACL's anticipation ability.

Controllable Latency


Figure 2: Controllable Latency (Wait-k)
The wait-k policy waits until k source words have been read before it starts translating. In Figure 2 above, with k=1 the first target word is produced after reading the first source word, the second after the first two source words, and so on; so by the time the source side has read the three words "他 还 说", the target side has already produced "he also said". With k=3, the first target word is produced after the first three source words, so after reading "他 还 说" the target side emits its first word, "he".
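
To make the schedule concrete, here is a minimal sketch (not part of the STACL code; the sentence pair and the value of k are illustrative assumptions) that pairs each target word with the source prefix available when it is emitted:

# A toy simulation of the wait-k read/write schedule (illustrative only).
def wait_k_schedule(source, target, k):
    """Pair each target word with the source prefix visible when it is emitted."""
    steps = []
    for i, tgt_word in enumerate(target):
        # The i-th target word (0-based) may be emitted once k + i source
        # words have been read, capped at the full source length.
        words_read = min(k + i, len(source))
        steps.append((source[:words_read], tgt_word))
    return steps

source = "他 还 说 现在 正在 为 这 一 会议 作 准备".split()
target = "he also said that preparations are being made for this meeting".split()
for prefix, word in wait_k_schedule(source, target, k=3):
    print(f"read {len(prefix):2d} source words -> write: {word}")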

Demo Animation

Hook up an ASR (Automatic Speech Recognition) API and you immediately get speech simultaneous translation.

The demo's implementation: STACL Demo


Figure 3.1: Text simultaneous translation demo

Figure 3.2: Speech simultaneous translation demo

Looks powerful, doesn't it?! Yet machine simultaneous translation is not hard to build: students in our beginner course have already put it together with a few simple steps. If you're interested, come give it a try!
Machine simultaneous translation demo tutorial: https://github.com/PaddlePaddle/PaddleNLP/blob/develop/education/day09.md

Environment Setup

  • PaddlePaddle framework: the latest version, 2.1, comes pre-installed on the AI Studio platform.

  • PaddleNLP: deeply compatible with framework 2.1, it is the best practice for NLP on PaddlePaddle 2.1.

Remember to give PaddleNLP a little Star ⭐

Open source is not easy; we appreciate your support~

GitHub: https://github.com/PaddlePaddle/PaddleNLP
PaddleNLP docs: https://paddlenlp.readthedocs.io
Full version of this project: https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/simultaneous_translation/stacl

%cd stacl/
/home/aistudio/stacl

The AI Studio platform comes with Paddle and PaddleNLP pre-installed and updates them periodically. To update Paddle manually, see the PaddlePaddle installation guide and install the latest release for your environment.

Run the following commands to make sure the latest PaddleNLP is installed:

# Install dependencies
!pip install --upgrade paddlenlp -i https://pypi.org/simple
!pip install -r requirements.txt
Requirement already up-to-date: paddlenlp in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (2.0.8)
Requirement already satisfied, skipping upgrade: multiprocess in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.70.11.1)
Requirement already satisfied, skipping upgrade: h5py in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (2.9.0)
Requirement already satisfied, skipping upgrade: colorama in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.4.4)
Requirement already satisfied, skipping upgrade: seqeval in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (1.2.2)
Requirement already satisfied, skipping upgrade: jieba in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.42.1)
Requirement already satisfied, skipping upgrade: colorlog in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (4.1.0)
Requirement already satisfied, skipping upgrade: dill>=0.3.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from multiprocess->paddlenlp) (0.3.3)
Requirement already satisfied, skipping upgrade: six in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp) (1.15.0)
Requirement already satisfied, skipping upgrade: numpy>=1.7 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp) (1.20.3)
Requirement already satisfied, skipping upgrade: scikit-learn>=0.21.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from seqeval->paddlenlp) (0.24.2)
Requirement already satisfied, skipping upgrade: scipy>=0.19.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (1.6.3)
Requirement already satisfied, skipping upgrade: joblib>=0.11 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (0.14.1)
Requirement already satisfied, skipping upgrade: threadpoolctl>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (2.1.0)
Looking in indexes: https://mirror.baidu.com/pypi/simple/
Requirement already satisfied: attrdict==2.0.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 1)) (2.0.1)
Requirement already satisfied: PyYAML==5.4.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 2)) (5.4.1)
Requirement already satisfied: subword_nmt==0.3.7 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from -r requirements.txt (line 3)) (0.3.7)
Requirement already satisfied: six in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from attrdict==2.0.1->-r requirements.txt (line 1)) (1.15.0)

Pipeline


Figure 4: Pipeline
import os
import time
import yaml
import logging
import argparse
import numpy as np
from pprint import pprint
from attrdict import AttrDict

from functools import partial
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import paddle.distributed as dist
from paddle.io import DataLoader, BatchSampler
from paddlenlp.data import Vocab, Pad
from paddlenlp.datasets import load_dataset
from paddlenlp.transformers import WordEmbedding, PositionalEmbedding, position_encoding_init
from paddlenlp.utils.log import logger

from utils import CrossEntropyCriterion, post_process_seq, Decoder 

1. Data Preprocessing

The training data shown in this project is a demo subset of the NIST Chinese-English corpus (1,000 Chinese-English sentence pairs). A pretrained model trained on the full NIST Chinese-English data is also provided for download.
Chinese text needs Jieba segmentation plus BPE; English needs BPE only.

BPE (Byte Pair Encoding)

Advantages of BPE:

  • a smaller vocabulary;
  • some relief from the OOV (out-of-vocabulary) problem

Figure 5: Learn BPE

Figure 6: Apply BPE

Figure 7: Jieba + BPE

Data format:

兵营 是 双@@ 枪 老@@ 大@@ 爷 的 前提 建筑 之一 。 it serves as a prerequisite for Re@@ apers to be built at the Bar@@ rac@@ ks .

The preprocessing is almost identical to that of text translation; see the Transformer machine translation example.
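
For reference, here is a minimal sketch of the Jieba + BPE step on the Chinese side, using the jieba and subword_nmt packages installed above (the codes file path bpe.codes is a placeholder assumption; use your own learned codes):

import codecs
import jieba
from subword_nmt.apply_bpe import BPE

# Load learned BPE merge operations (placeholder path).
bpe = BPE(codecs.open('bpe.codes', encoding='utf-8'))

sentence = '兵营是双枪老大爷的前提建筑之一。'
# Step 1: Chinese word segmentation with Jieba.
segmented = ' '.join(jieba.cut(sentence))
# Step 2: subword segmentation with BPE ('@@ ' marks word-internal boundaries).
print(bpe.process_line(segmented))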

!bash get_data_and_model.sh
Download model.
--2021-08-31 11:08:37--  https://paddlenlp.bj.bcebos.com/models/stacl/nist_zhen_full_w5.tar.gz
Resolving paddlenlp.bj.bcebos.com (paddlenlp.bj.bcebos.com)... 182.61.200.229, 182.61.200.195, 2409:8c04:1001:1002:0:ff:b001:368a, ...
Connecting to paddlenlp.bj.bcebos.com (paddlenlp.bj.bcebos.com)|182.61.200.229|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 849689440 (810M) [application/x-gzip]
Saving to: ‘trained_models/nist_zhen_full_w5.tar.gz’

nist_zhen_full_w5.t 100%[===================>] 810.33M  54.8MB/s    in 14s     

2021-08-31 11:08:50 (59.5 MB/s) - ‘trained_models/nist_zhen_full_w5.tar.gz’ saved [849689440/849689440]

Decompress model.
Over.

2. Building the DataLoader

The create_data_loader function below builds the DataLoader objects for the training and validation sets, and create_infer_loader builds the DataLoader for the test set. A DataLoader yields the data batch by batch. A brief note on the built-in paddlenlp utilities these functions call:

  • paddlenlp.data.Vocab.load_vocabulary: Vocab is the vocabulary class; it bundles the methods that map between text tokens and ids and supports building a vocabulary from a file, a dict, json, and other sources
  • paddlenlp.datasets.load_dataset: to create a dataset from local files, the recommended approach is to write a read function matching the local data format and pass it to load_dataset()
  • paddlenlp.data.Pad: the padding operation

See the PaddleNLP documentation for details.
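
As a quick illustration of paddlenlp.data.Pad (the toy id sequences below are made up):

from paddlenlp.data import Pad

# Pad variable-length id sequences in a batch to a common length with pad id 0.
word_pad = Pad(pad_val=0)
print(word_pad([[2, 5, 7], [9, 4], [3]]))
# [[2 5 7]
#  [9 4 0]
#  [3 0 0]]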


Figure 8: Workflow for building the DataLoader

Figure 9: DataLoader details
# Custom function for reading local data
def read(src_tgt_file, only_src=False):
    with open(src_tgt_file, 'r', encoding='utf8') as src_tgt_f:
        for line in src_tgt_f:
            line = line.strip('\n')
            if not line:
                continue
            line_split = line.split('\t')
            # The test set has no target side
            if only_src:
                yield {"src": line_split[0]}
            else:
                if len(line_split) != 2:
                    continue
                yield {"src": line_split[0], "trg": line_split[1]}

# Filter out samples whose length falls outside [min_len, max_len]
def min_max_filer(data, max_len, min_len=0):
    # Take the min and max of the src/tgt lengths, then drop samples outside the allowed range
    data_min_len = min(len(data[0]), len(data[1]))
    data_max_len = max(len(data[0]), len(data[1]))
    return (data_min_len >= min_len) and (data_max_len <= max_len)
# Create dataloaders for the training and validation sets
def create_data_loader(args, places=None):
    data_files = {'train': args.training_file, 'dev': args.validation_file}

    # Build datasets from local files via paddlenlp.datasets.load_dataset: supply a read function that matches the local data format
    datasets = [
        load_dataset(
            read, src_tgt_file=filename, lazy=False)
        for split, filename in data_files.items()
    ]

    # Load the vocabularies from local files via paddlenlp.data.Vocab.load_vocabulary
    src_vocab = Vocab.load_vocabulary(
        args.src_vocab_fpath,
        bos_token=args.special_token[0],
        eos_token=args.special_token[1],
        unk_token=args.special_token[2])
    trg_vocab = Vocab.load_vocabulary(
        args.trg_vocab_fpath,
        bos_token=args.special_token[0],
        eos_token=args.special_token[1],
        unk_token=args.special_token[2])

    args.src_vocab_size = len(src_vocab)
    args.trg_vocab_size = len(trg_vocab)

    def convert_samples(sample):
        source = sample['src'].split()
        target = sample['trg'].split()

        # Convert tokens to vocabulary ids
        source = src_vocab.to_indices(source) + [args.eos_idx]
        target = [args.bos_idx] + \
                 trg_vocab.to_indices(target) + [args.eos_idx]

        return source, target

    # Dataloaders for the training and validation sets
    data_loaders = []
    for i, dataset in enumerate(datasets):
        # Map tokens to ids with Dataset.map; drop over-length samples with Dataset.filter
        dataset = dataset.map(convert_samples, lazy=False).filter(
            partial(min_max_filer, max_len=args.max_length))

        # BatchSampler groups samples into batches: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/BatchSampler_cn.html
        batch_sampler = BatchSampler(dataset, batch_size=args.batch_size, shuffle=True, drop_last=False)

        # DataLoader yields batches for training/validation/testing: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/DataLoader_cn.html
        data_loader = DataLoader(
            dataset=dataset,
            places=places,
            batch_sampler=batch_sampler,
            collate_fn=partial(
                prepare_train_input,
                pad_idx=args.bos_idx),
            num_workers=0)

        data_loaders.append(data_loader)

    return data_loaders

def prepare_train_input(insts, pad_idx):
    # Pad with paddlenlp.data.Pad to align sample lengths within a batch
    word_pad = Pad(pad_idx)
    src_word = word_pad([inst[0] for inst in insts])
    trg_word = word_pad([inst[1][:-1] for inst in insts])
    lbl_word = word_pad([inst[1][1:] for inst in insts])
    data_inputs = [src_word, trg_word, lbl_word]

    return data_inputs

# Create the dataloader for the test set (same procedure as above)
def create_infer_loader(args, places=None):
    data_files = {'test': args.predict_file, }
    dataset = load_dataset(
        read, src_tgt_file=data_files['test'], only_src=True, lazy=False)

    src_vocab = Vocab.load_vocabulary(
        args.src_vocab_fpath,
        bos_token=args.special_token[0],
        eos_token=args.special_token[1],
        unk_token=args.special_token[2])

    trg_vocab = Vocab.load_vocabulary(
        args.trg_vocab_fpath,
        bos_token=args.special_token[0],
        eos_token=args.special_token[1],
        unk_token=args.special_token[2])

    args.src_vocab_size = len(src_vocab)
    args.trg_vocab_size = len(trg_vocab)

    def convert_samples(sample):
        source = sample['src'].split()
        source = src_vocab.to_indices(source) + [args.eos_idx]
        target = [args.bos_idx]
        return source, target
        
    dataset = dataset.map(convert_samples, lazy=False)

    # BatchSampler: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/BatchSampler_cn.html
    batch_sampler = BatchSampler(dataset, batch_size=args.batch_size, drop_last=False)

    # DataLoader: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/io/DataLoader_cn.html
    data_loader = DataLoader(
        dataset=dataset,
        places=places,
        batch_sampler=batch_sampler,
        collate_fn=partial(
            prepare_infer_input,
            pad_idx=args.bos_idx),
        num_workers=0,
        return_list=True)

    return data_loader, trg_vocab.to_tokens

def prepare_infer_input(insts, pad_idx):
    """
    Put all padded data needed by beam search decoder into a list.
    """
    word_pad = Pad(pad_idx)
    src_word = word_pad(inst[0] for inst in insts)

    return [src_word, ]

3. Building the Model

The model is assembled from the relevant PaddlePaddle APIs, including:


Figure 10: Model architecture

Encoder

Identical to the encoder of the vanilla Transformer.

Decoder

Based on paddle.nn.TransformerDecoderLayer, with the wait-k policy added.

# Decoder layer; comments mark where it differs from nn.TransformerDecoderLayer
class DecoderLayer(nn.TransformerDecoderLayer):
    def __init__(self, *args, **kwargs):
        super(DecoderLayer, self).__init__(*args, **kwargs)

    def forward(self, tgt, memory, tgt_mask=None, memory_mask=None, cache=None):
        residual = tgt
        # LayerNorm
        if self.normalize_before:
            tgt = self.norm1(tgt)
        # Self-attention
        if cache is None:
            tgt = self.self_attn(tgt, tgt, tgt, tgt_mask, None)
        else:
            tgt, incremental_cache = self.self_attn(tgt, tgt, tgt, tgt_mask,
                                                    cache[0])
                                                
        # Residual connection
        tgt = residual + self.dropout1(tgt)
        if not self.normalize_before:
            tgt = self.norm1(tgt)

        residual = tgt
        # LayerNorm
        if self.normalize_before:
            tgt = self.norm2(tgt)

        # The wait-k additions start here
        # memory is the encoder output (a list of prefix encodings)
        if len(memory) == 1:
            # Full-sentence model
            tgt = self.cross_attn(tgt, memory[0], memory[0], memory_mask, None)        
        else:
            # Wait-k policy
            cross_attn_outputs = []
            for i in range(tgt.shape[1]):
                # Take position i of the target
                q = tgt[:, i:i + 1, :]
                if i >= len(memory):
                    e = memory[-1]
                else:
                    e = memory[i]
                # Cross-attention against the i-th source prefix
                cross_attn_outputs.append(
                    self.cross_attn(q, e, e, memory_mask[:, :, i:i + 1, :
                                                         e.shape[1]], None))
            # Concatenate the per-position cross-attention outputs
            tgt = paddle.concat(cross_attn_outputs, axis=1)

        # Residual connection
        tgt = residual + self.dropout2(tgt)
        if not self.normalize_before:
            tgt = self.norm2(tgt)

        residual = tgt
        # LayerNorm
        if self.normalize_before:
            tgt = self.norm3(tgt)
        tgt = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
        # Residual connection
        tgt = residual + self.dropout3(tgt)
        if not self.normalize_before:
            tgt = self.norm3(tgt)
        return tgt if cache is None else (tgt, (incremental_cache, ))

Main Model Structure

Essentially the same as the vanilla Transformer; for details see paddlenlp.transformers.TransformerModel

SimultaneousTransformer: Encoder + Decoder (wait-k policy)



Figure 11: Example


# SimultaneousTransformer; comments mark where it differs from the vanilla Transformer
class SimultaneousTransformer(nn.Layer):
    """
    model
    """

    def __init__(self,
                 src_vocab_size,
                 trg_vocab_size,
                 max_length,
                 n_layer,
                 n_head,
                 d_model,
                 d_inner_hid,
                 dropout,
                 weight_sharing,
                 bos_id=0,
                 eos_id=1,
                 waitk=-1):
        super(SimultaneousTransformer, self).__init__()
        self.trg_vocab_size = trg_vocab_size
        self.emb_dim = d_model
        self.bos_id = bos_id
        self.eos_id = eos_id
        self.dropout = dropout
        self.waitk = waitk
        self.n_layer = n_layer
        self.n_head = n_head
        self.d_model = d_model
        
        # Source-side WordEmbedding
        self.src_word_embedding = WordEmbedding(
            vocab_size=src_vocab_size, emb_dim=d_model, bos_id=self.bos_id)

        # Source-side PositionalEmbedding
        self.src_pos_embedding = PositionalEmbedding(
            emb_dim=d_model, max_length=max_length)
        
        # Whether the target shares embeddings with the source
        if weight_sharing:
            assert src_vocab_size == trg_vocab_size, (
                "Vocabularies in source and target should be same for weight sharing."
            )
            self.trg_word_embedding = self.src_word_embedding
            self.trg_pos_embedding = self.src_pos_embedding
        else:
            self.trg_word_embedding = WordEmbedding(
                vocab_size=trg_vocab_size, emb_dim=d_model, bos_id=self.bos_id)
            self.trg_pos_embedding = PositionalEmbedding(
                emb_dim=d_model, max_length=max_length)

        # Encoder layer
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model,
            nhead=n_head,
            dim_feedforward=d_inner_hid,
            dropout=dropout,
            activation='relu',
            normalize_before=True,
            bias_attr=[False, True])
        encoder_norm = nn.LayerNorm(d_model)
        # Encoder
        self.encoder = nn.TransformerEncoder(
            encoder_layer=encoder_layer, num_layers=n_layer, norm=encoder_norm)

        # Decoder layer (with wait-k support)
        decoder_layer = DecoderLayer(
            d_model=d_model,
            nhead=n_head,
            dim_feedforward=d_inner_hid,
            dropout=dropout,
            activation='relu',
            normalize_before=True,
            bias_attr=[False, False, True])
        decoder_norm = nn.LayerNorm(d_model)
        # Decoder
        self.decoder = Decoder(
            decoder_layer=decoder_layer, num_layers=n_layer, norm=decoder_norm)

        if weight_sharing:
            self.linear = lambda x: paddle.matmul(
                x=x, y=self.trg_word_embedding.word_embedding.weight, transpose_y=True)
        else:
            self.linear = nn.Linear(
                in_features=d_model,
                out_features=trg_vocab_size,
                bias_attr=False)

    def forward(self, src_word, trg_word):
        src_max_len = paddle.shape(src_word)[-1]
        trg_max_len = paddle.shape(trg_word)[-1]
        base_attn_bias = paddle.cast(
            src_word == self.bos_id,
            dtype=paddle.get_default_dtype()).unsqueeze([1, 2]) * -1e9
        # Source-side attention mask
        src_slf_attn_bias = base_attn_bias
        src_slf_attn_bias.stop_gradient = True
        # Target-side (causal) attention mask
        trg_slf_attn_bias = paddle.tensor.triu(
            (paddle.ones(
                (trg_max_len, trg_max_len),
                dtype=paddle.get_default_dtype()) * -np.inf),
            1)
        trg_slf_attn_bias.stop_gradient = True
        # Encoder-decoder attention mask
        trg_src_attn_bias = paddle.tile(base_attn_bias, [1, 1, trg_max_len, 1])
        src_pos = paddle.cast(
            src_word != self.bos_id, dtype="int64") * paddle.arange(
                start=0, end=src_max_len)
        trg_pos = paddle.cast(
            trg_word != self.bos_id, dtype="int64") * paddle.arange(
                start=0, end=trg_max_len)
        # Source word embedding
        src_emb = self.src_word_embedding(src_word)
        # Source position embedding
        src_pos_emb = self.src_pos_embedding(src_pos)
        # Final embedding = word embedding + position embedding
        src_emb = src_emb + src_pos_emb
        enc_input = F.dropout(
            src_emb, p=self.dropout,
            training=self.training) if self.dropout else src_emb
        with paddle.static.amp.fp16_guard():
            # The wait-k additions start here
            if self.waitk >= src_max_len or self.waitk == -1:
                # Full-sentence model, same as the stock API
                enc_outputs = [
                    self.encoder(
                        enc_input, src_mask=src_slf_attn_bias)
                ]
            else:
                # Wait-k policy
                enc_outputs = []
                for i in range(self.waitk, src_max_len + 1):
                    # Encode each source prefix separately
                    enc_output = self.encoder(
                        enc_input[:, :i, :],
                        src_mask=src_slf_attn_bias[:, :, :, :i])
                    enc_outputs.append(enc_output)
            # Target word embedding
            trg_emb = self.trg_word_embedding(trg_word)
            # Target position embedding
            trg_pos_emb = self.trg_pos_embedding(trg_pos)
            # Final embedding = word embedding + position embedding
            trg_emb = trg_emb + trg_pos_emb
            dec_input = F.dropout(
                trg_emb, p=self.dropout,
                training=self.training) if self.dropout else trg_emb
            # Run the decoder
            dec_output = self.decoder(
                dec_input,
                enc_outputs,
                tgt_mask=trg_slf_attn_bias,
                memory_mask=trg_src_attn_bias)
            # Project to the vocabulary with the output layer
            predict = self.linear(dec_output)

        return predict

    def greedy_search(self, src_word, max_len=256, waitk=-1):
        src_max_len = paddle.shape(src_word)[-1]
        base_attn_bias = paddle.cast(
            src_word == self.bos_id,
            dtype=paddle.get_default_dtype()).unsqueeze([1, 2]) * -1e9
        # Source-side attention mask
        src_slf_attn_bias = base_attn_bias
        src_slf_attn_bias.stop_gradient = True
        # Encoder-decoder attention mask
        trg_src_attn_bias = paddle.tile(base_attn_bias, [1, 1, 1, 1])
        src_pos = paddle.cast(
            src_word != self.bos_id, dtype="int64") * paddle.arange(
                start=0, end=src_max_len)
        # Source word embedding
        src_emb = self.src_word_embedding(src_word)
        # Source position embedding
        src_pos_emb = self.src_pos_embedding(src_pos)
        # Final embedding = word embedding + position embedding
        src_emb = src_emb + src_pos_emb
        enc_input = F.dropout(
            src_emb, p=self.dropout,
            training=self.training) if self.dropout else src_emb
        # The wait-k additions start here
        if waitk < 0 or waitk > src_max_len:
            # Full-sentence model
            enc_outputs = [self.encoder(enc_input, src_mask=src_slf_attn_bias)]
        else:
            # Wait-k policy
            enc_outputs = []
            # Encode each source prefix in turn
            for i in range(waitk, src_max_len + 1):
                enc_output = self.encoder(
                    enc_input[:, :i, :],
                    src_mask=src_slf_attn_bias[:, :, :, :i])
                enc_outputs.append(enc_output)

        batch_size = enc_outputs[-1].shape[0]
        max_len = (
            enc_outputs[-1].shape[1] + 20) if max_len is None else max_len
        end_token_tensor = paddle.full(
            shape=[batch_size, 1], fill_value=self.eos_id, dtype="int64")

        predict_ids = []
        # Initialize log probabilities
        log_probs = paddle.full(
            shape=[batch_size, 1], fill_value=0, dtype="float32")
        # Initialize trg_word with the <s> token
        trg_word = paddle.full(
            shape=[batch_size, 1], fill_value=self.bos_id, dtype="int64")

        # Initialize caches (StaticCache and IncrementalCache)
        caches = self.decoder.gen_cache(enc_outputs[-1], do_zip=False)

        for i in range(max_len):
            trg_pos = paddle.full(
                shape=trg_word.shape, fill_value=i, dtype="int64")
            # Target word embedding
            trg_emb = self.trg_word_embedding(trg_word)
            # Target position embedding
            trg_pos_emb = self.trg_pos_embedding(trg_pos)
            # Final embedding = word embedding + position embedding
            trg_emb = trg_emb + trg_pos_emb
            dec_input = F.dropout(
                trg_emb, p=self.dropout,
                training=self.training) if self.dropout else trg_emb

            if waitk < 0 or i >= len(enc_outputs):
                # Full-sentence model
                _e = enc_outputs[-1]
                dec_output, caches = self.decoder(
                    dec_input, [_e], None,
                    trg_src_attn_bias[:, :, :, :_e.shape[1]], caches)
            else:
                _e = enc_outputs[i]
                # Decode against the encoder output of the current prefix
                dec_output, caches = self.decoder(
                    dec_input, [_e], None,
                    trg_src_attn_bias[:, :, :, :_e.shape[1]], caches)

            dec_output = paddle.reshape(
                dec_output, shape=[-1, dec_output.shape[-1]])

            logits = self.linear(dec_output)
            # Current-step log probs, accumulated onto the running score
            step_log_probs = paddle.log(F.softmax(logits, axis=-1))
            log_probs = paddle.add(x=step_log_probs, y=log_probs)
            scores = log_probs
            # Pick the highest-scoring token and its index
            topk_scores, topk_indices = paddle.topk(x=scores, k=1)
            
            # Check for the end token
            finished = paddle.equal(topk_indices, end_token_tensor)
            # Update trg_word
            trg_word = topk_indices
            # Update log_probs
            log_probs = topk_scores
            
            # Append the result to predict_ids
            predict_ids.append(topk_indices)

            if paddle.all(finished).numpy():
                break
        
        # Stack predict_ids into a Tensor
        predict_ids = paddle.stack(predict_ids, axis=0)
        finished_seq = paddle.transpose(predict_ids, [1, 2, 0])
        finished_scores = topk_scores

        return finished_seq, finished_scores
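
As a quick smoke test, the class above can be instantiated with tiny, arbitrary hyperparameters (chosen here only for speed; these are not the paper's settings) and run on random ids:

import paddle

# Tiny configuration for a quick forward-pass check (values are arbitrary).
model = SimultaneousTransformer(
    src_vocab_size=100, trg_vocab_size=100, max_length=32,
    n_layer=2, n_head=2, d_model=64, d_inner_hid=128, dropout=0.1,
    weight_sharing=False, bos_id=0, eos_id=1, waitk=3)

src = paddle.randint(low=2, high=100, shape=[4, 8])  # 4 source sentences, 8 tokens each
trg = paddle.randint(low=2, high=100, shape=[4, 6])  # teacher-forced target prefixes
logits = model(src_word=src, trg_word=trg)
print(logits.shape)  # [4, 6, 100]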

4. Training the Model

Run the do_train function. It configures the optimizer, the loss function, and the evaluation metric (perplexity, a standard measure of language-model quality that is also used for machine translation, text generation, and similar tasks).
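
Perplexity is simply the exponential of the average per-token cross-entropy, which is exactly how the training loop below computes it. A one-line illustration, using the loss value from the first training log line:

import numpy as np

avg_loss = 9.240771                 # per-token cross-entropy from the first log line below
ppl = np.exp(min(avg_loss, 100.0))  # clipped at 100, as in the training loop, to avoid overflow
print(f"ppl: {ppl:.2f}")            # ppl: 10308.99, matching the log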


Figure 12: Training the model
# Load the configuration
yaml_file = 'transformer.yaml'
with open(yaml_file, 'rt') as f:
    args = AttrDict(yaml.safe_load(f))
    pprint(args)
{'batch_size': 10,
 'beam_size': 1,
 'beta1': 0.9,
 'beta2': 0.997,
 'bos_idx': 0,
 'd_inner_hid': 2048,
 'd_model': 512,
 'device': 'gpu',
 'dropout': 0.1,
 'eos_idx': 1,
 'epoch': 1,
 'eps': '1e-9',
 'init_from_params': 'trained_models/nist_zhen_full_w5/',
 'label_smooth_eps': 0.1,
 'learning_rate': 2.0,
 'max_length': 256,
 'max_out_len': 256,
 'n_best': 1,
 'n_head': 8,
 'n_layer': 6,
 'output_file': 'train_dev_test/predict.txt',
 'predict_file': 'train_dev_test/test_08.zh.bpe',
 'print_step': 10,
 'random_seed': 42,
 'save_model': 'trained_models',
 'save_step': 20,
 'shuffle': True,
 'shuffle_batch': True,
 'special_token': ['<s>', '<e>', '<unk>'],
 'src_vocab_fpath': 'train_dev_test/nist.20k.zh.vocab',
 'src_vocab_size': 10000,
 'training_file': 'train_dev_test/demo.train.zhen.bpe',
 'trg_vocab_fpath': 'train_dev_test/nist.10k.en.vocab',
 'trg_vocab_size': 10000,
 'unk_idx': 2,
 'use_amp': False,
 'validation_file': 'train_dev_test/demo.dev.zhen.bpe',
 'waitk': 5,
 'warmup_steps': 8000,
 'weight_sharing': False}
def do_train(args):
    # Run on GPU/CPU/XPU
    paddle.set_device(args.device)

    # Set the random seed
    random_seed = eval(str(args.random_seed))
    if random_seed is not None:
        paddle.seed(random_seed)

    # Build the dataloaders
    (train_loader), (eval_loader) = create_data_loader(
        args, places=paddle.get_device())

    # Instantiate the model
    transformer = SimultaneousTransformer(
        args.src_vocab_size, args.trg_vocab_size, args.max_length + 1,
        args.n_layer, args.n_head, args.d_model, args.d_inner_hid, args.dropout,
        args.weight_sharing, args.bos_idx, args.eos_idx, args.waitk)
    

    print('waitk=', args.waitk)

    # Define the loss
    criterion = CrossEntropyCriterion(args.label_smooth_eps, args.bos_idx)

    # Learning-rate schedule
    scheduler = paddle.optimizer.lr.NoamDecay(args.d_model, args.warmup_steps,
                                              args.learning_rate)
    # Optimizer
    optimizer = paddle.optimizer.Adam(
        learning_rate=scheduler,
        beta1=args.beta1,
        beta2=args.beta2,
        epsilon=float(args.eps),
        parameters=transformer.parameters())

    step_idx = 0

    # Train epoch by epoch
    for pass_id in range(args.epoch):
        batch_id = 0
        for input_data in train_loader:
            # One batch from the training dataloader
            (src_word, trg_word, lbl_word) = input_data
      
            # Forward pass: model logits
            logits = transformer(src_word=src_word, trg_word=trg_word)

            # Compute the loss
            sum_cost, avg_cost, token_num = criterion(logits, lbl_word)

            # Backward pass
            avg_cost.backward() 
            # Update parameters
            optimizer.step() 
            # Clear gradients
            optimizer.clear_grad() 

            if (step_idx + 1) % args.print_step == 0 or step_idx == 0:
                total_avg_cost = avg_cost.numpy()
                # Log
                logger.info(
                    "step_idx: %d, epoch: %d, batch: %d, avg loss: %f, "
                    " ppl: %f " %
                    (step_idx, pass_id, batch_id, total_avg_cost,
                        np.exp([min(total_avg_cost, 100)])))

            if (step_idx + 1) % args.save_step == 0:
                # Validation
                transformer.eval()
                total_sum_cost = 0
                total_token_num = 0
                with paddle.no_grad():
                    for input_data in eval_loader:
                        # One batch from the validation dataloader
                        (src_word, trg_word, lbl_word) = input_data
                        # Forward pass: model logits
                        logits = transformer(
                            src_word=src_word, trg_word=trg_word)
                        # Compute the loss
                        sum_cost, avg_cost, token_num = criterion(logits,
                                                                  lbl_word)
                        total_sum_cost += sum_cost.numpy()
                        total_token_num += token_num.numpy()
                        total_avg_cost = total_sum_cost / total_token_num
                    # Log
                    logger.info("validation, step_idx: %d, avg loss: %f, "
                                " ppl: %f" %
                                (step_idx, total_avg_cost,
                                 np.exp([min(total_avg_cost, 100)])))
                transformer.train()

                if args.save_model:
                    # Save an intermediate checkpoint
                    model_dir = os.path.join(args.save_model,
                                             "step_" + str(step_idx))
                    if not os.path.exists(model_dir):
                        os.makedirs(model_dir)
                    # Save the model parameters
                    paddle.save(transformer.state_dict(),
                                os.path.join(model_dir, "transformer.pdparams"))
                    # Save the optimizer state
                    paddle.save(optimizer.state_dict(),
                                os.path.join(model_dir, "transformer.pdopt"))

            batch_id += 1
            step_idx += 1
            scheduler.step()

    if args.save_model:
        # Save the final model
        model_dir = os.path.join(args.save_model, "step_final")
        if not os.path.exists(model_dir):
            os.makedirs(model_dir)
        # Save the model parameters
        paddle.save(transformer.state_dict(),
                    os.path.join(model_dir, "transformer.pdparams"))
        # Save the optimizer state
        paddle.save(optimizer.state_dict(),
                    os.path.join(model_dir, "transformer.pdopt"))
do_train(args)
waitk= 5


[2021-08-31 11:09:20,496] [    INFO] - step_idx: 0, epoch: 0, batch: 0, avg loss: 9.240771,  ppl: 10308.986328 
[2021-08-31 11:09:49,161] [    INFO] - step_idx: 9, epoch: 0, batch: 9, avg loss: 9.249139,  ppl: 10395.609375 
[2021-08-31 11:10:10,887] [    INFO] - step_idx: 19, epoch: 0, batch: 19, avg loss: 9.179500,  ppl: 9696.299805 
[2021-08-31 11:10:25,589] [    INFO] - validation, step_idx: 19, avg loss: 9.172323,  ppl: 9626.964844

5. Prediction and Evaluation

A trained model is usually evaluated on a test set. As with text translation, simultaneous translation is typically scored with BLEU.


Figure 13: Prediction and evaluation

def do_predict(args):
    
    paddle.set_device(args.device)

    # Build the dataloader
    test_loader, to_tokens = create_infer_loader(args)

    # Instantiate the model
    transformer = SimultaneousTransformer(
        args.src_vocab_size, args.trg_vocab_size, args.max_length + 1,
        args.n_layer, args.n_head, args.d_model, args.d_inner_hid, args.dropout,
        args.weight_sharing, args.bos_idx, args.eos_idx, args.waitk)

    # Load the pretrained model
    assert args.init_from_params, (
        "Please set init_from_params to load the infer model.")

    model_dict = paddle.load(
        os.path.join(args.init_from_params, "transformer.pdparams"))

    # Re-initialize the positional encodings so inputs may exceed the length set at training time
    model_dict["src_pos_embedding.pos_encoder.weight"] = position_encoding_init(
        args.max_length + 1, args.d_model)
    model_dict["trg_pos_embedding.pos_encoder.weight"] = position_encoding_init(
        args.max_length + 1, args.d_model)

    transformer.load_dict(model_dict)

    # Switch to evaluation mode
    transformer.eval()

    f = open(args.output_file, "w", encoding='utf8')

    with paddle.no_grad():
        for input_data in test_loader:
            
            (src_word, ) = input_data

            finished_seq, finished_scores = transformer.greedy_search(
                src_word, max_len=args.max_out_len, waitk=args.waitk)
            finished_seq = finished_seq.numpy()
            finished_scores = finished_scores.numpy()
            for idx, ins in enumerate(finished_seq):
                for beam_idx, beam in enumerate(ins):
                    if beam_idx >= args.n_best:
                        break
                    id_list = post_process_seq(beam, args.bos_idx, args.eos_idx)
                    word_list = to_tokens(id_list)
                    sequence = ' '.join(word_list) + "\n"
                    f.write(sequence)
    f.close()
do_predict(args)

Model Evaluation

Each line of the prediction file is the highest-scoring translation of the corresponding input line. For data preprocessed with BPE, the predictions are also in BPE form; they must be restored to the original (tokenized) form before they can be evaluated correctly.
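
The sed command below strips the '@@' continuation markers; an equivalent Python snippet, for reference (paths match the command below):

import re

# Remove BPE continuation markers ('@@ ' mid-line, '@@' at line end),
# mirroring the sed command below.
with open('train_dev_test/predict.txt', encoding='utf8') as fin, \
     open('train_dev_test/predict.tok.txt', 'w', encoding='utf8') as fout:
    for line in fin:
        fout.write(re.sub(r'(@@ )|(@@ ?$)', '', line))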

# Restore the predictions in predict.txt to tokenized form
! sed -r 's/(@@ )|(@@ ?$)//g' train_dev_test/predict.txt > train_dev_test/predict.tok.txt
# The BLEU evaluation tool comes from https://github.com/moses-smt/mosesdecoder.git
! tar -zxf mosesdecoder.tar.gz
# Compute multi-bleu
! perl mosesdecoder/scripts/generic/multi-bleu.perl train_dev_test/test_08.en.* < train_dev_test/predict.tok.txt
BLEU = 36.29, 72.0/45.2/28.8/18.5 (BP=1.000, ratio=1.025, hyp_len=23631, ref_len=23049)
It is not advisable to publish scores from multi-bleu.perl.  The scores depend on your tokenizer, which is unlikely to be reproducible from your paper or consistent across research groups.  Instead you should detokenize then use mteval-v14.pl, which has a standard tokenization.  Scores from multi-bleu.perl can still be used for internal purposes when you have a consistent tokenizer.
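
Given that warning, a Python-side cross-check can be done with sacrebleu, an extra dependency not used by this project; the single-reference path test_08.en.0 below is an assumption, since the moses command above matches test_08.en.* against multiple references:

# pip install sacrebleu  -- optional cross-check, not part of the original project
import sacrebleu

hyps = [line.strip() for line in open('train_dev_test/predict.tok.txt', encoding='utf8')]
refs = [line.strip() for line in open('train_dev_test/test_08.en.0', encoding='utf8')]
# corpus_bleu expects a list of hypotheses and a list of reference streams.
print(sacrebleu.corpus_bleu(hyps, [refs]))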
