
The full source code of this project is open-sourced in PaddleNLP.

If it helps you, a star is appreciated so the repo is easy to find again: https://github.com/PaddlePaddle/PaddleNLP


Building a Semantic Retrieval System, Step by Step

1. Project Overview

Retrieval systems are part of many products we use every day, such as product search and academic literature search. This project provides a complete implementation of a retrieval system for the following scenario: a user enters a query and needs to quickly find similar documents in a massive collection.

Semantic retrieval (also called vector-based retrieval) means the retrieval system is no longer restricted to the literal wording of the user's query; instead, it captures the true intent behind the query and searches accordingly, returning results that better match what the user wants. By using a state-of-the-art semantic indexing model to obtain vector representations of texts, indexing them in a high-dimensional vector space, and measuring how similar the query vector is to the indexed documents, it overcomes the shortcomings of keyword-based indexing.

For example, for the two text pairs below, a keyword-based similarity measure would give both pairs the same score, while semantically the first pair is clearly more similar than the second (the toy computation after the examples illustrates this).

车头如何放置车牌    前牌照怎么装
车头如何放置车牌    后牌照怎么装
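
As a quick sanity check of that claim, here is a toy sketch (illustration only, not part of the project code) that scores both pairs with a character-level Jaccard overlap as a stand-in for keyword matching; both pairs receive the identical score:

# Toy illustration: character-level overlap cannot tell the two pairs apart,
# even though only the first title matches the intent of the query.
def char_jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

query = "车头如何放置车牌"
print(char_jaccard(query, "前牌照怎么装"))  # same score ...
print(char_jaccard(query, "后牌照怎么装"))  # ... as this one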

The key point of a semantic retrieval system is that recall is driven by semantics rather than keywords, so similar results can be recalled more accurately and with broader coverage.

Retrieval workloads usually involve huge amounts of data, so the pipeline is typically split into two stages: recall (indexing) and ranking. The recall stage filters relevant documents out of a candidate set of at least tens of millions of items, shrinking the candidate set dramatically so that the subsequent ranking stage can afford more complex models for fine-grained or personalized ranking. A multi-path recall strategy is commonly used (for example, combining keyword recall, trending recall, and semantic recall); the results of all paths are aggregated, scored uniformly, and the best Top-K results are returned, as in the sketch below.
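
The sketch below (hypothetical recall channels and scorer, not the project's actual pipeline) shows the shape of this two-stage recall-then-rank flow:

# Minimal sketch of multi-path recall plus unified scoring. In the real system,
# semantic recall is served by Milvus and scoring by the ERNIE-Gram ranker.
def merge_and_rank(query, recall_channels, score_fn, top_k=10):
    # Recall: union the candidates returned by every channel
    candidates = set()
    for recall in recall_channels:
        candidates.update(recall(query))
    # Ranking: score every candidate with one unified model and keep Top-K
    scored = [(doc, score_fn(query, doc)) for doc in candidates]
    scored.sort(key=lambda x: x[1], reverse=True)
    return scored[:top_k]

# Dummy channels and scorer, for illustration only
keyword_recall = lambda q: ["doc_a", "doc_b"]
semantic_recall = lambda q: ["doc_b", "doc_c"]
overlap_score = lambda q, d: len(set(q) & set(d))
print(merge_and_rank("doc", [keyword_recall, semantic_recall], overlap_score, top_k=2))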

This project is based on PaddleNLP Neural Search.

The following is the system flowchart of Neural Search. The left side is the recall stage, whose core is the semantic vector extraction module; the right side is the ranking stage, whose core is the ranking model. In the figure, red dashed boxes denote online computation and black dashed boxes denote offline batch processing. Below, we introduce the semantic vector extraction module of the recall stage and then the ranking model.

PaddleNLP Neural Search Highlights

  • Low barrier to entry

    • Step-by-step guide to building a retrieval system
    • A retrieval system can be built even without labeled data
    • One-stop capability covering training, prediction, and the ANN engine
  • Strong effectiveness

    • Dedicated solutions for multiple data scenarios
      • Only unsupervised data: SimCSE
      • Only supervised data: In-batch Negatives
      • Both unsupervised and supervised data: fused model
    • Further optimization: Domain-adaptive Pretraining
  • High performance

    • Fast vector extraction with Paddle Inference
    • Fast querying and high-performance index building with Milvus

2. Installation

Paddle and PaddleNLP are installed by default on the AI Studio platform and are updated regularly. To update manually, refer to the notes below (a quick version check is shown after the install cell):

  • paddlepaddle >= 2.2
    Installation guide

  • PaddleNLP >= 2.2
    Make sure the latest PaddleNLP is installed with:

!pip install --upgrade paddlenlp -i https://pypi.org/simple
  • python >= 3.6
# First, install the latest version of paddlenlp with the following command
!pip install --upgrade paddlenlp -i https://pypi.org/simple
Requirement already up-to-date: paddlenlp in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (2.2.2)
Requirement already satisfied, skipping upgrade: h5py in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (2.9.0)
Requirement already satisfied, skipping upgrade: colorlog in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (4.1.0)
Requirement already satisfied, skipping upgrade: seqeval in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (1.2.2)
Requirement already satisfied, skipping upgrade: colorama in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.4.4)
Requirement already satisfied, skipping upgrade: jieba in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.42.1)
Requirement already satisfied, skipping upgrade: multiprocess in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.70.11.1)
Requirement already satisfied, skipping upgrade: six in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp) (1.16.0)
Requirement already satisfied, skipping upgrade: numpy>=1.7 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp) (1.16.4)
Requirement already satisfied, skipping upgrade: scikit-learn>=0.21.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from seqeval->paddlenlp) (0.22.1)
Requirement already satisfied, skipping upgrade: dill>=0.3.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from multiprocess->paddlenlp) (0.3.3)
Requirement already satisfied, skipping upgrade: joblib>=0.11 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (0.14.1)
Requirement already satisfied, skipping upgrade: scipy>=0.17.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (1.3.0)
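
After installation, a quick check confirms the environment meets the version requirements above:

# Print the installed framework versions; they should satisfy
# paddlepaddle >= 2.2 and paddlenlp >= 2.2
import paddle
import paddlenlp

print("paddlepaddle:", paddle.__version__)
print("paddlenlp:", paddlenlp.__version__)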

Before starting the project, we first import the required packages.

import abc
import sys
from functools import partial
import argparse
import os
import random
import time

import numpy as np
from scipy import stats
import pandas as pd
from tqdm import tqdm 

import paddle
import paddle.nn as nn
import paddle.nn.functional as F

import paddlenlp as ppnlp
from paddlenlp.data import Stack, Tuple, Pad
from paddlenlp.datasets import load_dataset, MapDataset
from paddlenlp.transformers import LinearDecayWithWarmup
from visualdl import LogWriter

from paddle import inference
from scipy.special import softmax
from scipy.special import expit
from paddlenlp.utils.downloader import get_path_from_url

from data import convert_pairwise_example
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlenlp/transformers/funnel/modeling.py:32: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Iterable

3. Recall Model in Practice

Approach Overview

First, SimCSE is trained in an unsupervised way on the unlabeled business data and the resulting model is exported; then the In-batch Negatives strategy is applied on supervised data to obtain the final recall model. The recall model is used to extract vectors, which are inserted into the Milvus recall system to perform recall.

Unsupervised Semantic Indexing

Data Preparation

Based on an open-source literature retrieval dataset, we construct a training set, an evaluation set, and a recall corpus for semantic indexing.

The query, title, and keywords fields of the documents are used to build an unlabeled dataset: each line contains a single text, which is either a query or the concatenation of a title and its keywords (a rough construction sketch is given after the note below).

Sample data:

睡眠障碍与常见神经系统疾病的关系睡眠觉醒障碍,神经系统疾病,睡眠,快速眼运动,细胞增殖,阿尔茨海默病
城市道路交通流中观仿真研究
城市道路交通流中观仿真研究智能运输系统;城市交通管理;计算机仿真;城市道路;交通流;路径选择
网络健康可信性研究
网络健康可信性研究网络健康信息;可信性;评估模式
脑瘫患儿家庭复原力的影响因素及干预模式雏形 研究
脑瘫患儿家庭复原力的影响因素及干预模式雏形研究脑瘫患儿;家庭功能;干预模式
地西他滨与HA方案治疗骨髓增生异常综合征转化的急性髓系白血病患者近期疗效比较
地西他滨与HA方案治疗骨髓增生异常综合征转化的急性髓系白血病患者近期疗效比较
个案工作 社会化
个案社会工作介入社区矫正再社会化研究——以东莞市清溪镇为例社会工作者;社区矫正人员;再社会化;角色定位
圆周运动加速度角速度
圆周运动向心加速度物理意义的理论分析匀速圆周运动,向心加速度,物理意义,角速度,物理量,线速度,周期

Note: a small demo dataset is used here to demonstrate the training pipeline. At prediction time, we directly load a model trained on the full data.

The full dataset and all models are open-sourced; see PaddleNLP Neural Search.
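
A rough sketch of how such an unlabeled file could be built from raw records follows; the column names query, title, and keywords are assumptions based on the description above, not the project's actual preprocessing script:

# Hypothetical preprocessing sketch: write one text per line, either the raw
# query or the concatenation of title and keywords, matching the samples above.
import pandas as pd

def build_unsup_corpus(raw_file, out_file):
    df = pd.read_csv(raw_file, sep="\t")  # assumed columns: query, title, keywords
    with open(out_file, "w", encoding="utf-8") as f:
        for _, row in df.iterrows():
            f.write(str(row["query"]).strip() + "\n")
            f.write(str(row["title"]).strip() + str(row["keywords"]).strip() + "\n")

# build_unsup_corpus("raw_literature.tsv", "train_demo.csv")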

# Data reading logic
def read_simcse_text(data_path):
    """Reads data."""
    with open(data_path, 'r', encoding='utf-8') as f:
        for line in f:
            data = line.rstrip()
            yield {'text_a': data, 'text_b': data}

train_set_file='train_demo.csv'
train_ds = load_dataset(read_simcse_text, data_path=train_set_file, lazy=False)

for i  in range(3):
    print(train_ds[i])
{'text_a': '0', 'text_b': '0'}
{'text_a': '异质性机构投资者、公司治理与信息披露', 'text_b': '异质性机构投资者、公司治理与信息披露'}
{'text_a': '广东省新型冠状病毒肺炎中医药治疗方案', 'text_b': '广东省新型冠状病毒肺炎中医药治疗方案'}
# Convert plain text into ID sequences for training

def create_dataloader(dataset,
                      mode='train',
                      batch_size=1,
                      batchify_fn=None,
                      trans_fn=None):
    if trans_fn:
        dataset = dataset.map(trans_fn)

    shuffle = True if mode == 'train' else False
    if mode == 'train':
        batch_sampler = paddle.io.DistributedBatchSampler(
            dataset, batch_size=batch_size, shuffle=shuffle)
    else:
        batch_sampler = paddle.io.BatchSampler(
            dataset, batch_size=batch_size, shuffle=shuffle)

    return paddle.io.DataLoader(
        dataset=dataset,
        batch_sampler=batch_sampler,
        collate_fn=batchify_fn,
        return_list=True)

def convert_example(example, tokenizer, max_seq_length=512, do_evaluate=False):

    result = []

    for key, text in example.items():
        if 'label' in key:
            # do_evaluate
            result += [example['label']]
        else:
            # do_train
            encoded_inputs = tokenizer(text=text, max_seq_len=max_seq_length)
            input_ids = encoded_inputs["input_ids"]
            token_type_ids = encoded_inputs["token_type_ids"]
            result += [input_ids, token_type_ids]

    return result

max_seq_length=64
batch_size=32
tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie-1.0')

trans_func = partial(
        convert_example,
        tokenizer=tokenizer,
        max_seq_length=max_seq_length)

batchify_fn = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # query_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # query_segment
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # title_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # title_segment
    ): [data for data in fn(samples)]

train_data_loader = create_dataloader(
        train_ds,
        mode='train',
        batch_size=batch_size,
        batchify_fn=batchify_fn,
        trans_fn=trans_func)
[2021-12-29 17:13:37,006] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-1.0/vocab.txt

Let's take a look at the data produced by the dataloader.

for idx, batch in enumerate(train_data_loader):
    if idx == 0:
        print(batch)
        break
[Tensor(shape=[32, 64], dtype=int64, place=CPUPlace, stop_gradient=True,
       [[1    , 17963, 467  , ..., 80   , 516  , 2    ],
        [1    , 17963, 23   , ..., 0    , 0    , 0    ],
        [1    , 1718 , 968  , ..., 0    , 0    , 0    ],
        ...,
        [1    , 712  , 207  , ..., 0    , 0    , 0    ],
        [1    , 17963, 96   , ..., 0    , 0    , 0    ],
        [1    , 17963, 1441 , ..., 0    , 0    , 0    ]]), Tensor(shape=[32, 64], dtype=int64, place=CPUPlace, stop_gradient=True,
       [[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]]), Tensor(shape=[32, 64], dtype=int64, place=CPUPlace, stop_gradient=True,
       [[1    , 17963, 467  , ..., 80   , 516  , 2    ],
        [1    , 17963, 23   , ..., 0    , 0    , 0    ],
        [1    , 1718 , 968  , ..., 0    , 0    , 0    ],
        ...,
        [1    , 712  , 207  , ..., 0    , 0    , 0    ],
        [1    , 17963, 96   , ..., 0    , 0    , 0    ],
        [1    , 17963, 1441 , ..., 0    , 0    , 0    ]]), Tensor(shape=[32, 64], dtype=int64, place=CPUPlace, stop_gradient=True,
       [[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]])]
Model Construction

Next we build the SimCSE model. The main idea is to encode the query and the title separately into embedding vectors and then compute their cosine similarity.

class SimCSE(nn.Layer):
    def __init__(self,
                 pretrained_model,
                 dropout=None,
                 margin=0.0,
                 scale=20,
                 output_emb_size=None):

        super().__init__()

        self.ptm = pretrained_model
        self.dropout = nn.Dropout(dropout if dropout is not None else 0.1)

        # If output_emb_size is greater than 0, add a Linear layer to reduce the
        # embedding size. We recommend output_emb_size = 256 as a good trade-off
        # between recall performance and efficiency.
        self.output_emb_size = output_emb_size
        if output_emb_size is not None and output_emb_size > 0:
            weight_attr = paddle.ParamAttr(
                initializer=paddle.nn.initializer.TruncatedNormal(std=0.02))
            self.emb_reduce_linear = paddle.nn.Linear(
                768, output_emb_size, weight_attr=weight_attr)

        self.margin = margin
        # Scale cosine similarity to ease convergence
        self.scale = scale

    @paddle.jit.to_static(input_spec=[paddle.static.InputSpec(shape=[None, None], dtype='int64'),paddle.static.InputSpec(shape=[None, None], dtype='int64')])
    def get_pooled_embedding(self,
                             input_ids,
                             token_type_ids=None,
                             position_ids=None,
                             attention_mask=None,
                             with_pooler=True):

        # Note: cls_embedding is the pooled embedding with tanh activation
        sequence_output, cls_embedding = self.ptm(input_ids, token_type_ids,
                                                  position_ids, attention_mask)

        if not with_pooler:
            cls_embedding = sequence_output[:, 0, :]

        if self.output_emb_size is not None and self.output_emb_size > 0:
            cls_embedding = self.emb_reduce_linear(cls_embedding)

        cls_embedding = self.dropout(cls_embedding)
        cls_embedding = F.normalize(cls_embedding, p=2, axis=-1)

        return cls_embedding

    def get_semantic_embedding(self, data_loader):
        self.eval()
        with paddle.no_grad():
            for batch_data in data_loader:
                input_ids, token_type_ids = batch_data
                input_ids = paddle.to_tensor(input_ids)
                token_type_ids = paddle.to_tensor(token_type_ids)

                text_embeddings = self.get_pooled_embedding(
                    input_ids, token_type_ids=token_type_ids)

                yield text_embeddings

    def cosine_sim(self,
                   query_input_ids,
                   title_input_ids,
                   query_token_type_ids=None,
                   query_position_ids=None,
                   query_attention_mask=None,
                   title_token_type_ids=None,
                   title_position_ids=None,
                   title_attention_mask=None,
                   with_pooler=True):

        query_cls_embedding = self.get_pooled_embedding(
            query_input_ids,
            query_token_type_ids,
            query_position_ids,
            query_attention_mask,
            with_pooler=with_pooler)

        title_cls_embedding = self.get_pooled_embedding(
            title_input_ids,
            title_token_type_ids,
            title_position_ids,
            title_attention_mask,
            with_pooler=with_pooler)

        cosine_sim = paddle.sum(query_cls_embedding * title_cls_embedding,
                                axis=-1)
        return cosine_sim

    def forward(self,
                query_input_ids,
                title_input_ids,
                query_token_type_ids=None,
                query_position_ids=None,
                query_attention_mask=None,
                title_token_type_ids=None,
                title_position_ids=None,
                title_attention_mask=None):
        
        # First encoding: semantic vector of the query text,
        # shape [N, output_emb_size] ([N, 768] if no dimension reduction)
        query_cls_embedding = self.get_pooled_embedding(
            query_input_ids, query_token_type_ids, query_position_ids,
            query_attention_mask)

        # Second encoding: semantic vector of the title text,
        # shape [N, output_emb_size] ([N, 768] if no dimension reduction)
        title_cls_embedding = self.get_pooled_embedding(
            title_input_ids, title_token_type_ids, title_position_ids,
            title_attention_mask)

        # Similarity matrix: [N, N]
        cosine_sim = paddle.matmul(
            query_cls_embedding, title_cls_embedding, transpose_y=True)

        # Subtract the margin from all positive pairs' cosine similarities (the diagonal)
        margin_diag = paddle.full(
            shape=[query_cls_embedding.shape[0]],
            fill_value=self.margin,
            dtype=paddle.get_default_dtype())

        cosine_sim = cosine_sim - paddle.diag(margin_diag)

        # Scale cosine similarity to ease convergence
        cosine_sim *= self.scale

        # Cast as a classification task: diagonal elements are positives, all other elements are negatives
        labels = paddle.arange(0, query_cls_embedding.shape[0], dtype='int64')
        labels = paddle.reshape(labels, shape=[-1, 1])

        # Cross-entropy loss
        loss = F.cross_entropy(input=cosine_sim, label=labels)

        return loss
Training Configuration
# Key hyperparameters (a toy illustration of their effect follows this cell)
scale=20 # recommended range: 10 ~ 30
margin=0.1 # recommended range: 0.0 ~ 0.2

dropout=0.2
output_emb_size=256
epochs=1
weight_decay=0.0
learning_rate=5E-5
warmup_proportion=0.0
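
The toy cell below (illustration only, with made-up numbers) shows how the forward pass above uses these two values: the margin is subtracted from the diagonal (the positive pairs), the matrix is scaled, and the loss treats row i's own title as class i:

# Toy walk-through of the in-batch loss with margin and scale
import paddle
import paddle.nn.functional as F

cosine_sim = paddle.to_tensor([[0.9, 0.3, 0.2],
                               [0.4, 0.8, 0.1],
                               [0.2, 0.3, 0.7]], dtype='float32')
toy_margin, toy_scale = 0.1, 20
cosine_sim = cosine_sim - paddle.diag(paddle.full([3], toy_margin, dtype='float32'))
cosine_sim = cosine_sim * toy_scale
labels = paddle.arange(0, 3, dtype='int64').reshape([-1, 1])
print(F.cross_entropy(input=cosine_sim, label=labels))  # smaller when the diagonal dominates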
Load the Pretrained Model
  1. Warm-start from the pretrained ERNIE 1.0 model
  2. Define the AdamW optimizer
model_name_or_path='ernie-1.0'

pretrained_model = ppnlp.transformers.ErnieModel.from_pretrained(
       model_name_or_path,
       hidden_dropout_prob=dropout,
       attention_probs_dropout_prob=dropout)
print("loading model from {}".format(model_name_or_path))


model = SimCSE(
        pretrained_model,
        margin=margin,
        scale=scale,
        output_emb_size=output_emb_size)

num_training_steps = len(train_data_loader) * epochs

lr_scheduler = LinearDecayWithWarmup(learning_rate, num_training_steps,
                                         warmup_proportion)

# Generate parameter names needed to perform weight decay.
# All bias and LayerNorm parameters are excluded.
decay_params = [
        p.name for n, p in model.named_parameters()
        if not any(nd in n for nd in ["bias", "norm"])
    ]
optimizer = paddle.optimizer.AdamW(
        learning_rate=lr_scheduler,
        parameters=model.parameters(),
        weight_decay=weight_decay,
        apply_decay_param_fun=lambda x: x in decay_params)
    
[2021-12-29 17:13:37,125] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-1.0/ernie_v1_chn_base.pdparams
[2021-12-29 17:13:45,447] [    INFO] - Weights from pretrained model not used in ErnieModel: ['cls.predictions.layer_norm.weight', 'cls.predictions.decoder_bias', 'cls.predictions.transform.bias', 'cls.predictions.transform.weight', 'cls.predictions.layer_norm.bias']


loading model from ernie-1.0
Model Training

With the training configuration above in place, training can begin.

save_dir='checkpoint'
save_steps=100
time_start=time.time()
global_step = 0
tic_train = time.time()
for epoch in range(1, epochs + 1):
    for step, batch in enumerate(train_data_loader, start=1):
        query_input_ids, query_token_type_ids, title_input_ids, title_token_type_ids = batch

        loss = model(
                query_input_ids=query_input_ids,
                title_input_ids=title_input_ids,
                query_token_type_ids=query_token_type_ids,
                title_token_type_ids=title_token_type_ids)

        global_step += 1
        if global_step % 10 == 0:
            print("global step %d, epoch: %d, batch: %d, loss: %.5f, speed: %.2f step/s"
                    % (global_step, epoch, step, loss,
                       10 / (time.time() - tic_train)))
            tic_train = time.time()

        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.clear_grad()
        if global_step % save_steps == 0:
            save_path = os.path.join(save_dir, "model_%d" % (global_step))
            if not os.path.exists(save_path):
                os.makedirs(save_path)
            save_param_path = os.path.join(save_path, 'model_state.pdparams')
            paddle.save(model.state_dict(), save_param_path)
            tokenizer.save_pretrained(save_path)
time_end=time.time()
print('totally cost {} seconds'.format(time_end-time_start))
global step 10, epoch: 1, batch: 10, loss: 1.14782, speed: 0.04 step/s
global step 20, epoch: 1, batch: 20, loss: 0.89162, speed: 0.04 step/s
global step 30, epoch: 1, batch: 30, loss: 0.57250, speed: 0.04 step/s
totally cost 722.7785019874573 seconds
Model Prediction

First download the trained SimCSE model and unzip it.

# !wget https://bj.bcebos.com/v1/paddlenlp/models/simcse_model.zip
if(not os.path.exists('simcse_model.zip')):
    get_path_from_url('https://bj.bcebos.com/v1/paddlenlp/models/simcse_model.zip',root_dir='.')

!unzip -o simcse_model.zip -d pretrained/
[2021-12-29 17:25:48,606] [    INFO] - Downloading simcse_model.zip from https://bj.bcebos.com/v1/paddlenlp/models/simcse_model.zip
100%|██████████| 349M/349M [00:06<00:00, 53.4MB/s] 
[2021-12-29 17:25:55,630] [    INFO] - Decompressing ./simcse_model.zip...


Archive:  simcse_model.zip
   creating: pretrained/model_20000/
  inflating: pretrained/model_20000/model_state.pdparams  
  inflating: pretrained/model_20000/vocab.txt  
  inflating: pretrained/model_20000/tokenizer_config.json  
# Load the trained unsupervised semantic indexing model (SimCSE)
params_path='pretrained/model_20000/model_state.pdparams'
state_dict = paddle.load(params_path)
model.set_dict(state_dict)

test_data = ['国有企业引入非国有资本对创新绩效的影响——基于制造业国有上市公司的经验证据', '语义检索相关的论文']

def convert_example_test(example, tokenizer, max_seq_length=512, do_evaluate=False):
    result = []
    encoded_inputs = tokenizer(text=example, max_seq_len=max_seq_length)
    input_ids = encoded_inputs["input_ids"]
    token_type_ids = encoded_inputs["token_type_ids"]
    result += [input_ids, token_type_ids]

    return result

test_func = partial(
        convert_example_test,
        tokenizer=tokenizer,
        max_seq_length=max_seq_length)

test_batchify_fn = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # text_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # text_segment
    ): [data for data in fn(samples)]

# Wrap the raw texts in a MapDataset so they can be transformed by test_func
corpus_ds = MapDataset(test_data)

corpus_data_loader = create_dataloader(
        corpus_ds,
        mode='predict',
        batch_size=batch_size,
        batchify_fn=test_batchify_fn,
        trans_fn=test_func)


all_embeddings = []
model.eval()
with paddle.no_grad():
    for batch_data in corpus_data_loader:
        input_ids, token_type_ids = batch_data
        input_ids = paddle.to_tensor(input_ids)
        token_type_ids = paddle.to_tensor(token_type_ids)

        text_embeddings = model.get_pooled_embedding(input_ids, token_type_ids)
        all_embeddings.append(text_embeddings)

text_embedding=all_embeddings[0]
print(text_embedding.shape)
print(text_embedding.numpy())
[2, 256]
[[-6.70653582e-02 -6.46875659e-03 -6.78319205e-03  1.66618098e-02
   7.20006675e-02 -9.79140960e-03 -1.38441322e-03  4.37441096e-02
   4.78116609e-02  1.33881107e-01  1.82927158e-02  3.23655084e-02
  -3.85488532e-02 -1.73900686e-02 -5.18566556e-02 -2.29919683e-02
  -1.52951125e-02  3.57391499e-02  3.20172198e-02  6.13060687e-03
  -5.50691374e-02 -3.22945826e-02 -7.94695318e-02 -1.69946998e-02
   1.30272536e-02 -5.41988909e-02 -2.09305398e-02 -1.16828419e-02
   1.60638705e-01  1.09788505e-02  8.89854729e-02 -5.07548526e-02
  -4.14582808e-03  1.41753154e-02  5.99361071e-03 -1.20650172e-01
   8.47129449e-02 -8.71352032e-02 -1.54689439e-02 -4.13627252e-02
   9.27081052e-03  6.10866360e-02 -8.27403888e-02  2.16985084e-02
  -6.27953187e-02  8.01926702e-02 -1.34964858e-03  2.08494253e-02
   7.28066787e-02  1.92417565e-03  3.46801244e-02  5.19680083e-02
  -2.29766201e-02  7.53930062e-02 -3.92528400e-02  1.27617225e-01
  -2.24973150e-02  6.53991327e-02 -1.07625887e-01 -3.81366871e-02
  -4.88427421e-03 -2.40459200e-02 -1.19542152e-01  1.04167059e-01
  -1.00581929e-01 -1.05414495e-01  1.02736302e-01 -4.85890843e-02
   1.93382557e-02  1.48522668e-02 -9.91824344e-02 -6.18072413e-02
  -2.51754578e-02  1.02786891e-01 -3.75800654e-02 -2.99477540e-02
  -8.78330320e-02  1.53615654e-01 -8.86131153e-02  1.58593946e-04
  -1.05467148e-01 -5.78677980e-03  1.10411039e-02  9.81814265e-02
  -6.38909489e-02  7.46430084e-02 -1.57871693e-01  3.82332802e-02
   2.64427662e-02 -1.32286372e-02 -6.66980594e-02  3.33772637e-02
   3.11504537e-03  1.22831672e-01 -1.03312112e-01 -4.33263145e-02
  -6.33278936e-02 -2.32276414e-02 -2.63233781e-02  1.42755816e-02
   2.93613551e-03 -1.09544516e-01  6.63400814e-02 -6.99489750e-03
   1.98137201e-03  5.80364577e-02 -1.24874888e-02 -8.20823908e-02
   1.19031342e-02  1.49561500e-03 -7.44999200e-02  1.48410797e-01
   6.41510263e-02 -1.17387380e-02  2.37101335e-02  2.33147331e-02
   4.18324508e-02  1.07341716e-02 -8.44500959e-02 -1.21783640e-03
  -2.78545264e-02 -3.31132263e-02 -2.02342924e-02 -6.50983974e-02
  -1.57569442e-02  5.36424629e-02  1.77019350e-02  4.59411517e-02
   7.02589303e-02  1.69274136e-02  1.42885268e-01  7.36403689e-02
  -6.29244521e-02 -2.01245341e-02  1.85547043e-02  3.30842882e-02
  -1.94400456e-02  1.40988082e-01  1.33436337e-01 -2.54075229e-02
  -6.08876757e-02 -2.02942584e-02  8.42068065e-03 -3.21294926e-02
   3.29849645e-02 -7.65311494e-02 -7.64690712e-02 -5.72191104e-02
  -1.74020398e-02 -3.17849964e-02  8.80747139e-02  1.35396525e-01
   2.59287208e-02  3.37552503e-02 -1.26722921e-02  2.68301684e-02
  -2.85320240e-03 -7.15945335e-03 -1.06740229e-01  3.84139530e-02
  -6.22030161e-03  4.02702987e-02  4.40340750e-02  1.06274471e-01
   2.84169000e-02  1.51338968e-02 -2.19574515e-02  5.36667230e-03
   1.26346871e-01 -1.87185276e-02 -8.23301449e-02  1.43698633e-01
  -2.43978817e-02  6.32821620e-02 -2.50420533e-02  3.96971181e-02
   1.14490250e-02 -2.35499628e-02 -6.89313188e-02 -1.08669698e-03
  -2.18151999e-03  4.00638282e-02  7.83986971e-02  1.09624885e-01
  -1.83762282e-01 -1.54709127e-02  5.41972667e-02  1.19059300e-02
  -7.10137049e-03  1.25610707e-02  1.09158702e-01  4.72876951e-02
   5.91809750e-02 -4.03959006e-02  1.64805017e-02  6.63371906e-02
   6.45227805e-02  6.20313920e-02  1.49584608e-02  1.63539238e-02
  -5.91176152e-02 -4.21884805e-02 -6.61176518e-02  4.13750410e-02
  -5.55421263e-02  8.79839882e-02 -2.49287300e-02  2.33348776e-02
   4.10760194e-02 -2.21674778e-02 -1.34153649e-01  2.54084170e-02
   9.31773894e-03 -4.90556732e-02  4.03249525e-02  7.56435795e-03
   1.15322219e-02 -8.48940462e-02 -9.31067094e-02 -1.24767207e-01
   7.06484020e-02 -7.15693161e-02 -5.58607355e-02 -1.08737655e-01
   1.22894933e-02 -7.71436561e-03 -5.61649352e-02 -2.86238566e-02
  -1.35169644e-02  7.08610713e-02 -4.42850329e-02 -9.78791341e-02
  -1.70360655e-02  1.12644412e-01  1.07231729e-01 -6.57930132e-03
  -4.00356092e-02 -4.09569107e-02  1.30257741e-01  3.04518472e-02
   3.77568491e-02 -3.82185392e-02  5.57129420e-02  1.83225330e-02
  -2.41781417e-02 -2.11042613e-02  2.89661977e-02  3.28728855e-02
   1.54984025e-02 -2.23081559e-02  8.16969317e-04 -3.95485349e-02
  -7.94134568e-03 -4.14427035e-02 -1.22569837e-02  1.10996030e-01]
 [ 4.06492352e-02  1.32357225e-01 -2.85783485e-02  1.00018419e-01
  -7.73834661e-02  4.43501845e-02 -7.56763667e-02 -3.37465629e-02
  -3.52975428e-02 -6.95190579e-02  2.21885485e-03  2.85168402e-02
  -6.84773177e-02  5.40942326e-03  4.37402690e-04  2.51113605e-02
  -4.43528630e-02  5.71582234e-03 -1.21775337e-01  1.30203506e-03
  -3.59791815e-02 -2.42444221e-02  1.32723555e-01 -1.25160237e-04
  -2.66676005e-02 -4.19475324e-02  8.96050185e-02 -4.88869175e-02
  -4.15950501e-03  6.56450912e-02 -5.59418388e-02  5.02068400e-02
  -1.14569860e-02 -3.82375978e-02  1.02190018e-01 -7.50110224e-02
   2.49583945e-02 -2.58142687e-02 -9.72950384e-02 -5.14224172e-02
  -9.04390961e-03 -2.90137008e-02 -2.70109400e-02 -3.76504734e-02
  -2.49852985e-03  6.42565116e-02  3.24902460e-02  4.85139303e-02
  -5.80648892e-02 -1.35258278e-02  1.23021342e-02  4.44472618e-02
   5.16758226e-02  2.32080352e-02  1.13101471e-02  1.85348801e-02
  -4.22596373e-03 -2.34399568e-02 -1.97084993e-02 -4.71424386e-02
   1.22879945e-01  6.55015633e-02  2.59171445e-02 -1.84223410e-02
   1.62000419e-03  7.22431839e-02 -7.48513592e-03 -6.54700771e-02
  -5.12268357e-02  6.99006543e-02 -1.01965643e-01 -2.18030065e-02
  -4.60692458e-02 -9.84449014e-02 -8.72710720e-02  6.13041297e-02
  -3.10804043e-02 -1.53023684e-02 -2.52748486e-02 -3.77237312e-02
   3.60397734e-02 -1.40457945e-02  4.06196043e-02  1.93714146e-02
  -4.90912087e-02 -4.51212078e-02  5.29031940e-02  7.45277703e-02
   2.73852572e-02  3.34427357e-02  4.07184921e-02 -2.31789015e-02
   1.11093530e-02  6.66387826e-02  1.21447340e-01  3.41141690e-03
  -7.11490586e-02  1.06633967e-02  1.29642216e-02 -5.08008078e-02
   9.69475135e-02 -1.30997067e-02  7.72030950e-02  2.78428439e-02
  -2.51627024e-02 -1.91606116e-03 -1.12036958e-01 -8.92495513e-02
  -2.63997208e-04  5.98824210e-02 -3.77116986e-02  4.21718992e-02
   1.12890624e-01  1.51283875e-01 -5.03947362e-02  4.61357310e-02
  -3.29296216e-02 -1.52425230e-01  5.43419048e-02  4.86292541e-02
   1.96520053e-02 -5.18047735e-02  4.18757834e-02  1.91483796e-02
   1.20331667e-01 -6.25160038e-02 -1.26190642e-02  1.11091761e-02
   2.33806707e-02  3.09665482e-02  9.59759727e-02 -7.69740343e-03
   3.20002101e-02 -8.08688700e-02 -2.47504674e-02  1.62589438e-02
  -1.70205031e-02  2.10534167e-02 -1.11018993e-01  9.48455557e-02
  -9.91367176e-03  7.44178519e-02  9.99780297e-02 -9.86075611e-04
   9.81713310e-02  1.13847367e-02 -1.17066447e-02 -5.50765842e-02
  -8.28986466e-02  6.79253554e-03 -6.89914748e-02 -4.12727930e-02
  -8.37405100e-02  9.93919149e-02 -1.41491722e-02 -1.63136004e-03
  -8.93296227e-02  8.98428913e-03  3.52715664e-02  4.38394397e-03
  -1.61882788e-02 -6.16024435e-02  9.13519412e-02  3.27084810e-02
  -1.20535800e-02  1.07433252e-01 -2.71215513e-02 -1.52852479e-02
  -2.49931626e-02 -8.96247774e-02 -6.65177479e-02 -2.15668473e-02
  -3.38161667e-03 -5.75259328e-02 -3.16486023e-02 -1.43640712e-01
   7.05700694e-03  1.14373455e-03 -1.44255003e-02  3.95025648e-02
   8.05896055e-03 -6.68729190e-03  4.53040861e-02  1.15236454e-02
  -3.81812155e-02  7.21348524e-02 -9.46189649e-03 -4.17203549e-03
  -1.97859500e-02 -9.82242003e-02 -7.71643594e-02  6.32833987e-02
   9.13111269e-02  5.65667823e-03  4.48248982e-02 -1.96743086e-02
  -2.16650479e-02  4.13594469e-02  7.61120114e-03 -7.51543511e-03
   4.20722030e-02 -4.18896563e-02  4.27075848e-03 -3.81930135e-02
   1.31949186e-01 -9.13171843e-02  5.76445758e-02  1.87291915e-03
   1.22158322e-03  1.17772952e-01 -8.92507508e-02  1.24412961e-02
  -8.23426768e-02  4.08391654e-02  8.28895792e-02 -2.76188385e-02
  -1.14614636e-01 -6.21346049e-02 -1.20549597e-01  7.36222863e-02
   1.00238979e-01 -1.13328330e-01 -7.87915289e-03  1.52900405e-02
  -7.00806081e-02 -1.07508585e-01 -1.25086367e-01  3.91913168e-02
  -5.37602380e-02  1.22826688e-01 -7.04374686e-02  4.97788228e-02
   2.98822634e-02  9.82213113e-03  2.02373136e-02  4.79692966e-02
  -1.58301815e-01 -5.94225749e-05 -1.30062804e-01 -5.42899817e-02
   5.12139779e-03 -1.34919882e-01  1.99178606e-02 -1.23106174e-01
  -2.79084742e-02  1.72577135e-03  5.80925168e-03 -1.08275507e-02
   2.26601101e-02 -6.57473430e-02 -1.42229721e-01 -1.05154209e-01
  -1.23002581e-01  1.14776492e-01  2.30699163e-02  1.05777055e-01]]
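
Since get_pooled_embedding L2-normalizes its output, the cosine similarity between any two of these vectors is simply their inner product. For example, continuing from the cell above:

# The embeddings are L2-normalized, so cosine similarity == inner product.
# Row 0 is the paper title, row 1 the unrelated test query.
sim = paddle.sum(text_embedding[0] * text_embedding[1])
print("cosine similarity:", float(sim))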

Supervised Semantic Indexing

Data Preparation

The query, title, and keywords fields of the literature data are used to construct a dataset containing only positively labeled pairs; no negative samples are included. (With the In-batch Negatives strategy, the titles belonging to the other queries in the same batch act as negatives during training.)

宁夏社区图书馆服务体系布局现状分析	       宁夏社区图书馆服务体系布局现状分析社区图书馆,社区图书馆服务,社区图书馆服务体系
人口老龄化对京津冀经济	                 京津冀人口老龄化对区域经济增长的影响京津冀,人口老龄化,区域经济增长,固定效应模型
英语广告中的模糊语	                  模糊语在英语广告中的应用及其功能模糊语,英语广告,表现形式,语用功能
甘氨酸二肽的合成	                      甘氨酸二肽合成中缩合剂的选择甘氨酸,缩合剂,二肽
def read_text_pair(data_path):
    """Reads data."""
    with open(data_path, 'r', encoding='utf-8') as f:
        for line in f:
            data = line.rstrip().split("\t")
            if len(data) != 2:
                continue
            yield {'text_a': data[0], 'text_b': data[1]}

train_set_file='train.csv'
train_ds = load_dataset(
        read_text_pair, data_path=train_set_file, lazy=False)

for i in range(3):
    print(train_ds[i])
{'text_a': '从《唐律疏义》看唐代封爵贵族的法律特权', 'text_b': '从《唐律疏义》看唐代封爵贵族的法律特权《唐律疏义》,封爵贵族,法律特权'}
{'text_a': '宁夏社区图书馆服务体系布局现状分析', 'text_b': '宁夏社区图书馆服务体系布局现状分析社区图书馆,社区图书馆服务,社区图书馆服务体系'}
{'text_a': '人口老龄化对京津冀经济', 'text_b': '京津冀人口老龄化对区域经济增长的影响京津冀,人口老龄化,区域经济增长,固定效应模型'}
Model Construction
from base_model import SemanticIndexBase

class SemanticIndexBatchNeg(SemanticIndexBase):
    def __init__(self,
                 pretrained_model,
                 dropout=None,
                 margin=0.3,
                 scale=30,
                 output_emb_size=None):
        super().__init__(pretrained_model, dropout, output_emb_size)

        self.margin = margin
        # Scale cosine similarity to ease convergence
        self.scale = scale

    def forward(self,
                query_input_ids,
                title_input_ids,
                query_token_type_ids=None,
                query_position_ids=None,
                query_attention_mask=None,
                title_token_type_ids=None,
                title_position_ids=None,
                title_attention_mask=None):

        query_cls_embedding = self.get_pooled_embedding(
            query_input_ids, query_token_type_ids, query_position_ids,
            query_attention_mask)

        title_cls_embedding = self.get_pooled_embedding(
            title_input_ids, title_token_type_ids, title_position_ids,
            title_attention_mask)

        cosine_sim = paddle.matmul(
            query_cls_embedding, title_cls_embedding, transpose_y=True)

        # Subtract the margin from all positive pairs' cosine similarities (the diagonal)
        margin_diag = paddle.full(
            shape=[query_cls_embedding.shape[0]],
            fill_value=self.margin,
            dtype=paddle.get_default_dtype())

        cosine_sim = cosine_sim - paddle.diag(margin_diag)

        # Scale cosine similarity to ease convergence
        cosine_sim *= self.scale

        labels = paddle.arange(0, query_cls_embedding.shape[0], dtype='int64')
        labels = paddle.reshape(labels, shape=[-1, 1])

        loss = F.cross_entropy(input=cosine_sim, label=labels)

        return loss
Training Configuration
# Key hyperparameters
scale=20 # recommended range: 10 ~ 30
margin=0.1 # recommended range: 0.0 ~ 0.2

max_seq_length=64
epochs=1
learning_rate=5E-5
warmup_proportion=0.0
weight_decay=0.0
save_steps=10
batch_size=64
output_emb_size=256
pretrained_model = ppnlp.transformers.ErnieModel.from_pretrained(
        'ernie-1.0')
tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie-1.0')
trans_func = partial(
        convert_example,
        tokenizer=tokenizer,
        max_seq_length=max_seq_length) 

batchify_fn = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # query_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # query_segment
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # title_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # title_segment
    ): [data for data in fn(samples)]  

train_data_loader = create_dataloader(
        train_ds,
        mode='train',
        batch_size=batch_size,
        batchify_fn=batchify_fn,
        trans_fn=trans_func)

model = SemanticIndexBatchNeg(
        pretrained_model,
        margin=margin,
        scale=scale,
        output_emb_size=output_emb_size)

num_training_steps = len(train_data_loader) * epochs

lr_scheduler = LinearDecayWithWarmup(learning_rate, num_training_steps,
                                         warmup_proportion) 

# Generate parameter names needed to perform weight decay.
# All bias and LayerNorm parameters are excluded.
decay_params = [
        p.name for n, p in model.named_parameters()
        if not any(nd in n for nd in ["bias", "norm"])
    ]
optimizer = paddle.optimizer.AdamW(
        learning_rate=lr_scheduler,
        parameters=model.parameters(),
        weight_decay=weight_decay,
        apply_decay_param_fun=lambda x: x in decay_params)  
[2021-12-29 17:26:07,715] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-1.0/ernie_v1_chn_base.pdparams
[2021-12-29 17:26:15,960] [    INFO] - Weights from pretrained model not used in ErnieModel: ['cls.predictions.layer_norm.weight', 'cls.predictions.decoder_bias', 'cls.predictions.transform.bias', 'cls.predictions.transform.weight', 'cls.predictions.layer_norm.bias']
[2021-12-29 17:26:16,676] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-1.0/vocab.txt
Model Training
def do_train(model,train_data_loader):
    
    global_step = 0
    tic_train = time.time()
    for epoch in range(1, epochs + 1):
        for step, batch in enumerate(train_data_loader, start=1):
            query_input_ids, query_token_type_ids, title_input_ids, title_token_type_ids = batch

            loss = model(
                query_input_ids=query_input_ids,
                title_input_ids=title_input_ids,
                query_token_type_ids=query_token_type_ids,
                title_token_type_ids=title_token_type_ids)

            global_step += 1
            if global_step % 5 == 0:
                print(
                    "global step %d, epoch: %d, batch: %d, loss: %.5f, speed: %.2f step/s"
                    % (global_step, epoch, step, loss,
                       5 / (time.time() - tic_train)))
                tic_train = time.time()
            loss.backward()
            optimizer.step()
            lr_scheduler.step()
            optimizer.clear_grad()
            if global_step % save_steps == 0:
                save_path = os.path.join(save_dir, "model_%d" % global_step)
                if not os.path.exists(save_path):
                    os.makedirs(save_path)
                save_param_path = os.path.join(save_path, 'model_state.pdparams')
                paddle.save(model.state_dict(), save_param_path)
                tokenizer.save_pretrained(save_path)

do_train(model,train_data_loader)
global step 5, epoch: 1, batch: 5, loss: 4.65643, speed: 0.06 step/s
Model Prediction
# !wget https://bj.bcebos.com/v1/paddlenlp/models/inbatch_model.zip 

if(not os.path.exists('inbatch_model.zip')):
    get_path_from_url('https://bj.bcebos.com/v1/paddlenlp/models/inbatch_model.zip',root_dir='.')

!unzip -o inbatch_model.zip -d pretrained/
max_seq_length=64
output_emb_size=256
batch_size=1
params_path='pretrained/model_40/model_state.pdparams'
test_data = ["国有企业引入非国有资本对创新绩效的影响——基于制造业国有上市公司的经验证据"]

state_dict = paddle.load(params_path)
model.set_dict(state_dict)

test_func = partial(
        convert_example_test,
        tokenizer=tokenizer,
        max_seq_length=max_seq_length)

test_batchify_fn = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # text_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # text_segment
    ): [data for data in fn(samples)]

# Wrap the raw texts in a MapDataset so they can be transformed by test_func
corpus_ds = MapDataset(test_data)

corpus_data_loader = create_dataloader(
        corpus_ds,
        mode='predict',
        batch_size=batch_size,
        batchify_fn=test_batchify_fn,
        trans_fn=test_func)


all_embeddings = []
model.eval()
with paddle.no_grad():
    for batch_data in corpus_data_loader:
        input_ids, token_type_ids = batch_data
        input_ids = paddle.to_tensor(input_ids)
        token_type_ids = paddle.to_tensor(token_type_ids)

        text_embeddings = model.get_pooled_embedding(input_ids, token_type_ids)
        all_embeddings.append(text_embeddings)

text_embedding=all_embeddings[0]
print(text_embedding.shape)
print(text_embedding.numpy())

Recall Demo with Milvus

Setting up the Recall Service with Milvus

We use the open-source Milvus engine for recall. For setup, refer to the official Milvus installation guide; this project uses Milvus 1.1.1. After installation, start Milvus with the commands below (a minimal Python sketch of inserting and searching vectors follows the startup commands).

cd [Milvus root path]/core/milvus
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:[Milvus root path]/core/milvus/lib
cd scripts
./start_server.sh
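
Below is a minimal sketch of creating a collection, inserting vectors, and searching with the Milvus 1.x Python SDK (pymilvus 1.1.x). The collection name and parameters are illustrative rather than the project's actual configuration, and the random vectors stand in for embeddings produced by model.get_pooled_embedding:

# Minimal Milvus 1.x sketch (pip install pymilvus==1.1.2, with a Milvus 1.1.1
# server running locally): create a 256-dim collection, insert document
# vectors, then search with a query vector.
import numpy as np
from milvus import Milvus, MetricType

client = Milvus(host='127.0.0.1', port='19530')
collection = 'literature_search'  # illustrative name
client.create_collection({
    'collection_name': collection,
    'dimension': 256,             # must match output_emb_size
    'index_file_size': 256,
    'metric_type': MetricType.IP  # inner product == cosine for normalized vectors
})

# Stand-ins for embeddings extracted by the recall model
doc_embeddings = np.random.rand(1000, 256).astype('float32')
query_embedding = np.random.rand(1, 256).astype('float32')

status, ids = client.insert(collection_name=collection, records=doc_embeddings.tolist())
client.flush([collection])
status, results = client.search(collection_name=collection,
                                query_records=query_embedding.tolist(),
                                top_k=10, params={'nprobe': 16})
print(results)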

Recall Results with Milvus

The input sample is:

国有企业引入非国有资本对创新绩效的影响——基于制造业国有上市公司的经验证据

Below are the extracted vector and the recalled results:

[1, 256]
[[ 0.06374735 -0.08051944  0.05118101 -0.05855767 -0.06969483  0.05318566
   0.079629    0.02667932 -0.04501902 -0.01187392  0.09590752 -0.05831281
   ....
5677638 国有股权参股对家族企业创新投入的影响混合所有制改革,国有股权,家族企业,创新投入 0.5417419672012329
1321645 高管政治联系对民营企业创新绩效的影响——董事会治理行为的非线性中介效应高管政治联系,创新绩效,民营上市公司,董事会治理行为,中介效应 0.5445536375045776
1340319 国有控股上市公司资产并购重组风险探讨国有控股上市公司,并购重组,防范对策 0.5515031218528748
....

4. Ranking in Practice

Approach Overview

A pair-wise ranking model is trained based on ERNIE-Gram. The pair-wise matching model suits scenarios where the text-pair similarity is fed as one of the features into an upstream ranking module.

It is a single-tower model: the query and title are concatenated and fed into the pretrained ERNIE-Gram model, which is trained with margin_ranking_loss (a toy illustration of this loss follows).
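
Here is a quick toy illustration (made-up scores, not project data) of margin_ranking_loss as used in the forward pass further below: the loss is zero once a positive pair's score exceeds its negative pair's score by at least the margin.

# Toy margin ranking loss: positive scores should beat negative scores by >= margin
import paddle
import paddle.nn.functional as F

pos_sim = paddle.to_tensor([[0.9], [0.6]], dtype='float32')
neg_sim = paddle.to_tensor([[0.2], [0.55]], dtype='float32')
labels = paddle.full(shape=[2, 1], fill_value=1.0, dtype='float32')  # positives should rank higher

loss = F.margin_ranking_loss(pos_sim, neg_sim, labels, margin=0.1)
print(loss)  # only the second pair (0.6 vs 0.55) violates the 0.1 margin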

Data Preparation

The training set for the ranking stage is built from click logs: clicked results are taken as positive samples and shown-but-not-clicked results as negative samples (a rough construction sketch is given after the samples below).

Sample data:

个人所得税税务筹划      基于新个税视角下的个人所得税纳税筹划分析新个税;个人所得税;纳税筹划      个人所得税工资薪金税务筹划研究个人所得税,工资薪金,税务筹划
液压支架底座受力分析    ZY4000/09/19D型液压支架的有限元分析液压支架,有限元分析,两端加载,偏载,扭转       基于ANSYS的液压支架多工况受力分析液压支架,四种工况,仿真分析,ANSYS,应力集中,优化
迟发性血管痉挛  西洛他唑治疗动脉瘤性蛛网膜下腔出血后脑血管痉挛的Meta分析西洛他唑,蛛网膜下腔出血,脑血管痉挛,Meta分析     西洛他唑治疗动脉瘤性蛛网膜下腔出血后脑血管痉挛的Meta分析西洛他唑,蛛网膜下腔出血,脑血管痉挛,Meta分析
氧化亚硅        复合溶胶-凝胶一锅法制备锂离子电池氧化亚硅/碳复合负极材料氧化亚硅,溶胶-凝胶法,纳米颗粒,负极,锂离子电池   负载型聚酰亚胺-二氧化硅-银杂化膜的制备和表征聚酰亚胺,二氧化硅,银,杂化膜,促进传输
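
A rough sketch of how such (query, clicked title, shown-but-not-clicked title) triples could be assembled from a click log; the column names query, title, and clicked are assumptions, since the project ships the ready-made train_ranking_demo.csv:

# Hypothetical sketch: pair each clicked title with a non-clicked title shown
# for the same query, producing query / title / neg_title rows as read below.
import pandas as pd

def build_pairwise_file(click_log, out_file):
    df = pd.read_csv(click_log, sep="\t")  # assumed columns: query, title, clicked (0/1)
    with open(out_file, "w", encoding="utf-8") as f:
        f.write("query\ttitle\tneg_title\n")
        for query, group in df.groupby("query"):
            pos_titles = group[group["clicked"] == 1]["title"]
            neg_titles = group[group["clicked"] == 0]["title"]
            for pos in pos_titles:
                for neg in neg_titles:
                    f.write("{}\t{}\t{}\n".format(query, pos, neg))

# build_pairwise_file("click_log.tsv", "train_ranking_demo.csv")
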
# Build reader functions to read the raw data
def read(src_path, is_predict=False):
    data=pd.read_csv(src_path,sep='\t')
    for index, row in tqdm(data.iterrows()):
        query=row['query']
        title=row['title']
        neg_title=row['neg_title']
        yield {'query':query, 'title':title,'neg_title':neg_title}

def read_test(src_path, is_predict=False):
    data=pd.read_csv(src_path,sep='\t')
    for index, row in tqdm(data.iterrows()):
        query=row['query']
        title=row['title']
        label=row['label']
        yield {'query':query, 'title':title,'label':label}


test_file='dev_ranking_demo.csv'
train_file='train_ranking_demo.csv'

train_ds=load_dataset(read,src_path=train_file,lazy=False)
dev_ds=load_dataset(read_test,src_path=test_file,lazy=False)
print('One training example:')
print(train_ds[0])
print('One dev example:')
print(dev_ds[0])

Model Construction

class PairwiseMatching(nn.Layer):
    def __init__(self, pretrained_model, dropout=None, margin=0.1):
        super().__init__()
        self.ptm = pretrained_model
        self.dropout = nn.Dropout(dropout if dropout is not None else 0.1)
        self.margin = margin

        # hidden_size -> 1, calculate similarity
        self.similarity = nn.Linear(self.ptm.config["hidden_size"], 1)

    @paddle.jit.to_static(input_spec=[paddle.static.InputSpec(shape=[None, None], dtype='int64'),paddle.static.InputSpec(shape=[None, None], dtype='int64')])
    def get_pooled_embedding(self,
                             input_ids,
                             token_type_ids=None,
                             position_ids=None,
                             attention_mask=None):
        _, cls_embedding = self.ptm(input_ids, token_type_ids,
                                        position_ids, attention_mask)
        cls_embedding = self.dropout(cls_embedding)
        sim = self.similarity(cls_embedding)
        return sim


    def predict(self,
                input_ids,
                token_type_ids=None,
                position_ids=None,
                attention_mask=None):

        _, cls_embedding = self.ptm(input_ids, token_type_ids, position_ids,
                                    attention_mask)

        cls_embedding = self.dropout(cls_embedding)
        sim_score = self.similarity(cls_embedding)
        sim_score = F.sigmoid(sim_score)

        return sim_score

    def forward(self,
                pos_input_ids,
                neg_input_ids,
                pos_token_type_ids=None,
                neg_token_type_ids=None,
                pos_position_ids=None,
                neg_position_ids=None,
                pos_attention_mask=None,
                neg_attention_mask=None):

        _, pos_cls_embedding = self.ptm(pos_input_ids, pos_token_type_ids,
                                        pos_position_ids, pos_attention_mask)

        _, neg_cls_embedding = self.ptm(neg_input_ids, neg_token_type_ids,
                                        neg_position_ids, neg_attention_mask)

        pos_embedding = self.dropout(pos_cls_embedding)
        neg_embedding = self.dropout(neg_cls_embedding)

        pos_sim = self.similarity(pos_embedding)
        neg_sim = self.similarity(neg_embedding)

        pos_sim = F.sigmoid(pos_sim)
        neg_sim = F.sigmoid(neg_sim)

        labels = paddle.full(
            shape=[pos_cls_embedding.shape[0]], fill_value=1.0, dtype='float32')

        loss = F.margin_ranking_loss(
            pos_sim, neg_sim, labels, margin=self.margin)

        return loss

Training Configuration

# Key hyperparameters
margin=0.2 # recommended range: 0.0 ~ 0.2
eval_step=100
max_seq_length=128
epochs=3
batch_size=32
warmup_proportion=0.0
weight_decay=0.0
save_step=100
Load the Pretrained Model: ERNIE-Gram

Warm-start the single-tower pair-wise ranking model from ERNIE-Gram, and define the DataLoaders for reading the data.

pretrained_model = ppnlp.transformers.ErnieGramModel.from_pretrained(
        'ernie-gram-zh')
tokenizer = ppnlp.transformers.ErnieGramTokenizer.from_pretrained(
        'ernie-gram-zh')

trans_func_train = partial(
        convert_pairwise_example,
        tokenizer=tokenizer,
        max_seq_length=max_seq_length)

trans_func_eval = partial(
        convert_pairwise_example,
        tokenizer=tokenizer,
        max_seq_length=max_seq_length,
        phase="eval")

batchify_fn_train = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # pos_pair_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # pos_pair_segment
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # neg_pair_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id)  # neg_pair_segment
    ): [data for data in fn(samples)]

batchify_fn_eval = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # pair_input
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # pair_segment
        Stack(dtype="int64")  # label
    ): [data for data in fn(samples)]

train_data_loader = create_dataloader(
        train_ds,
        mode='train',
        batch_size=batch_size,
        batchify_fn=batchify_fn_train,
        trans_fn=trans_func_train)

dev_data_loader = create_dataloader(
        dev_ds,
        mode='dev',
        batch_size=batch_size,
        batchify_fn=batchify_fn_eval,
        trans_fn=trans_func_eval)
model = PairwiseMatching(pretrained_model, margin=margin)
for item in train_data_loader:
    print(item)
    break

for item in dev_data_loader:
    print(item)
    break

Model Training

Evaluation is run during training, so we first define the evaluation function, then the training loop.

@paddle.no_grad()
def evaluate(model, metric, data_loader, phase="dev"):
    model.eval()
    metric.reset()

    for idx, batch in enumerate(data_loader):
        input_ids, token_type_ids, labels = batch

        pos_probs = model.predict(input_ids=input_ids, token_type_ids=token_type_ids)

        neg_probs = 1.0 - pos_probs

        preds = np.concatenate((neg_probs, pos_probs), axis=1)
        metric.update(preds=preds, labels=labels)

    print("eval_{} auc:{:.3}".format(phase, metric.accumulate()))
    metric.reset()
    model.train()

Below is the training loop of the ranking model.

def do_train(model,train_data_loader,dev_data_loader):

    num_training_steps = len(train_data_loader) * epochs

    lr_scheduler = LinearDecayWithWarmup(learning_rate, num_training_steps,
                                         warmup_proportion)

    # Generate parameter names needed to perform weight decay.
    # All bias and LayerNorm parameters are excluded.
    decay_params = [
        p.name for n, p in model.named_parameters()
        if not any(nd in n for nd in ["bias", "norm"])
    ]
    optimizer = paddle.optimizer.AdamW(
        learning_rate=lr_scheduler,
        parameters=model.parameters(),
        weight_decay=weight_decay,
        apply_decay_param_fun=lambda x: x in decay_params)

    metric = paddle.metric.Auc()

    global_step = 0
    tic_train = time.time()
    for epoch in range(1, epochs + 1):
        for step, batch in enumerate(train_data_loader, start=1):
            pos_input_ids, pos_token_type_ids, neg_input_ids, neg_token_type_ids = batch

            loss = model(
                pos_input_ids=pos_input_ids,
                neg_input_ids=neg_input_ids,
                pos_token_type_ids=pos_token_type_ids,
                neg_token_type_ids=neg_token_type_ids)

            global_step += 1
            if global_step % 10 == 0 :
                print(
                    "global step %d, epoch: %d, batch: %d, loss: %.5f, speed: %.2f step/s"
                    % (global_step, epoch, step, loss,
                       10 / (time.time() - tic_train)))
                tic_train = time.time()

            loss.backward()
            optimizer.step()
            lr_scheduler.step()
            optimizer.clear_grad()

            if global_step % eval_step == 0:
                evaluate(model, metric, dev_data_loader, "dev")

            if global_step % save_step == 0:
                save_path = os.path.join(save_dir, "model_%d" % global_step)
                if not os.path.exists(save_path):
                    os.makedirs(save_path)
                save_param_path = os.path.join(save_path, 'model_state.pdparams')
                paddle.save(model.state_dict(), save_param_path)
                tokenizer.save_pretrained(save_path)

do_train(model,train_data_loader,dev_data_loader)

Evaluation

To evaluate the ranking model, first download the trained model and unzip it.

# !wget https://bj.bcebos.com/v1/paddlenlp/models/ernie_gram_sort.zip

if(not os.path.exists('ernie_gram_sort.zip')):
    get_path_from_url('https://bj.bcebos.com/v1/paddlenlp/models/ernie_gram_sort.zip',root_dir='.')
!unzip -o ernie_gram_sort.zip -d pretrained/

Load the trained model and run the evaluation.

init_from_ckpt='pretrained/model_30000/model_state.pdparams'
state_dict = paddle.load(init_from_ckpt)
model.set_dict(state_dict)
metric = paddle.metric.Auc()
evaluate(model, metric, dev_data_loader, "dev")

Model Inference

from data import read_text_pair

input_file='test_pairwise.csv'

valid_ds = load_dataset(read_text_pair, data_path=input_file, lazy=False)

print(valid_ds[0])
trans_func = partial(
        convert_pairwise_example,
        tokenizer=tokenizer,
        max_seq_length=max_seq_length,
        phase="predict")

batchify_fn = lambda samples, fn=Tuple(
        Pad(axis=0, pad_val=tokenizer.pad_token_id),  # input_ids
        Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # segment_ids
    ): [data for data in fn(samples)]


test_data_loader = create_dataloader(
        valid_ds,
        mode='predict',
        batch_size=batch_size,
        batchify_fn=batchify_fn,
        trans_fn=trans_func)

for item in test_data_loader:
    print(item)
    break
def predict(model, data_loader):

    batch_probs = []
    model.eval()

    with paddle.no_grad():
        for batch_data in data_loader:
            input_ids, token_type_ids = batch_data

            input_ids = paddle.to_tensor(input_ids)
            token_type_ids = paddle.to_tensor(token_type_ids)

            batch_prob = model.predict(
                input_ids=input_ids, token_type_ids=token_type_ids).numpy()

            batch_probs.append(batch_prob)
        # np.concatenate also handles the single-batch case correctly
        batch_probs = np.concatenate(batch_probs, axis=0)

        return batch_probs



y_probs = predict(model, test_data_loader)

valid_ds = load_dataset(read_text_pair, data_path=input_file, lazy=False)

for idx, prob in enumerate(y_probs):
    text_pair = valid_ds[idx]
    text_pair["pred_prob"] = prob[0]
    print(text_pair)

Deployment

First convert the dynamic-graph model into a static-graph model.

output_path='output'
model.eval()

# Convert to static graph with specific input description
model = paddle.jit.to_static(
        model,
        input_spec=[
            paddle.static.InputSpec(
                shape=[None, None], dtype="int64"),  # input_ids
            paddle.static.InputSpec(
                shape=[None, None], dtype="int64")  # segment_ids
        ])
# Save in static graph model.
save_path = os.path.join(output_path, "inference")
paddle.jit.save(model, save_path)

Define a Predictor class that loads the static-graph model parameters and runs inference.

class Predictor(object):
    def __init__(self,
                 model_dir,
                 device="gpu",
                 max_seq_length=128,
                 batch_size=32,
                 use_tensorrt=False,
                 precision="fp32",
                 cpu_threads=10,
                 enable_mkldnn=False):
        self.max_seq_length = max_seq_length
        self.batch_size = batch_size

        model_file = model_dir + "/inference.get_pooled_embedding.pdmodel"
        params_file = model_dir + "/inference.get_pooled_embedding.pdiparams"
        if not os.path.exists(model_file):
            raise ValueError("not find model file path {}".format(model_file))
        if not os.path.exists(params_file):
            raise ValueError("not find params file path {}".format(params_file))
        config = paddle.inference.Config(model_file, params_file)

        if device == "gpu":
            # set GPU configs accordingly,
            # such as initializing the GPU memory and enabling TensorRT
            config.enable_use_gpu(100, 0)
            precision_map = {
                "fp16": inference.PrecisionType.Half,
                "fp32": inference.PrecisionType.Float32,
                "int8": inference.PrecisionType.Int8
            }
            precision_mode = precision_map[precision]

            if use_tensorrt:
                config.enable_tensorrt_engine(
                    max_batch_size=batch_size,
                    min_subgraph_size=30,
                    precision_mode=precision_mode)
        elif device == "cpu":
            # set CPU configs accordingly,
            # such as enable_mkldnn, set_cpu_math_library_num_threads
            config.disable_gpu()
            if enable_mkldnn:
                # cache 10 different shapes for mkldnn to avoid memory leak
                config.set_mkldnn_cache_capacity(10)
                config.enable_mkldnn()
            config.set_cpu_math_library_num_threads(cpu_threads)
        elif device == "xpu":
            # set XPU configs accordingly
            config.enable_xpu(100)

        config.switch_use_feed_fetch_ops(False)
        self.predictor = paddle.inference.create_predictor(config)
        self.input_handles = [
            self.predictor.get_input_handle(name)
            for name in self.predictor.get_input_names()
        ]
        self.output_handle = self.predictor.get_output_handle(
            self.predictor.get_output_names()[0])

     

    def predict(self, data, tokenizer):
        
        examples = []
        for text in data:
            input_ids, segment_ids = convert_example_ranking(
                text,
                tokenizer,
                max_seq_length=self.max_seq_length,
                is_test=True)
            examples.append((input_ids, segment_ids))

        batchify_fn = lambda samples, fn=Tuple(
            Pad(axis=0, pad_val=tokenizer.pad_token_id),  # input
            Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # segment
        ): fn(samples)


        input_ids, segment_ids = batchify_fn(examples)
        self.input_handles[0].copy_from_cpu(input_ids)
        self.input_handles[1].copy_from_cpu(segment_ids)
        self.predictor.run()
        sim_score = self.output_handle.copy_to_cpu()

        sim_score = expit(sim_score)

        return sim_score

Read the texts of the test set and convert them into ID sequences with the convert_example_ranking function.

def convert_example_ranking(example, tokenizer, max_seq_length=512, is_test=False):

    query, title = example["query"], example["title"]

    encoded_inputs = tokenizer(
        text=query, text_pair=title, max_seq_len=max_seq_length)

    input_ids = encoded_inputs["input_ids"]
    token_type_ids = encoded_inputs["token_type_ids"]

    if not is_test:
        label = np.array([example["label"]], dtype="int64")
        return input_ids, token_type_ids, label
    else:
        return input_ids, token_type_ids

input_file='test_pairwise.csv'

test_ds = load_dataset(read_text_pair,data_path=input_file, lazy=False)

data = [{'query': d['query'], 'title': d['title']} for d in test_ds]

batches = [
        data[idx:idx + batch_size]
        for idx in range(0, len(data), batch_size)
    ]
print(batches[0])

Instantiate the Predictor and run prediction.

model_dir='output'
device='gpu'
max_seq_length=128
batch_size=32
use_tensorrt=False
precision='fp32'
cpu_threads=10
enable_mkldnn=False


predictor = Predictor(model_dir, device, max_seq_length,
                          batch_size, use_tensorrt, precision,
                          cpu_threads, enable_mkldnn)
results = []
for batch_data in batches:
    results.extend(predictor.predict(batch_data, tokenizer))

for idx, text in enumerate(data):
    print('Data: {} \t prob: {}'.format(text, results[idx]))
    