This project provides an end-to-end solution for hierarchical text classification in general scenarios, based on fine-tuning pretrained models. It covers the full pipeline of data annotation, model training, model tuning, model compression, and inference deployment, effectively shortening the development cycle and lowering the barrier to putting AI into production.

  • Taking the AliExpress product dataset as an example: given a product title as input, the model should output the product's level-1 category and level-2 category.

In a hierarchical text classification task, each data sample carries multiple labels, and the labels form a specific hierarchy; the goal is to predict which category (or categories) at each level an input sentence/text belongs to. Take the news example in the figure below (with "news" as the root node): the news item's level-1 label is Sports and its level-2 label is Football, and Sports and Football stand in a parent-child relationship. In real-world scenarios, the label sets of data such as news articles, patents, and academic papers are often hierarchical, and algorithms are needed to automatically assign finer-grained, more accurate labels to text.
[Figure: hierarchical classification example — news → Sports → Football]

Reference: PaddleNLP

https://github.com/PaddlePaddle/PaddleNLP/tree/develop/applications/text_classification/hierarchical
PaddleNLP provides a more complete version of the code.
Flow diagram: hierarchical classification data annotation → model training → model analysis → model compression → inference deployment

2. Installation

Paddle and PaddleNLP are pre-installed on the AI Studio platform and are updated regularly. To update manually, the following versions are required:

  • python >= 3.6
  • paddlepaddle >= 2.3
  • paddlenlp >= 2.4
  • scikit-learn >= 1.0.2

!pip install --upgrade paddlenlp -i https://mirror.baidu.com/pypi/simple # restart the kernel after running

!pip install scikit-learn==1.0.2
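To confirm that the environment meets the requirements above, an optional quick check is:

# Optional: verify that the installed versions meet the requirements above.
import paddle
import paddlenlp
import sklearn
print("paddlepaddle:", paddle.__version__)
print("paddlenlp:", paddlenlp.__version__)
print("scikit-learn:", sklearn.__version__)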
# Standard-library and NumPy imports
import re
import json
import functools
import random
import time
import os
import argparse
import numpy as np
# PaddlePaddle and PaddleNLP imports
import paddle
import paddle.nn.functional as F
from paddle.io import DataLoader, BatchSampler, DistributedBatchSampler
from paddlenlp.data import DataCollatorWithPadding
from paddlenlp.datasets import load_dataset
from paddlenlp.transformers import AutoModelForSequenceClassification, AutoTokenizer, LinearDecayWithWarmup
from paddlenlp.utils.log import logger

3. The AliExpress Product Dataset

The AliExpress product dataset was crawled from AliExpress. It covers 29 level-1 categories and 361 level-2 categories, with roughly 1,200 samples per category on average; each record contains the product title, product image, level-1 category, and level-2 category.
To classify products hierarchically from their titles, the dataset is split as follows (see the format sketch below the dataset link):

  • train.txt: training-set titles with their level-1 and level-2 categories, 404,395 lines in total.
  • dev.txt: validation-set titles with their level-1 and level-2 categories, 12,770 lines in total.
  • test.txt: test-set titles with their level-1 and level-2 categories, 8,514 lines in total.

Dataset link: https://aistudio.baidu.com/aistudio/datasetdetail/172300
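For reference, each line of these files is tab-separated text and labels, with multiple labels joined by commas and a child label embedding its parent via "##" (the label names below are hypothetical, for illustration only):

# A hedged sketch of the expected file format; label.txt lists every label, one per line.
example_line = "Plush bear doll soft toy for kids\tToys & Hobbies,Toys & Hobbies##Stuffed Animals"
title, label_field = example_line.split("\t")
print(title)                    # the product title fed to the model
print(label_field.split(","))   # ['Toys & Hobbies', 'Toys & Hobbies##Stuffed Animals']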

4. Model Selection

The table below shows the results reported by PaddleNLP on the multi-label hierarchical dataset built from the event extraction task of the 2020 Language and Intelligence Challenge.

Evaluation metrics: Micro F1 and Macro F1.

| Model | Structure | Micro F1 (%) | Macro F1 (%) | Latency (ms) |
| --- | --- | --- | --- | --- |
| ERNIE 1.0 Large CW | 24-layer, 1024-hidden, 20-heads | 96.24 | 94.24 | 5.59 |
| ERNIE 3.0 Xbase | 20-layer, 1024-hidden, 16-heads | 96.21 | 94.13 | 5.51 |
| ERNIE 3.0 Base | 12-layer, 768-hidden, 12-heads | 95.68 | 93.39 | 2.01 |
| ERNIE 3.0 Medium | 6-layer, 768-hidden, 12-heads | 95.26 | 93.22 | 1.01 |
| ERNIE 3.0 Mini | 6-layer, 384-hidden, 12-heads | 94.72 | 93.03 | 0.36 |
| ERNIE 3.0 Micro | 4-layer, 384-hidden, 12-heads | 94.24 | 93.08 | 0.24 |
| ERNIE 3.0 Nano | 4-layer, 312-hidden, 12-heads | 93.98 | 91.25 | 0.19 |
| ERNIE 3.0 Medium + pruning (3/4 kept) | 6-layer, 768-hidden, 9-heads | 95.45 | 93.40 | 0.81 |
| ERNIE 3.0 Medium + pruning (2/3 kept) | 6-layer, 768-hidden, 8-heads | 95.23 | 93.27 | 0.74 |
| ERNIE 3.0 Medium + pruning (1/2 kept) | 6-layer, 768-hidden, 6-heads | 94.92 | 92.70 | 0.61 |

Because this project's dataset is in English, we use the ernie-2.0-base-en model.

5. Model Training

5.1 Parameter Settings

def set_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--device', default="gpu", help="Select which device to train model, defaults to gpu.")
    parser.add_argument("--dataset_dir", default='./data/data172300', type=str, help="Local dataset directory should include train.txt, dev.txt and label.txt")
    parser.add_argument("--save_dir", default="./checkpoint", type=str, help="The output directory where the model checkpoints will be written.")
    parser.add_argument("--max_seq_length", default=40, type=int, help="The maximum total input sequence length after tokenization. Sequences longer than this will be truncated, sequences shorter will be padded.")
    parser.add_argument('--model_name', default="ernie-2.0-base-en", help="Select model to train, defaults to ernie-2.0-base-en.",
                        choices=["ernie-1.0-large-zh-cw","ernie-3.0-xbase-zh", "ernie-3.0-base-zh", "ernie-3.0-medium-zh", "ernie-3.0-micro-zh", "ernie-3.0-mini-zh", "ernie-3.0-nano-zh", "ernie-2.0-base-en", "ernie-2.0-large-en","ernie-m-base","ernie-m-large"])
    
    parser.add_argument("--batch_size", default=1024, type=int, help="Batch size per GPU/CPU for training.")
    parser.add_argument("--dev_batch_size", default=512, type=int, help="Batch size per GPU/CPU for evaluation.")
    # parser.add_argument("--test_batch_size", default=512, type=int, help="Batch size per GPU/CPU for training.")

    parser.add_argument("--learning_rate", default=3e-5, type=float, help="The initial learning rate for Adam.")
    parser.add_argument("--epochs", default=300, type=int, help="Total number of training epochs to perform.")
    parser.add_argument('--early_stop', action='store_true', help='Whether to enable early stopping.')
    parser.add_argument('--early_stop_nums', type=int, default=5, help='Number of epoch before early stop.')
    parser.add_argument("--logging_steps", default=5, type=int, help="The interval steps to logging.")
    parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight decay if we apply some.")
    parser.add_argument('--warmup', action='store_true', help="Whether to use a linear warmup strategy.")
    parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup steps over the training process.")
    parser.add_argument("--init_from_ckpt", type=str, default=None, help="The path of checkpoint to be loaded.")
    parser.add_argument("--seed", type=int, default=3, help="random seed for initialization")
    parser.add_argument("--train_file", type=str, default="train.txt", help="Train dataset file name")
    parser.add_argument("--dev_file", type=str, default="dev.txt", help="Dev dataset file name")
    parser.add_argument("--test_file", type=str, default="test.txt", help="Test dataset file name")
    parser.add_argument("--label_file", type=str, default="label.txt", help="Label file name")
    args = parser.parse_args([])
    return args
def set_seed(seed):
    """
    Sets random seed
    """
    random.seed(seed)
    np.random.seed(seed)
    paddle.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)

def args_saving():
    argsDict = args.__dict__
    with open(os.path.join(args.save_dir, 'setting.txt'), 'w') as f:
        f.writelines('------------------ start ------------------' + '\n')
        for eachArg, value in argsDict.items():
            f.writelines(eachArg + ' : ' + str(value) + '\n')
        f.writelines('------------------- end -------------------')

5.2 Model Evaluation

import numpy as np
from sklearn.metrics import f1_score, classification_report

from paddle.metric import Metric
from paddlenlp.utils.log import logger


class MetricReport(Metric):
    """
    Evaluation metric based on micro and macro F1 scores.
    """

    def __init__(self, name='MetricReport', average='micro'):
        super(MetricReport, self).__init__()
        self.average = average
        self._name = name
        self.reset()

    def reset(self):
        """
        Resets all of the metric state.
        """
        self.y_prob = None
        self.y_true = None

    def f1_score(self, y_prob):
        """
        Compute micro f1 score and macro f1 score
        """
        threshold = 0.5
        self.y_pred = y_prob > threshold
        micro_f1_score = f1_score(y_pred=self.y_pred,
                                  y_true=self.y_true,
                                  average='micro')
        macro_f1_score = f1_score(y_pred=self.y_pred,
                                  y_true=self.y_true,
                                  average='macro')
        return micro_f1_score, macro_f1_score

    def update(self, probs, labels):
        """
        Update the probability and label
        """
        if self.y_prob is not None:
            self.y_prob = np.append(self.y_prob, probs.numpy(), axis=0)
        else:
            self.y_prob = probs.numpy()
        if self.y_true is not None:
            self.y_true = np.append(self.y_true, labels.numpy(), axis=0)
        else:
            self.y_true = labels.numpy()

    def accumulate(self):
        """
        Returns micro f1 score and macro f1 score
        """
        micro_f1_score, macro_f1_score = self.f1_score(y_prob=self.y_prob)
        return micro_f1_score, macro_f1_score

    def report(self):
        """
        Returns classification report
        """
        self.y_pred = self.y_prob > 0.5
        logger.info("classification report:\n" +
                    classification_report(self.y_true, self.y_pred, digits=4))

    def name(self):
        """
        Returns metric name
        """
        return self._name
@paddle.no_grad()
def evaluate(model, criterion, metric, data_loader):
    """
    Given a dataset, it evaluates the model and computes the metric.
    Args:
        model(obj:`paddle.nn.Layer`): A model to classify texts.
        criterion(obj:`paddle.nn.Layer`): It can compute the loss.
        metric(obj:`paddle.metric.Metric`): The evaluation metric.
        data_loader(obj:`paddle.io.DataLoader`): The dataset loader which generates batches.
    """

    model.eval()
    metric.reset()
    losses = []
    for batch in data_loader:
        labels = batch.pop("labels")
        logits = model(**batch)
        loss = criterion(logits, labels)
        probs = F.sigmoid(logits)
        losses.append(loss.numpy())
        metric.update(probs, labels)

    micro_f1_score, macro_f1_score = metric.accumulate()
    logger.info("eval loss: %.5f, micro f1 score: %.5f, macro f1 score: %.5f" %
                (np.mean(losses), micro_f1_score, macro_f1_score))
    model.train()
    metric.reset()

    return micro_f1_score, macro_f1_score
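A quick sanity check of MetricReport with toy tensors (not part of the original project):

# Toy example (illustrative only): two samples, three labels.
metric_demo = MetricReport()
toy_probs = paddle.to_tensor([[0.9, 0.2, 0.7],
                              [0.1, 0.8, 0.4]])    # predicted probabilities
toy_labels = paddle.to_tensor([[1.0, 0.0, 1.0],
                               [0.0, 1.0, 1.0]])   # one-hot ground truth
metric_demo.update(toy_probs, toy_labels)
print(metric_demo.accumulate())                    # (micro_f1, macro_f1)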

5.3 Loading the Data and Building the Dataset

def preprocess_function(examples,
                        tokenizer,
                        max_seq_length,
                        label_nums,
                        is_test=False):
    """
    Builds model inputs from a sequence for sequence classification tasks
    by concatenating and adding special tokens, and converts labels into one-hot vectors.

    Args:
        examples(obj:`list[str]`): List of input data, containing text and label if it has a label.
        tokenizer(obj:`PretrainedTokenizer`): This tokenizer inherits from :class:`~paddlenlp.transformers.PretrainedTokenizer` 
            which contains most of the methods. Users should refer to the superclass for more information regarding methods.
        max_seq_length(obj:`int`): The maximum total input sequence length after tokenization. 
            Sequences longer than this will be truncated, sequences shorter will be padded.
        label_nums(obj:`int`): The number of the labels.
    Returns:
        result(obj:`dict`): The preprocessed data including input_ids, token_type_ids, labels.
    """
    result = tokenizer(text=examples["sentence"], max_seq_len=max_seq_length)
    # One-Hot label
    if not is_test:
        result["labels"] = [
            float(1) if i in examples["label"] else float(0)
            for i in range(label_nums)
        ]
    return result
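To illustrate the one-hot conversion above, here is a mini-example with a hypothetical three-label vocabulary:

# Illustration only: how a sample's label ids become a one-hot target vector.
demo_label_list = {"Toys & Hobbies": 0, "Toys & Hobbies##Stuffed Animals": 1, "Jewelry": 2}  # hypothetical
demo_example = {"sentence": "plush bear doll", "label": [0, 1]}  # ids as produced by read_local_dataset below
demo_one_hot = [float(1) if i in demo_example["label"] else float(0) for i in range(len(demo_label_list))]
print(demo_one_hot)  # [1.0, 1.0, 0.0]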
# Load arguments
args = set_args()
if not os.path.exists(args.save_dir):
    os.makedirs(args.save_dir)
args_saving()
set_seed(args.seed)
paddle.set_device(args.device)

rank = paddle.distributed.get_rank()  # rank of the current process in multi-card distributed training; 0 by default
if paddle.distributed.get_world_size() > 1:
    paddle.distributed.init_parallel_env()

# Load and build the datasets
def read_local_dataset(path, label_list=None, is_test=False):
    """
    Read dataset 
    """
    with open(path, 'r', encoding='utf-8') as f:
        for line in f:
            if is_test:
                items = line.strip().split('\t')
                sentence = ''.join(items)
                yield {'sentence': sentence}
            else:
                items = line.strip().split('\t')
                if len(items) == 0:
                    continue
                elif len(items) == 1:
                    sentence = items[0]
                    labels = []
                else:
                    sentence = ''.join(items[:-1])
                    label = items[-1]
                    labels = [label_list[l] for l in label.split(',',1)]
                yield {'sentence': sentence, 'label': labels}
label_list = {}
with open(os.path.join(args.dataset_dir, args.label_file),'r',encoding='utf-8') as f:
    for i, line in enumerate(f):
        l = line.strip()
        label_list[l] = i
        # print("line {}".format(i))
train_ds = load_dataset(read_local_dataset,
                        path=os.path.join(args.dataset_dir,
                                            args.train_file),
                        label_list=label_list,
                        lazy=False)
dev_ds = load_dataset(read_local_dataset,
                        path=os.path.join(args.dataset_dir, args.dev_file),
                        label_list=label_list,
                        lazy=False)
test_ds = load_dataset(read_local_dataset,
                        path=os.path.join(args.dataset_dir, args.test_file),
                        label_list=label_list,
                        lazy=False)

5.4 Building the DataLoader

"""
Choose a pretrained model. Options include "ernie-1.0-large-zh-cw",
"ernie-3.0-xbase-zh", "ernie-3.0-base-zh",
"ernie-3.0-medium-zh", "ernie-3.0-micro-zh",
"ernie-3.0-mini-zh", "ernie-3.0-nano-zh",
"ernie-2.0-base-en", "ernie-2.0-large-en",
"ernie-m-base", "ernie-m-large".
This project uses "ernie-2.0-base-en"; choose according to task complexity and hardware.
"""
tokenizer = AutoTokenizer.from_pretrained(args.model_name)
trans_func = functools.partial(preprocess_function,
                                tokenizer=tokenizer,
                                max_seq_length=args.max_seq_length,
                                label_nums=len(label_list))
train_ds = train_ds.map(trans_func)
dev_ds = dev_ds.map(trans_func)

test_ds = test_ds.map(trans_func)


# batchify dataset
collate_fn = DataCollatorWithPadding(tokenizer)
if paddle.distributed.get_world_size() > 1:
    train_batch_sampler = DistributedBatchSampler(
        train_ds, batch_size=args.batch_size, shuffle=True)
else:
    train_batch_sampler = BatchSampler(train_ds,
                                        batch_size=args.batch_size,
                                        shuffle=True)
dev_batch_sampler = BatchSampler(dev_ds,
                                    batch_size=args.dev_batch_size,
                                    shuffle=False)
test_batch_sampler = BatchSampler(test_ds,
                                    batch_size=args.dev_batch_size,
                                    shuffle=False)

train_data_loader = DataLoader(dataset=train_ds,
                                batch_sampler=train_batch_sampler,
                                collate_fn=collate_fn)
dev_data_loader = DataLoader(dataset=dev_ds,
                                batch_sampler=dev_batch_sampler,
                                collate_fn=collate_fn)
test_data_loader = DataLoader(dataset=test_ds,
                                batch_sampler=test_batch_sampler,
                                collate_fn=collate_fn)
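An optional sanity check (not in the original notebook) is to peek at one padded batch and confirm its keys and shapes:

# Inspect one batch from the collator; keys typically include input_ids, token_type_ids and labels,
# and shapes depend on the longest sequence in the batch.
for sample_batch in train_data_loader:
    print({key: value.shape for key, value in sample_batch.items()})
    break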

5.5 Launching Training

# Load the model
model = AutoModelForSequenceClassification.from_pretrained(
    args.model_name, num_classes=len(label_list))
if args.init_from_ckpt and os.path.isfile(args.init_from_ckpt):
    state_dict = paddle.load(args.init_from_ckpt)
    model.set_dict(state_dict)
model = paddle.DataParallel(model)
num_training_steps = len(train_data_loader) * args.epochs
lr_scheduler = LinearDecayWithWarmup(args.learning_rate, num_training_steps,
                                        args.warmup_steps)

# Generate parameter names needed to perform weight decay.
# All bias and LayerNorm parameters are excluded.
decay_params = [
    p.name for n, p in model.named_parameters()
    if not any(nd in n for nd in ["bias", "norm"])
]
# Define the optimizer
optimizer = paddle.optimizer.AdamW(
    learning_rate=lr_scheduler,
    parameters=model.parameters(),
    weight_decay=args.weight_decay,
    apply_decay_param_fun=lambda x: x in decay_params)
criterion = paddle.nn.BCEWithLogitsLoss()
metric = MetricReport()

global_step = 0
best_f1_score = 0
early_stop_count = 0
tic_train = time.time()
start = time.time()
for epoch in range(1, args.epochs + 1):

    if early_stop_count >= args.early_stop_nums:
        # if args.early_stop and early_stop_count >= args.early_stop_nums:
        logger.info("Early stop!")
        break
    print('\n------------------------------------------------\n')
    print('Start Training Epoch', epoch, ':', time.strftime(
        "%Y-%m-%d %H:%M:%S", time.localtime(time.time())))
    for step, batch in enumerate(train_data_loader, start=1):

        labels = batch.pop("labels")
        logits = model(**batch)
        loss = criterion(logits, labels)

        loss.backward()
        optimizer.step()
        if args.warmup:
            lr_scheduler.step()
        optimizer.clear_grad()

        global_step += 1
        if global_step % args.logging_steps == 0 and rank == 0:
            logger.info(
                "global step %d, epoch: %d, batch: %d, loss: %.5f, speed: %.2f step/s"
                % (global_step, epoch, step, loss, args.logging_steps /
                    (time.time() - tic_train)))
            tic_train = time.time()

    early_stop_count += 1
    print('Eval time:', time.strftime(
                "%Y-%m-%d %H:%M:%S", time.localtime(time.time())))
    micro_f1_score, macro_f1_score =  evaluate(model, criterion, metric,
                                                dev_data_loader)
    save_best_path = args.save_dir
    if not os.path.exists(save_best_path):
        os.makedirs(save_best_path)

    # save models
    if macro_f1_score > best_f1_score:
        early_stop_count = 0
        best_f1_score = macro_f1_score
        model._layers.save_pretrained(save_best_path)
        tokenizer.save_pretrained(save_best_path)
    logger.info("Current best macro f1 score: %.5f" % (best_f1_score))
    logger.info("early_stop_num is {}".format(early_stop_count))
    print('test time:', time.strftime(
                "%Y-%m-%d %H:%M:%S", time.localtime(time.time())))
    test_micro_f1_score, test_macro_f1_score = evaluate(model, criterion, metric,
                                                test_data_loader)
logger.info("Final best macro f1 score: %.5f" % (best_f1_score))
logger.info("Save best macro f1 text classification model in %s" %
            (args.save_dir))
print('Training Time: {:.2f}s'.format(time.time() - start))

6. Model Prediction

During decoding, we keep the labels whose sigmoid probability exceeds 0.5 and split each predicted label on "##" to recover the level-1 and level-2 categories, which completes the hierarchical classification.
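The same rule can be written as a small helper (a sketch for clarity; the cells below implement the identical logic inline):

# Sketch of the decoding step: keep labels with probability > threshold, then split on "##" into levels.
def decode_hierarchical(prob_row, label_list, threshold=0.5):
    levels = {}
    for i, p in enumerate(prob_row):
        if p > threshold:
            for depth, name in enumerate(label_list[i].split('##')):
                levels.setdefault(depth, set()).add(name)
    return levels  # e.g. {0: level-1 categories, 1: level-2 categories}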

def set_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--device', default="gpu", help="Select which device to run prediction, defaults to gpu.")
    parser.add_argument("--params_path", default="./checkpoint/", type=str, help="The path to model parameters to be loaded.")
    parser.add_argument("--dataset_dir", default='./data/data172300', type=str, help="Local dataset directory that contains label.txt.")
    parser.add_argument("--max_seq_length", default=40, type=int, help="The maximum total input sequence length after tokenization. Sequences longer than this will be truncated, sequences shorter will be padded.")
    parser.add_argument("--batch_size", default=1, type=int, help="Batch size per GPU/CPU for prediction.")
    parser.add_argument("--data_file", type=str, default="data.txt", help="Unlabeled data file name")
    parser.add_argument("--label_file", type=str, default="label.txt", help="Label file name")
    args = parser.parse_args([])
    return args
args = set_args()
paddle.set_device(args.device)
model = AutoModelForSequenceClassification.from_pretrained(args.params_path)
tokenizer = AutoTokenizer.from_pretrained(args.params_path)

label_list = []
label_path = os.path.join(args.dataset_dir, args.label_file)
with open(label_path, 'r', encoding='utf-8') as f:
    for i, line in enumerate(f):
        label_list.append(line.strip())

sample = 'Sparkly White Short Homecoming Dresses for Teens Girls Plus Size One Shoulder Sequin Tight Bodycon Mini Cocktail Party Gowns'
inputs = tokenizer(text=sample, max_seq_len=args.max_seq_length)
inputs = {k:paddle.to_tensor([v]) for (k, v) in inputs.items()}
logits = model(**inputs)
results=[]
probs = F.sigmoid(logits).numpy()
for prob in probs:
    labels = []
    for i, p in enumerate(prob):
        if p > 0.5:
            labels.append(label_list[i])
    results.append(labels)
hierarchical_labels = {}
logger.info("text: {}".format(sample))
logger.info("prediction result: {}".format(",".join(labels)))
for label in labels:
    for i, l in enumerate(label.split('##')):
        if i not in hierarchical_labels:
            hierarchical_labels[i] = []
        if l not in hierarchical_labels[i]:
            hierarchical_labels[i].append(l)
for d in range(len(hierarchical_labels)):
    logger.info("level {} : {}".format(d + 1, ','.join(
        hierarchical_labels[d])))
logger.info("--------------------")

7. Summary

This project borrows PaddleNLP's sequence-classification recipe and turns hierarchical multi-label classification into a flat multi-label task: level-1 and level-2 labels are bundled into a single label string and split apart again during decoding.

The idea is neat, but the model never explicitly learns the label hierarchy. Follow-up work could make the hierarchy more prominent, for example by using a graph neural network to learn the relationships between levels and assist hierarchical classification.

The same trick generalizes: whether the taxonomy has two, three, or more levels, category paths can be concatenated into label strings in the same way to perform multi-level classification, as the sketch below illustrates.
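As a hedged sketch (with hypothetical category names), a category path of any depth can be flattened into the same "##"-joined multi-label format by emitting one label per prefix of the path:

# Illustration only: flatten a category path into the label strings used by this project,
# so a 3-level path yields three multi-label targets.
def path_to_labels(path):
    """['A', 'B', 'C'] -> ['A', 'A##B', 'A##B##C']"""
    return ["##".join(path[:i + 1]) for i in range(len(path))]

print(path_to_labels(["Home & Garden", "Kitchen", "Bakeware"]))  # hypothetical categories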

Author: Littlefishs (https://aistudio.baidu.com/aistudio/personalcenter/thirdview/1017309)
