[Paddle Competition] Script Character Emotion Recognition Baseline (Score 0.676)
Resources
⭐ ⭐ ⭐ A small Star would be much appreciated! ⭐ ⭐ ⭐
Open source is not easy; your support keeps it going~

- For more transformer models in CV and NLP (BERT, ERNIE, ViT, DeiT, Swin Transformer, etc.) and deep learning materials, see: awesome-DeepLearning
- For more pretrained language models, see PaddleNLP: https://github.com/PaddlePaddle/PaddleNLP
- For materials on the PaddlePaddle framework, see: the PaddlePaddle deep learning platform
1. Competition Introduction
The CCF Big Data & Computing Intelligence Contest (CCF BDCI) was founded by the China Computer Federation in 2013. Guided by the National Natural Science Foundation of China, it is a large-scale challenge for algorithms, applications, and systems in the field of big data and artificial intelligence. The contest solicits problems from key industries and application domains, is driven by cutting-edge technology and real industry needs, aims to promote industry development and upgrading, and pools the wisdom of academia, industry, and research at home and abroad through crowdsourcing, discovering and cultivating a large number of high-quality data talents for society.
The contest has been held successfully eight times, attracting over 120,000 participants from more than 1,500 universities, 1,800 enterprises and institutions, and 80 research institutes worldwide. It has become one of the most influential events in China's big data and AI field and the leading comprehensive big data competition brand in China.
The ninth contest, in 2021, runs from September to December under the theme "Data Drives Innovation, Competition Pools Intelligence", based in Yuhang and open to the world. It is dedicated to solving real pain points from governments and enterprises, inviting outstanding teams around the world to develop and utilize data resources, and broadly soliciting IT application solutions.
1.1 Competition Task
The competition page is at https://www.datafountain.cn/competitions/518
This task provides a set of movie scripts as the training set, annotated by human labelers. Participants must analyze and identify, along multiple dimensions, the emotions of every character involved in each line of dialogue and action description in the script scenes. The main difficulties and challenges are: 1) the writing style of scripts differs markedly from typical news corpora and is far more colloquial; 2) a character's emotion is not determined by the current text alone and may depend deeply on the semantics of the preceding context.
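To make the label format concrete: every sample is scored on six emotions (love, joy, fright, anger, fear, sorrow), each with an integer intensity. A minimal illustration, assuming the 0-3 scale implied by the 4-class heads used later (the label string below is made up, not a real row):

```python
# Hypothetical label string in the training-set format:
# six comma-separated intensity scores, one per emotion
emotions = "0,1,0,0,0,2"
love, joy, fright, anger, fear, sorrow = [int(v) for v in emotions.split(',')]
print(love, joy, fright, anger, fear, sorrow)  # 0 1 0 0 0 2
```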
2. Multi-Task Learning
2.1 Data Processing
from tqdm import tqdm
import pandas as pd
import os
from functools import partial
import numpy as np
import time
# paddle imports
import paddle
import paddle.nn.functional as F
import paddle.nn as nn
from paddle.io import DataLoader, Dataset
from paddle.dataset.common import md5file
# paddlenlp imports
import paddlenlp as ppnlp
from paddlenlp.transformers import LinearDecayWithWarmup
from paddlenlp.metrics import ChunkEvaluator
from paddlenlp.transformers import BertTokenizer, BertPretrainedModel
from paddlenlp.data import Stack, Tuple, Pad, Dict
from paddlenlp.datasets import DatasetBuilder, get_path_from_url
!unzip -o data/data110628/剧本角色情感识别.zip -d data
Archive: data/data110628/剧本角色情感识别.zip
inflating: data/submit_example.tsv
inflating: data/__MACOSX/._submit_example.tsv
inflating: data/test_dataset.tsv
inflating: data/__MACOSX/._test_dataset.tsv
inflating: data/train_dataset_v2.tsv
inflating: data/__MACOSX/._train_dataset_v2.tsv
with open('data/train_dataset_v2.tsv', 'r', encoding='utf-8') as handler:
    lines = handler.read().split('\n')[1:-1]

data = list()
for line in tqdm(lines):
    sp = line.split('\t')
    if len(sp) != 4:
        print("ERROR:", sp)
        continue
    data.append(sp)

train = pd.DataFrame(data)
train.columns = ['id', 'content', 'character', 'emotions']

test = pd.read_csv('data/test_dataset.tsv', sep='\t')
submit = pd.read_csv('data/submit_example.tsv', sep='\t')

# Drop rows with empty emotion labels
train = train[train['emotions'] != '']
100%|██████████| 42790/42790 [00:00<00:00, 272613.07it/s]
# Concatenate the scene text with the character name as the model input
train['text'] = train['content'].astype(str) + ' 角色: ' + train['character'].astype(str)
test['text'] = test['content'].astype(str) + ' 角色: ' + test['character'].astype(str)

# Split the six comma-separated emotion scores into separate columns
train['emotions'] = train['emotions'].apply(lambda x: [int(_i) for _i in x.split(',')])
train[['love', 'joy', 'fright', 'anger', 'fear', 'sorrow']] = train['emotions'].values.tolist()
test[['love', 'joy', 'fright', 'anger', 'fear', 'sorrow']] = [0, 0, 0, 0, 0, 0]

train.to_csv('data/train.csv',
             columns=['id', 'content', 'character', 'text',
                      'love', 'joy', 'fright', 'anger', 'fear', 'sorrow'],
             sep='\t', index=False)
test.to_csv('data/test.csv',
            columns=['id', 'content', 'character', 'text',
                     'love', 'joy', 'fright', 'anger', 'fear', 'sorrow'],
            sep='\t', index=False)
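As a quick sanity check, each model input is the scene text followed by the character name (the printed line below is illustrative, not necessarily the actual first row):

```python
print(train['text'].iloc[0])
# e.g. "天空下着暴雨…… 角色: o2"
```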
2.2 Assembling Batches
target_cols = ['love', 'joy', 'fright', 'anger', 'fear', 'sorrow']

# You can switch the pretrained language model here, for example:
# PRE_TRAINED_MODEL_NAME = 'bert-base-chinese'
# PRE_TRAINED_MODEL_NAME = 'macbert-base-chinese'
# PRE_TRAINED_MODEL_NAME = 'macbert-large-chinese'
# tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
# base_model = ppnlp.transformers.BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
# PRE_TRAINED_MODEL_NAME = 'bert-wwm-ext-chinese'
# base_model = ppnlp.transformers.BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)

# RoBERTa: load the tokenizer and the pretrained model
PRE_TRAINED_MODEL_NAME = 'roberta-wwm-ext'
tokenizer = ppnlp.transformers.RobertaTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
base_model = ppnlp.transformers.RobertaModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
[2021-10-02 11:19:25,888] [ INFO] - Found /home/aistudio/.paddlenlp/models/roberta-wwm-ext/vocab.txt
[2021-10-02 11:19:25,905] [ INFO] - Already cached /home/aistudio/.paddlenlp/models/roberta-wwm-ext/roberta_chn_base.pdparams
More pretrained models in PaddleNLP:
Besides RoBERTa, PaddleNLP also supports ERNIE, BERT, ELECTRA, and other pretrained models.
The table below summarizes the pretrained models PaddleNLP currently supports. They can be used for question answering, sequence classification, token classification, and other tasks. PaddleNLP also provides 22 sets of pretrained weights, 11 of which are for Chinese language models.

Model | Tokenizer | Supported Task | Model Name
---|---|---|---
BERT | BertTokenizer | BertModel, BertForQuestionAnswering, BertForSequenceClassification, BertForTokenClassification | bert-base-uncased, bert-large-uncased, bert-base-multilingual-uncased, bert-base-cased, bert-base-chinese, bert-base-multilingual-cased, bert-large-cased, bert-wwm-chinese, bert-wwm-ext-chinese
ERNIE | ErnieTokenizer, ErnieTinyTokenizer | ErnieModel, ErnieForQuestionAnswering, ErnieForSequenceClassification, ErnieForTokenClassification | ernie-1.0, ernie-tiny, ernie-2.0-en, ernie-2.0-large-en
RoBERTa | RobertaTokenizer | RobertaModel, RobertaForQuestionAnswering, RobertaForSequenceClassification, RobertaForTokenClassification | roberta-wwm-ext, roberta-wwm-ext-large, rbt3, rbtl3
ELECTRA | ElectraTokenizer | ElectraModel, ElectraForSequenceClassification, ElectraForTokenClassification | electra-small, electra-base, electra-large, chinese-electra-small, chinese-electra-base

Note: the Chinese pretrained models among these are bert-base-chinese, bert-wwm-chinese, bert-wwm-ext-chinese, ernie-1.0, ernie-tiny, roberta-wwm-ext, roberta-wwm-ext-large, rbt3, rbtl3, chinese-electra-base, chinese-electra-small, etc.
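For example, switching the backbone to ERNIE only requires changing the tokenizer and model classes plus the model name from the table above; the rest of the pipeline stays the same (a sketch):

```python
# Sketch: swap in ERNIE instead of RoBERTa (model name from the table above)
ernie_tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie-1.0')
ernie_model = ppnlp.transformers.ErnieModel.from_pretrained('ernie-1.0')
```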
class RoleDataset(Dataset):
    def __init__(self, mode='train', trans_func=None):
        super(RoleDataset, self).__init__()
        if mode == 'train':
            self.data = pd.read_csv('data/train.csv', sep='\t')
        else:
            self.data = pd.read_csv('data/test.csv', sep='\t')
        self.texts = self.data['text'].tolist()
        self.labels = self.data[target_cols].to_dict('records')
        self.trans_func = trans_func

    def __getitem__(self, index):
        text = str(self.texts[index])
        label = self.labels[index]
        sample = {
            'text': text
        }
        for label_col in target_cols:
            sample[label_col] = label[label_col]
        sample = self.trans_func(sample)
        return sample

    def __len__(self):
        return len(self.texts)
# Convert a raw example into token ids and float label arrays
def convert_example(example, tokenizer, max_seq_length=512, is_test=False):
    sample = {}
    encoded_inputs = tokenizer(text=example["text"], max_seq_len=max_seq_length)
    sample['input_ids'] = encoded_inputs["input_ids"]
    sample['token_type_ids'] = encoded_inputs["token_type_ids"]

    sample['love'] = np.array(example["love"], dtype="float32")
    sample['joy'] = np.array(example["joy"], dtype="float32")
    sample['anger'] = np.array(example["anger"], dtype="float32")
    sample['fright'] = np.array(example["fright"], dtype="float32")
    sample['fear'] = np.array(example["fear"], dtype="float32")
    sample['sorrow'] = np.array(example["sorrow"], dtype="float32")
    return sample
max_seq_length = 128
trans_func = partial(
    convert_example,
    tokenizer=tokenizer,
    max_seq_length=max_seq_length)

train_ds = RoleDataset('train', trans_func)
test_ds = RoleDataset('test', trans_func)
print(train_ds[0])
{'input_ids': [101, 1921, 4958, 678, 4708, 3274, 7433, 8024, 157, 8144, 3633, 1762, 5314, 10905, 4959, 7433, 6132, 8024, 800, 5632, 2346, 1316, 1372, 4959, 4708, 1296, 5946, 4638, 1092, 6163, 8024, 2130, 1059, 3274, 7463, 1762, 1920, 7433, 722, 704, 511, 6235, 5682, 131, 157, 8144, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'love': array(0., dtype=float32), 'joy': array(0., dtype=float32), 'anger': array(0., dtype=float32), 'fright': array(0., dtype=float32), 'fear': array(0., dtype=float32), 'sorrow': array(0., dtype=float32)}
epochs = 3
weight_decay = 0.0
data_path = 'data'
warmup_proportion = 0.0
init_from_ckpt = None
batch_size = 64
learning_rate = 5e-5

# # Convert the training set to ids
# train_ds = train_ds.map(partial(convert_example, tokenizer=tokenizer))
# # Build the dataloader for the training set
# train_batch_sampler = paddle.io.BatchSampler(dataset=train_ds, batch_size=32, shuffle=True)
# train_data_loader = paddle.io.DataLoader(dataset=train_ds, batch_sampler=train_batch_sampler, return_list=True)
def create_dataloader(dataset,
                      mode='train',
                      batch_size=1,
                      batchify_fn=None):
    shuffle = True if mode == 'train' else False
    if mode == 'train':
        batch_sampler = paddle.io.DistributedBatchSampler(
            dataset, batch_size=batch_size, shuffle=shuffle)
    else:
        batch_sampler = paddle.io.BatchSampler(
            dataset, batch_size=batch_size, shuffle=shuffle)
    return paddle.io.DataLoader(
        dataset=dataset,
        batch_sampler=batch_sampler,
        collate_fn=batchify_fn,
        return_list=True)
def collate_func(batch_data):
    # Size of the current batch
    batch_size = len(batch_data)
    # Return an empty dict for an empty batch
    if batch_size == 0:
        return {}
    input_ids_list, attention_mask_list = [], []
    love_list, joy_list, anger_list = [], [], []
    fright_list, fear_list, sorrow_list = [], [], []
    # Convert every field of every instance to a tensor
    for instance in batch_data:
        input_ids_temp = instance["input_ids"]
        attention_mask_temp = instance["token_type_ids"]
        love = instance['love']
        joy = instance['joy']
        anger = instance['anger']
        fright = instance['fright']
        fear = instance['fear']
        sorrow = instance['sorrow']
        input_ids_list.append(paddle.to_tensor(input_ids_temp, dtype="int64"))
        attention_mask_list.append(paddle.to_tensor(attention_mask_temp, dtype="int64"))
        love_list.append(love)
        joy_list.append(joy)
        anger_list.append(anger)
        fright_list.append(fright)
        fear_list.append(fear)
        sorrow_list.append(sorrow)
    # Pad all sequences in the batch to the same length and stack the labels
    return {"input_ids": Pad(pad_val=0, axis=0)(input_ids_list),
            "token_type_ids": Pad(pad_val=0, axis=0)(attention_mask_list),
            "love": Stack(dtype="int64")(love_list),
            "joy": Stack(dtype="int64")(joy_list),
            "anger": Stack(dtype="int64")(anger_list),
            "fright": Stack(dtype="int64")(fright_list),
            "fear": Stack(dtype="int64")(fear_list),
            "sorrow": Stack(dtype="int64")(sorrow_list),
            }
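The Pad and Stack batchify helpers from paddlenlp.data behave as in this minimal illustration:

```python
import numpy as np
from paddlenlp.data import Pad, Stack

# Pad right-pads every sequence in the batch to the longest length
print(Pad(pad_val=0, axis=0)([[1, 2, 3], [4, 5]]))
# -> [[1 2 3]
#     [4 5 0]]

# Stack stacks per-sample scalar labels into one array (casting to int64 here)
print(Stack(dtype="int64")([np.array(1.0), np.array(0.0)]))
# -> [1 0]
```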
train_data_loader = create_dataloader(
    train_ds,
    mode='train',
    batch_size=batch_size,
    batchify_fn=collate_func)
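An optional sanity check on the dataloader (assuming the cells above have run): one batch is a dict with padded id tensors plus one label tensor per emotion:

```python
one_batch = next(iter(train_data_loader))
print(one_batch['input_ids'].shape)   # [batch_size, max_len_in_batch]
print(one_batch['love'].shape)        # [batch_size]
```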
2.3 Model Construction
class EmotionClassifier(nn.Layer):
    def __init__(self, bert, n_classes):
        super(EmotionClassifier, self).__init__()
        self.bert = bert
        # One classification head per emotion, all sharing the same encoder
        self.out_love = nn.Linear(self.bert.config["hidden_size"], n_classes)
        self.out_joy = nn.Linear(self.bert.config["hidden_size"], n_classes)
        self.out_fright = nn.Linear(self.bert.config["hidden_size"], n_classes)
        self.out_anger = nn.Linear(self.bert.config["hidden_size"], n_classes)
        self.out_fear = nn.Linear(self.bert.config["hidden_size"], n_classes)
        self.out_sorrow = nn.Linear(self.bert.config["hidden_size"], n_classes)

    def forward(self, input_ids, token_type_ids):
        _, pooled_output = self.bert(
            input_ids=input_ids,
            token_type_ids=token_type_ids
        )
        love = self.out_love(pooled_output)
        joy = self.out_joy(pooled_output)
        fright = self.out_fright(pooled_output)
        anger = self.out_anger(pooled_output)
        fear = self.out_fear(pooled_output)
        sorrow = self.out_sorrow(pooled_output)
        return {
            'love': love, 'joy': joy, 'fright': fright,
            'anger': anger, 'fear': fear, 'sorrow': sorrow,
        }
# Each emotion is scored on a 0-3 scale, so every head predicts 4 classes
model = EmotionClassifier(base_model, 4)
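A quick smoke test of the multi-head model (assuming train_data_loader from above): each of the six heads should return logits of shape [batch_size, 4]:

```python
sample_batch = next(iter(train_data_loader))
with paddle.no_grad():
    outs = model(input_ids=sample_batch['input_ids'],
                 token_type_ids=sample_batch['token_type_ids'])
for name, head_logits in outs.items():
    print(name, head_logits.shape)  # e.g. love [64, 4]
```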
num_train_epochs = 3
num_training_steps = len(train_data_loader) * num_train_epochs

# Learning-rate scheduler that decays the lr during training
lr_scheduler = LinearDecayWithWarmup(learning_rate, num_training_steps, 0.0)

# Generate parameter names needed to perform weight decay.
# All bias and LayerNorm parameters are excluded.
decay_params = [
    p.name for n, p in model.named_parameters()
    if not any(nd in n for nd in ["bias", "norm"])
]

# Optimizer
optimizer = paddle.optimizer.AdamW(
    learning_rate=lr_scheduler,
    parameters=model.parameters(),
    weight_decay=0.0,
    apply_decay_param_fun=lambda x: x in decay_params)

# Cross-entropy loss
criterion = paddle.nn.loss.CrossEntropyLoss()
# Accuracy metric used during evaluation
metric = paddle.metric.Accuracy()
2.4 Model Training
def do_train(model, data_loader, criterion, optimizer, scheduler, metric):
    model.train()
    global_step = 0
    tic_train = time.time()
    log_steps = 100
    for epoch in range(num_train_epochs):
        losses = []
        for step, sample in enumerate(data_loader):
            input_ids = sample["input_ids"]
            token_type_ids = sample["token_type_ids"]

            outputs = model(input_ids=input_ids,
                            token_type_ids=token_type_ids)

            # The multi-task loss is the sum of the six per-emotion cross-entropy losses
            loss_love = criterion(outputs['love'], sample['love'])
            loss_joy = criterion(outputs['joy'], sample['joy'])
            loss_fright = criterion(outputs['fright'], sample['fright'])
            loss_anger = criterion(outputs['anger'], sample['anger'])
            loss_fear = criterion(outputs['fear'], sample['fear'])
            loss_sorrow = criterion(outputs['sorrow'], sample['sorrow'])
            loss = loss_love + loss_joy + loss_fright + loss_anger + loss_fear + loss_sorrow

            for label_col in target_cols:
                correct = metric.compute(outputs[label_col], sample[label_col])
                metric.update(correct)
            acc = metric.accumulate()

            losses.append(loss.numpy())
            loss.backward()
            global_step += 1

            # Log training metrics every log_steps steps
            if global_step % log_steps == 0:
                print("global step %d, epoch: %d, batch: %d, loss: %.5f, accuracy: %.5f, speed: %.2f step/s"
                      % (global_step, epoch, step, loss, acc,
                         log_steps / (time.time() - tic_train)))

            optimizer.step()
            scheduler.step()
            optimizer.clear_grad()
        metric.reset()
    return np.mean(losses)

do_train(model, train_data_loader, criterion, optimizer, lr_scheduler, metric)
global step 100, epoch: 0, batch: 99, loss: 1.79375, accuracy: 0.90443, speed: 2.79 step/s
global step 200, epoch: 0, batch: 199, loss: 1.37059, accuracy: 0.91055, speed: 1.39 step/s
global step 300, epoch: 0, batch: 299, loss: 1.49624, accuracy: 0.91206, speed: 0.91 step/s
global step 400, epoch: 0, batch: 399, loss: 1.69737, accuracy: 0.91336, speed: 0.68 step/s
global step 500, epoch: 0, batch: 499, loss: 1.92461, accuracy: 0.91394, speed: 0.54 step/s
global step 600, epoch: 1, batch: 24, loss: 1.61604, accuracy: 0.92240, speed: 0.45 step/s
global step 700, epoch: 1, batch: 124, loss: 1.60615, accuracy: 0.91800, speed: 0.39 step/s
global step 800, epoch: 1, batch: 224, loss: 1.79262, accuracy: 0.91875, speed: 0.34 step/s
global step 900, epoch: 1, batch: 324, loss: 1.38649, accuracy: 0.92009, speed: 0.30 step/s
global step 1000, epoch: 1, batch: 424, loss: 1.08904, accuracy: 0.92024, speed: 0.27 step/s
global step 1100, epoch: 1, batch: 524, loss: 1.50713, accuracy: 0.92091, speed: 0.25 step/s
global step 1200, epoch: 2, batch: 49, loss: 1.18204, accuracy: 0.92964, speed: 0.23 step/s
global step 1300, epoch: 2, batch: 149, loss: 1.01111, accuracy: 0.93123, speed: 0.21 step/s
global step 1400, epoch: 2, batch: 249, loss: 1.04157, accuracy: 0.93172, speed: 0.19 step/s
global step 1500, epoch: 2, batch: 349, loss: 1.35108, accuracy: 0.93169, speed: 0.18 step/s
global step 1600, epoch: 2, batch: 449, loss: 1.44060, accuracy: 0.93215, speed: 0.17 step/s
global step 1700, epoch: 2, batch: 549, loss: 1.15994, accuracy: 0.93222, speed: 0.16 step/s
1.2622888
2.5 Model Prediction
from collections import defaultdict

test_data_loader = create_dataloader(
    test_ds,
    mode='test',
    batch_size=batch_size,
    batchify_fn=collate_func)

# Quick check: run a single batch through the model and inspect the predictions
test_pred = defaultdict(list)
for step, batch in tqdm(enumerate(test_data_loader)):
    b_input_ids = batch['input_ids']
    token_type_ids = batch['token_type_ids']
    logits = model(input_ids=b_input_ids, token_type_ids=token_type_ids)
    for col in target_cols:
        out2 = paddle.argmax(logits[col], axis=1)
        test_pred[col].append(out2.numpy())
    print(test_pred)
    break
0it [00:00, ?it/s]
defaultdict(<class 'list'>, {'love': [array([0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
dtype=int64)], 'joy': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
dtype=int64)], 'fright': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
dtype=int64)], 'anger': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
dtype=int64)], 'fear': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
dtype=int64)], 'sorrow': [array([0, 0, 0, 2, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
dtype=int64)]})
0it [00:00, ?it/s]
def predict(model, test_data_loader):
    test_pred = defaultdict(list)
    model.eval()
    for step, batch in tqdm(enumerate(test_data_loader)):
        b_input_ids = batch['input_ids']
        token_type_ids = batch['token_type_ids']
        with paddle.no_grad():
            logits = model(input_ids=b_input_ids, token_type_ids=token_type_ids)
            # Take the argmax over the 4 classes as the predicted score of each emotion
            for col in target_cols:
                out2 = paddle.argmax(logits[col], axis=1)
                test_pred[col].extend(out2.numpy().tolist())
    return test_pred

submit = pd.read_csv('data/submit_example.tsv', sep='\t')
test_pred = predict(model, test_data_loader)
334it [00:43, 7.65it/s]
print(test_pred['love'][:10])
print(len(test_pred['love']))
[0, 0, 0, 0, 0, 0, 0, 0, 3, 0]
21376
2.6 Writing Out the Predictions
label_preds = []
for col in target_cols:
    preds = test_pred[col]
    label_preds.append(preds)
print(len(label_preds[0]))

sub = submit.copy()
sub['emotion'] = np.stack(label_preds, axis=1).tolist()
sub['emotion'] = sub['emotion'].apply(lambda x: ','.join([str(i) for i in x]))
sub.to_csv('baseline_{}.tsv'.format(PRE_TRAINED_MODEL_NAME), sep='\t', index=False)
sub.head()
21376
 | id | emotion
---|---|---
0 | 34170_0002_A_12 | 0,0,0,0,0,0
1 | 34170_0002_A_14 | 0,0,0,0,0,0
2 | 34170_0003_A_16 | 0,0,0,0,0,0
3 | 34170_0003_A_17 | 0,0,0,0,0,2
4 | 34170_0003_A_18 | 0,0,0,0,0,0
The code above writes baseline_roberta-wwm-ext.tsv (the file name follows the chosen pretrained model); download it and submit it on the competition page.
Below are the scores of some of my submissions:
Model | Score
---|---
roberta-wwm-ext | 0.674
macbert-large-chinese | 0.6736
macbert-base-chinese | 0.6738
3. Model Optimization Ideas
1. Data augmentation: Chinese data augmentation tools, back-translation, etc.
2. Try different pretrained models, tune hyperparameters, etc.
3. 5-fold cross-validation, fusing the results of multiple models, etc. (see the sketch right after this list)
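As a starting point for idea 3, a minimal 5-fold split sketch (assuming scikit-learn is available; each fold would train its own model on data/train.csv, and the per-fold test predictions can then be averaged or voted):

```python
import pandas as pd
from sklearn.model_selection import KFold  # assumes scikit-learn is installed

full_train = pd.read_csv('data/train.csv', sep='\t')
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, valid_idx) in enumerate(kf.split(full_train)):
    fold_train = full_train.iloc[train_idx]
    fold_valid = full_train.iloc[valid_idx]
    print(f"fold {fold}: train={len(fold_train)}, valid={len(fold_valid)}")
    # ... train one model per fold here, then ensemble the fold predictions
```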
About PaddleNLP: when in doubt, consult the documentation: PaddleNLP docs
PaddleNLP on GitHub: https://github.com/PaddlePaddle/PaddleNLP - if you run into problems, you can open an issue there.
Below, we fuse the three models' predictions by majority voting.
macbert_data = pd.read_csv('baseline_macbert-base-chinese.tsv', sep='\t')
macbert_result = macbert_data['emotion'].tolist()
macbert_data.head()
 | id | emotion
---|---|---
0 | 34170_0002_A_12 | 0,0,0,0,0,0
1 | 34170_0002_A_14 | 0,0,0,0,0,0
2 | 34170_0003_A_16 | 0,0,0,0,0,0
3 | 34170_0003_A_17 | 0,0,0,0,0,0
4 | 34170_0003_A_18 | 0,0,0,0,0,0
macbert_large_data = pd.read_csv('baseline_macbert-large-chinese.tsv', sep='\t')
macbert_large_result = macbert_large_data['emotion'].tolist()
macbert_large_data.head()
 | id | emotion
---|---|---
0 | 34170_0002_A_12 | 0,0,0,0,0,0
1 | 34170_0002_A_14 | 0,0,0,0,0,0
2 | 34170_0003_A_16 | 0,0,0,0,0,0
3 | 34170_0003_A_17 | 0,0,0,0,0,0
4 | 34170_0003_A_18 | 0,0,0,0,0,0
roberta_data = pd.read_csv('baseline_roberta-wwm-ext.tsv', sep='\t')
roberta_result = roberta_data['emotion'].tolist()
roberta_data.head()
 | id | emotion
---|---|---
0 | 34170_0002_A_12 | 0,0,0,0,0,0
1 | 34170_0002_A_14 | 0,0,0,0,0,0
2 | 34170_0003_A_16 | 0,0,0,0,0,0
3 | 34170_0003_A_17 | 0,0,0,0,0,2
4 | 34170_0003_A_18 | 0,0,0,0,0,0
from collections import Counter

# Return the (label, votes) pair of the most frequent label, e.g.
# Counter('abracadabra').most_common(3) -> [('a', 5), ('r', 2), ('b', 2)]
def get_counts(list_x):
    count = Counter(list_x).most_common(1)
    return count[0]
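For example, voting among the predictions 0, 0, 2 for one emotion returns label 0 with 2 of 3 votes:

```python
print(get_counts([0, 0, 2]))  # -> (0, 2)
```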
merge_result = []
result_analyse = []
for i in range(len(macbert_result)):
    x1_arr = macbert_result[i].split(',')
    x2_arr = macbert_large_result[i].split(',')
    x3_arr = roberta_result[i].split(',')
    result = []
    # Majority vote per emotion across the three models
    for x1, x2, x3 in zip(x1_arr, x2_arr, x3_arr):
        list_x = [int(x1), int(x2), int(x3)]
        key, count = get_counts(list_x)
        result.append(key)
        # Record positions where the three models did not fully agree
        if count != 3:
            result_analyse.append([i, count])
    merge_result.append(result)
print(result_analyse[:10])
[[3, 2], [5, 2], [5, 2], [8, 2], [8, 2], [9, 2], [41, 2], [64, 2], [71, 2], [72, 2]]
sub_merge = submit.copy()
sub_merge['emotion'] = merge_result
sub_merge['emotion'] = sub_merge['emotion'].apply(lambda x: ','.join([str(i) for i in x]))
sub_merge.to_csv('baseline_merge.tsv', sep='\t', index=False)
sub_merge.head()
 | id | emotion
---|---|---
0 | 34170_0002_A_12 | 0,0,0,0,0,0
1 | 34170_0002_A_14 | 0,0,0,0,0,0
2 | 34170_0003_A_16 | 0,0,0,0,0,0
3 | 34170_0003_A_17 | 0,0,0,0,0,0
4 | 34170_0003_A_18 | 0,0,0,0,0,0
Model | Score
---|---
3-model voting | 0.67688692
4. More PaddleEdu Content
1. The one-stop deep learning online encyclopedia awesome-DeepLearning from PaddleEdu offers more capabilities; stay tuned:
- Introductory deep learning course

- Deep learning Q&A

- Featured courses

- Industry practice

If you run into any problems while using PaddleEdu, feel free to open an issue at awesome-DeepLearning; for more deep learning materials, see the PaddlePaddle deep learning platform.
Remember to leave a Star ⭐ to bookmark it~~
2. PaddlePaddle PaddleEdu QQ group (for technical discussion)
The QQ group already has 2000+ members learning together; you are welcome to scan the code and join.