A Super-Simple Baseline for 13-Class Twitter Sentiment Classification
An autumn NLP appetizer: 13-class Twitter text sentiment classification, a great choice for getting started with NLP.
1. Twitter Text Sentiment Classification
Practice competition: https://www.heywhale.com/home/activity/detail/611cbe90ba12a0001753d1e9/content
This competition uses 13 sentiment classes, so scores tend to be relatively low.
Tweets have several distinctive characteristics. First, unlike Facebook posts, tweets are text-based and can be downloaded through the Twitter API after registration, which makes them convenient as a corpus for natural language processing. Second, Twitter limits each tweet to 140 characters; real tweets vary in length and are generally short, some consisting of only a sentence or even a phrase, which makes sentiment annotation difficult. Moreover, tweets are usually written casually: they carry a lot of emotion, are highly colloquial, are full of abbreviations and Internet slang, and emoticons, neologisms, and slang appear everywhere. They are therefore very different from formal text, and sentiment classification methods designed for formal text perform poorly on tweets.
Public sentiment plays a growing role in many areas, including movie reviews, consumer confidence, political elections, and stock-trend prediction. Sentiment analysis of public media content is a fundamental step in analyzing public sentiment.
2. Data Description
The dataset is based on tweets posted by Twitter users, with some fields adjusted; the field definitions provided by this competition are authoritative.
The fields are as follows:
- tweet_id string: the unique ID of the tweet, e.g. test_0, train_1024
- content string: the tweet text
- label int: the sentiment class of the tweet, 13 classes in total
The training set train.csv contains 30,000 rows with fields tweet_id, content, and label; the test set test.csv contains 10,000 rows with fields tweet_id and content.
tweet_id,content,label
tweet_1,Layin n bed with a headache ughhhh...waitin on your call...,1
tweet_2,Funeral ceremony...gloomy friday...,1
tweet_3,wants to hang out with friends SOON!,2
tweet_4,"@dannycastillo We want to trade with someone who has Houston tickets, but no one will.",3
tweet_5,"I should be sleep, but im not! thinking about an old friend who I want. but he's married now. damn, & he wants me 2! scandalous!",1
tweet_6,Hmmm.
http://www.djhero.com/ is down,4
tweet_7,@charviray Charlene my love. I miss you,1
tweet_8,cant fall asleep,3
!head data/data107057/train.csv
!head data/data107057/test.csv
!head data/data107057/submission.csv
3. Dataset Definition
# AI Studio currently ships paddlenlp 2.0.7 by default; it can be upgraded to 2.0.8
# !pip install -U paddlenlp
# Custom read() function for building a PaddleNLP dataset
import pandas as pd
train = pd.read_csv('data/data107057/train.csv')
test = pd.read_csv('data/data107057/test.csv')
sub = pd.read_csv('data/data107057/submission.csv')
print(max(train['content'].str.len()))
train.head()
166
| | tweet_id | content | label |
|---|---|---|---|
| 0 | tweet_0 | @tiffanylue i know i was listenin to bad habi... | 0 |
| 1 | tweet_1 | Layin n bed with a headache ughhhh...waitin o... | 1 |
| 2 | tweet_2 | Funeral ceremony...gloomy friday... | 1 |
| 3 | tweet_3 | wants to hang out with friends SOON! | 2 |
| 4 | tweet_4 | @dannycastillo We want to trade with someone w... | 3 |
def read(pd_data):
for index, item in pd_data.iterrows():
yield {'text': item['content'], 'label': item['label'], 'qid': item['tweet_id'].strip('tweet_')}
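A small caveat on the line above: `str.strip('tweet_')` strips any leading or trailing characters from the set {t, w, e, _}, not the literal prefix. It happens to work here because everything after the prefix is numeric, but a more explicit helper (optional, purely illustrative) would be:

```python
def tweet_qid(tweet_id):
    # remove the literal "tweet_" prefix instead of stripping a character set
    return tweet_id[len('tweet_'):] if tweet_id.startswith('tweet_') else tweet_id

print(tweet_qid('tweet_1024'))  # -> '1024'
```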
# Split into training and validation (dev) sets
from paddle.io import Dataset, Subset
from paddlenlp.datasets import MapDataset
from paddlenlp.datasets import load_dataset
dataset = load_dataset(read, pd_data=train,lazy=False)
dev_ds = Subset(dataset=dataset, indices=[i for i in range(len(dataset)) if i % 5 == 1])
train_ds = Subset(dataset=dataset, indices=[i for i in range(len(dataset)) if i % 5 != 1])
for i in range(5):
print(train_ds[i])
{'text': '@tiffanylue i know i was listenin to bad habit earlier and i started freakin at his part =[', 'label': 0, 'qid': '0'}
{'text': 'Funeral ceremony...gloomy friday...', 'label': 1, 'qid': '2'}
{'text': 'wants to hang out with friends SOON!', 'label': 2, 'qid': '3'}
{'text': '@dannycastillo We want to trade with someone who has Houston tickets, but no one will.', 'label': 3, 'qid': '4'}
{'text': "I should be sleep, but im not! thinking about an old friend who I want. but he's married now. damn, & he wants me 2! scandalous!", 'label': 1, 'qid': '5'}
# Then convert to MapDataset
train_ds = MapDataset(train_ds)
dev_ds = MapDataset(dev_ds)
print(len(train_ds))
print(len(dev_ds))
24000
6000
4. Model Selection & Data Processing
!pip install regex
Looking in indexes: https://mirror.baidu.com/pypi/simple/
Requirement already satisfied: regex in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (2021.8.28)
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer
# Load the model by specifying the pretrained model name
model = SkepForSequenceClassification.from_pretrained(pretrained_model_name_or_path="skep_ernie_2.0_large_en", num_classes=13)
# Likewise, load the matching tokenizer by name; it handles text processing such as splitting text into tokens and converting tokens to ids.
tokenizer = SkepTokenizer.from_pretrained(pretrained_model_name_or_path="skep_ernie_2.0_large_en")
[2021-09-08 10:46:03,535] [ INFO] - Already cached /home/aistudio/.paddlenlp/models/skep_ernie_2.0_large_en/skep_ernie_2.0_large_en.pdparams
[2021-09-08 10:46:15,190] [ INFO] - Found /home/aistudio/.paddlenlp/models/skep_ernie_2.0_large_en/skep_ernie_2.0_large_en.vocab.txt
from visualdl import LogWriter
writer = LogWriter("./log")
def convert_example(example,
tokenizer,
max_seq_length=512,
is_test=False):
# Convert the raw example into a format the model can consume; encoded_inputs is a dict containing fields such as input_ids and token_type_ids
encoded_inputs = tokenizer(
text=example["text"], max_seq_len=max_seq_length)
# input_ids: the vocabulary ids of the tokens after tokenizing the text
input_ids = encoded_inputs["input_ids"]
# token_type_ids: whether each token belongs to sentence 1 or sentence 2, i.e. the segment ids
token_type_ids = encoded_inputs["token_type_ids"]
if not is_test:
# label: the sentiment class
label = np.array([example["label"]], dtype="int64")
return input_ids, token_type_ids, label
else:
# qid: the id of each example
qid = np.array([example["qid"]], dtype="int64")
return input_ids, token_type_ids, qid
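As a quick sanity check of what convert_example produces, you can run the following in the same session (tokenizer and train_ds are already defined above; the exact ids depend on the SKEP vocabulary):

```python
import numpy as np

# inspect one converted training example
input_ids, token_type_ids, label = convert_example(train_ds[0], tokenizer, max_seq_length=166)
print(len(input_ids))        # number of token ids, including special tokens
print(token_type_ids[:5])    # all zeros for single-sentence input
print(label)                 # e.g. array([0])
```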
def create_dataloader(dataset,
trans_fn=None,
mode='train',
batch_size=1,
batchify_fn=None):
if trans_fn:
dataset = dataset.map(trans_fn)
shuffle = True if mode == 'train' else False
if mode == "train":
sampler = paddle.io.DistributedBatchSampler(
dataset=dataset, batch_size=batch_size, shuffle=shuffle)
else:
sampler = paddle.io.BatchSampler(
dataset=dataset, batch_size=batch_size, shuffle=shuffle)
dataloader = paddle.io.DataLoader(
dataset, batch_sampler=sampler, collate_fn=batchify_fn)
return dataloader
import numpy as np
import paddle
@paddle.no_grad()
def evaluate(model, criterion, metric, data_loader):
model.eval()
metric.reset()
losses = []
for batch in data_loader:
input_ids, token_type_ids, labels = batch
logits = model(input_ids, token_type_ids)
loss = criterion(logits, labels)
losses.append(loss.numpy())
correct = metric.compute(logits, labels)
metric.update(correct)
accu = metric.accumulate()
# print("eval loss: %.5f, accu: %.5f" % (np.mean(losses), accu))
model.train()
metric.reset()
return np.mean(losses), accu
import os
from functools import partial
import numpy as np
import paddle
import paddle.nn.functional as F
from paddlenlp.data import Stack, Tuple, Pad
# batch size
batch_size = 60
# maximum text sequence length (166, the longest tweet found above with pandas)
max_seq_length = 166
# convert examples into the format the model expects
trans_func = partial(
convert_example,
tokenizer=tokenizer,
max_seq_length=max_seq_length)
# Assemble individual samples into batches, e.g.:
# pad text sequences of different lengths to the longest sequence in the batch
# stack the labels of all samples together
batchify_fn = lambda samples, fn=Tuple(
Pad(axis=0, pad_val=tokenizer.pad_token_id), # input_ids
Pad(axis=0, pad_val=tokenizer.pad_token_type_id), # token_type_ids
Stack() # labels
): [data for data in fn(samples)]
train_data_loader = create_dataloader(
train_ds,
mode='train',
batch_size=batch_size,
batchify_fn=batchify_fn,
trans_fn=trans_func)
dev_data_loader = create_dataloader(
dev_ds,
mode='dev',
batch_size=batch_size,
batchify_fn=batchify_fn,
trans_fn=trans_func)
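To see what batchify_fn actually does, here is a tiny illustrative check with made-up token ids (not real vocabulary entries); the shapes in the comment are what you should expect:

```python
# two fake samples of different lengths: (input_ids, token_type_ids, label)
fake_samples = [
    ([1, 2, 3], [0, 0, 0], [1]),
    ([4, 5, 6, 7, 8], [0, 0, 0, 0, 0], [3]),
]
ids, types, labels = batchify_fn(fake_samples)
# input_ids and token_type_ids are padded to the longest sample, labels are stacked
print(ids.shape, types.shape, labels.shape)  # (2, 5) (2, 5) (2, 1)
```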
import time
# number of training epochs
epochs = 10
# directory for saving model parameters during training
ckpt_dir = "skep_ckpt"
# len(train_data_loader) is the number of steps in one epoch
num_training_steps = len(train_data_loader) * epochs
# AdamW optimizer
optimizer = paddle.optimizer.AdamW(
learning_rate=2e-5,
parameters=model.parameters())
# cross-entropy loss
criterion = paddle.nn.loss.CrossEntropyLoss()
# accuracy metric
metric = paddle.metric.Accuracy()
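Note that num_training_steps is computed above but never passed to the optimizer. A common optional variant (not part of this baseline) is to drive the learning rate with a linear warmup-and-decay schedule; a sketch using PaddleNLP's LinearDecayWithWarmup:

```python
from paddlenlp.transformers import LinearDecayWithWarmup

# warm up over the first 10% of steps, then decay linearly (illustrative setting)
lr_scheduler = LinearDecayWithWarmup(2e-5, num_training_steps, 0.1)
optimizer = paddle.optimizer.AdamW(
    learning_rate=lr_scheduler,
    parameters=model.parameters())
# if you use this, call lr_scheduler.step() right after optimizer.step() in the training loop
```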
5. Training
Train and save the best checkpoint.
# start training
global_step = 0
best_val_acc=0
tic_train = time.time()
best_accu = 0
for epoch in range(1, epochs + 1):
for step, batch in enumerate(train_data_loader, start=1):
input_ids, token_type_ids, labels = batch
# feed the data to the model
logits = model(input_ids, token_type_ids)
# compute the loss
loss = criterion(logits, labels)
# predicted class probabilities
probs = F.softmax(logits, axis=1)
# compute accuracy
correct = metric.compute(probs, labels)
metric.update(correct)
acc = metric.accumulate()
global_step += 1
if global_step % 10 == 0:
print(
"global step %d, epoch: %d, batch: %d, loss: %.5f, accu: %.5f, speed: %.2f step/s"
% (global_step, epoch, step, loss, acc,
10 / (time.time() - tic_train)))
tic_train = time.time()
# backpropagate the gradients and update the parameters
loss.backward()
optimizer.step()
optimizer.clear_grad()
if global_step % 100 == 0:
# evaluate the current model
eval_loss, eval_accu = evaluate(model, criterion, metric, dev_data_loader)
print("eval on dev loss: {:.8}, accu: {:.8}".format(eval_loss, eval_accu))
# log eval metrics
writer.add_scalar(tag="eval/loss", step=global_step, value=eval_loss)
writer.add_scalar(tag="eval/acc", step=global_step, value=eval_accu)
# log train metrics
writer.add_scalar(tag="train/loss", step=global_step, value=loss)
writer.add_scalar(tag="train/acc", step=global_step, value=acc)
save_dir = "best_checkpoint"
# save the best checkpoint
if eval_accu>best_val_acc:
if not os.path.exists(save_dir):
os.mkdir(save_dir)
best_val_acc=eval_accu
print(f"模型保存在 {global_step} 步, 最佳eval准确度为{best_val_acc:.8f}!")
save_param_path = os.path.join(save_dir, 'best_model.pdparams')
paddle.save(model.state_dict(), save_param_path)
fh = open('best_checkpoint/best_model.txt', 'w', encoding='utf-8')
fh.write(f"模型保存在 {global_step} 步, 最佳eval准确度为{best_val_acc:.8f}!")
fh.close()
6. Prediction
After training, restart the environment to free GPU memory, then run prediction.
# read the data
import pandas as pd
from paddlenlp.datasets import load_dataset
from paddle.io import Dataset, Subset
from paddlenlp.datasets import MapDataset
test = pd.read_csv('data/data107057/test.csv')
def read_test(pd_data):
for index, item in pd_data.iterrows():
yield {'text': item['content'], 'label': 0, 'qid': item['tweet_id'].strip('tweet_')}
test_ds = load_dataset(read_test, pd_data=test,lazy=False)
# Then convert to MapDataset
test_ds = MapDataset(test_ds)
print(len(test_ds))
10000
def convert_example(example,
tokenizer,
max_seq_length=512,
is_test=False):
# Convert the raw example into a format the model can consume; encoded_inputs is a dict containing fields such as input_ids and token_type_ids
encoded_inputs = tokenizer(
text=example["text"], max_seq_len=max_seq_length)
# input_ids: the vocabulary ids of the tokens after tokenizing the text
input_ids = encoded_inputs["input_ids"]
# token_type_ids: whether each token belongs to sentence 1 or sentence 2, i.e. the segment ids
token_type_ids = encoded_inputs["token_type_ids"]
if not is_test:
# label: the sentiment class
label = np.array([example["label"]], dtype="int64")
return input_ids, token_type_ids, label
else:
# qid: the id of each example
qid = np.array([example["qid"]], dtype="int64")
return input_ids, token_type_ids, qid
def create_dataloader(dataset,
trans_fn=None,
mode='train',
batch_size=1,
batchify_fn=None):
if trans_fn:
dataset = dataset.map(trans_fn)
shuffle = True if mode == 'train' else False
if mode == "train":
sampler = paddle.io.DistributedBatchSampler(
dataset=dataset, batch_size=batch_size, shuffle=shuffle)
else:
sampler = paddle.io.BatchSampler(
dataset=dataset, batch_size=batch_size, shuffle=shuffle)
dataloader = paddle.io.DataLoader(
dataset, batch_sampler=sampler, collate_fn=batchify_fn)
return dataloader
from paddlenlp.transformers import SkepForSequenceClassification, SkepTokenizer
# Load the model by specifying the pretrained model name
model = SkepForSequenceClassification.from_pretrained(pretrained_model_name_or_path="skep_ernie_2.0_large_en", num_classes=13)
# Likewise, load the matching tokenizer by name; it handles text processing such as splitting text into tokens and converting tokens to ids.
tokenizer = SkepTokenizer.from_pretrained(pretrained_model_name_or_path="skep_ernie_2.0_large_en")
[2021-09-08 15:32:48,977] [ INFO] - Downloading https://paddlenlp.bj.bcebos.com/models/transformers/skep/skep_ernie_2.0_large_en.pdparams and saved to /home/aistudio/.paddlenlp/models/skep_ernie_2.0_large_en
[2021-09-08 15:32:48,981] [ INFO] - Downloading skep_ernie_2.0_large_en.pdparams from https://paddlenlp.bj.bcebos.com/models/transformers/skep/skep_ernie_2.0_large_en.pdparams
100%|██████████| 1309197/1309197 [00:17<00:00, 73637.03it/s]
[2021-09-08 15:33:17,610] [ INFO] - Downloading skep_ernie_2.0_large_en.vocab.txt from https://paddlenlp.bj.bcebos.com/models/transformers/skep/skep_ernie_2.0_large_en.vocab.txt
100%|██████████| 227/227 [00:00<00:00, 1138.42it/s]
from functools import partial
import numpy as np
import paddle
import paddle.nn.functional as F
from paddlenlp.data import Stack, Tuple, Pad
batch_size=32
max_seq_length=166
# process the test set
trans_func = partial(
convert_example,
tokenizer=tokenizer,
max_seq_length=max_seq_length,
is_test=True)
batchify_fn = lambda samples, fn=Tuple(
Pad(axis=0, pad_val=tokenizer.pad_token_id), # input
Pad(axis=0, pad_val=tokenizer.pad_token_type_id), # segment
Stack() # qid
): [data for data in fn(samples)]
test_data_loader = create_dataloader(
test_ds,
mode='test',
batch_size=batch_size,
batchify_fn=batchify_fn,
trans_fn=trans_func)
import os
# change the parameter path according to your actual run
params_path = 'best_checkpoint/best_model.pdparams'
if params_path and os.path.isfile(params_path):
# load the model parameters
state_dict = paddle.load(params_path)
model.set_dict(state_dict)
print("Loaded parameters from %s" % params_path)
Loaded parameters from best_checkpoint/best_model.pdparams
results = []
# switch the model to eval mode, disabling dropout and other stochastic behaviour
model.eval()
for batch in test_data_loader:
input_ids, token_type_ids, qids = batch
# feed the data to the model
logits = model(input_ids, token_type_ids)
# predict the class
probs = F.softmax(logits, axis=-1)
idx = paddle.argmax(probs, axis=1).numpy()
idx = idx.tolist()
qids = qids.numpy().tolist()
results.extend(zip(qids, idx))
# write the prediction results
with open("submission.csv", 'w', encoding="utf-8") as f:
f.write("tweet_id,label\n")
for (idx, label) in results:
f.write('tweet_'+str(idx[0])+","+str(label)+"\n")
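An optional sanity check on the generated file (pandas was already imported in this section):

```python
# the submission should have 10000 rows and the columns tweet_id, label
sub_check = pd.read_csv("submission.csv")
print(sub_check.shape)
print(sub_check.head())
```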
7. Notes
- 1. pandas is convenient for reading flat files.
- 2. It is best to set max_seq_length from the maximum text length computed with pandas.
- 3. pandas can also be used to analyze the data distribution (see the sketch after this list).
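For example, a quick look at the label and length distribution of the training set:

```python
import pandas as pd

train = pd.read_csv('data/data107057/train.csv')
# how many tweets fall into each of the 13 sentiment classes
print(train['label'].value_counts().sort_index())
# length statistics of the tweet texts (the maximum is 166 here)
print(train['content'].str.len().describe())
```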