

0 Project Background

Information extraction aims to pull structured information out of unstructured natural-language text. This series of projects discusses how to build a resume information extraction task both well and fast.

In the preceding project, Resume Information Extraction (5): Improving Key Information Extraction with VI-LayoutXLM, we found that the VI-LayoutXLM model provided by PaddleOCR extracts text from image-format resumes well, which in turn improves key information extraction.

Next, we fine-tune on the extracted text with PaddleNLP, so that we can meaningfully compare the accuracy of the two pipelines: [image → resume key information] and [Word document → resume key information].

This project also deploys the image-resume information extraction application online.

0.1 References

1 Environment Setup

# Unzip the dataset
!unzip data/data40148/train_20200121.zip
# Install dependencies
!pip install python-docx
!pip install pypinyin
!pip install LAC
!pip install --upgrade paddlenlp
!pip install --upgrade paddleocr
!pip install pymupdf
# After the first upgrade, restart the kernel for it to take effect
import datetime
import os
import fitz  # fitz is provided by `pip install PyMuPDF`
import cv2
import shutil
import numpy as np
import pandas as pd
from tqdm import tqdm
import json
!git clone https://gitee.com/paddlepaddle/PaddleOCR.git
!wget https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/ser_vi_layoutxlm_xfund_infer.tar
!tar -xvf ser_vi_layoutxlm_xfund_infer.tar -C ./PaddleOCR/
# Prepare the XFUND dataset - mainly to obtain the dictionary file class_list_xfun.txt
!mkdir ./PaddleOCR/train_data
!wget https://paddleocr.bj.bcebos.com/ppstructure/dataset/XFUND.tar
!tar -xf XFUND.tar -C ./PaddleOCR/train_data/

2 Dataset Preparation

2.1 Resume Image Preparation

First, batch-convert the resume dataset to image format.

def get_pic_info(path):
    # Render each page of a PDF to a JPEG image
    if os.path.splitext(path)[-1] == '.pdf':
        pdfDoc = fitz.open(path)
        for pg in range(pdfDoc.page_count):
            page = pdfDoc[pg]
            rotate = 0
            zoom_x = 4  # (1.33333333 --> 1056x816)   (2 --> 1584x1224)
            zoom_y = 4
            mat = fitz.Matrix(zoom_x, zoom_y).prerotate(rotate)
            pix = page.get_pixmap(matrix=mat, alpha=False)
            # Save the intermediate image
            pix.save(path[:-4] + '_%s.jpeg' % pg)

def get_pics(path):
    for filename in tqdm(os.listdir(path)):
        get_pic_info(os.path.join(path, filename))

# Convert the resume documents to images
get_pics('resume_train_20200121/pdf')
!mkdir 'resume_train_20200121/imgs'
!mv resume_train_20200121/pdf/*.jpeg resume_train_20200121/imgs/
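The zoom factor above directly sets the output resolution: PDF geometry is measured in points (1 pt = 1/72 inch), so `fitz.Matrix(4, 4)` renders at 288 dpi and the pixel size is simply the page size in points times the zoom. A minimal sketch of that relationship (pure arithmetic, no PyMuPDF needed; note the "(2 --> 1584x1224)" comment matches a US Letter page, 612 x 792 pt):

```python
# PDF pages are measured in points (1 pt = 1/72 inch); PyMuPDF's
# fitz.Matrix(zoom, zoom) scales the default 72 dpi rendering,
# so pixel size = page size in points * zoom.
def pixmap_size(width_pt, height_pt, zoom):
    return round(width_pt * zoom), round(height_pt * zoom)

# An A4 page (595 x 842 pt) rendered with zoom 4, i.e. 288 dpi:
print(pixmap_size(595, 842, 4))  # -> (2380, 3368)
# US Letter (612 x 792 pt) with zoom 2 - the "(2 --> 1584x1224)" case:
print(pixmap_size(612, 792, 2))  # -> (1224, 1584)
```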

2.2 Annotation File Preparation

Convert the original annotations of the resume dataset into a format matching Label Studio annotations.

%cd ~/PaddleOCR/ppstructure
/home/aistudio/PaddleOCR/ppstructure

To speed up annotation conversion, we optimized for the specifics of this resume dataset: each resume has at most two pages (front and back), which makes sense, since a submitted resume rarely exceeds one sheet of paper.

Thus the page suffix of images belonging to the same resume is at most 0 or 1.
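A quick way to verify that assumption is to group the generated images by resume id and inspect the page suffixes; a small sketch (the helper name is ours):

```python
import collections
import os

def group_resume_pages(filenames):
    """Group converted page images by resume id; with at most two pages
    per resume, the suffix list should only ever be [0] or [0, 1]."""
    pages = collections.defaultdict(list)
    for name in filenames:
        stem, ext = os.path.splitext(name)
        if ext.lower() != '.jpeg':
            continue
        resume_id, _, suffix = stem.rpartition('_')
        pages[resume_id].append(int(suffix))
    return {rid: sorted(p) for rid, p in pages.items()}

# e.g. on os.listdir('resume_train_20200121/imgs'):
print(group_resume_pages(['a_0.jpeg', 'a_1.jpeg', 'b_0.jpeg']))
# -> {'a': [0, 1], 'b': [0]}
```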

The core of the conversion script:

    label_list = []
    # Entities to extract
    schema = ['姓名', '出生年月', '电话', '性别', '项目名称', '项目责任', '项目时间', '籍贯', '政治面貌', '落户市县', '毕业院校', '学位', '毕业时间', '工作时间', '工作内容', '职务', '工作单位']

    def make_span(text_content, value):
        # Locate `value` in the OCR text; find() returns -1 when absent
        # (comparing with `> 0` would silently drop a match at offset 0)
        start = text_content.find(value)
        if start == -1:
            return None
        return {"text": value, "start": start, "end": start + len(value)}

    os.makedirs(args.output, exist_ok=True)
    with open('/home/aistudio/resume_train_20200121/unlabeled_data.txt', mode='w', encoding='utf-8') as f_w, \
         open('/home/aistudio/resume_train_20200121/train_data.json', 'r', encoding='utf-8') as f1:
        raw_examples = json.loads(f1.read())
        line_num = 1
        for line in tqdm(raw_examples):
            res = []
            result_list = []
            # Run SER over the front page (suffix _0) and, if present, the back page (suffix _1)
            for suffix in ('_0', '_1'):
                img_path = '/home/aistudio/resume_train_20200121/imgs/' + line + suffix + '.jpeg'
                if not os.path.exists(img_path):
                    continue
                img = cv2.imread(img_path)
                if img is None:
                    continue
                img = img[:, :, ::-1]  # BGR -> RGB
                ser_res, _, elapse = ser_predictor(img)
                for item in ser_res[0]:
                    res.append(item['transcription'])
            text_content = ''.join(res)
            f_w.write(text_content + '\n')
            for item in schema:
                # Top-level field, possibly overridden by a nested match below
                schema_dict = None
                if item in raw_examples[line]:
                    schema_dict = make_span(text_content, raw_examples[line][item])
                # Nested fields inside project / work / education history
                for section in ('项目经历', '工作经历', '教育经历'):
                    for entry in raw_examples[line].get(section, []):
                        if item in entry:
                            span = make_span(text_content, entry[item])
                            if span is not None:
                                schema_dict = span
                if schema_dict is not None:
                    # Save the label information in Label Studio's result format
                    schema_dict["labels"] = [item]
                    result_list.append({"value": schema_dict,
                                        "id": "",
                                        "from_name": "label",
                                        "to_name": "text",
                                        "type": "labels",
                                        "origin": "manual"})
            label_list.append({"id": line_num,
                               "annotations": [{"id": line_num, "result": result_list}],
                               "data": {"text": text_content}})
            line_num += 1
        json.dump(label_list, open('/home/aistudio/resume_train_20200121/label_studio.json', mode='w', encoding='utf-8'), ensure_ascii=False, indent=4)
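Because UIE training consumes the start/end character offsets, it is worth sanity-checking the generated label_studio.json before moving on. A small validation sketch (the helper name is ours):

```python
import json

def check_offsets(examples):
    """Return (id, text) pairs whose [start, end) span does not slice
    out exactly the labeled text from the concatenated OCR output."""
    bad = []
    for ex in examples:
        text = ex['data']['text']
        for ann in ex['annotations']:
            for res in ann['result']:
                v = res['value']
                if text[v['start']:v['end']] != v['text']:
                    bad.append((ex['id'], v['text']))
    return bad

# examples = json.load(open('/home/aistudio/resume_train_20200121/label_studio.json', encoding='utf-8'))
# assert check_offsets(examples) == []
```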
# Replace the stock VI-LayoutXLM batch recognition script with the conversion script customized for the resume dataset
!cp ~/predict_kie_token_ser_2.py kie/predict_kie_token_ser.py
# Generate the converted annotation file
!python kie/predict_kie_token_ser.py \
  --kie_algorithm=LayoutXLM \
  --ser_model_dir=../ser_vi_layoutxlm_xfund_infer \
  --use_visual_backbone=False \
  --image_dir=/home/aistudio/resume_train_20200121/imgs/ \
  --ser_dict_path=../train_data/XFUND/class_list_xfun.txt \
  --vis_font_path=../doc/fonts/simfang.ttf \
  --ocr_order_method="tb-yx"

3 Model Training

%cd ~
# Clone the PaddleNLP repository
!git clone https://gitee.com/paddlepaddle/PaddleNLP.git
/home/aistudio
%cd ~/PaddleNLP/applications/information_extraction/text/
/home/aistudio/PaddleNLP/applications/information_extraction/text

3.1 Splitting the Training Dataset

In this project we convert and split the data into training, validation, and test sets at a 7:2:1 ratio.

!python ../label_studio.py \
    --label_studio_file /home/aistudio/resume_train_20200121/label_studio.json \
    --save_dir ./data \
    --splits 0.7 0.2 0.1 \
    --negative_ratio 3 \
    --task_type ext
[2023-02-02 01:08:00,222] [    INFO] - Converting annotation data...
100%|█████████████████████████████████████| 1400/1400 [00:00<00:00, 1503.12it/s]
[2023-02-02 01:08:01,156] [    INFO] - Adding negative samples for first stage prompt...
100%|████████████████████████████████████| 1400/1400 [00:00<00:00, 91362.11it/s]
[2023-02-02 01:08:01,177] [    INFO] - Converting annotation data...
100%|███████████████████████████████████████| 399/399 [00:00<00:00, 4349.45it/s]
[2023-02-02 01:08:01,270] [    INFO] - Adding negative samples for first stage prompt...
100%|██████████████████████████████████████| 399/399 [00:00<00:00, 76072.88it/s]
[2023-02-02 01:08:01,277] [    INFO] - Converting annotation data...
100%|███████████████████████████████████████| 201/201 [00:00<00:00, 5933.00it/s]
[2023-02-02 01:08:01,311] [    INFO] - Adding negative samples for first stage prompt...
100%|██████████████████████████████████████| 201/201 [00:00<00:00, 97553.24it/s]
[2023-02-02 01:08:01,729] [    INFO] - Save 23800 examples to ./data/train.txt.
[2023-02-02 01:08:01,852] [    INFO] - Save 6783 examples to ./data/dev.txt.
[2023-02-02 01:08:01,915] [    INFO] - Save 3417 examples to ./data/test.txt.
[2023-02-02 01:08:01,916] [    INFO] - Finished! It takes 2.09 seconds
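The 7:2:1 split performed by label_studio.py above can be sketched as follows (a simplified version; the script's own shuffling and rounding differ slightly, which is why it produced 1400/399/201 rather than exact counts):

```python
import random

def split_dataset(examples, splits=(0.7, 0.2, 0.1), seed=1000):
    """Shuffle, then cut into train/dev/test by the given ratios."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n = len(examples)
    i = int(n * splits[0])
    j = i + int(n * splits[1])
    return examples[:i], examples[i:j], examples[j:]

train, dev, test = split_dataset(range(2000))
print(len(train), len(dev), len(test))  # -> 1400 400 200
```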

3.2 Fine-tuning

!python finetune.py  \
    --device gpu \
    --logging_steps 100 \
    --save_steps 1000 \
    --eval_steps 1000 \
    --seed 1000 \
    --model_name_or_path uie-base \
    --output_dir ./checkpoint/model_best \
    --train_path data/train.txt \
    --dev_path data/dev.txt  \
    --max_seq_len 512  \
    --per_device_train_batch_size  16 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 5 \
    --learning_rate 1e-5 \
    --do_train \
    --do_eval \
    --do_export \
    --export_model_dir ./checkpoint/model_best \
    --overwrite_output_dir \
    --disable_tqdm True \
    --metric_for_best_model eval_f1 \
    --load_best_model_at_end  True \
    --save_total_limit 1


Here we find that the fine-tuned model still trails the results of the preceding pure-document project, Resume Information Extraction (3): UIE Format Conversion and Fine-tuning for Text Extraction. Next, we analyze why.

3.3 Model Evaluation

For model evaluation, we enable debug mode on the test set and evaluate each positive class separately.

!python evaluate.py \
    --model_path ./checkpoint/model_best \
    --test_path ./data/test.txt \
    --debug
[2023-02-02 21:47:12,853] [    INFO] - We are using <class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'> to load './checkpoint/model_best'.
[2023-02-02 21:47:12,878] [    INFO] - loading configuration file ./checkpoint/model_best/config.json
[2023-02-02 21:47:12,879] [    INFO] - Model config ErnieConfig {
  "architectures": [
    "UIE"
  ],
  "attention_probs_dropout_prob": 0.1,
  "dtype": "float32",
  "enable_recompute": false,
  "fuse": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 2048,
  "model_type": "ernie",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "paddlenlp_version": null,
  "pool_act": "tanh",
  "task_id": 0,
  "task_type_vocab_size": 3,
  "type_vocab_size": 4,
  "use_task_id": true,
  "vocab_size": 40000
}
W0202 21:47:14.519857  4892 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W0202 21:47:14.527417  4892 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
[2023-02-02 21:47:15,566] [    INFO] - All model checkpoint weights were used when initializing UIE.
[2023-02-02 21:47:15,566] [    INFO] - All the weights of UIE were initialized from the model checkpoint at ./checkpoint/model_best.
If your task is similar to the task the model of the checkpoint was trained on, you can already use UIE for predictions without further training.
[2023-02-02 21:47:15,573] [ WARNING] - result['end'] - result['start'] exceeds max_content_len, which will result in no valid instance being returned
[2023-02-02 21:47:15,574] [ WARNING] - result['end'] - result['start'] exceeds max_content_len, which will result in no valid instance being returned
[2023-02-02 21:47:19,817] [    INFO] - -----------------------------
[2023-02-02 21:47:19,817] [    INFO] - Class Name: 姓名
[2023-02-02 21:47:19,817] [    INFO] - Evaluation Precision: 0.99265 | Recall: 0.99265 | F1: 0.99265
[2023-02-02 21:47:22,916] [    INFO] - -----------------------------
[2023-02-02 21:47:22,917] [    INFO] - Class Name: 出生年月
[2023-02-02 21:47:22,917] [    INFO] - Evaluation Precision: 1.00000 | Recall: 0.94690 | F1: 0.97273
[2023-02-02 21:47:28,373] [    INFO] - -----------------------------
[2023-02-02 21:47:28,373] [    INFO] - Class Name: 电话
[2023-02-02 21:47:28,373] [    INFO] - Evaluation Precision: 1.00000 | Recall: 1.00000 | F1: 1.00000
[2023-02-02 21:47:32,402] [    INFO] - -----------------------------
[2023-02-02 21:47:32,402] [    INFO] - Class Name: 项目名称
[2023-02-02 21:47:32,402] [    INFO] - Evaluation Precision: 0.99242 | Recall: 0.84516 | F1: 0.91289
[2023-02-02 21:47:35,720] [    INFO] - -----------------------------
[2023-02-02 21:47:35,720] [    INFO] - Class Name: 项目责任
[2023-02-02 21:47:35,720] [    INFO] - Evaluation Precision: 0.89189 | Recall: 0.74436 | F1: 0.81148
[2023-02-02 21:47:39,731] [    INFO] - -----------------------------
[2023-02-02 21:47:39,731] [    INFO] - Class Name: 项目时间
[2023-02-02 21:47:39,731] [    INFO] - Evaluation Precision: 0.99180 | Recall: 0.80132 | F1: 0.88645
[2023-02-02 21:47:43,577] [    INFO] - -----------------------------
[2023-02-02 21:47:43,578] [    INFO] - Class Name: 籍贯
[2023-02-02 21:47:43,578] [    INFO] - Evaluation Precision: 0.99306 | Recall: 1.00000 | F1: 0.99652
[2023-02-02 21:47:48,689] [    INFO] - -----------------------------
[2023-02-02 21:47:48,689] [    INFO] - Class Name: 毕业院校
[2023-02-02 21:47:48,689] [    INFO] - Evaluation Precision: 0.99425 | Recall: 0.87374 | F1: 0.93011
[2023-02-02 21:47:53,470] [    INFO] - -----------------------------
[2023-02-02 21:47:53,470] [    INFO] - Class Name: 毕业时间
[2023-02-02 21:47:53,470] [    INFO] - Evaluation Precision: 0.99415 | Recall: 0.92391 | F1: 0.95775
[2023-02-02 21:47:58,434] [    INFO] - -----------------------------
[2023-02-02 21:47:58,434] [    INFO] - Class Name: 工作时间
[2023-02-02 21:47:58,434] [    INFO] - Evaluation Precision: 1.00000 | Recall: 0.78261 | F1: 0.87805
[2023-02-02 21:48:02,322] [    INFO] - -----------------------------
[2023-02-02 21:48:02,323] [    INFO] - Class Name: 工作内容
[2023-02-02 21:48:02,323] [    INFO] - Evaluation Precision: 0.93496 | Recall: 0.77181 | F1: 0.84559
[2023-02-02 21:48:06,706] [    INFO] - -----------------------------
[2023-02-02 21:48:06,706] [    INFO] - Class Name: 职务
[2023-02-02 21:48:06,706] [    INFO] - Evaluation Precision: 0.96581 | Recall: 0.68902 | F1: 0.80427
[2023-02-02 21:48:12,174] [    INFO] - -----------------------------
[2023-02-02 21:48:12,174] [    INFO] - Class Name: 工作单位
[2023-02-02 21:48:12,174] [    INFO] - Evaluation Precision: 0.98052 | Recall: 0.75500 | F1: 0.85311
[2023-02-02 21:48:14,388] [    INFO] - -----------------------------
[2023-02-02 21:48:14,388] [    INFO] - Class Name: 性别
[2023-02-02 21:48:14,388] [    INFO] - Evaluation Precision: 1.00000 | Recall: 1.00000 | F1: 1.00000
[2023-02-02 21:48:17,908] [    INFO] - -----------------------------
[2023-02-02 21:48:17,908] [    INFO] - Class Name: 学位
[2023-02-02 21:48:17,908] [    INFO] - Evaluation Precision: 0.99254 | Recall: 0.98519 | F1: 0.98885
[2023-02-02 21:48:19,506] [    INFO] - -----------------------------
[2023-02-02 21:48:19,506] [    INFO] - Class Name: 政治面貌
[2023-02-02 21:48:19,506] [    INFO] - Evaluation Precision: 1.00000 | Recall: 1.00000 | F1: 1.00000
[2023-02-02 21:48:21,264] [    INFO] - -----------------------------
[2023-02-02 21:48:21,265] [    INFO] - Class Name: 落户市县
[2023-02-02 21:48:21,265] [    INFO] - Evaluation Precision: 1.00000 | Recall: 0.98462 | F1: 0.99225

Clearly, the model extracts distinctive fields such as name, gender, and phone number very well, but long-span fields fare worse: the F1 scores for project responsibilities (项目责任), job title (职务), and job description (工作内容) sit around 0.8 and drag the overall result down.
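For reference, the F1 reported in the logs above is simply the harmonic mean of precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 职务 (job title) from the evaluation log: P=0.96581, R=0.68902
print(round(f1_score(0.96581, 0.68902), 5))  # -> 0.80427
```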

Going back to the input data, here is the extracted text of a randomly chosen image resume from unlabeled_data.txt:

简历教育背景2001.06-2005.06北京师范大学生物工程硕士学位2008.05-2012.05北京林业大学历史学学士学位工作经历1992.09-2017.10深圳大运置业有限公司.net 后端开发工程师个人信息工作内容:姓名幕墙系统的概念设计及深化设计,并对建筑幕墙提出建设性的意见。与李冠光建筑师和业主进行沟通,了解建筑师和业主的建筑构想,并将他们的构思融入幕墙的系统设计。对幕墙系统的设计、系统规格及材料技术规格出生年月进行分析并提供指导。1933年10月籍贯新疆省阿克苏市项目经验政治面貌港澳同胞2002.08-2010.08和谐劳动”视野下的劳动关系协调机制研究户籍澳门省澳门市项目职责:电话1、带领和指导技术研发团队进行体外诊断试剂的研发、设计开发;2、13405045281负责总体技术规划,不断快速提升核心技术,构建稳定、高效的业务;Email3、负责团队目标和工作计划的制定和高效执行,保证诊断试剂研发部工作目标的达成;4、负责与其他部门之间的沟通与协作,满足和协调公司91jfa@sohu. com各相关部门提出的技术更新、新产品等技术需求;5、负责技术团队的管理,包括团队建设,人员激励、考评和培养;6、有效提升团队的工作热情、工作效率和质量;7、指导技术团队学习、交流,并不断提升整体团队技术水平。个人技能吃饭喝茶

A major cause is layout handling. Most applicants use templated resumes of every shape, and in many templates the personal information sits in a sidebar. PaddleOCR's KIE can still recognize that content on its own, but OCR reads line by line, so when the text is concatenated the simple personal-info fields get interleaved with everything else. That is what the passage above shows: it looks plausible at a glance, but on closer reading several stretches are incoherent, with the name, phone number, and so on scattered through the body text.

In other words, for forms laid out strictly line by line, concatenating PaddleOCR's KIE output is a fast and effective approach. But for "free-form" documents like resumes, if we want to bring in OCR we probably need to go back upstream to the layout-analysis stage, or fine-tune the VI-LayoutXLM model, to find a more precise solution.
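The failure mode is easy to reproduce: a "tb-yx"-style ordering (as passed via --ocr_order_method above) sorts boxes top-to-bottom and then left-to-right within a line, so a sidebar column gets interleaved with the main column row by row. A toy sketch, where each box is the (x, y) of its top-left corner and the clustering tolerance is our simplification:

```python
def reading_order(boxes, y_tol=10):
    """Naive 'tb-yx'-style ordering: cluster boxes into lines by their
    top edge (within y_tol pixels), then sort each line left to right.
    A sidebar and the main body at the same height end up interleaved."""
    lines = []
    for box in sorted(boxes, key=lambda b: b[1]):
        for line in lines:
            if abs(line[0][1] - box[1]) <= y_tol:
                line.append(box)
                break
        else:
            lines.append([box])
    return [box for line in lines for box in sorted(line, key=lambda b: b[0])]

# Sidebar at x=0, main body at x=300: the rows alternate between columns,
# which is exactly how personal-info fields end up scattered in the text.
print(reading_order([(0, 0), (0, 50), (300, 0), (300, 50)]))
# -> [(0, 0), (300, 0), (0, 50), (300, 50)]
```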

4 Model Deployment

Although long-span extraction still needs work, the test-set results show that the model fine-tuned via VI-LayoutXLM + PaddleNLP Taskflow performs well on basic fields such as name, gender, date of birth, and native place, making it a reasonable way to extract information from image resumes.

Below, we deploy the model with the app-creation tool. Since the VI-LayoutXLM model does not yet support wheel installation, and its extraction results still need refinement, we rely on the Taskflow backend's ability to call OCR for parsing image documents, and deploy only the fine-tuned Taskflow text-extraction model.

When testing the deployment with sample images, there is one important streamlit caveat: streamlit reads an uploaded file as a byte stream by default. st.image() can display it directly, but Taskflow's text extraction cannot parse it.

In this project, we use PIL as an intermediary; the workaround is as follows:

per_image = st.file_uploader("上传图片", type=['png', 'jpg'], label_visibility='hidden')
if per_image:
    from io import BytesIO
    from PIL import Image
    st.image(per_image)
    # Read the uploaded file as bytes
    bytes_data = per_image.getvalue()
    # Wrap the bytes in an in-memory byte stream
    bytes_data = BytesIO(bytes_data)
    # Image.open() accepts a file-like byte stream
    capture_img = Image.open(bytes_data)
    capture_img = capture_img.convert('RGB')
    capture_img.save('temp.jpeg', quality=95)
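An alternative stdlib-only route, if you prefer not to round-trip through PIL, is to write the uploaded bytes to a temporary file and hand that path to the extractor (the helper name is ours; the PIL route above is what the deployed app actually uses, and it also normalizes the image to RGB, which this sketch does not):

```python
import os
import tempfile

def bytes_to_temp_image(data: bytes, suffix='.jpeg') -> str:
    """Persist uploaded bytes to a temp file and return its path,
    so a path-based API can read the image from disk."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, 'wb') as f:
        f.write(data)
    return path
```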

5 Summary

In this project, we processed the image-format resumes extracted by VI-LayoutXLM, combined them with the original annotations, and fine-tuned a PaddleNLP text-extraction model. The results show that although the overall F1 score exceeds 0.9, the loss of key layout-order information leaves considerable room for improvement on long-span extraction. In follow-up work, we will tackle this problem, including fine-tuning a document extraction model, to further study how to improve resume information extraction once OCR is introduced.
