
1. Project Background

With the rapid development of AI, deep learning is being applied in more and more fields. As a revolutionary class of machine learning algorithms, deep learning automatically learns and extracts features from large amounts of data without hand-written rules, greatly raising the level of automation in artificial intelligence.

On the other hand, as AI is applied more widely, the difficulty of putting AI into production has become increasingly apparent. According to a Gartner survey, only 53% of projects make it from AI prototype to production. AI projects are hard to land partly because cross-team collaboration is difficult, and partly because suitable tool chains are lacking. PaddleX, an easy-to-use deep learning platform, lets users build and train deep learning models quickly without worrying about low-level technical details. To achieve this, PaddleX provides a rich library of models and algorithms, covering image classification, object detection, instance segmentation, and more, as well as easy-to-use tools and interfaces such as model conversion tools, dataset tools, and visualization tools. Users can complete model training and deployment with simple command-line or GUI operations.

This project trains a model on open-source data with the PaddleX API, then exports and converts it into a deployment model, and finally serves the model's inference through the PaddleHub integration that works with PaddleX. By publishing a RESTful API on the backend, other programs (services) can call the model over HTTP, completing the full workflow of an AI model from training to production deployment.

2. Data Inspection

The training and validation sets are extracted from the COCO dataset and contain the three categories car, bus, and truck, together with the corresponding annotation files.

!unzip -oq /home/aistudio/data/data171362/car_coco.zip -d /home/aistudio/work
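As a quick sanity check after extraction, the COCO annotation file can be inspected with pycocotools (installed together with PaddleX in the next section). This is a minimal sketch; the annotation path follows the directory layout used in section 4.1:

from pycocotools.coco import COCO

# Load the training annotations and list the category names and counts
coco = COCO('/home/aistudio/work/annotations/train2017.json')
print([cat['name'] for cat in coco.loadCats(coco.getCatIds())])  # expected: car, bus, truck
print('images:', len(coco.getImgIds()), 'boxes:', len(coco.getAnnIds()))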

3. Environment Setup

This project uses the PaddleX API to build and convert the model, so all that is needed is to download and install PaddleX.

# Install PaddleX
!pip install paddlex==2.1.0 -i https://mirror.baidu.com/pypi/simple
Looking in indexes: https://mirror.baidu.com/pypi/simple
Collecting paddlex==2.1.0
Collecting scikit-learn==0.23.2
Collecting pycocotools
Collecting paddleslim==2.2.1
Collecting motmetrics
Collecting lap
Collecting shapely>=1.7.0
Collecting threadpoolctl>=2.0.0
Collecting xmltodict>=0.12.0
Building wheels for collected packages: lap, pycocotools
Successfully built lap pycocotools
Installing collected packages: lap, xmltodict, threadpoolctl, shapely, scikit-learn, pycocotools, paddleslim, motmetrics, paddlex
  Attempting uninstall: scikit-learn
    Found existing installation: scikit-learn 0.22.1
    Uninstalling scikit-learn-0.22.1:
      Successfully uninstalled scikit-learn-0.22.1
Successfully installed lap-0.4.0 motmetrics-1.2.5 paddleslim-2.2.1 paddlex-2.1.0 pycocotools-2.0.6 scikit-learn-0.23.2 shapely-2.0.1 threadpoolctl-3.1.0 xmltodict-0.13.0

[notice] A new release of pip available: 22.1.2 -> 23.1.2
[notice] To update, run: pip install --upgrade pip

4. Training the Vehicle Detection Model with PaddleX

# Check the versions of the relevant packages in the environment
import paddle, paddlex, paddlehub
print(paddle.__version__)
print(paddlex.__version__)
print(paddlehub.__version__)

4.1 Defining the Dataset

The dataset in this project uses COCO-format annotations, so training with PaddleX uses the paddlex.datasets.CocoDetection API.

The parameters of this API are as follows:

  • paddlex.datasets.CocoDetection(data_dir, ann_file, transforms=None, num_workers='auto', shuffle=False, allow_empty=False, empty_ratio=1.)
import paddlex as pdx
from paddlex import transforms as T

train_transforms = T.Compose([
    T.Resize(512),
    T.RandomHorizontalFlip(),
    T.Normalize(
            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

eval_transforms = T.Compose([
    T.Resize(512),
    T.Normalize(
            mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

train_dataset = pdx.datasets.CocoDetection(
                    data_dir='/home/aistudio/work/train2017',
                    ann_file='/home/aistudio/work/annotations/train2017.json',
                    transforms=train_transforms)
eval_dataset = pdx.datasets.CocoDetection(
                    data_dir='/home/aistudio/work/val2017',
                    ann_file='/home/aistudio/work/annotations/val2017.json',
                    transforms=eval_transforms)
loading annotations into memory...
Done (t=0.36s)
creating index...
index created!
2023-06-01 10:20:32 [INFO]	Starting to read file list from dataset...
2023-06-01 10:20:32 [INFO]	16270 samples in file /home/aistudio/work/annotations/train2017.json, including 16270 positive samples and 0 negative samples.
loading annotations into memory...
Done (t=0.25s)
creating index...
index created!
2023-06-01 10:20:33 [INFO]	Starting to read file list from dataset...
2023-06-01 10:20:33 [INFO]	707 samples in file /home/aistudio/work/annotations/val2017.json, including 707 positive samples and 0 negative samples.

4.2 Building and Training the Classic YOLOv3 Detection Network

The classic YOLOv3 model is used with an input size of 512×512. On a Tesla V100 32G a batch size of up to 64 can be used; this project trains with a batch size of 48.

num_classes = len(train_dataset.labels)
model = pdx.det.YOLOv3(num_classes=num_classes, backbone='MobileNetV3')
model.train(
    num_epochs=80,
    train_dataset=train_dataset,
    train_batch_size=48,
    eval_dataset=eval_dataset,
    pretrain_weights='COCO',
    learning_rate=0.00001,
    warmup_steps=1000,
    warmup_start_lr=0.0,
    lr_decay_epochs=[35, 50, 90],
    save_interval_epochs=5,
    save_dir='output/yolov3_darknet53')
2023-06-01 10:20:37 [INFO]	Downloading yolov3_mobilenet_v3_large_270e_coco.pdparams from https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v3_large_270e_coco.pdparams


100%|██████████| 139189/139189 [00:11<00:00, 11694.77KB/s]


2023-06-01 10:20:52 [INFO]	Loading pretrained model from output/yolov3_darknet53/pretrain/yolov3_mobilenet_v3_large_270e_coco.pdparams
2023-06-01 10:20:53 [WARNING]	[SKIP] Shape of pretrained params yolo_head.yolo_output.0.weight doesn't match.(Pretrained: [255, 1024, 1, 1], Actual: [24, 1024, 1, 1])
2023-06-01 10:20:53 [WARNING]	[SKIP] Shape of pretrained params yolo_head.yolo_output.0.bias doesn't match.(Pretrained: [255], Actual: [24])
2023-06-01 10:20:53 [WARNING]	[SKIP] Shape of pretrained params yolo_head.yolo_output.1.weight doesn't match.(Pretrained: [255, 512, 1, 1], Actual: [24, 512, 1, 1])
2023-06-01 10:20:53 [WARNING]	[SKIP] Shape of pretrained params yolo_head.yolo_output.1.bias doesn't match.(Pretrained: [255], Actual: [24])
2023-06-01 10:20:53 [WARNING]	[SKIP] Shape of pretrained params yolo_head.yolo_output.2.weight doesn't match.(Pretrained: [255, 256, 1, 1], Actual: [24, 256, 1, 1])
2023-06-01 10:20:53 [WARNING]	[SKIP] Shape of pretrained params yolo_head.yolo_output.2.bias doesn't match.(Pretrained: [255], Actual: [24])
2023-06-01 10:20:53 [INFO]	There are 362/368 variables loaded into YOLOv3.
  • During training, the logs are written to output/yolov3_darknet53/vdl_log. The visualization service can be started alongside training to monitor metrics such as loss and mAP; see the sketch after this list.
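For reference, a minimal way to bring up the VisualDL service against this log directory (run in a terminal; the port number here is an arbitrary example, and the dashboard is then opened at http://127.0.0.1:8040):

visualdl --logdir output/yolov3_darknet53/vdl_log --port 8040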

5. Exporting the Deployment Model

When training with PaddleX 2.0 or later, setting the save_interval_epochs parameter saves checkpoints during training.
Each saved model folder mainly contains four files:

  • model.pdopt, the optimizer state for the trained model parameters
  • model.pdparams, the model parameters
  • model.yml, the model configuration file (including preprocessing parameters, model definition, etc.)
  • eval_details.json, the predictions and ground truth from model evaluation

Note that a model saved during training cannot be deployed directly; it must first be exported to the deployment format.

For server-side deployment, the model saved during training has to be exported as an inference-format model; in PaddleX this is done directly with the export_inference command.
The exported inference model consists of five files:

  • model.pdmodel, the model's network structure
  • model.pdiparams, the model weights
  • model.pdiparams.info, the model weight names
  • model.yml, the model configuration file (including preprocessing parameters, model definition, etc.)
  • pipeline.yml, a pipeline configuration file for the PaddleX Manufacture SDK

The deployment model for this project is exported below.

%cd /home/aistudio
!paddlex --export_inference --model_dir=/home/aistudio/output/yolov3_darknet53/best_model/ --save_dir=./inference_model
/home/aistudio
[05-24 14:08:02 MainThread @logger.py:242] Argv: /opt/conda/envs/python35-paddle120-env/bin/paddlex --export_inference --model_dir=/home/aistudio/output/yolov3_darknet53/best_model/ --save_dir=./inference_model
[05-24 14:08:02 MainThread @utils.py:79] WRN paddlepaddle version: 2.3.2. The dynamic graph version of PARL is under development, not fully tested and supported
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/parl/remote/communication.py:38: FutureWarning: 'pyarrow.default_serialization_context' is deprecated as of 2.0.0 and will be removed in a future version. Use pickle or the pyarrow IPC functionality instead.
  context = pyarrow.default_serialization_context()
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/__init__.py:107: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import MutableMapping
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/rcsetup.py:20: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Iterable, Mapping
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/colors.py:53: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Sized
2023-05-24 14:08:03,055-WARNING: type object 'QuantizationTransformPass' has no attribute '_supported_quantizable_op_type'
2023-05-24 14:08:03,056-WARNING: If you want to use training-aware and post-training quantization, please use Paddle >= 1.8.4 or develop version
W0524 14:08:04.458117  4264 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W0524 14:08:04.462399  4264 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
2023-05-24 14:08:04 [INFO]	Model[YOLOv3] loaded.
2023-05-24 14:08:09 [INFO]	The model for the inference deployment is saved in ./inference_model/inference_model.
# Inspect the structure of the exported files
!tree -L 1 /home/aistudio/inference_model/inference_model
/home/aistudio/inference_model/inference_model
├── model.pdiparams
├── model.pdiparams.info
├── model.pdmodel
├── model.yml
└── pipeline.yml

0 directories, 5 files
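Before moving on to serving, the exported model can optionally be tested locally with PaddleX's deployment predictor. A minimal sketch, assuming the inference directory produced above and one of the demo images used later in section 9 (paddlex.deploy.Predictor is the PaddleX 2.x local inference API; adjust use_gpu to your environment):

import paddlex as pdx

# Load the exported inference model and run it on a sample image
predictor = pdx.deploy.Predictor('/home/aistudio/inference_model/inference_model', use_gpu=True)
result = predictor.predict(img_file='/home/aistudio/demo_image/test1.jpg')
print(result[:3])  # a list of dicts with 'bbox', 'category', 'category_id' and 'score'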

6. Model Conversion

With PaddleHub Serving, a PaddleX inference model can be deployed quickly to provide online prediction.

For online deployment, the PaddleX inference model first needs to be converted into a PaddleHub module, which is done with the single command hub convert, described as follows:

  • Format: hub convert --model_dir --module_name --module_version --output_dir

  • Parameters:

    • --model_dir: directory containing the PaddleX inference model
    • --module_name: name of the generated module
    • --module_version: version of the generated module, default 1.0.0
    • --output_dir: where the generated module is stored, default {module_name}_{timestamp}

The code below performs the model conversion for this project.

!hub convert --model_dir /home/aistudio/inference_model/inference_model \
              --module_name car_det \
              --module_version 1.0
The converted module is stored in `car_det_1684908581.863285`.

7. Installing the Model

Running hub convert produces a .tar.gz module package. Before it can be deployed it has to be installed on the local machine, which is done with the single command hub install, described as follows:

  • Format: hub install {MODULE_DIR}
    where MODULE_DIR is the path of the module package to install.

On success a message like the following is printed:

Successfully installed xxxx

The code below installs the model for this project. Note that if you use a model you trained, exported, and converted yourself, you need to change the path in the code to your own model path.
For convenience, a converted model is also attached to this project, so you can try the installation by simply running the code below.

After installation, the module can be found under the .paddlehub/modules directory of the current environment.

!hub install /home/aistudio/car_det_1684908581.863285/car_det.tar.gz
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/setuptools/depends.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
Decompress /home/aistudio/car_det_1684908581.863285/car_det.tar.gz
[##################################################] 100.00%
[06-01 10:24:15 MainThread @logger.py:242] Argv: /opt/conda/envs/python35-paddle120-env/bin/hub install /home/aistudio/car_det_1684908581.863285/car_det.tar.gz
[06-01 10:24:15 MainThread @utils.py:79] WRN paddlepaddle version: 2.3.2. The dynamic graph version of PARL is under development, not fully tested and supported
2023-06-01 10:24:16,422-WARNING: type object 'QuantizationTransformPass' has no attribute '_supported_quantizable_op_type'
2023-06-01 10:24:16,423-WARNING: If you want to use training-aware and post-training quantization, please use Paddle >= 1.8.4 or develop version
[2023-06-01 10:24:17,248] [    INFO] - Successfully installed car_det-1.0
# After installation, inspect the module directory
!tree -L 2 /home/aistudio/.paddlehub/modules
/home/aistudio/.paddlehub/modules
└── car_det
    ├── assets
    ├── __init__.py
    ├── module.py
    ├── __pycache__
    └── serving_client_demo.py

3 directories, 3 files
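As a quick check that the installation succeeded, the module can be loaded by name with the PaddleHub Python API. A minimal sketch; it only verifies that PaddleHub can find and load car_det, independently of the serving deployment in the next section:

import paddlehub as hub

# Loading the installed module by name should succeed without errors
car_det = hub.Module(name="car_det")
print(type(car_det))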

8. One-Command Model Serving

Run hub serving start -m car_det in a terminal to publish the service. A startup message is printed once the service has been launched successfully.

PaddleHub Serving uses Flask as its web backend, and the default service port is 8866. A different port can be specified with the --port flag when starting the service, as in the sketch below.
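For reference, starting and stopping the service from a terminal looks like this (a minimal sketch; -p/--port only needs to be given when a port other than the default 8866 is wanted):

# Publish the car_det module as an HTTP service on port 8866
hub serving start -m car_det -p 8866
# Stop the service listening on that port when it is no longer needed
hub serving stop -p 8866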

9. Verifying the Inference Service from a Client

Once the backend service is running, the client can send the images to be inferred via a POST request and parse the response to obtain the inference results.
Publishing the AI inference service through a RESTful API makes the service more modular, flexible, and secure, and it can be used in a wide range of scenarios.

The demo below is based on the inference script provided by PaddleHub: it sends a POST request to the backend, processes the returned results, and visualizes the predictions.

9.1 Single-Image Prediction

# coding: utf8
%matplotlib inline
import matplotlib.pyplot as plt  # for displaying images
import requests
import json
import cv2
import base64
import numpy as np
import colorsys
import warnings
warnings.filterwarnings("ignore")


from paddlex.det import visualize

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

if __name__ == '__main__':
    # Encode the image as base64
    img_path = "/home/aistudio/demo_image/test1.jpg"
    img1 = cv2_to_base64(cv2.imread(img_path))
    data = {'images': [img1]}

    # Specify the content type
    headers = {"Content-type": "application/json"}
    # Send the HTTP request
    url = "http://127.0.0.1:8866/predict/car_det"
    r = requests.post(url=url, headers=headers, data=json.dumps(data))

    # Print the prediction result. Note that r.json()["results"] is itself a list,
    # so index it to get the result for a specific image, e.g. r.json()["results"][0]
    print(r.json()["results"][0])
    # Post-process the prediction with the re-implemented visualize() method
    # and show the result for the first image
    image = visualize(cv2.imread(img_path), r.json()["results"][0], threshold=0.2, save_dir=None)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plt.imshow(image)
    plt.axis('off')  # hide the axes
    plt.show()
[{'bbox': [675.3522338867188, 633.1513671875, 109.446533203125, 96.7626953125], 'category': 'car', 'category_id': 0, 'score': 0.9230645895004272}, {'bbox': [1068.6134033203125, 717.2249145507812, 128.878662109375, 79.77508544921875], 'category': 'car', 'category_id': 0, 'score': 0.8694654703140259}, {'bbox': [817.390869140625, 636.6602783203125, 136.7486572265625, 107.619873046875], 'category': 'car', 'category_id': 0, 'score': 0.8162567019462585}, {'bbox': [362.31573486328125, 581.7606201171875, 125.5926513671875, 93.572998046875], 'category': 'car', 'category_id': 0, 'score': 0.7825871109962463}, {'bbox': [315.59149169921875, 734.1367797851562, 148.85296630859375, 62.86322021484375], 'category': 'car', 'category_id': 0, 'score': 0.7481681108474731}, {'bbox': [184.5955352783203, 516.3251953125, 99.91166687011719, 89.7103271484375], 'category': 'car', 'category_id': 0, 'score': 0.7179760932922363}, {'bbox': [750.9552612304688, 513.614501953125, 99.6092529296875, 80.00146484375], 'categ

[Visualization of the detection result for the test image]

  • The visualization shows that most of the targets are detected, but recall is lower for small objects in crowded regions.
  • The visualization script above sets threshold=0.2, which filters out boxes whose confidence is below 0.2 from the returned results. Before going live, boundary cases should be tested to find a suitable threshold; a quick sweep like the one sketched below can help.
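A minimal sketch of such a check on the single-image response above: it simply counts how many returned boxes survive each candidate threshold (the threshold values are arbitrary examples, and r is the response object from the request in section 9.1):

# Count the detections that survive each candidate confidence threshold
results = r.json()["results"][0]
for thr in (0.1, 0.2, 0.3, 0.5):
    kept = [det for det in results if det["score"] >= thr]
    print("threshold={}: {} boxes kept".format(thr, len(kept)))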

9.2 Batch Prediction

Batch prediction is performed by simply passing the list of images to be predicted.

# coding: utf8
%matplotlib inline
import os
import matplotlib.pyplot as plt  # for displaying images
import requests
import json
import cv2
import base64
import numpy as np
import colorsys
import warnings
warnings.filterwarnings("ignore")


from paddlex.det import visualize

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

if __name__ == '__main__':
    # Encode the images as base64
    img_root = "/home/aistudio/demo_image/"
    img_list = os.listdir(img_root)
    imgs_path = [os.path.join(img_root, img) for img in img_list if img.endswith((".png", ".jpg", ".bmp"))]
    img_data = [cv2_to_base64(cv2.imread(img_path)) for img_path in imgs_path]

    data = {'images': img_data}
    print("There are {} images in the data".format(len(img_data)))

    # Specify the content type
    headers = {"Content-type": "application/json"}
    # Send the HTTP request
    url = "http://127.0.0.1:8866/predict/car_det"
    r = requests.post(url=url, headers=headers, data=json.dumps(data))

    # Post-process the predictions with the re-implemented visualize() method
    # and show the result for every image
    for index, img_path in enumerate(imgs_path):
        image = visualize(cv2.imread(img_path), r.json()["results"][index], threshold=0.15, save_dir=None)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        plt.imshow(image)
        plt.axis('off')  # hide the axes
        plt.show()
There are 5 images in the data


[Visualization of the detection results for the five demo images]

  • The same issue appears here: the model's ability to detect small and densely packed objects is limited, which follows from how the YOLOv3 network works. Building on this project, readers can try the PP-YOLO family of networks, which add many optimizations on top of YOLOv3 and detect noticeably better; see the sketch after this list.
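A minimal sketch of swapping in PP-YOLOv2 through the PaddleX API (the backbone name is one of the options PaddleX provides; train_dataset, eval_dataset and the training arguments are reused from section 4.2):

import paddlex as pdx

# Replace the YOLOv3 definition from section 4.2 with PP-YOLOv2;
# the datasets and the training call stay the same.
num_classes = len(train_dataset.labels)
model = pdx.det.PPYOLOv2(num_classes=num_classes, backbone='ResNet50_vd_dcn')
# model.train(...) then follows the same pattern as in section 4.2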

10. Summary

This project shows how to use PaddleX, an end-to-end AI development toolkit, to handle the two most critical stages of an AI model: training and deployment.
By driving the workflow through one-line API calls, the tedious steps are simplified, so developers can put more effort into the data and the business problem, which in turn helps AI projects land in production and deliver value.

About the author:

wolfmax老狼, PPDE, member of the 6th AICA cohort
Image algorithm engineer at a semiconductor CIM software integrator, mainly working on image-related detection and segmentation algorithms
I have reached the Diamond level on AI Studio and earned 7 badges; let's follow each other: https://aistudio.baidu.com/aistudio/personalcenter/thirdview/801106
