X-ray Security Inspection Image Detection Challenge 2022 with PaddleX
The dataset is first converted to VOC format, the model is then trained via a PaddleX background task, and finally predictions are generated and submitted.
I. X-ray Security Inspection Image Detection Challenge 3.0
URL: https://challenge.xfyun.cn/topic/info?type=Xray-2022
1. Competition Background
X-ray security inspection is a widely used screening method in urban rail transit, railways, airports, and logistics. Using artificial intelligence to assist front-line inspectors in reading X-ray security images can effectively reduce missed and false detections caused by differences in inspectors' experience, skill, or working condition. In real scenarios, the diversity of inspected items, imaging angles, and overlap and occlusion make X-ray security image detection a challenging research problem.
2. Competition Task
The task of this competition is to build a detection model on the real X-ray security inspection image set provided by iFLYTEK and detect items of the specified categories in X-ray security images.
3. Evaluation Rules
3.1 Data Description
The competition provides annotated training data, i.e. X-ray images of items inside parcels together with their annotation files.
The annotation files cover 8 categories:
knife, scissors, lighter, USB flash disk (USBFlashDisk), pressure vessel (pressure), plastic bottle with a nozzle (plasticBottleWithaNozzle), seal, and battery.
Example X-ray images of the items to be recognized are shown in the figure.
The X-ray images and their bounding-box annotation files are stored in separate folders according to data source. Images are in jpg format and annotations in xml format; the fields follow the VOC dataset convention. The VOC fields are:
- filename: file name
- size: image size
  - width: image width
  - height: image height
  - depth: image depth, usually 3 for a color image
- object: an object in the image (there may be several)
  - name: label name of the object
  - bndbox: bounding box of the object
    - xmin: x coordinate (width direction) of the top-left corner
    - ymin: y coordinate (height direction) of the top-left corner
    - xmax: x coordinate (width direction) of the bottom-right corner
    - ymax: y coordinate (height direction) of the bottom-right corner
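To make the field list concrete, here is a small sketch (not part of the competition kit) that parses one VOC-style annotation with Python's standard library and prints the fields above; the example file name in the commented call is only a placeholder.

import xml.etree.ElementTree as ET

def show_voc_annotation(xml_path):
    # Print the VOC fields described above for a single annotation file.
    root = ET.parse(xml_path).getroot()
    size = root.find('size')
    print('filename:', root.findtext('filename'))
    print('width x height x depth:', size.findtext('width'),
          size.findtext('height'), size.findtext('depth'))
    for obj in root.findall('object'):   # an image may contain several objects
        box = obj.find('bndbox')
        print(obj.findtext('name'),
              [box.findtext(k) for k in ('xmin', 'ymin', 'xmax', 'ymax')])

# Example call (placeholder file name):
# show_voc_annotation('data/round1/train/Annotations/domain1_some_image.xml')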
3.2 Evaluation Metric
Scoring uses mAP at IoU = 0.5.
First, the AP of each class is computed:
(1) each predicted box is judged a true positive or a false positive depending on whether its IoU with a ground-truth box reaches the threshold of 0.5;
(2) the predicted boxes are sorted by confidence in descending order;
(3) precision and recall are computed at different confidence thresholds, giving a set of PR points;
(4) the PR curve is drawn and the AP is computed from it.
Then mAP is the average of the AP values over all classes.
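The following is a minimal, unofficial sketch of that procedure (the organizer's scorer is authoritative, and the helper names are mine): it computes AP for one class at IoU = 0.5 from scored detections and per-image ground-truth boxes; mAP is then just the mean of the per-class APs.

import numpy as np

def iou_xyxy(a, b):
    # Boxes given as [xmin, ymin, xmax, ymax].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-12)

def average_precision(dets, gts, iou_thr=0.5):
    # dets: list of (image_id, score, box); gts: dict image_id -> list of boxes.
    dets = sorted(dets, key=lambda d: d[1], reverse=True)           # step (2)
    matched = {img: [False] * len(boxes) for img, boxes in gts.items()}
    n_gt = sum(len(boxes) for boxes in gts.values())
    tps, fps = [], []
    for img_id, score, box in dets:                                  # step (1)
        ious = [iou_xyxy(box, g) for g in gts.get(img_id, [])]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and not matched[img_id][best]:
            matched[img_id][best] = True
            tps.append(1); fps.append(0)
        else:
            tps.append(0); fps.append(1)
    tp, fp = np.cumsum(tps), np.cumsum(fps)                          # step (3)
    recall = tp / max(n_gt, 1)
    precision = tp / np.maximum(tp + fp, 1e-12)
    ap, prev_r = 0.0, 0.0                                            # step (4)
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)   # rectangle-rule area under the PR curve
        prev_r = r
    return ap

# mAP: np.mean([average_precision(dets_c, gts_c) for each class c])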
3.3 Submission Requirements
Submissions must be healthy and legal, contain no inappropriate information or commercial promotion, and must not violate any law of the People's Republic of China. They must be original and must not infringe the intellectual property or other rights of any third party; once this is discovered or pointed out by the rights holder, the organizer will disqualify the entry. iFLYTEK reserves the right of final interpretation of the competition.
Participants submit a file in json format (see the official sample for details). Coordinate values must be positive numbers greater than 0 and must not exceed the image width and height.
Organize the json in the order given in 2022gamedatasettest1.txt on the competition data page.
Entries in the submission file must be ordered first by image and then by category; the order of confidences within a category is arbitrary (an illustrative layout follows this list).
- (a) Image order: follow the image numbering.
- (b) Category order: follow {'knife': 1, 'scissors': 2, 'lighter': 3, 'USBFlashDisk': 4, 'pressure': 5, 'plasticBottleWithaNozzle': 6, 'seal': 7, 'battery': 8}
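An illustrative sketch of the assumed layout (the official sample file is authoritative): one entry per test image in the order of 2022gamedatasettest1.txt, and inside each entry one list per category in the order above, each box being [xmin, ymin, xmax, ymax, score]. This matches the formatting code in Part IV; the numbers below are made up.

result_json = [
    [  # image 1
        [[10.0, 20.0, 110.0, 220.0, 0.93]],   # knife
        [],                                   # scissors
        [],                                   # lighter
        [],                                   # USBFlashDisk
        [],                                   # pressure
        [],                                   # plasticBottleWithaNozzle
        [],                                   # seal
        [[55.0, 60.0, 95.0, 160.0, 0.71]],    # battery
    ],
    # image 2, image 3, ... in the same order as the test list
]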
II. Dataset Processing
1. Organizing the Annotation Files
# Unzip the datasets
!unzip -qoa data/data165820/round1.zip -d data/
!unzip -qoa data/data165820/round2_test.zip -d data/
!mv data/讯飞研究院-X光安检图像检测挑战赛2022公开数据 data/round1
!mkdir data/round1/train/XML
import glob
import os
import shutil

def move_xml(domain_name):
    # Rename the XML files of one domain and collect them in a single folder.
    xml_dir = f"data/round1/train/{domain_name}/XML"
    filenames = glob.glob(f'{xml_dir}/*.xml')
    print(f"{xml_dir}: {len(filenames)} XML files to move.")
    if len(filenames) == 0:
        return
    for filename in filenames:
        file_path, shortname = os.path.split(filename)
        new_name = domain_name + '_' + shortname
        new_name = os.path.join("data/round1/train/XML", new_name)
        shutil.move(filename, new_name)
    shutil.rmtree(xml_dir)
    print("Renaming and moving files done.")

move_xml('domain1')
move_xml('domain2')
move_xml('domain3')
data/round1/train/domain1/XML: 1323 XML files to move.
Renaming and moving files done.
data/round1/train/domain2/XML: 1383 XML files to move.
Renaming and moving files done.
data/round1/train/domain3/XML: 1308 XML files to move.
Renaming and moving files done.
print(len(glob.glob('data/round1/train/XML/*.xml')))
4014
!mv data/round1/train/XML data/round1/train/Annotations
2. Organizing the Image Files
import glob
import os
import shutil

def move_jpg(domain_name):
    # Rename the JPG files of one domain and collect them in JPEGImages.
    jpg_dir = f"data/round1/train/{domain_name}/"
    filenames = glob.glob(f'{jpg_dir}/*.jpg')
    print(f"{jpg_dir}: {len(filenames)} JPG files to move.")
    if len(filenames) == 0:
        return
    for filename in filenames:
        file_path, shortname = os.path.split(filename)
        new_name = domain_name + '_' + shortname
        new_name = os.path.join("data/round1/train/JPEGImages", new_name)
        shutil.move(filename, new_name)
    shutil.rmtree(jpg_dir)
    print("Renaming and moving files done.")

!mkdir data/round1/train/JPEGImages
move_jpg('domain1')
move_jpg('domain2')
move_jpg('domain3')
data/round1/train/domain1/: 1323 JPG files to move.
Renaming and moving files done.
data/round1/train/domain2/: 1383 JPG files to move.
Renaming and moving files done.
data/round1/train/domain3/: 1308 JPG files to move.
Renaming and moving files done.
print(len(glob.glob('data/round1/train/JPEGImages/*.jpg')))
4014
3. Installing PaddleX
!python -m pip install --upgrade -q pip --user
!pip install -q -U paddlex
!pip list|grep paddle
paddle-bfloat 0.1.7
paddlefsl 1.0.0
paddlehub 2.0.4
paddlenlp 2.1.1
paddlepaddle-gpu 2.3.1.post101
paddleslim 2.2.1
paddlex 2.1.0
tb-paddle 0.3.6
4. Splitting the Dataset
!paddlex --split_dataset --format VOC --dataset_dir data/round1/train/ --val_value 0.203
[08-24 01:01:01 MainThread @logger.py:242] Argv: /opt/conda/envs/python35-paddle120-env/bin/paddlex --split_dataset --format VOC --dataset_dir data/round1/train/ --val_value 0.203
[08-24 01:01:01 MainThread @utils.py:79] WRN paddlepaddle version: 2.3.1. The dynamic graph version of PARL is under development, not fully tested and supported
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/parl/remote/communication.py:38: DeprecationWarning: 'pyarrow.default_serialization_context' is deprecated as of 2.0.0 and will be removed in a future version. Use pickle or the pyarrow IPC functionality instead.
  context = pyarrow.default_serialization_context()
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/__init__.py:107: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import MutableMapping
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/rcsetup.py:20: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Iterable, Mapping
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/colors.py:53: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Sized
2022-08-24 01:01:02,135-WARNING: type object 'QuantizationTransformPass' has no attribute '_supported_quantizable_op_type'
2022-08-24 01:01:02,135-WARNING: If you want to use training-aware and post-training quantization, please use Paddle >= 1.8.4 or develop version
2022-08-24 01:01:03 [INFO] Dataset split starts...
2022-08-24 01:01:04 [INFO] Dataset split done.
2022-08-24 01:01:04 [INFO] Train samples: 3200
2022-08-24 01:01:04 [INFO] Eval samples: 814
2022-08-24 01:01:04 [INFO] Test samples: 0
2022-08-24 01:01:04 [INFO] Split files saved in data/round1/train/
!ls data/round1/train/
Annotations JPEGImages labels.txt train_list.txt val_list.txt
5. Viewing the Images
III. Model Training
1. Defining the transforms
# Define the transforms used during training and validation
# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/transforms/transforms.md
import paddlex as pdx
from paddlex import transforms as T
train_transforms = T.Compose([
    # T.MixupImage(mixup_epoch=-1),
    T.RandomDistort(),
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.BatchRandomResize(
        target_sizes=[320, 352, 384, 416, 448, 480, 512, 544, 576, 608],
        interp='RANDOM'),
    # T.Resize(target_size=224, interp='LINEAR'),
    T.Normalize(
        mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

eval_transforms = T.Compose([
    T.Resize(224, interp='CUBIC'),
    T.Normalize(
        mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
2. Defining the Datasets
# Define the datasets used for training and validation
# API reference: https://github.com/PaddlePaddle/PaddleX/blob/develop/docs/apis/datasets.md
train_dataset = pdx.datasets.VOCDetection(
    data_dir='data/round1/train/',
    file_list='data/round1/train/train_list.txt',
    label_list='data/round1/train/labels.txt',
    transforms=train_transforms,
    shuffle=True)

eval_dataset = pdx.datasets.VOCDetection(
    data_dir='data/round1/train',
    file_list='data/round1/train/val_list.txt',
    label_list='data/round1/train/labels.txt',
    transforms=eval_transforms,
    shuffle=False)
2022-08-22 23:08:02 [INFO] Starting to read file list from dataset...
2022-08-22 23:08:06 [INFO] 3200 samples in file data/round1/train/train_list.txt, including 3200 positive samples and 0 negative samples.
creating index...
index created!
2022-08-22 23:08:06 [INFO] Starting to read file list from dataset...
2022-08-22 23:08:07 [INFO] 814 samples in file data/round1/train/val_list.txt, including 814 positive samples and 0 negative samples.
creating index...
index created!
3. Defining the Model
# Generate preset anchors for the YOLO detector by clustering the training boxes
# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0.0/paddlex/tools/anchor_clustering/yolo_cluster.py
import numpy as np

anchors = train_dataset.cluster_yolo_anchor(num_anchors=9, image_size=480)
anchor_masks = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]

# Initialize the model; training metrics can be visualized with VisualDL, see
# https://github.com/PaddlePaddle/PaddleX/tree/release/2.0.0/tutorials/train#visualdl可视化训练指标
num_classes = len(train_dataset.labels)
model = pdx.det.YOLOv3(
    num_classes=num_classes,
    backbone='DarkNet53',
    anchors=anchors.tolist() if isinstance(anchors, np.ndarray) else anchors,
    anchor_masks=anchor_masks,
    label_smooth=True,
    ignore_threshold=0.6)
2022-08-22 23:09:01 [INFO] Running kmeans for 9 anchors on 6829 points...
Evolving anchors with Genetic Algorithm: fitness = 0.7607: 100%|██████████| 1000/1000 [00:02<00:00, 493.54it/s]
W0822 23:09:04.492769 98 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0822 23:09:04.496325 98 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
4. Model Training
The main decision here is the batch size:
- batch size 8 uses about 13421 MiB of GPU memory;
- extrapolating linearly, the 32 GB card would support a batch size of roughly 19.37.
A batch size of 16 is therefore a safe choice (a quick check follows the nvidia-smi output below).
Sun Aug 21 17:54:18 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:05:00.0 Off | 0 |
| N/A 38C P0 62W / 300W | 28077MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
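A back-of-the-envelope check of that choice, assuming GPU memory grows roughly linearly with batch size and taking the 32510 MiB card total reported by nvidia-smi above:

mem_per_sample = 13421 / 8                # ≈ 1678 MiB per image at batch size 8
print(round(32510 / mem_per_sample, 2))   # ≈ 19.38, so a batch size of 16 leaves headroom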
# API reference: https://github.com/PaddlePaddle/PaddleX/blob/release/2.0.0/paddlex/cv/models/detector.py
# Parameter descriptions and tuning notes: https://paddlex.readthedocs.io/zh_CN/develop/appendix/parameters.html
model.train(
    num_epochs=200,                      # number of training epochs
    train_dataset=train_dataset,         # training data
    eval_dataset=eval_dataset,           # validation data
    train_batch_size=16,                 # batch size
    pretrain_weights='COCO',             # pretrained weights
    learning_rate=0.005 / 12,            # learning rate
    warmup_steps=500,                    # warmup steps
    warmup_start_lr=0.0,                 # warmup starting learning rate
    save_interval_epochs=5,              # save every 5 epochs; evaluated automatically when eval data is given
    lr_decay_epochs=[85, 135],           # step learning-rate decay epochs
    save_dir='output/yolov3_darknet53',  # save directory
    use_vdl=True)                        # record training metrics with VisualDL
5. Running as a Background Task
After the steps above, generate a notebook version and submit it as a background task; the trained model is obtained shortly afterwards. The output can be downloaded and contains the notebook code, the logs, the saved models, and other files.
The VisualDL visualization of the local training run is shown below:
IV. Model Prediction
1. Loading the Saved Model for Prediction
During training, the model is saved at a fixed epoch interval, and the epoch that performs best on the validation set is stored in the best_model folder under save_dir. The model can be loaded and used for prediction as follows:
# Unzip the best model produced by the background task
!unzip -qoa data/data166176/best_model.zip
!wget https://ai-contest-static.xfyun.cn/2022/%E6%95%B0%E6%8D%AE%E9%9B%86/1/2022gamedatasettest1.txt
!wget https://ai-contest-static.xfyun.cn/2022/%E6%95%B0%E6%8D%AE%E9%9B%86/2022gamedatasettest2.txt
--2022-08-24 01:03:42-- https://ai-contest-static.xfyun.cn/2022/%E6%95%B0%E6%8D%AE%E9%9B%86/1/2022gamedatasettest1.txt
Resolving ai-contest-static.xfyun.cn (ai-contest-static.xfyun.cn)... 220.181.53.219
Connecting to ai-contest-static.xfyun.cn (ai-contest-static.xfyun.cn)|220.181.53.219|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19057 (19K) [text/plain]
Saving to: ‘2022gamedatasettest1.txt.1’
2022gamedatasettest 100%[===================>] 18.61K --.-KB/s in 0s
2022-08-24 01:03:43 (289 MB/s) - ‘2022gamedatasettest1.txt.1’ saved [19057/19057]
--2022-08-24 01:03:43-- https://ai-contest-static.xfyun.cn/2022/%E6%95%B0%E6%8D%AE%E9%9B%86/2022gamedatasettest2.txt
Resolving ai-contest-static.xfyun.cn (ai-contest-static.xfyun.cn)... 220.181.53.219
Connecting to ai-contest-static.xfyun.cn (ai-contest-static.xfyun.cn)|220.181.53.219|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 76209 (74K) [text/plain]
Saving to: ‘2022gamedatasettest2.txt.1’
2022gamedatasettest 100%[===================>] 74.42K --.-KB/s in 0.03s
2022-08-24 01:03:43 (2.73 MB/s) - ‘2022gamedatasettest2.txt.1’ saved [76209/76209]
%cd ~
img_file = []
with open('2022gamedatasettest1.txt', 'r') as f:
    img_file = f.readlines()
print(len(img_file))
/home/aistudio
1003
%cd data/round1/test/test1
import paddlex as pdx

model = pdx.load_model('/home/aistudio/best_model')
result = []
# Predict one image at a time; batch prediction easily exhausts GPU memory
for img in img_file:
    item = model.predict(img.strip('\n'))
    result.append(item)
/home/aistudio/data/round1/test/test1
W0824 01:04:04.971199 145 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0824 01:04:04.978477 145 gpu_resources.cc:91] device: 0, cuDNN Version: 7.6.
2022-08-24 01:04:05 [INFO] Model[YOLOv3] loaded.
for item in result[5]:
    print(item)
{'category_id': 2, 'category': 'knife', 'bbox': [97.13526916503906, 365.67645263671875, 114.161376953125, 209.08056640625], 'score': 0.020859738811850548}
label_list = ['knife', 'scissors', 'lighter', 'USBFlashDisk', 'pressure', 'plasticBottleWithaNozzle', 'seal', 'battery']
print(len(result[0]))
for item in result[0]:
    print(item)
6
{'category_id': 0, 'category': 'USBFlashDisk', 'bbox': [101.52272033691406, 393.29205322265625, 50.66070556640625, 80.339599609375], 'score': 0.01219225861132145}
{'category_id': 0, 'category': 'USBFlashDisk', 'bbox': [87.89407348632812, 260.2281494140625, 53.05218505859375, 102.7266845703125], 'score': 0.010960005223751068}
{'category_id': 1, 'category': 'battery', 'bbox': [109.76924133300781, 261.1977844238281, 38.988037109375, 63.71624755859375], 'score': 0.12186391651630402}
{'category_id': 1, 'category': 'battery', 'bbox': [81.07035827636719, 261.7168273925781, 58.88621520996094, 123.1175537109375], 'score': 0.09327761828899384}
{'category_id': 1, 'category': 'battery', 'bbox': [264.4331970214844, 305.2523193359375, 47.3126220703125, 64.0133056640625], 'score': 0.02472269907593727}
{'category_id': 1, 'category': 'battery', 'bbox': [101.52272033691406, 393.29205322265625, 50.66070556640625, 80.339599609375], 'score': 0.016887478530406952}
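Note that each bbox above is reported as [xmin, ymin, w, h] (the conversion in the next step relies on this), so the [xmin, ymin, xmax, ymax] form required for submission is obtained, for example, like this:

x, y, w, h = result[0][0]['bbox']
print([x, y, x + w, y + h])   # [xmin, ymin, xmax, ymax]; the score is appended separately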
2. Formatting the Submission
%cd ~
import os
import json

label_list = ['knife', 'scissors', 'lighter', 'USBFlashDisk', 'pressure', 'plasticBottleWithaNozzle', 'seal', 'battery']
save_result = []
for item in result:
    temp_result = [[], [], [], [], [], [], [], []]
    for item2 in item:
        if not len(item2):
            continue
        # Align the predicted category name with its index in label_list
        name = item2['category']
        index = label_list.index(name)
        # Bounding box reported as [xmin, ymin, w, h]
        bbox = item2['bbox']
        # Confidence score
        score = item2['score']
        # Convert to [xmin, ymin, xmax, ymax, score]
        temp_result[index].append([bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3], score])
    save_result.append(temp_result)
with open('/home/aistudio/result.json', 'w') as fp:
    json.dump(save_result, fp, ensure_ascii=False)
/home/aistudio
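As an optional safeguard for the rule in section 3.3 (coordinates must be greater than 0 and must not exceed the image width and height), a small clipping helper could be applied to each box before dumping result.json. This is only a sketch; the per-image width W and height H would have to be read from each test image.

def clip_box(xmin, ymin, xmax, ymax, W, H, eps=0.01):
    # Force every coordinate to be strictly positive and inside the image.
    xmin = min(max(xmin, eps), W)
    ymin = min(max(ymin, eps), H)
    xmax = min(max(xmax, xmin), W)
    ymax = min(max(ymax, ymin), H)
    return [xmin, ymin, xmax, ymax]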
This article is a repost of the original project.