This project trains and evaluates a YOLOv3 object detector on the Global Wheat Detection dataset from Kaggle. YOLOv3 combines several proven techniques and shows a clear improvement on small-object detection in particular, making it a detector with a good balance of speed and accuracy.


Soft, fragrant bread, tasty steamed buns, delicious dumplings, and all kinds of appealing noodle dishes: wheat products are everywhere, and their popularity is why wheat is so widely studied. Plant scientists detect "wheat heads" (the grain-bearing spikes at the top of the plant) in images; large, accurate collections of field images from around the world let them estimate the density and size of wheat heads across varieties. Farmers can use these data to assess crop health and maturity when making management decisions.

However, accurately detecting wheat heads in outdoor field images is visually challenging. Dense wheat plants frequently overlap, and wind blurs the photographs; both make it hard to identify individual heads. Appearance also varies with maturity, color, genotype, and head orientation. Finally, because wheat is grown worldwide, different varieties, planting densities, patterns, and field conditions must be considered. Models developed for wheat phenotyping need to generalize across growing environments. Current approaches use one-stage and two-stage detectors (YOLOv3 and Faster R-CNN), but even when trained on large datasets they remain biased toward the regions they were trained on.

Wheat is a staple food worldwide, which makes accounting for different growing conditions essential. Models developed for wheat phenotyping need to generalize across environments. If they succeed, researchers can accurately estimate the density and size of wheat heads across varieties, and with better detection farmers can better assess their crops. Object detection is the prerequisite for all of this.

Design Ideas Behind the YOLO Family of Algorithms

The basic pipeline of the YOLO family:

  • Sample labeling. Generate a series of candidate regions on the image according to fixed rules, then label each candidate based on its positional relationship to the ground-truth boxes on the image. Candidates close enough to a ground-truth box are labeled positive, with the ground-truth location as their regression target; candidates that deviate too far from any ground-truth box are labeled negative, and negatives need no location or class prediction.
  • Loss construction. Use a convolutional neural network to extract image features and predict the location and class of each candidate region. Each prediction box can then be viewed as one sample: its label comes from the ground-truth box's relative position and class, its prediction comes from the network, and comparing the two yields the loss function (a box-regression loss plus a class cross-entropy loss).

The training pipeline of the YOLO family is shown in Figure 1:



Figure 1: Training pipeline of the YOLO family

  • The left side of Figure 1 is the input image. The upper branch extracts features with a convolutional neural network; as the network propagates forward, the feature map shrinks and each pixel comes to represent a more abstract feature pattern, until the output feature map is $\frac{1}{32}$ the size of the original image.
  • The lower branch of Figure 1 generates candidate regions. The image is first divided into small squares of size $32 \times 32$, then a series of anchor boxes is generated centered on each square so that the whole image is covered by anchors. Each anchor box spawns a corresponding prediction box, and these prediction boxes are labeled according to how the anchors and prediction boxes relate to the ground-truth boxes on the image. (A minimal sketch of this step follows the list.)
  • Associating the feature map produced by the upper branch with the prediction-box labels produced by the lower branch yields the loss function and enables end-to-end training.
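To make the candidate-generation and labeling step above concrete, here is a minimal numpy-only sketch. The stride, anchor sizes, and helper names are my own illustrative assumptions; the actual training below delegates this work to paddle.vision.ops.yolo_loss.

# Illustrative sketch only: grid anchors plus IoU-based positive assignment.
# Stride, anchor sizes, and helper names are assumptions for illustration;
# during training, paddle.vision.ops.yolo_loss performs this labeling internally.
import numpy as np

def grid_anchors(img_size=416, stride=32, sizes=((116, 90), (156, 198), (373, 326))):
    # one set of anchors centered on every stride x stride cell, as [x1, y1, x2, y2]
    anchors = []
    for cy in range(stride // 2, img_size, stride):
        for cx in range(stride // 2, img_size, stride):
            for w, h in sizes:
                anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)

def iou_xyxy(box, boxes):
    # IoU between one box and an array of boxes, all in xyxy format
    xmin = np.maximum(box[0], boxes[:, 0]); ymin = np.maximum(box[1], boxes[:, 1])
    xmax = np.minimum(box[2], boxes[:, 2]); ymax = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(xmax - xmin, 0, None) * np.clip(ymax - ymin, 0, None)
    area1 = (box[2] - box[0]) * (box[3] - box[1])
    area2 = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area1 + area2 - inter)

anchors = grid_anchors()                        # 13 x 13 cells x 3 anchors = 507 candidates
gt = np.array([200.0, 180.0, 320.0, 300.0])     # a made-up ground-truth box, xyxy
ious = iou_xyxy(gt, anchors)
print('positive anchor:', anchors[np.argmax(ious)], 'IoU: %.2f' % ious.max())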

1. Data Processing

The dataset is the Global Wheat Detection dataset from Kaggle. Its ground-truth boxes are stored in top-left xywh format, while YOLOv3's inputs and outputs use center-point xywh. A conversion is needed:

bboxes[:, 0] = bboxes[:, 0] + bboxes[:, 2] / 2.0  # convert the box position from top-left xywh to center xywh
bboxes[:, 1] = bboxes[:, 1] + bboxes[:, 3] / 2.0
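For reference, the inverse transform, from center xywh back to the corner-based xyxy format that the drawing and NMS code later in this project consume, can be written as a small hypothetical helper (my addition, not part of the project code):

# Hypothetical helper (not used below): center xywh -> corner xyxy.
def center_xywh_to_xyxy(bboxes):
    out = bboxes.copy()
    out[:, 0] = bboxes[:, 0] - bboxes[:, 2] / 2.0  # x1
    out[:, 1] = bboxes[:, 1] - bboxes[:, 3] / 2.0  # y1
    out[:, 2] = bboxes[:, 0] + bboxes[:, 2] / 2.0  # x2
    out[:, 3] = bboxes[:, 1] + bboxes[:, 3] / 2.0  # y2
    return out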
Unzip the dataset
# Unzip the dataset
!unzip -q -d data data/data198878/global-wheat-detection.zip 
# Import modules
import numpy as np
import pandas as pd
import paddle

import os
import cv2
from PIL import Image, ImageDraw, ImageEnhance
from paddle.vision import transforms as T
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm # progress-bar module
ROOT_PATH = 'data' 

def get_path(*args,fp_postfix=None):
    # Build a file path under ROOT_PATH
    '''
    Params: fp_postfix  file extension, e.g. jpg, png, gif
    Params: *args       path components
    example:
        f = get_path('a', 'b', 'c', 'd')
        print(f) # data/a/b/c/d
        f = get_path('a', 'b', 'c', 'd',fp_postfix='jpg')
        print(f) # data/a/b/c/d.jpg
    '''
    if fp_postfix:
        obj_path = os.path.join(ROOT_PATH, *args) + '.' + fp_postfix
        return obj_path

    obj_path = os.path.join(ROOT_PATH, *args)
    return obj_path

# Inspect the dataset
img_nums = len(os.listdir(get_path('train')))
print(f'Number of images in train: {img_nums}')

labels =  pd.read_csv(get_path('train.csv')) # read the data from train.csv
print("\nNumber of labeled images in the training set: {}".format(labels["image_id"].nunique()))
print("Number of ground-truth boxes (gt_box):", labels.shape[0])
labels.head()
Number of images in train: 3422

Number of labeled images in the training set: 3373
Number of ground-truth boxes (gt_box): 147793
    image_id   width  height  bbox                         source
0   b6ab77fd7  1024   1024    [834.0, 222.0, 56.0, 36.0]   usask_1
1   b6ab77fd7  1024   1024    [226.0, 548.0, 130.0, 58.0]  usask_1
2   b6ab77fd7  1024   1024    [377.0, 504.0, 74.0, 160.0]  usask_1
3   b6ab77fd7  1024   1024    [834.0, 95.0, 109.0, 107.0]  usask_1
4   b6ab77fd7  1024   1024    [26.0, 144.0, 124.0, 117.0]  usask_1

In the raw data, the bbox column is a string and must be converted to an array. The code below groups the bounding boxes by image_id and stores them as one numpy array per image_id, so that all boxes for an image can be retrieved quickly by its image_id.

def group_boxes(group):
    # Collect the bboxes sharing one image_id into a single numpy array
    boundaries = group["bbox"].str.split(",", expand = True)
    boundaries[0] = boundaries[0].str.slice(start = 1) # strip the leading '[' from the bbox string
    boundaries[3] = boundaries[3].str.slice(stop = -1) # strip the trailing ']' from the bbox string
    
    return boundaries.values.astype(float)

labels = labels.groupby("image_id").apply(group_boxes)
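The string slicing above depends on the exact '[x, y, w, h]' formatting of the CSV. Since each bbox cell is also valid JSON, an equivalent and arguably more robust parser could be used instead (a sketch; the slicing version above is what the rest of this project runs):

# Sketch of an alternative bbox parser; group_boxes above is what is actually used.
import json

def group_boxes_json(group):
    # each bbox cell, e.g. "[834.0, 222.0, 56.0, 36.0]", parses directly as JSON
    return np.array(group["bbox"].apply(json.loads).tolist(), dtype=float)

# labels = labels.groupby("image_id").apply(group_boxes_json)  # drop-in replacement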

Below is the information for one of the images.

print("Shape of one image's ground-truth boxes:", labels["ffbf75e5b"].shape)
labels["ffbf75e5b"]

With the labels extracted, the images themselves need to be loaded as numpy arrays. This is also a good point to split the data into training and validation sets. Because the dataset is small and the vast majority of images should go to training, only the last 15 images are held out for validation. This is probably not the ideal validation size, but given the number of available images and the complexity of the task, it is a reasonable compromise.

# Split the dataset
train_image_ids = np.unique(labels.index.values)[0:3358]
val_image_ids = np.unique(labels.index.values)[3358:3373]
# Load a training image, resizing it from 1024x1024 to 256x256 for faster training
def load_image(image_id):
    img_path = get_path('train', image_id, fp_postfix='jpg')
    img = Image.open(img_path)
    img = img.resize((256, 256))
    
    return np.asarray(img)
# Fetch the images and ground-truth boxes and store them in dictionaries
def get_data(image_ids):
    data_pixels = {} # image contents
    data_labels = {} # ground-truth box coordinates

    for image_id in tqdm(image_ids):
        data_pixels[image_id] = load_image(image_id)
        data_labels[image_id] = labels[image_id].copy() / 4  # scale the boxes from 1024 to 256
    return data_pixels, data_labels
# Training data
train_pixels, train_labels = get_data(train_image_ids)
  0%|          | 0/3358 [00:00<?, ?it/s]
# Validation data
val_pixels, val_labels = get_data(val_image_ids)

  0%|          | 0/15 [00:00<?, ?it/s]
Visualizing the images

Before going further, let's look at some of the images and bounding boxes in the dataset.

def draw_bboxes(image_id, bboxes, source = "train"):  
    img_path = get_path(source, image_id, fp_postfix="jpg")
    image = Image.open(img_path) # read the image
    image = image.resize((256,256)) # resize to 256x256

    # image = transform()(image) # try out the image augmentations here
    
    draw = ImageDraw.Draw(image) # drawing handle for the image
            
    for bbox in bboxes: # iterate over the bboxes
        draw_bbox(draw, bbox) # draw each bbox
    
    return np.asarray(image)


def draw_bbox(draw, bbox): # draw one rectangle
    x, y, width, height = bbox
    draw.rectangle([x, y, x + width, y + height], width = 2, outline='red')
def show_images(image_ids, bboxes, source = 'train'):
    # Draw several images with their boxes and plot them side by side.
    pixels = []
    
    for image_id in image_ids:
        pixels.append(
            draw_bboxes(image_id, bboxes[image_id], source)
        )
    
    num_of_images = len(image_ids)
    fig, axes = plt.subplots(1, num_of_images, figsize = (5 * num_of_images, 5 * num_of_images))
    
    for i, image_pixels in enumerate(pixels):
        axes[i].imshow(image_pixels)
show_images(train_image_ids[0:2], train_labels)

(Output: two training images with their ground-truth boxes drawn in red.)

Cleaning up the annotation boxes

A small number of bounding boxes in this dataset contain no wheat head. They are rare, but they still hurt wheat-head detection and cause inaccuracy. The code below searches for tiny boxes that contain no wheat head, as well as huge, mislabeled boxes.

tiny_bboxes = []

for i, image_id in enumerate(train_image_ids):
    for label in train_labels[image_id]:
        if (label[2] * label[3]) <= 10 and label[2] * label[3] != 0:
            tiny_bboxes.append((image_id, i))

            
print(str(len(tiny_bboxes)) + " tiny boxes")
# print(tiny_bboxes)
50 tiny boxes
huge_bboxes = []

for i, image_id in enumerate(train_image_ids):
    for label in train_labels[image_id]:
        if label[2] * label[3] > 8000:
            huge_bboxes.append((image_id, i))

            
print(str(len(huge_bboxes)) + " huge boxes")
# print(huge_bboxes)
13 huge boxes
# Show some of the boxes that contain no wheat head
show_images(train_image_ids[19:21], train_labels)
# Keep only the well-formed boxes, building a fresh training label set without modifying the original data
def clean_labels(train_image_ids, train_labels):
    good_labels = {}
    
    for i, image_id in enumerate(train_image_ids):
        good_labels[image_id] = []
        
        for j, label in enumerate(train_labels[image_id]):

            # remove huge bboxes (a few known-good images are exempted)
            if label[2] * label[3] > 8000 and i not in [1079, 1371, 2020]:
                continue

            # remove tiny bboxes
            elif label[2] < 5 or label[3] < 5:
                continue
                
            else:
                good_labels[image_id].append(
                    train_labels[image_id][j]
                )
                
    return good_labels

train_labels = clean_labels(train_image_ids, train_labels)
Complete data-preparation code
# Full data-loading code in one place
# Import modules
import os
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm # progress-bar module

import cv2
from PIL import Image, ImageDraw, ImageEnhance
import matplotlib.pyplot as plt

import paddle
from paddle.vision import transforms as T


# dataset root directory
ROOT_PATH = 'data' 
# input image size
IMG_SIZE = 416

def get_path(*args,fp_postfix=None):
    # Build a file path under ROOT_PATH
    '''
    Params: fp_postfix  file extension, e.g. jpg, png, gif
    Params: *args       path components
    example:
        f = get_path('a', 'b', 'c', 'd')
        print(f) # data/a/b/c/d
        f = get_path('a', 'b', 'c', 'd',fp_postfix='jpg')
        print(f) # data/a/b/c/d.jpg
    '''
    if fp_postfix:
        obj_path = os.path.join(ROOT_PATH, *args) + '.' + fp_postfix
        return obj_path

    obj_path = os.path.join(ROOT_PATH, *args)
    return obj_path

# Group the bboxes by image and convert them to numpy
def group_boxes(group):
    # Collect the bboxes sharing one image_id into a single numpy array
    boundaries = group["bbox"].str.split(",", expand = True)
    boundaries[0] = boundaries[0].str.slice(start = 1) # strip the leading '[' from the bbox string
    boundaries[3] = boundaries[3].str.slice(stop = -1) # strip the trailing ']' from the bbox string
    
    return boundaries.values.astype(float)

# Load a training image, resizing it from 1024x1024 to IMG_SIZE x IMG_SIZE for faster training
def load_image(image_id):
    img_path = get_path('train', image_id, fp_postfix='jpg')
    img = Image.open(img_path)
    img = img.resize((IMG_SIZE, IMG_SIZE))
    
    return np.asarray(img)

# Split the dataset into training and validation ids
def train_seq_val(ration_size):
    train_image_ids = np.unique(labels.index.values)[0:ration_size]
    val_image_ids = np.unique(labels.index.values)[ration_size:3373]
    return train_image_ids, val_image_ids

# Fetch the images and ground-truth boxes and store them in dictionaries
def get_data(image_ids):
    data_pixels = {} # image contents
    data_labels = {} # ground-truth box coordinates

    for image_id in tqdm(image_ids):
        data_pixels[image_id] = load_image(image_id)
        data_labels[image_id] = labels[image_id].copy() / (1024 / IMG_SIZE)  # scale the labels to IMG_SIZE
    return data_pixels, data_labels

# Keep only the well-formed boxes, building a fresh training label set without modifying the original data
def clean_labels(train_image_ids, train_labels):
    good_labels = {}
    
    for i, image_id in enumerate(train_image_ids):
        good_labels[image_id] = []
        
        for j, label in enumerate(train_labels[image_id]):

            # remove huge bbox
            if label[2] * label[3] > 8000 and i not in [1079, 1371, 2020]:
                continue

            # remove tiny bbox
            elif label[2] < 5 or label[3] < 5:
                continue
                
            else:
                good_labels[image_id].append(
                    train_labels[image_id][j])
                
    return good_labels


labels = pd.read_csv(get_path('train.csv')) # read the data from train.csv
labels = labels.groupby("image_id").apply(group_boxes) # convert the format and group the ground-truth boxes
# Split the dataset
train_image_ids, val_image_ids = train_seq_val(ration_size=3358)

# Training data
train_pixels, train_labels = get_data(train_image_ids)
# Validation data
val_pixels, val_labels = get_data(val_image_ids)
# Further clean up the ground-truth boxes
train_labels = clean_labels(train_image_ids, train_labels)
  0%|          | 0/3358 [00:00<?, ?it/s]



  0%|          | 0/15 [00:00<?, ?it/s]

Building the dataset

Normally I would build the input pipeline with Paddle's data API generators, but the preprocessing this model requires is not trivial, and it proved easier to write a custom dataset. It must:

  • Define the dataset size.
  • Shuffle the dataset order.
  • Fetch images and augment them to increase the dataset's diversity, adjusting the bounding boxes whenever the wheat heads in the image are transformed.
  • Reshape the bounding boxes into the label-grid format.

Wheat-head detection can be treated as a single-class detection task. Since the original dataset carries no class labels, we attach them ourselves.

def get_bbox(self, gt_bbox):
        # A typical detection image contains several objects. MAX_NUM = 55 caps
        # the number of ground-truth boxes taken per image; if an image has
        # fewer than 55 boxes, the remaining gt_bbox entries are zero-padded.
        MAX_NUM = 55
        gt_bbox2 = np.zeros((MAX_NUM, 4))
        gt_class2 = np.zeros((MAX_NUM,))    # single class
        gt_bbox = np.array(gt_bbox)
        for i in range(len(gt_bbox)):
            if i >= MAX_NUM:
                break
            gt_bbox2[i, :] = gt_bbox[i, :]
            gt_class2[i] = 0 # the single-class label is 0
                   
        return gt_bbox2, gt_class2
The dataset class
class MyDataset(paddle.io.Dataset):
    def __init__(self, image_ids, image_pixels, img_size, labels = None, mode = None):
        super(MyDataset, self).__init__()
        self.image_ids = image_ids
        self.image_pixels = image_pixels
        self.img_size = img_size
        self.labels = labels
        self.transform = None
        self.mode = mode
        if self.mode == "train":
            self.transform =T.Compose([                      
                T.BrightnessTransform(0.4), # 亮度调节
                T.ContrastTransform(0.4),   # 对比度调节
                T.HueTransform(0.4),        # 色调
                # T.RandomErasing(),        # 随机擦除
                T.Normalize(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375],data_format='HWC'), # 标准化
                T.Transpose()             # 数据格式转换,Transpose默认参数(2,0,1)
            ])
        if self.mode == "val":
            self.transform =T.Compose([
                T.Normalize(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375],data_format='HWC'),
                T.Transpose()
            ])     
    
    def get_bbox(self, gt_bbox):
        # A typical detection image contains several objects. MAX_NUM = 55 caps
        # the number of ground-truth boxes taken per image; if an image has
        # fewer than 55 boxes, the remaining gt_bbox entries are zero-padded.
        MAX_NUM = 55
        gt_bbox2 = np.zeros((MAX_NUM, 4))
        gt_class2 = np.zeros((MAX_NUM,))    # single class
        gt_bbox = np.array(gt_bbox)
        for i in range(len(gt_bbox)):
            if i >= MAX_NUM:
                break
            gt_bbox2[i, :] = gt_bbox[i, :]
            gt_class2[i] = 0
                   
        return gt_bbox2, gt_class2

    def __getitem__(self, index):
        image_id = self.image_ids[index]     

        X = self.image_pixels[image_id]

        h = X.shape[0]   # note: shape[0] is the height; the images here are square
        w = X.shape[1]
        
        bboxes, gt_labels = self.get_bbox(self.labels[image_id])
        box_idx = np.arange(bboxes.shape[0]) # shuffle the order of the ground-truth boxes
        np.random.shuffle(box_idx)
        gt_labels = gt_labels[box_idx]

        bboxes = bboxes[box_idx] # the boxes are in top-left xywh format here
        bboxes[:, 0] = bboxes[:, 0] + bboxes[:, 2] / 2.0  # convert from top-left to center xywh
        bboxes[:, 1] = bboxes[:, 1] + bboxes[:, 3] / 2.0
        
        y = bboxes

        if self.transform:
            X = self.transform(X)

        return X.astype('float32'), np.array(y, dtype = 'float32') / self.img_size, \
            np.array(gt_labels, dtype = 'int32'), np.array([h, w],dtype='int32')

    def __len__(self):
        return len(self.image_ids)
train_dataset = MyDataset(train_image_ids,
                        train_pixels,
                        IMG_SIZE,
                        train_labels,
                        mode='train')

val_dataset = MyDataset(val_image_ids,
                        val_pixels,
                        IMG_SIZE,
                        val_labels,
                        mode='val')
# Check the dataset shapes
d = paddle.io.DataLoader(train_dataset, batch_size=4, shuffle=True, num_workers=0)
print(next(d())[0].shape, next(d())[1].shape)
[4, 3, 416, 416] [4, 55, 4]

2. Building the Network

With the data ready, I define and train the model. Stacking convolution and pooling layers yields feature maps with progressively richer semantics; detection likewise uses a convolutional network to extract image features layer by layer, with the final output feature maps encoding object locations and classes.

YOLOv3 uses Darknet53 as its backbone. Darknet53's structure is shown in Figure 2; it achieved strong results on the ImageNet classification task. For detection, the average pooling, fully connected layer, and Softmax that follow C0 in the figure are removed, and the network from the input up to C0 is kept as the detector's base, also called the backbone. YOLOv3 then adds detection-specific modules on top of this backbone.

Figure 2: DarkNet53 architecture

The mapping from the backbone outputs to the detection outputs (C0->y1, C1->y2, C2->y3) is implemented by YoloDetectionBlock.

Figure 3: YoloDetectionBlock (the box on the right of the diagram)

The corresponding output shapes are shown below:

Figure 4: Output feature shapes
Basic building blocks of the YOLOv3 model
# Basic building blocks of the YOLOv3 model
import paddle
import paddle.nn.functional as F
import numpy as np

class ConvBNLayer(paddle.nn.Layer):
    def __init__(self, ch_in, ch_out, 
                 kernel_size=3, stride=1, groups=1,
                 padding=0, act="leaky"):
        super(ConvBNLayer, self).__init__()
    
        self.conv = paddle.nn.Conv2D(
            in_channels=ch_in,
            out_channels=ch_out,
            kernel_size=kernel_size,
            stride=stride,
            padding=padding,
            groups=groups,
            weight_attr=paddle.ParamAttr(
                initializer=paddle.nn.initializer.Normal(0., 0.02)),
            bias_attr=False)
    
        self.batch_norm = paddle.nn.BatchNorm2D(
            num_features=ch_out,
            weight_attr=paddle.ParamAttr(
                initializer=paddle.nn.initializer.Normal(0., 0.02),
                regularizer=paddle.regularizer.L2Decay(0.)),
            bias_attr=paddle.ParamAttr(
                initializer=paddle.nn.initializer.Constant(0.0),
                regularizer=paddle.regularizer.L2Decay(0.)))
        self.act = act

        
    def forward(self, inputs):
        out = self.conv(inputs)
        out = self.batch_norm(out)
        if self.act == 'leaky':
            out = F.leaky_relu(x=out, negative_slope=0.1)
        return out
    
class DownSample(paddle.nn.Layer):
    # Downsampling: halve the spatial size with a stride-2 convolution
    def __init__(self,
                 ch_in,
                 ch_out,
                 kernel_size=3,
                 stride=2,
                 padding=1):

        super(DownSample, self).__init__()

        self.conv_bn_layer = ConvBNLayer(
            ch_in=ch_in,
            ch_out=ch_out,
            kernel_size=kernel_size,
            stride=stride,
            padding=padding)
        self.ch_out = ch_out
    def forward(self, inputs):
        out = self.conv_bn_layer(inputs)
        return out

class BasicBlock(paddle.nn.Layer):
    """
    Basic residual block: the input x passes through two conv layers, and the
    second conv's output is added back to x.
    """
    def __init__(self, ch_in, ch_out):
        super(BasicBlock, self).__init__()

        self.conv1 = ConvBNLayer(
            ch_in=ch_in,
            ch_out=ch_out,
            kernel_size=1,
            stride=1,
            padding=0
            )
        self.conv2 = ConvBNLayer(
            ch_in=ch_out,
            ch_out=ch_out*2,
            kernel_size=3,
            stride=1,
            padding=1
            )
    def forward(self, inputs):
        conv1 = self.conv1(inputs)
        conv2 = self.conv2(conv1)
        out = paddle.add(x=inputs, y=conv2)
        return out

     
class LayerWarp(paddle.nn.Layer):
    """
    Stack several residual blocks to form one stage of the Darknet53 network.
    """
    def __init__(self, ch_in, ch_out, count, is_test=True):
        super(LayerWarp,self).__init__()

        self.basicblock0 = BasicBlock(ch_in,
            ch_out)
        self.res_out_list = []
        for i in range(1, count):
            res_out = self.add_sublayer("basic_block_%d" % (i), # register sublayers via add_sublayer
                BasicBlock(ch_out*2,
                    ch_out))
            self.res_out_list.append(res_out)

    def forward(self,inputs):
        y = self.basicblock0(inputs)
        for basic_block_i in self.res_out_list:
            y = basic_block_i(y)
        return y

# Number of residual blocks per DarkNet stage, taken from the DarkNet architecture diagram
DarkNet_cfg = {53: ([1, 2, 8, 8, 4])}

class DarkNet53_conv_body(paddle.nn.Layer):
    def __init__(self):
        super(DarkNet53_conv_body, self).__init__()
        self.stages = DarkNet_cfg[53]
        self.stages = self.stages[0:5]

        # first conv layer
        self.conv0 = ConvBNLayer(
            ch_in=3,
            ch_out=32,
            kernel_size=3,
            stride=1,
            padding=1)

        # downsample with a stride-2 convolution
        self.downsample0 = DownSample(
            ch_in=32,
            ch_out=32 * 2)

        # add each stage
        self.darknet53_conv_block_list = []
        self.downsample_list = []
        for i, stage in enumerate(self.stages):
            conv_block = self.add_sublayer(
                "stage_%d" % (i),
                LayerWarp(32*(2**(i+1)),
                32*(2**i),
                stage))
            self.darknet53_conv_block_list.append(conv_block)
        # insert a DownSample between stages to halve the spatial size
        for i in range(len(self.stages) - 1):
            downsample = self.add_sublayer(
                "stage_%d_downsample" % i,
                DownSample(ch_in=32*(2**(i+1)),
                    ch_out=32*(2**(i+2))))
            self.downsample_list.append(downsample)

    def forward(self,inputs):
        out = self.conv0(inputs)
        out = self.downsample0(out)
        blocks = []
        for i, conv_block_i in enumerate(self.darknet53_conv_block_list): # apply each stage in turn
            out = conv_block_i(out)
            blocks.append(out)
            if i < len(self.stages) - 1:
                out = self.downsample_list[i](out)
        return blocks[-1:-4:-1] # return C0, C1, C2


# YOLO detection head, producing the P0, P1, or P2 features
class YoloDetectionBlock(paddle.nn.Layer):
    # define YOLOv3 detection head
    # extracts features with several conv + BN layers
    def __init__(self,ch_in,ch_out,is_test=True):
    def __init__(self,ch_in,ch_out,is_test=True):
        super(YoloDetectionBlock, self).__init__()

        assert ch_out % 2 == 0, \
            "channel {} cannot be divided by 2".format(ch_out)

        self.conv0 = ConvBNLayer(
            ch_in=ch_in,
            ch_out=ch_out,
            kernel_size=1,
            stride=1,
            padding=0)
        self.conv1 = ConvBNLayer(
            ch_in=ch_out,
            ch_out=ch_out*2,
            kernel_size=3,
            stride=1,
            padding=1)
        self.conv2 = ConvBNLayer(
            ch_in=ch_out*2,
            ch_out=ch_out,
            kernel_size=1,
            stride=1,
            padding=0)
        self.conv3 = ConvBNLayer(
            ch_in=ch_out,
            ch_out=ch_out*2,
            kernel_size=3,
            stride=1,
            padding=1)
        self.route = ConvBNLayer(
            ch_in=ch_out*2,
            ch_out=ch_out,
            kernel_size=1,
            stride=1,
            padding=0)
        self.tip = ConvBNLayer(
            ch_in=ch_out,
            ch_out=ch_out*2,
            kernel_size=3,
            stride=1,
            padding=1)
    def forward(self, inputs):
        out = self.conv0(inputs)
        out = self.conv1(out)
        out = self.conv2(out)
        out = self.conv3(out)
        route = self.route(out)
        tip = self.tip(route)
        return route, tip


# Inspect the P0 output features
NUM_ANCHORS = 3
NUM_CLASSES = 1
num_filters=NUM_ANCHORS * (NUM_CLASSES + 5)

backbone = DarkNet53_conv_body()
detection = YoloDetectionBlock(ch_in=1024, ch_out=512)
conv2d_pred = paddle.nn.Conv2D(in_channels=1024, out_channels=num_filters, kernel_size=1)

x = np.random.randn(1, 3, IMG_SIZE, IMG_SIZE).astype('float32')
x = paddle.to_tensor(x)
C0, C1, C2 = backbone(x)
route, tip = detection(C0)
P0 = conv2d_pred(tip)

print(P0.shape)
[1, 18, 13, 13]
The YOLOv3 multi-scale detection model

Multi-scale detection addresses dense targets and large variation in object size.

# Upsampling module
class Upsample(paddle.nn.Layer):
    def __init__(self, scale=2):
        super(Upsample,self).__init__()
        self.scale = scale

    def forward(self, inputs):
        # compute the dynamic upsample output shape
        shape_nchw = paddle.shape(inputs)
        shape_hw = paddle.slice(shape_nchw, axes=[0], starts=[2], ends=[4])
        shape_hw.stop_gradient = True
        in_shape = paddle.cast(shape_hw, dtype='int32')
        out_shape = in_shape * self.scale
        out_shape.stop_gradient = True

        # resize by the scale factor
        out = paddle.nn.functional.interpolate(
            x=inputs, scale_factor=self.scale, mode="NEAREST")
        return out


# Define the YOLOv3 model
class YOLOv3(paddle.nn.Layer):
    def __init__(self, num_classes=7):
        super(YOLOv3,self).__init__()

        self.num_classes = num_classes
        # backbone that extracts image features
        self.block = DarkNet53_conv_body()
        self.block_outputs = []
        self.yolo_blocks = []
        self.route_blocks_2 = []
        # produce the feature maps P0, P1, P2 at three levels
        for i in range(3):
            # module that produces ri and ti from ci
            yolo_block = self.add_sublayer(
                "yolo_detecton_block_%d" % (i),
                YoloDetectionBlock(
                                   ch_in=512//(2**i)*2 if i==0 else 512//(2**i)*2 + 512//(2**i),
                                   ch_out = 512//(2**i)))
            self.yolo_blocks.append(yolo_block)

            num_filters = 3 * (self.num_classes + 5)

            # module that produces pi from ti: a Conv2D with 3 * (num_classes + 5) output channels
            block_out = self.add_sublayer(
                "block_out_%d" % (i),
                paddle.nn.Conv2D(in_channels=512//(2**i)*2,
                       out_channels=num_filters,
                       kernel_size=1,
                       stride=1,
                       padding=0,
                       weight_attr=paddle.ParamAttr(
                           initializer=paddle.nn.initializer.Normal(0., 0.02)),
                       bias_attr=paddle.ParamAttr(
                           initializer=paddle.nn.initializer.Constant(0.0),
                           regularizer=paddle.regularizer.L2Decay(0.))))
            self.block_outputs.append(block_out)
            if i < 2:
                # convolve ri
                route = self.add_sublayer("route2_%d"%i,
                                          ConvBNLayer(ch_in=512//(2**i),
                                                      ch_out=256//(2**i),
                                                      kernel_size=1,
                                                      stride=1,
                                                      padding=0))
                self.route_blocks_2.append(route)
            # upsample ri so its size matches c_{i+1}
            self.upsample = Upsample()


    def forward(self, inputs):
        outputs = []
        blocks = self.block(inputs)
        for i, block in enumerate(blocks):
            if i > 0:
                # concatenate this level's ci with the convolved, upsampled r_{i-1}
                block = paddle.concat([route, block], axis=1)
            # produce ti and ri from ci
            route, tip = self.yolo_blocks[i](block)
            # produce pi from ti
            block_out = self.block_outputs[i](tip)
            # collect pi
            outputs.append(block_out)

            if i < 2:
                # convolve ri to adjust its channel count
                route = self.route_blocks_2[i](route)
                # upsample ri so its size matches c_{i+1}
                route = self.upsample(route)
        return outputs


    def get_loss(self, outputs, gtbox, gtlabel, gtscore=None,
                 anchors = [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326],
                 anchor_masks = [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
                 ignore_thresh=0.7,
                 use_label_smooth=False):
        """
        使用paddle.vision.ops.yolo_loss,直接计算损失函数,过程更简洁,速度也更快
        """
        self.losses = []
        downsample = 32
        for i, out in enumerate(outputs): # 对三个层级分别求损失函数
            anchor_mask_i = anchor_masks[i]
            loss = paddle.vision.ops.yolo_loss(
                    x=out,  # out是P0, P1, P2中的一个
                    gt_box=gtbox,  # 真实框坐标
                    gt_label=gtlabel,  # 真实框类别
                    gt_score=gtscore,  # 真实框得分,使用mixup训练技巧时需要,不使用该技巧时直接设置为1,形状与gtlabel相同
                    anchors=anchors,   # 锚框尺寸,包含[w0, h0, w1, h1, ..., w8, h8]共9个锚框的尺寸
                    anchor_mask=anchor_mask_i, # 筛选锚框的mask,例如anchor_mask_i=[3, 4, 5],将anchors中第3、4、5个锚框挑选出来给该层级使用
                    class_num=self.num_classes, # 分类类别数
                    ignore_thresh=ignore_thresh, # 当预测框与真实框IoU > ignore_thresh,标注objectness = -1
                    downsample_ratio=downsample, # 特征图相对于原图缩小的倍数,例如P0是32, P1是16,P2是8
                    use_label_smooth=False)      # 使用label_smooth训练技巧时会用到,这里没用此技巧,直接设置为False
            self.losses.append(paddle.mean(loss))  #mean对每张图片求和
            downsample = downsample // 2 # 下一级特征图的缩放倍数会减半
        return sum(self.losses) # 对每个层级求和


    def get_pred(self,
                 outputs,
                 im_shape=None,
                 anchors = [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326],
                 anchor_masks = [[6, 7, 8], [3, 4, 5], [0, 1, 2]],
                 valid_thresh = 0.01):
        downsample = 32
        total_boxes = []
        total_scores = []
        for i, out in enumerate(outputs):
            anchor_mask = anchor_masks[i]
            anchors_this_level = []
            for m in anchor_mask:
                anchors_this_level.append(anchors[2 * m])
                anchors_this_level.append(anchors[2 * m + 1])

            boxes, scores = paddle.vision.ops.yolo_box(
                   x=out,
                   img_size=im_shape,
                   anchors=anchors_this_level,
                   class_num=self.num_classes,
                   conf_thresh=valid_thresh,
                   downsample_ratio=downsample,
                   name="yolo_box" + str(i))
            total_boxes.append(boxes)
            total_scores.append(
                        paddle.transpose(
                        scores, perm=[0, 2, 1]))
            downsample = downsample // 2

        yolo_boxes = paddle.concat(total_boxes, axis=1)
        yolo_scores = paddle.concat(total_scores, axis=2)
        return yolo_boxes, yolo_scores
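As a quick sanity check (my addition, not a cell from the original notebook), pushing a random 416 x 416 input through the assembled model should yield three prediction maps with 18 = 3 x (1 + 5) channels at strides 32, 16, and 8:

# Assumed shape check: P0, P1, P2 for a 416 x 416 input and a single class.
model = YOLOv3(num_classes=1)
x = paddle.to_tensor(np.random.randn(1, 3, 416, 416).astype('float32'))
P0, P1, P2 = model(x)
print(P0.shape, P1.shape, P2.shape)  # expected: [1, 18, 13, 13] [1, 18, 26, 26] [1, 18, 52, 52]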

3. Model Training

  • anchors: since my training images are 416 * 416, the default anchors are used.
  • loss: YOLO's loss is fairly involved, so PaddlePaddle's yolo loss operator is used directly; its classification term is conceptually a binary cross-entropy:
    loss_obj = paddle.nn.functional.binary_cross_entropy_with_logits(pred_classification, label_classification)
  • training flow:
    As shown in the figure, the input image passes through feature extraction to produce three levels of output feature maps, P0 (stride=32), P1 (stride=16), and P2 (stride=8). Each level uses small square regions of a corresponding size to generate its anchors and prediction boxes, and those anchors are then labeled. (A small snippet enumerating the anchors per level follows the three paragraphs below.)

On the P0 feature map, 32x32 squares are used, and anchors of sizes [116,90], [156,198], and [373,326] are generated at each region center.

On the P1 feature map, 16x16 squares are used, and anchors of sizes [30,61], [62,45], and [59,119] are generated at each region center.

On the P2 feature map, 8x8 squares are used, and anchors of sizes [10,13], [16,30], and [33,23] are generated at each region center.
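The mapping from anchor masks to per-level anchor sizes can be made explicit with a few lines; this small snippet simply mirrors what get_pred does internally with ANCHOR_MASKS:

# Illustrative: list the anchor sizes each level uses, as get_pred does internally.
ANCHORS = [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326]
ANCHOR_MASKS = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
for level, mask in zip(['P0 (stride 32)', 'P1 (stride 16)', 'P2 (stride 8)'], ANCHOR_MASKS):
    sizes = [(ANCHORS[2 * m], ANCHORS[2 * m + 1]) for m in mask]
    print(level, '->', sizes)
# P0 (stride 32) -> [(116, 90), (156, 198), (373, 326)]
# P1 (stride 16) -> [(30, 61), (62, 45), (59, 119)]
# P2 (stride 8) -> [(10, 13), (16, 30), (33, 23)]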

Associating the three levels of feature maps with their corresponding anchor labels yields the loss function; the total loss is the sum of the three levels' losses. Minimizing it drives the end-to-end training process.

Figure 5: Training pipeline
Start training
# Generate anchors scaled to the training image size
def new_anchors(nim_size, im_size=416):
    ANCHORS = [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326]
    scal_size = im_size / nim_size 
    new_anchors = np.floor(np.array(ANCHORS) * scal_size).astype('int32')
    new_anchors = list(new_anchors)
    # print(type(new_anchors), new_anchors)
    return new_anchors
import time
import os
import paddle


ANCHORS = new_anchors(IMG_SIZE)
ANCHOR_MASKS = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
IGNORE_THRESH = 0.7
# number of classes
NUM_CLASSES = 1
# number of training epochs
MAX_EPOCH = 10

def get_lr(base_lr = 0.000125, lr_decay = 0.1):
    bd = [10000, 20000]
    lr = [base_lr, base_lr * lr_decay, base_lr * lr_decay * lr_decay]
    learning_rate = paddle.optimizer.lr.PiecewiseDecay(boundaries=bd, values=lr)
    return learning_rate


if __name__ == '__main__':
    # use the GPU
    paddle.device.set_device("gpu")
    # instantiate the datasets
    train_dataset = MyDataset(train_image_ids,
                        train_pixels,
                        IMG_SIZE,
                        train_labels,
                        mode='train')
    val_dataset = MyDataset(val_image_ids,
                        val_pixels,
                        IMG_SIZE,
                        val_labels,
                        mode='val')

    # instantiate the data loaders
    train_loader = paddle.io.DataLoader(train_dataset, batch_size=10, shuffle=True, num_workers=2)
    val_loader = paddle.io.DataLoader(val_dataset, batch_size=15, shuffle=False, num_workers=2)
    # instantiate the model
    model = YOLOv3(num_classes = NUM_CLASSES)
    # learning-rate schedule
    learning_rate = get_lr()
    # optimizer
    opt = paddle.optimizer.Momentum(
                 learning_rate=learning_rate,
                 momentum=0.9,
                 weight_decay=paddle.regularizer.L2Decay(0.0005),
                 parameters=model.parameters())  # create the optimizer
    # opt = paddle.optimizer.Adam(learning_rate=learning_rate, weight_decay=paddle.regularizer.L2Decay(0.0005), parameters=model.parameters())
    
    # start training
    for epoch in range(MAX_EPOCH):
        for i, data in enumerate(train_loader()):
            img, gt_boxes, gt_labels, img_scale = data
            gt_scores = np.ones(gt_labels.shape).astype('float32')
            gt_scores = paddle.to_tensor(gt_scores)
            img = paddle.to_tensor(img)
            gt_boxes = paddle.to_tensor(gt_boxes)
            gt_labels = paddle.to_tensor(gt_labels)
            outputs = model(img)  # forward pass, returns [P0, P1, P2]
            loss = model.get_loss(outputs, gt_boxes, gt_labels, gtscore=gt_scores,
                                  anchors = ANCHORS,
                                  anchor_masks = ANCHOR_MASKS,
                                  ignore_thresh=IGNORE_THRESH,
                                  use_label_smooth=False)  # compute the loss
            loss.backward()    # backpropagate to compute gradients
            opt.step()  # update the parameters
            opt.clear_grad()
            if i % 100 == 0:
                timestring = time.strftime("%Y-%m-%d %H:%M:%S",time.localtime(time.time()))
                print('{}[TRAIN]epoch {}, iter {}, output loss: {}'.format(timestring, epoch, i, loss.numpy()))

        # save the parameters
        if (epoch % 5 == 0) or (epoch == MAX_EPOCH -1) or (epoch == 6):  # epoch 6 is saved as well
            paddle.save(model.state_dict(), 'yolo_epoch{}'.format(epoch))

        # evaluate on the validation set after each epoch
        model.eval()
        for i, data in enumerate(val_loader()):
            img, gt_boxes, gt_labels, img_scale = data
            gt_scores = np.ones(gt_labels.shape).astype('float32')
            gt_scores = paddle.to_tensor(gt_scores)
            img = paddle.to_tensor(img)
            gt_boxes = paddle.to_tensor(gt_boxes)
            gt_labels = paddle.to_tensor(gt_labels)
            outputs = model(img)
            loss = model.get_loss(outputs, gt_boxes, gt_labels, gtscore=gt_scores,
                                  anchors = ANCHORS,
                                  anchor_masks = ANCHOR_MASKS,
                                  ignore_thresh=IGNORE_THRESH,
                                  use_label_smooth=False)
            if i % 1 == 0:
                timestring = time.strftime("%Y-%m-%d %H:%M:%S",time.localtime(time.time()))
                print('{}[VALID]epoch {}, iter {}, output loss: {}'.format(timestring, epoch, i, loss.numpy()))
        model.train()
2023-04-20 14:22:52[TRAIN]epoch 0, iter 0, output loss: [7510.9775]
2023-04-20 14:23:15[TRAIN]epoch 0, iter 100, output loss: [324.96252]
2023-04-20 14:23:38[TRAIN]epoch 0, iter 200, output loss: [235.63728]
2023-04-20 14:24:02[TRAIN]epoch 0, iter 300, output loss: [231.69421]
2023-04-20 14:24:12[VALID]epoch 0, iter 0, output loss: [273.81647]
2023-04-20 14:24:13[TRAIN]epoch 1, iter 0, output loss: [269.77158]
2023-04-20 14:24:36[TRAIN]epoch 1, iter 100, output loss: [268.44305]
2023-04-20 14:24:59[TRAIN]epoch 1, iter 200, output loss: [249.40031]
2023-04-20 14:25:22[TRAIN]epoch 1, iter 300, output loss: [224.89734]
2023-04-20 14:25:32[VALID]epoch 1, iter 0, output loss: [244.91182]
2023-04-20 14:25:33[TRAIN]epoch 2, iter 0, output loss: [288.53384]
2023-04-20 14:25:56[TRAIN]epoch 2, iter 100, output loss: [247.09848]
2023-04-20 14:26:19[TRAIN]epoch 2, iter 200, output loss: [231.36879]
2023-04-20 14:26:43[TRAIN]epoch 2, iter 300, output loss: [235.89069]
2023-04-20 14:26:52[VALID]epoch 2, iter 0, output loss: [224.12994]
2023-04-20 14:26:53[TRAIN]epoch 3, iter 0, output loss: [211.3769]
2023-04-20 14:27:17[TRAIN]epoch 3, iter 100, output loss: [256.08087]
2023-04-20 14:27:40[TRAIN]epoch 3, iter 200, output loss: [209.41452]
2023-04-20 14:28:03[TRAIN]epoch 3, iter 300, output loss: [266.95062]
2023-04-20 14:28:12[VALID]epoch 3, iter 0, output loss: [214.07669]
2023-04-20 14:28:14[TRAIN]epoch 4, iter 0, output loss: [194.76587]
2023-04-20 14:28:37[TRAIN]epoch 4, iter 100, output loss: [182.13737]
2023-04-20 14:29:00[TRAIN]epoch 4, iter 200, output loss: [202.55937]
2023-04-20 14:29:23[TRAIN]epoch 4, iter 300, output loss: [225.5325]
2023-04-20 14:29:32[VALID]epoch 4, iter 0, output loss: [214.80347]
2023-04-20 14:29:34[TRAIN]epoch 5, iter 0, output loss: [194.15092]
2023-04-20 14:29:57[TRAIN]epoch 5, iter 100, output loss: [189.21162]
2023-04-20 14:30:20[TRAIN]epoch 5, iter 200, output loss: [157.70525]
2023-04-20 14:30:43[TRAIN]epoch 5, iter 300, output loss: [212.45743]
2023-04-20 14:30:53[VALID]epoch 5, iter 0, output loss: [217.91043]
2023-04-20 14:30:54[TRAIN]epoch 6, iter 0, output loss: [185.25941]
2023-04-20 14:31:18[TRAIN]epoch 6, iter 100, output loss: [175.25998]
2023-04-20 14:31:41[TRAIN]epoch 6, iter 200, output loss: [197.69388]
2023-04-20 14:32:04[TRAIN]epoch 6, iter 300, output loss: [168.3432]
2023-04-20 14:32:13[VALID]epoch 6, iter 0, output loss: [204.05635]
2023-04-20 14:32:14[TRAIN]epoch 7, iter 0, output loss: [194.63239]
2023-04-20 14:32:38[TRAIN]epoch 7, iter 100, output loss: [183.41855]
2023-04-20 14:33:01[TRAIN]epoch 7, iter 200, output loss: [185.08008]
2023-04-20 14:33:24[TRAIN]epoch 7, iter 300, output loss: [198.30933]
2023-04-20 14:33:33[VALID]epoch 7, iter 0, output loss: [205.9926]
2023-04-20 14:33:34[TRAIN]epoch 8, iter 0, output loss: [162.00359]
2023-04-20 14:33:57[TRAIN]epoch 8, iter 100, output loss: [171.00761]
2023-04-20 14:34:20[TRAIN]epoch 8, iter 200, output loss: [226.80994]
2023-04-20 14:34:43[TRAIN]epoch 8, iter 300, output loss: [182.58046]
2023-04-20 14:34:53[VALID]epoch 8, iter 0, output loss: [207.60168]
2023-04-20 14:34:54[TRAIN]epoch 9, iter 0, output loss: [179.91187]
2023-04-20 14:35:17[TRAIN]epoch 9, iter 100, output loss: [165.21002]
2023-04-20 14:35:40[TRAIN]epoch 9, iter 200, output loss: [180.10706]
2023-04-20 14:36:03[TRAIN]epoch 9, iter 300, output loss: [217.1472]
2023-04-20 14:36:13[VALID]epoch 9, iter 0, output loss: [204.6864]
2023-04-20 14:36:14[TRAIN]epoch 10, iter 0, output loss: [159.20427]
2023-04-20 14:36:37[TRAIN]epoch 10, iter 100, output loss: [217.40918]
2023-04-20 14:37:00[TRAIN]epoch 10, iter 200, output loss: [193.83948]
2023-04-20 14:37:23[TRAIN]epoch 10, iter 300, output loss: [153.38046]
2023-04-20 14:37:35[VALID]epoch 10, iter 0, output loss: [206.86063]
2023-04-20 14:37:36[TRAIN]epoch 11, iter 0, output loss: [177.40372]
2023-04-20 14:37:59[TRAIN]epoch 11, iter 100, output loss: [193.87794]
2023-04-20 14:38:22[TRAIN]epoch 11, iter 200, output loss: [193.70139]
2023-04-20 14:38:45[TRAIN]epoch 11, iter 300, output loss: [162.81955]
2023-04-20 14:38:54[VALID]epoch 11, iter 0, output loss: [210.25536]
2023-04-20 14:38:55[TRAIN]epoch 12, iter 0, output loss: [171.99713]
2023-04-20 14:39:18[TRAIN]epoch 12, iter 100, output loss: [186.39813]
2023-04-20 14:39:41[TRAIN]epoch 12, iter 200, output loss: [190.40303]
2023-04-20 14:40:04[TRAIN]epoch 12, iter 300, output loss: [196.51154]
2023-04-20 14:40:14[VALID]epoch 12, iter 0, output loss: [206.05359]
2023-04-20 14:40:15[TRAIN]epoch 13, iter 0, output loss: [131.79619]
2023-04-20 14:40:38[TRAIN]epoch 13, iter 100, output loss: [190.6022]
2023-04-20 14:41:01[TRAIN]epoch 13, iter 200, output loss: [182.60373]
2023-04-20 14:41:24[TRAIN]epoch 13, iter 300, output loss: [206.00534]
2023-04-20 14:41:33[VALID]epoch 13, iter 0, output loss: [213.86777]
2023-04-20 14:41:35[TRAIN]epoch 14, iter 0, output loss: [139.04294]
2023-04-20 14:41:58[TRAIN]epoch 14, iter 100, output loss: [172.8224]
2023-04-20 14:42:21[TRAIN]epoch 14, iter 200, output loss: [170.48174]
2023-04-20 14:42:44[TRAIN]epoch 14, iter 300, output loss: [152.83138]
2023-04-20 14:42:53[VALID]epoch 14, iter 0, output loss: [213.08821]
2023-04-20 14:42:54[TRAIN]epoch 15, iter 0, output loss: [161.47107]
2023-04-20 14:43:17[TRAIN]epoch 15, iter 100, output loss: [149.25409]
2023-04-20 14:43:40[TRAIN]epoch 15, iter 200, output loss: [169.13098]
2023-04-20 14:44:03[TRAIN]epoch 15, iter 300, output loss: [213.57532]
2023-04-20 14:44:13[VALID]epoch 15, iter 0, output loss: [213.9826]
2023-04-20 14:44:14[TRAIN]epoch 16, iter 0, output loss: [156.36336]
2023-04-20 14:44:37[TRAIN]epoch 16, iter 100, output loss: [130.54562]
2023-04-20 14:45:00[TRAIN]epoch 16, iter 200, output loss: [154.66592]
2023-04-20 14:45:23[TRAIN]epoch 16, iter 300, output loss: [128.23657]
2023-04-20 14:45:32[VALID]epoch 16, iter 0, output loss: [226.38898]
2023-04-20 14:45:33[TRAIN]epoch 17, iter 0, output loss: [194.20915]
2023-04-20 14:45:56[TRAIN]epoch 17, iter 100, output loss: [146.12419]
2023-04-20 14:46:19[TRAIN]epoch 17, iter 200, output loss: [159.56387]
2023-04-20 14:46:42[TRAIN]epoch 17, iter 300, output loss: [129.69179]
2023-04-20 14:46:52[VALID]epoch 17, iter 0, output loss: [226.12967]
2023-04-20 14:46:53[TRAIN]epoch 18, iter 0, output loss: [192.89014]
2023-04-20 14:47:16[TRAIN]epoch 18, iter 100, output loss: [126.73334]
2023-04-20 14:47:39[TRAIN]epoch 18, iter 200, output loss: [142.55736]
2023-04-20 14:48:02[TRAIN]epoch 18, iter 300, output loss: [165.43265]
2023-04-20 14:48:11[VALID]epoch 18, iter 0, output loss: [231.56403]
2023-04-20 14:48:12[TRAIN]epoch 19, iter 0, output loss: [164.94872]
2023-04-20 14:48:35[TRAIN]epoch 19, iter 100, output loss: [173.22055]
2023-04-20 14:48:58[TRAIN]epoch 19, iter 200, output loss: [162.95493]
2023-04-20 14:49:21[TRAIN]epoch 19, iter 300, output loss: [111.32781]
2023-04-20 14:49:31[VALID]epoch 19, iter 0, output loss: [224.92621]

4. Model Prediction

The prediction pipeline is shown below:


Figure 6: Prediction pipeline

Prediction consists of two steps:

  • Compute the predicted box locations and per-class scores from the network outputs.
  • Apply non-maximum suppression (NMS) to remove heavily overlapping predictions.
For step 1, we saw earlier how pred_objectness_probability, pred_boxes, and pred_classification_probability are computed from the network outputs; the recommended route is to call paddle.vision.ops.yolo_box directly. Its key parameters are:

paddle.vision.ops.yolo_box(x, img_size, anchors, class_num, conf_thresh, downsample_ratio, clip_bbox=True, name=None, scale_x_y=1.0)

  • x: a network output feature map, e.g. P0, P1, or P2 above.

  • img_size: the input image size.

  • anchors: the anchor sizes in use, e.g. [10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326].

  • class_num: the number of object classes.

  • conf_thresh: the confidence threshold; boxes scoring below it have their coordinates set to 0.0 without further computation.

  • downsample_ratio: the feature map's downsampling ratio, e.g. 32 for P0, 16 for P1, 8 for P2.

  • name: an optional name such as 'yolo_box'; usually left at the default None.

It returns two values: boxes, the coordinates of all predicted boxes, and scores, the scores of all predicted boxes.

A prediction box's score is defined as the probability of its predicted class multiplied by the objectness probability that the box contains an object:

$score = P_{obj} * P_{classification}$
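For example, an objectness probability of 0.9 combined with a class probability of 0.8 gives a box score of $0.9 \times 0.8 = 0.72$.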

The get_pred method added to the YOLOv3 class above calls paddle.vision.ops.yolo_box for each of the three levels P0, P1, and P2 and concatenates the results, yielding all prediction boxes together with their per-class scores.

Non-Maximum Suppression (NMS)
# Non-maximum suppression
def nms(bboxes, scores, score_thresh, nms_thresh):
    """
    Single-class NMS: keep boxes in descending score order, dropping any box
    whose IoU with an already-kept box exceeds nms_thresh.
    """
    inds = np.argsort(scores)
    inds = inds[::-1]
    keep_inds = []
    while(len(inds) > 0):
        cur_ind = inds[0]
        cur_score = scores[cur_ind]
        # if the box's score is below score_thresh, drop it (and everything after it)
        if cur_score < score_thresh:
            break

        keep = True
        for ind in keep_inds:
            current_box = bboxes[cur_ind]
            remain_box = bboxes[ind]
            iou = box_iou_xyxy(current_box, remain_box)
            if iou > nms_thresh:
                keep = False
                break
        if keep:
            keep_inds.append(cur_ind)
        inds = inds[1:]

    return np.array(keep_inds)

# Multi-class non-maximum suppression
def multiclass_nms(bboxes, scores, score_thresh=0.01, nms_thresh=0.45, pos_nms_topk=400):
    """
    Run NMS per class, then keep at most pos_nms_topk boxes per image.
    """
    batch_size = bboxes.shape[0]
    class_num = scores.shape[1]
    rets = []
    for i in range(batch_size):
        bboxes_i = bboxes[i]
        scores_i = scores[i]
        ret = []
        for c in range(class_num):
            scores_i_c = scores_i[c]
            keep_inds = nms(bboxes_i, scores_i_c, score_thresh, nms_thresh)
            if len(keep_inds) < 1:
                continue
            keep_bboxes = bboxes_i[keep_inds]
            keep_scores = scores_i_c[keep_inds]
            keep_results = np.zeros([keep_scores.shape[0], 6])
            keep_results[:, 0] = c
            keep_results[:, 1] = keep_scores[:]
            keep_results[:, 2:6] = keep_bboxes[:, :]
            ret.append(keep_results)
        if len(ret) < 1:
            rets.append(ret)
            continue
        ret_i = np.concatenate(ret, axis=0)
        scores_i = ret_i[:, 1]
        if len(scores_i) > pos_nms_topk:
            inds = np.argsort(scores_i)[::-1]
            inds = inds[:pos_nms_topk]
            ret_i = ret_i[inds]

        rets.append(ret_i)

    return rets


# Compute IoU for boxes in xyxy format (this function is also kept in box_utils.py)
def box_iou_xyxy(box1, box2):
    # top-left and bottom-right coordinates of box1
    x1min, y1min, x1max, y1max = box1[0], box1[1], box1[2], box1[3]
    # area of box1
    s1 = (y1max - y1min + 1.) * (x1max - x1min + 1.)
    # top-left and bottom-right coordinates of box2
    x2min, y2min, x2max, y2max = box2[0], box2[1], box2[2], box2[3]
    # area of box2
    s2 = (y2max - y2min + 1.) * (x2max - x2min + 1.)
    
    # coordinates of the intersection rectangle
    xmin = np.maximum(x1min, x2min)
    ymin = np.maximum(y1min, y2min)
    xmax = np.minimum(x1max, x2max)
    ymax = np.minimum(y1max, y2max)
    # height, width, and area of the intersection
    inter_h = np.maximum(ymax - ymin + 1., 0.)
    inter_w = np.maximum(xmax - xmin + 1., 0.)
    intersection = inter_h * inter_w
    # union area
    union = s1 + s2 - intersection
    # intersection over union
    iou = intersection / union
    return iou
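A tiny toy run (the box values are made up for illustration) shows the suppression behavior: of two heavily overlapping boxes, only the higher-scoring one survives.

# Toy example: three boxes, the first two overlapping heavily; NMS keeps indices [0, 2].
boxes = np.array([[10., 10., 60., 60.],
                  [12., 12., 62., 62.],       # heavy overlap with the first box
                  [100., 100., 150., 150.]])
box_scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, box_scores, score_thresh=0.1, nms_thresh=0.45))  # [0 2]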
Loading the test set
# Convert a batch given as a list into a tuple of arrays
def make_test_array(batch_data):
    img_name_array = np.array([item[0] for item in batch_data])
    img_data_array = np.array([item[1] for item in batch_data], dtype = 'float32')
    img_scale_array = np.array([item[2] for item in batch_data], dtype='int32')
    return img_name_array, img_data_array, img_scale_array

# Test-data reader
def test_data_loader(datadir, batch_size= 10, test_image_size=IMG_SIZE, mode='test'):
    """
    Load the test images; the test data has no ground-truth labels.
    """
    image_names = os.listdir(datadir)
    def reader():
        batch_data = []
        img_size = test_image_size
        for image_name in image_names:
            file_path = os.path.join(datadir, image_name)
            img = cv2.imread(file_path)
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            H = img.shape[0]
            W = img.shape[1]
            img = cv2.resize(img, (img_size, img_size))

            mean = [0.485, 0.456, 0.406]
            std = [0.229, 0.224, 0.225]
            mean = np.array(mean).reshape((1, 1, -1))
            std = np.array(std).reshape((1, 1, -1))
            out_img = (img / 255.0 - mean) / std
            out_img = out_img.astype('float32').transpose((2, 0, 1))
            img = out_img  # np.transpose(out_img, (2,0,1))
            im_shape = [H, W]

            batch_data.append((image_name.split('.')[0], img, im_shape))
            if len(batch_data) == batch_size:
                yield make_test_array(batch_data)
                batch_data = []
        if len(batch_data) > 0:
            yield make_test_array(batch_data)

    return reader
Run inference and save the results
import json
import os


ANCHOR_MASKS = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
NMS_POSK = 400
NMS_THRESH = 0.15
VALID_THRESH = 0.15
NUM_CLASSES = 1

if __name__ == '__main__':

    model = YOLOv3(num_classes=NUM_CLASSES)
    params_file_path = 'yolo_epoch10'
    model_state_dict = paddle.load(params_file_path)
    model.load_dict(model_state_dict)
    model.eval()

    total_results = []
    test_loader = test_data_loader(datadir='data/test/', mode='test')
    for i, data in enumerate(test_loader()):
        img_name, img_data, img_scale_data = data
        img = paddle.to_tensor(img_data)
        img_scale = paddle.to_tensor(img_scale_data)

        outputs = model.forward(img)
        bboxes, scores = model.get_pred(outputs,
                                 im_shape=img_scale,
                                 anchors=ANCHORS,
                                 anchor_masks=ANCHOR_MASKS,
                                 valid_thresh = VALID_THRESH)

        bboxes_data = bboxes.numpy()
        scores_data = scores.numpy()
        result = multiclass_nms(bboxes_data, scores_data,
                      score_thresh=VALID_THRESH, 
                      nms_thresh=NMS_THRESH, 
                      pos_nms_topk=NMS_POSK)
        for j in range(len(result)):
            result_j = result[j]

            img_name_j = img_name[j]
            total_results.append([img_name_j, result_j.tolist()])
        print('processed {} pictures'.format(len(total_results)))

    print('')
    json.dump(total_results, open('{}_pred_results.json'.format(params_file_path.split('/')[-1]), 'w'))
processed 10 pictures


5. Model Results and Visualization


The program above reads the test-set images and saves the final results to a JSON file.


The JSON file holds the test results as a list covering every image's predictions, structured as:

[[img_name, [[label, score, x1, y1, x2, y2], ..., [label, score, x1, y1, x2, y2]]],
 [img_name, [[label, score, x1, y1, x2, y2], ..., [label, score, x1, y1, x2, y2]]],
 ...
 [img_name, [[label, score, x1, y1, x2, y2], ..., [label, score, x1, y1, x2, y2]]]]

Each element of the list is one image's prediction result, and the list's length equals the number of images. Each per-image result has the format:

[img_name, [[label, score, x1, y1, x2, y2], ..., [label, score, x1, y1, x2, y2]]]

The first element is the image name img_name; the second is the list of all prediction boxes for that image:

[[label, score, x1, y1, x2, y2], ..., [label, score, x1, y1, x2, y2]]

Each entry [label, score, x1, y1, x2, y2] describes one prediction box: label is the box's predicted class label, score is its confidence, and (x1, y1), (x2, y2) are its top-left and bottom-right corners. An image may have many prediction boxes, all collected in this list.
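To sanity-check the saved file, it can be reloaded and the boxes counted per image (a small sketch; the file name follows the '{params_file_path}_pred_results.json' pattern used when saving above):

# Sketch: reload the saved predictions and count the boxes per image.
import json

with open('yolo_epoch10_pred_results.json') as f:
    saved = json.load(f)

for img_name, pred_boxes in saved:
    print(img_name, len(pred_boxes), 'boxes')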



To show the model's behavior more intuitively, the program below reads a single image and draws the prediction boxes it produces.

Loading a single image


# Reader for a single test image
def single_image_data_loader(filename, test_image_size=IMG_SIZE, mode='test'):
    """
    Load a single test image; the test data has no ground-truth labels.
    """
    batch_size= 1
    def reader():
        batch_data = []
        img_size = test_image_size
        file_path = os.path.join(filename)
        img = cv2.imread(file_path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        H = img.shape[0]
        W = img.shape[1]
        img = cv2.resize(img, (img_size, img_size))

        mean = [0.485, 0.456, 0.406]
        std = [0.229, 0.224, 0.225]
        mean = np.array(mean).reshape((1, 1, -1))
        std = np.array(std).reshape((1, 1, -1))
        out_img = (img / 255.0 - mean) / std
        out_img = out_img.astype('float32').transpose((2, 0, 1))
        img = out_img  # np.transpose(out_img, (2,0,1))
        im_shape = [H, W]

        # derive the image id from the file name; the original code relied on a
        # global image_name variable here, which only worked by accident
        img_id = os.path.basename(filename).split('.')[0]
        batch_data.append((img_id, img, im_shape))
        if len(batch_data) == batch_size:
            yield make_test_array(batch_data)
            batch_data = []

    return reader
Define the drawing functions
# Define the drawing functions
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.image import imread
import math


# INSECT_NAMES = ['wheat']
INSECT_NAMES = ['']  # single class; an empty name keeps the label text off the plot

# Rectangle-drawing helper
def draw_rectangle(currentAxis, bbox, edgecolor = 'k', facecolor = 'y', fill=False, linestyle='-'):
    # currentAxis: the axes, obtained via plt.gca()
    # bbox: bounding box, a list of four values [x1, y1, x2, y2]
    # edgecolor: border color
    # facecolor: fill color
    # fill: whether to fill the rectangle
    # linestyle: border line style
    # patches.Rectangle takes the top-left corner plus the width and height of the region
    rect=patches.Rectangle((bbox[0], bbox[1]), bbox[2]-bbox[0]+1, bbox[3]-bbox[1]+1, linewidth=1,
                           edgecolor=edgecolor,facecolor=facecolor,fill=fill, linestyle=linestyle)
    currentAxis.add_patch(rect)

# Draw the prediction results
def draw_results(result, filename, draw_thresh=0.5):
    plt.figure(figsize=(7, 7))
    im = imread(filename)
    plt.imshow(im)
    currentAxis=plt.gca()
    colors = ['r', 'g', 'b', 'k', 'y', 'c', 'purple']
    for item in result:
        box = item[2:6]
        label = int(item[0])
        name = INSECT_NAMES[label]
        if item[1] > draw_thresh:
            draw_rectangle(currentAxis, box, edgecolor = colors[label])
            plt.text(box[0], box[1], name, fontsize=12, color=colors[label])
Single-image demo
import json
import paddle


ANCHOR_MASKS = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
NMS_POSK = 400
NMS_THRESH = 0.15
VALID_THRESH = 0.15 
NUM_CLASSES = 1

if __name__ == '__main__':
    
    image_name = 'data/test/2fd875eaa.jpg'
    # image_name = 'data/test/348a992bb.jpg'
    # image_name = 'data/test/796707dd7.jpg'
    image_name = 'data/test/aac893a91.jpg'
    params_file_path = 'yolo_epoch10'

    model = YOLOv3(num_classes=NUM_CLASSES)
    model_state_dict = paddle.load(params_file_path)
    model.load_dict(model_state_dict)
    model.eval()

    total_results = []
    test_loader = single_image_data_loader(image_name, mode='test')

    for i, data in enumerate(test_loader()):
        img_name, img_data, img_scale_data = data
        img = paddle.to_tensor(img_data)
        img_scale = paddle.to_tensor(img_scale_data)

        outputs = model.forward(img)
        bboxes, scores = model.get_pred(outputs,
                                 im_shape=img_scale,
                                 anchors=ANCHORS,
                                 anchor_masks=ANCHOR_MASKS,
                                 valid_thresh = VALID_THRESH)

        bboxes_data = bboxes.numpy()
        scores_data = scores.numpy()
        results = multiclass_nms(bboxes_data, scores_data,
                      score_thresh=VALID_THRESH, 
                      nms_thresh=NMS_THRESH,  
                      pos_nms_topk=NMS_POSK)

result = results[0]
draw_results(result, image_name, draw_thresh=0.15)  

# The NMS score threshold VALID_THRESH starts out very small (0.01);
# tune draw_thresh, and adopt the value that makes the displayed results accurate as the final VALID_THRESH

(Output: the test image with the predicted wheat-head boxes drawn.)

# Check the versions of the dependencies
# !pip list

# Export the dependency list
# !pip list --format freeze > requirements.txt

Summary

This project walked through the standard machine-learning workflow: data preparation, model construction, training and parameter tuning, and prediction. Working through the questions that came up along the way deepened my understanding of machine learning, and also showed me where my own gaps are.

  • bbox clustering is in the K_means_act.ipynb file and is worth trying; this experiment did not use it.
  • The loss value can hardly be used as a model metric the way accuracy, recall, and precision can, but it is useful for diagnosing problems during training.
  • Only a few training epochs were run, and only the non-maximum-suppression parameters were tuned (the smaller the thresh, the more prediction boxes are shown), so many other parameters remain untried.
  • Images were processed at 416*416; IMG_SIZE can be changed.
  • Augmentations such as rotation, flipping, and random scaling were not added and are worth trying; a sketch of a box-aware flip follows this list.
  • The NMS score threshold VALID_THRESH started at a very small 0.01; draw_thresh was tuned, and the value that gave the best-looking results was adopted as the final VALID_THRESH.
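As one example of what a box-aware augmentation could look like, below is a minimal sketch of a horizontal flip that also mirrors top-left-xywh boxes. The helper name and interface are my own assumptions for illustration, not code from this project:

# Sketch only: horizontal flip for an HWC image with top-left xywh boxes.
import numpy as np

def random_hflip(img, bboxes, p=0.5, img_w=416):  # img_w: image width (IMG_SIZE here)
    # img: numpy array in HWC layout; bboxes: [N, 4] boxes in top-left xywh format
    if np.random.rand() < p:
        img = img[:, ::-1, :].copy()                         # mirror the columns
        bboxes = bboxes.copy()
        bboxes[:, 0] = img_w - bboxes[:, 0] - bboxes[:, 2]   # new x1 = W - x1 - w
    return img, bboxes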

Many thanks to my PaddlePaddle mentor, Li Wenbo, for his careful guidance.
