Building a lane-line steering-angle regression model with the Paddle 2.2 high-level API + GPU deployment on edge devices with Paddle Inference


This should be the first project on the web to walk through the full pipeline of building, training, and deploying a lane-line steering-angle regression model for a small car with the Paddle 2.0+ high-level API. Hopefully it helps anyone building small cars or robots get autonomous driving working quickly.

Lane-line steering-angle regression is also one of the basic tasks in the annual National Undergraduate Intelligent Car Race, so this project can serve as a baseline to get everyone started quickly.

The overall data-collection architecture is shown in the figure below:

The car uses a standard wide-angle camera as its vision sensor; its basic parameters are listed in the table below:

The collected data has been uploaded to AI Studio: lane-line detection regression dataset

1. Import the required libraries

import os
import cv2
import io
from tqdm import tqdm
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image as PilImage
import paddle
from paddle.nn import functional as F

paddle.__version__
'2.2.0'
!unzip -oq /home/aistudio/data/data46903/0728_carline.zip

2. Dataset creation and preprocessing

Data collection

The car is driven remotely with a gamepad while it records the current camera frame together with the gamepad's steering value; this produces the dataset.


Dataset information

The dataset contains 13,448 images with their corresponding steering values.

Data format

image -> steering value

Each image corresponds to one steering value.
The images and the steering values are stored in the img folder and in data.txt, respectively; when building the dataset we only need to pair each image with its steering value.

The correspondence is shown in the figure below:

2.1 Define the dataset-processing variables

img_folder_path = "data/img"
dataset_path = "data"
txt_path = "data/data.txt"
IMAGE_SIZE = (224, 224)
# Check that the number of images matches the number of steering-angle records
import os
# 1. Read the steering-angle data
angle_list = []
with open(txt_path) as f:
    for txt in f.readlines():
        angle_num = int(txt.strip())  # strip the trailing newline before parsing
        angle_list.append(angle_num)
img_list = []
# Sort numerically so the image order matches the line order in data.txt
# (os.listdir returns entries in arbitrary order)
for img_path in tqdm(sorted(os.listdir(img_folder_path), key=lambda p: int(os.path.splitext(p)[0]))):
    img_path = os.path.join(img_folder_path, img_path)
    img_list.append(img_path)
# 2. Normalize the angles (targets that are too spread out make the model hard to converge)
# Here the raw steering values range from 900 to 2100, with a midpoint of 1500
for i, angle in enumerate(angle_list): 
    angle = (angle-1500)/600
    angle_list[i] = angle
try:
    os.remove(os.path.join(dataset_path, "train.txt"))
    os.remove(os.path.join(dataset_path, "eval.txt"))
except OSError:  # the files may not exist yet
    pass
train_txt = open(os.path.join(dataset_path, "train.txt"), "w")
eval_txt = open(os.path.join(dataset_path, "eval.txt"), "w")
n = 0
for img_path, label in tqdm(zip(img_list, angle_list)):
    n += 1
    if n % 10 != 0:
        train_txt.write(img_path+" "+str(label))
        train_txt.write("\n")
    else:
        eval_txt.write(img_path+" "+str(label))
        eval_txt.write("\n")
train_txt.close()
eval_txt.close()
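On the car, the model's output eventually has to be mapped back to a PWM steering command. A minimal sketch of the normalization used above and its inverse (the 1500/600 constants come from this dataset's 900-2100 PWM range):

```python
# Constants taken from this dataset: raw steering PWM in [900, 2100], neutral 1500
PWM_MID = 1500
PWM_HALF_RANGE = 600

def normalize_angle(pwm):
    """Map a raw PWM steering value (900-2100) to [-1, 1]."""
    return (pwm - PWM_MID) / PWM_HALF_RANGE

def denormalize_angle(pred):
    """Map a model prediction in [-1, 1] back to a PWM steering command."""
    return pred * PWM_HALF_RANGE + PWM_MID

print(normalize_angle(900), normalize_angle(2100))  # -1.0 1.0
print(denormalize_angle(0.0))                       # 1500.0
```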

2.3 Define the dataset class MyDataset


Create the dataset in a format that the PaddlePaddle high-level API DataLoader can read directly.

import random
import io
from paddle.io import Dataset
from paddle.vision.transforms import transforms as T
from PIL import Image as PilImage
import numpy as np

class MyDataset(Dataset):
    """
    Dataset definition
    """
    def __init__(self, mode, dataset_path):
        """
        Constructor
        """
        self.image_size = IMAGE_SIZE
        self.mode = mode.lower()
        self.dataset_path = dataset_path
        
        assert self.mode in ['train', 'test', 'eval'], \
            "mode should be 'train' or 'test' or 'eval', but got {}".format(self.mode)
        
        self.train_images = []
        self.label_list = []

        with open(os.path.join(self.dataset_path, ('{}.txt'.format(self.mode))), 'r') as f:
            
            for line in tqdm(f.readlines()):
                image, label = line.strip().split(' ')

                self.train_images.append(image)
                self.label_list.append(label)
        

    def _load_img(self, path, color_mode='rgb', transforms=[]):
        """
        Unified image-loading helper that normalizes image size and channels
        """
        with open(path, 'rb') as f:
            img = PilImage.open(io.BytesIO(f.read()))
            if color_mode == 'grayscale':
                # if image is not already an 8-bit, 16-bit or 32-bit grayscale image
                # convert it to an 8-bit grayscale image.
                if img.mode not in ('L', 'I;16', 'I'):
                    img = img.convert('L')
            elif color_mode == 'rgba':
                if img.mode != 'RGBA':
                    img = img.convert('RGBA')
            elif color_mode == 'rgb':
                if img.mode != 'RGB':
                    img = img.convert('RGB')
            else:
                raise ValueError('color_mode must be "grayscale", "rgb", or "rgba"')
            
            return T.Compose([
                T.Resize(self.image_size)
            ] + transforms)(img)

    def __getitem__(self, idx):
        """
        Return (image, label)
        """
        train_image = self._load_img(self.train_images[idx], 
                                     transforms=[
                                         T.Transpose(), 
                                         T.Normalize(mean=[127.5], std=[127.5]) # normalize pixel values from 0-255 to -1..1
                                     ]) # load the raw image
        label = self.label_list[idx] # load the label
    
        # return image, label
        train_image = np.array(train_image, dtype='float32')
        label = np.array(label, dtype='float32')
        return train_image, label
        
    def __len__(self):
        """
        Return the total number of samples
        """
        return len(self.train_images)

# Define the training and validation datasets
train_dataset = MyDataset(mode='train', dataset_path=dataset_path) # training dataset
val_dataset = MyDataset(mode='eval', dataset_path=dataset_path) # validation dataset
100%|██████████| 13448/13448 [00:01<00:00, 12012.05it/s]
100%|██████████| 1494/1494 [00:00<00:00, 12004.41it/s]

2.4 Inspect the dataset format and show an image with its corresponding label

x, y = val_dataset.__getitem__(0)
print(x, y)
print(x.shape)
print(train_dataset.__len__())

with open(os.path.join(dataset_path, 'train.txt'), 'r') as f:
    i = 0

    for line in f.readlines():
        image_path, label = line.strip().split(' ')
        image = np.array(PilImage.open(image_path))
    
        if i > 2:
            break
        # display the image
        plt.figure()

        plt.title(label)
        plt.imshow(image.astype('uint8'))
        plt.axis('off')

        plt.show()
        i = i + 1

3. Defining a custom model with the high-level API


The model structure can be modified freely to get better results.

import paddle
import paddle.nn as nn
import paddle.nn.functional as F

class My_Model(nn.Layer):
    def __init__(self, num_classes):
        super(My_Model, self).__init__()

        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=32, kernel_size=(3, 3))
        self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv2 = paddle.nn.Conv2D(in_channels=32, out_channels=64, kernel_size=(3,3))
        self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv3 = paddle.nn.Conv2D(in_channels=64, out_channels=32, kernel_size=(3,3))
        self.pool3 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        
        self.conv4 = paddle.nn.Conv2D(in_channels=32, out_channels=16, kernel_size=(3,3))

        self.flatten = paddle.nn.Flatten()

        self.linear1 = paddle.nn.Linear(in_features=9216, out_features=16)
        self.linear2 = paddle.nn.Linear(in_features=16, out_features=1)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.pool1(x)

        x = self.conv2(x)
        x = F.relu(x)
        x = self.pool2(x)

        x = self.conv3(x)
        x = F.relu(x)
        x = self.pool3(x)

        x= self.conv4(x)
        x = F.relu(x)

        x = self.flatten(x)
        x = self.linear1(x)
        x = F.relu(x)
        x = self.linear2(x)
        return x

paddle.summary(My_Model(num_classes=1), input_size=(1,3, 224,224))
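The in_features=9216 of linear1 is fixed by the conv/pool arithmetic for a 224x224 input: 224 -> 222 -> 111 -> 109 -> 54 -> 52 -> 26 -> 24, with 16 output channels, i.e. 16 x 24 x 24 = 9216. A quick pure-Python check of that arithmetic (the helper names are illustrative):

```python
def conv_out(size, kernel=3, stride=1, padding=0):
    """Output spatial size of a Conv2D layer (no dilation)."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output spatial size of a MaxPool2D layer (floor mode)."""
    return (size - kernel) // stride + 1

size = 224
size = pool_out(conv_out(size))  # conv1 + pool1 -> 111
size = pool_out(conv_out(size))  # conv2 + pool2 -> 54
size = pool_out(conv_out(size))  # conv3 + pool3 -> 26
size = conv_out(size)            # conv4         -> 24
flatten_features = 16 * size * size
print(flatten_features)  # 9216
```

Recomputing this is useful whenever IMAGE_SIZE or the layer configuration changes; otherwise linear1 will raise a shape mismatch.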

4. Model training

4.1 Configure the optimizer, loss function, and other options

import paddle

class SaveBestModel(paddle.callbacks.Callback):
    def __init__(self, target=0.5, path='./best_model', verbose=0):
        self.target = target
        self.epoch = None
        self.path = path

    def on_epoch_end(self, epoch, logs=None):
        self.epoch = epoch

    def on_eval_end(self, logs=None):
        if logs.get('loss')[0] < self.target:
            self.target = logs.get('loss')[0]
            self.model.save(self.path)
            print('best model saved: loss {} at epoch {}'.format(self.target, self.epoch))

callback_visualdl = paddle.callbacks.VisualDL(log_dir='./')
callback_savebestmodel = SaveBestModel(target=1, path='./model')
callbacks = [callback_visualdl, callback_savebestmodel]

# Model initialization
model = paddle.Model(My_Model(num_classes=1)) # regression model

# Optimizer
optim = paddle.optimizer.Momentum(learning_rate=0.0001, 
                                 momentum=0.9, 
                                 parameters=model.parameters())

# Loss function
loss = paddle.nn.MSELoss()
model.prepare(optimizer=optim, loss=loss) # never use cross-entropy for a regression model: it applies softmax and expects class outputs

# Load the data with DataLoader
train_loader = paddle.io.DataLoader(train_dataset, places=paddle.CUDAPlace(0), batch_size=64)
eval_loader = paddle.io.DataLoader(val_dataset, places=paddle.CUDAPlace(0), batch_size= 64)

4.2 Train the model

model.fit(train_loader, 
          eval_loader, 
          epochs=5, 
          callbacks=callbacks,
          verbose=1)

4.3 Save the model


Here the model is saved directly in a locally deployable (inference) format.

# With training=True, only the model parameters and optimizer state are saved (for resuming training)
# With training=False, the model structure and parameters are saved as an inference model (no optimizer state)
model.save("./model_dir/model", training=False)

5. Edge-side inference


The prediction code here is the same as for local inference; the input preprocessing must match what was used during training.

import cv2
import numpy as np
from paddle.inference import Config
from paddle.inference import create_predictor

# ———————————————— Image preprocessing functions ————————————————
def resize(img, target_size):
    """Resize so the short side equals target_size, then squash to a target_size x target_size square."""
    percent = float(target_size) / min(img.shape[0], img.shape[1])
    resized_width = int(round(img.shape[1] * percent))
    resized_height = int(round(img.shape[0] * percent))
    resized_short = cv2.resize(img, (resized_width, resized_height))
    resized = cv2.resize(resized_short, (target_size, target_size))
    return resized

def preprocess(img, target_size):
    mean = [127.5, 127.5, 127.5]
    std = [127.5, 127.5, 127.5]
    # resize
    img = resize(img, target_size)
    # bgr-> rgb && hwc->chw
    img = img[:, :, ::-1].astype('float32').transpose((2, 0, 1))
    img_mean = np.array(mean).reshape((3, 1, 1))
    img_std = np.array(std).reshape((3, 1, 1))
    img -= img_mean
    img /= img_std
    return img[np.newaxis, :]
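Before wiring up the predictor, it helps to confirm that preprocess() reproduces the training-time normalization: pixels scaled from 0-255 to -1..1, CHW layout, and a leading batch dimension. A small numpy sanity check of those steps on a dummy image (shapes and pixel values are illustrative):

```python
import numpy as np

# Dummy HWC uint8 "image": one channel all 0, one all 255, one mid-gray
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 1] = 255
img[..., 2] = 127

# Same steps as preprocess(): BGR->RGB, HWC->CHW, then (x - 127.5) / 127.5
chw = img[:, :, ::-1].astype('float32').transpose((2, 0, 1))
mean = np.array([127.5, 127.5, 127.5]).reshape((3, 1, 1))
std = np.array([127.5, 127.5, 127.5]).reshape((3, 1, 1))
batch = ((chw - mean) / std)[np.newaxis, :]

print(batch.shape)               # (1, 3, 4, 4)
print(batch.min(), batch.max())  # -1.0 1.0
```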

#—————————————————————— Model configuration and prediction helpers ——————————————————————
def predict_config(model_file, params_file):
    # Set up the Config for the actual deployment environment
    config = Config()
    # Load the model files
    config.set_prog_file(model_file)
    config.set_params_file(params_file)
    # Config uses the CPU by default; to predict on the GPU, enable it manually,
    # passing the initial GPU memory pool size (MB) and the GPU card id.
    config.enable_use_gpu(500, 0)
    # Optionally enable IR optimization and memory optimization.
    config.switch_ir_optim()
    config.enable_memory_optim()
    predictor = create_predictor(config)
    return predictor

def predict(image, predictor, target_size):
    img = preprocess(image, target_size)
    input_names = predictor.get_input_names()
    input_tensor = predictor.get_input_handle(input_names[0])
    input_tensor.reshape(img.shape)
    input_tensor.copy_from_cpu(img.copy())
    # Run the predictor
    predictor.run()
    # Fetch the outputs
    output_names = predictor.get_output_names()
    output_tensor = predictor.get_output_handle(output_names[0])
    output_data = output_tensor.copy_to_cpu()
    print("output_names", output_names)
    print("output_tensor", output_tensor)
    print("output_data", output_data)
    return output_data


if __name__ == '__main__':
    model_file = "model_dir/model.pdmodel"
    params_file = "model_dir/model.pdiparams"
    
    import random
    # image = cv2.imread("data/img/7419.jpg")
    # image = cv2.imread("data/img/8891.jpg")
    image = cv2.imread("data/img/{}.jpg".format(random.randint(0,5000)))
    
    predictor = predict_config(model_file, params_file)
    res = predict(image, predictor, target_size=224)
    # display the image
    plt.figure()

    plt.title(res)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plt.imshow(image.astype('uint8'))
    plt.axis('on')

    plt.show()
output_names ['relu_0.tmp_0']
output_tensor <paddle.fluid.core_avx.PaddleInferTensor object at 0x7f3eefbed3f0>
output_data [[0.03796072]]


--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [squeeze2_matmul_fuse_pass]
--- Running IR pass [reshape2_matmul_fuse_pass]
--- Running IR pass [flatten2_matmul_fuse_pass]
--- Running IR pass [map_matmul_v2_to_mul_pass]
I1117 16:08:14.154837   128 fuse_pass_base.cc:57] ---  detected 2 subgraphs
--- Running IR pass [map_matmul_v2_to_matmul_pass]
--- Running IR pass [map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I1117 16:08:14.155347   128 fuse_pass_base.cc:57] ---  detected 2 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I1117 16:08:14.158038   128 ir_params_sync_among_devices_pass.cc:45] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I1117 16:08:14.159536   128 memory_optimize_pass.cc:214] Cluster name : x  size: 38535168
I1117 16:08:14.159554   128 memory_optimize_pass.cc:214] Cluster name : relu_0.tmp_0  size: 403734528
I1117 16:08:14.159556   128 memory_optimize_pass.cc:214] Cluster name : relu_3.tmp_0  size: 2359296
I1117 16:08:14.159559   128 memory_optimize_pass.cc:214] Cluster name : pool2d_0.tmp_0  size: 100933632
--- Running analysis [ir_graph_to_program_pass]
I1117 16:08:14.166565   128 analysis_predictor.cc:717] ======= optimize end =======
I1117 16:08:14.166777   128 naive_executor.cc:98] ---  skip [feed], feed -> x
I1117 16:08:14.167583   128 naive_executor.cc:98] ---  skip [relu_0.tmp_0], fetch -> fetch


6. Lane-following results (first-person view)

7. Summary

1. In competition, appropriately optimizing the model is well worth it: it improves the stability of the car. Applying a range of image preprocessing steps also makes the model more robust to varied lighting conditions, which can improve your score.

2. If the lane lines can first be extracted under a reasonably robust threshold and the regression run on that result, the learning problem becomes easier and the model performs better.

Author: Gao Hongzhi, senior undergraduate at Northeastern University at Qinhuangdao

Field: robot control, computer vision, deep learning

Experience: competed twice in the Baidu AI category of the National Undergraduate Intelligent Car Race, winning a national first prize both times

AI Studio homepage: https://aistudio.baidu.com/aistudio/personalcenter/thirdview/215659

