Reposted from AI Studio. Original article: Cross-Application of Classic Medical Image Models in the Transportation Domain - PaddlePaddle AI Studio

📖 1 Project Background

As transportation becomes ever more closely tied to daily life, the public expects a higher level of pavement service, which in turn has drawn attention to the in-service performance of asphalt mixtures. Traditional road-research methods are the cornerstone of road engineering worldwide, but with the rapid development of modern science and technology they are gradually failing to meet engineering needs. Since the start of the 21st century, advances in computing and successive generations of digital imaging devices have steadily driven down the cost of image processing, making it a wave that researchers are eager to ride. Applications of image technology to asphalt concrete and other civil engineering problems are multiplying: image processing has inherent advantages, is still a relatively new technique in these fields, and any major breakthrough opens rich opportunities for follow-up research. Building on these advantages, applying image segmentation to transportation problems overcomes shortcomings that traditional research methods cannot avoid. For example:

1. Digital image techniques make it straightforward to study, both qualitatively and quantitatively, the two-dimensional distribution of asphalt, aggregate, and air voids in CT scans of asphalt mixtures.

2. For specific problems, digital techniques open up new lines of inquiry. Li Zhi used digital image methods, with particle principal-axis orientation as the indicator, to systematically compare the compaction methods commonly used in engineering against actual pavements; he concluded that Marshall compaction cannot effectively simulate a real pavement, while wheel-rolled and gyratory-compacted specimens have particle orientation and density closest to the pavement.

3. Digital image processing can measure the volumetric parameters of an asphalt mixture directly, without conversion through intermediate variables such as density.

4. Digital image processing combines readily with mechanical analysis tools such as the finite element, discrete element, and boundary element methods to simulate the mechanical response of asphalt mixtures.
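As a toy illustration of point 3 (pure NumPy; the tiny hand-made label array is a hypothetical stand-in for a real segmented slice, not the project's data), area fractions of each phase can be read straight off a segmented image by counting pixels:

```python
import numpy as np

# hypothetical segmented CT slice: 0 = air void, 1 = asphalt mortar, 2 = aggregate
labels = np.array([[2, 2, 1, 0],
                   [2, 2, 1, 1],
                   [1, 1, 0, 0],
                   [2, 1, 1, 0]])

# the area fraction of each phase is just a pixel-count ratio, no density needed
for name, value in [("void", 0), ("mortar", 1), ("aggregate", 2)]:
    fraction = (labels == value).mean()
    print(f"{name} area fraction: {fraction:.4f}")
```

Summing the per-slice fractions over a stack of CT slices gives the volumetric estimate directly.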

As a cross-application of classic medical image segmentation models to the transportation domain, this project is well suited for exchange between students of computer vision and of interdisciplinary fields such as roads, bridges, and structural health monitoring.

🔋 2 Overview of the Classic U-Net Model

U-Net is commonly applied to semantic segmentation of medical images and achieves high accuracy on small samples. It is an improved FCN structure, named for the letter-U shape of the architecture diagram in the original paper. It consists of a contracting path on the left and an expansive path on the right. The contracting path is a typical convolutional network: it repeats a block of two convolutional layers followed by one max-pooling layer, and at each downsampling stage the number of feature channels doubles while pooling halves the spatial resolution. In the expansive path, each stage first applies one up-convolution, halving the number of feature channels, then concatenates the correspondingly cropped feature map from the contracting path to restore a feature map with twice the channels, and finally applies two convolutional layers for feature extraction; this structure is repeated. At the output layer, a 1×1 convolution maps the 64-channel feature map to the 2-channel output. As shown in the figure:

🔋 3 Overview of the Improved U-Net Model

In standard CNN architectures, the feature-map grid is progressively downsampled in order to capture a large enough receptive field and thereby the semantic context. However, reducing false-positive predictions for small objects that show large shape variability remains difficult. To improve accuracy, current segmentation frameworks rely on a preceding localization step, splitting the task into localization and subsequent segmentation. Researchers have proposed realizing this idea instead by integrating attention gates into a standard CNN model, which avoids training multiple models with many extra parameters. Compared with multi-stage models, attention gates progressively suppress feature responses in irrelevant background regions, without requiring a region of interest to be cropped inside the network. Attention U-Net introduces attention gates into U-Net: before the encoder features at each resolution are concatenated with the corresponding decoder features, an attention gate module recalibrates the encoder output. The module generates a gating signal that controls the importance of features at different spatial positions, shown as the pink circles in the figure.

The internals of the attention gate module are shown in the figure below: 1×1×1 convolutions, combined with ReLU and Sigmoid activations, generate a weight α that is multiplied with the encoder features to recalibrate them.
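As a hedged sketch of this module (plain NumPy with random, hypothetical weights; the real model learns `Wx`, `Wg`, and `psi`, and the original paper operates on 3-D volumes), a 1×1 convolution is just a per-pixel linear map over channels, so the gate reduces to matrix products:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate, NumPy sketch.

    x   : encoder feature map, shape (C_x, H, W)
    g   : gating signal from the coarser decoder level, shape (C_g, H, W)
          (assumed already resampled to x's spatial size)
    Wx  : (C_int, C_x) 1x1-conv weights for x
    Wg  : (C_int, C_g) 1x1-conv weights for g
    psi : (1, C_int)   1x1-conv weights producing the scalar gate
    """
    C_x, H, W = x.shape
    xf = x.reshape(C_x, -1)          # 1x1 conv == matmul over the channel axis
    gf = g.reshape(g.shape[0], -1)
    q = relu(Wx @ xf + Wg @ gf)      # additive attention
    alpha = sigmoid(psi @ q)         # per-pixel weights in (0, 1)
    out = xf * alpha                 # recalibrate the encoder features
    return out.reshape(C_x, H, W), alpha.reshape(H, W)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
g = rng.standard_normal((16, 4, 4))
Wx = rng.standard_normal((4, 8)) * 0.1
Wg = rng.standard_normal((4, 16)) * 0.1
psi = rng.standard_normal((1, 4)) * 0.1
out, alpha = attention_gate(x, g, Wx, Wg, psi)
print(out.shape, alpha.shape)
```

The gated output `out` is what gets concatenated with the decoder features in place of the raw skip connection.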

Attention gates (AGs) of this kind are incorporated into the standard U-Net architecture to highlight salient features passed through the skip connections. Information extracted at the coarse scale is used for gating, to disambiguate irrelevant and noisy responses in the skip connections. This is performed right before the concatenation operation, so that only relevant activations are merged. In addition, AGs filter neuron activations during both the forward pass and the backward pass: gradients originating from background regions are down-weighted during the backward pass, which allows model parameters in shallower layers to be updated mainly based on the spatial regions relevant to the given task. The update rule for the convolution parameters in layer l−1 can be formulated as follows:
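Using the notation of the Attention U-Net paper, where $x^{l} = f(x^{l-1}; \Phi^{l-1})$ is the feature map produced by layer $l$ with parameters $\Phi^{l-1}$, and $\hat{x}^{l} = \alpha^{l} \cdot x^{l}$ is its gated version, the rule reads:

```latex
\frac{\partial \hat{x}^{l}}{\partial \Phi^{l-1}}
  = \frac{\partial \bigl(\alpha^{l} f(x^{l-1}; \Phi^{l-1})\bigr)}{\partial \Phi^{l-1}}
  = \alpha^{l}\,\frac{\partial f(x^{l-1}; \Phi^{l-1})}{\partial \Phi^{l-1}}
  + \frac{\partial \alpha^{l}}{\partial \Phi^{l-1}}\, x^{l}
```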

The first gradient term on the right-hand side is scaled by α^l. In the case of multi-dimensional AGs, α^l corresponds to a vector at each grid scale. In each sub-AG, complementary information is extracted and fused to define the output of the skip connection. To reduce the number of trainable parameters and the computational complexity of the AGs, the linear transformations are performed without any spatial support (1×1×1 convolutions), and the input feature maps are downsampled to the resolution of the gating signal, similar to non-local blocks. The corresponding linear transformations decouple the feature maps and map them to a lower-dimensional space for the gating operation. As proposed in the referenced work, low-level feature maps, i.e. the first skip connection, are not used in the gating function, since they do not represent the input data in a high-dimensional space. Deep supervision is used to force the intermediate feature maps to be semantically discriminative at each image scale. This helps ensure that the attention units at different scales can influence the responses to a wide range of image foreground content, and thus prevents dense predictions from being reconstructed from a small subset of the skip connections.

Evaluation code for reference:

from sklearn.metrics import f1_score

y_pred = [0, 1, 1, 1, 0, 1]
y_true = [0, 1, 0, 1, 1, 1]

# macro F1: unweighted mean of the per-class F1 scores
score = f1_score(y_true, y_pred, average='macro')
print(score)  # 0.625

💡 4 Algorithm Pipeline

The overall pipeline of this project consists of three main steps:

Preprocessing: the raw images to be segmented are processed with histogram equalization and bilateral filtering, so that detail features become clearer and carry more information;

Data augmentation: the dataset is expanded to keep the neural network from overfitting;

Improved U-Net model: two ideas are introduced. First, nested and dense skip connections further strengthen the encoder-decoder links, with the aim of reducing the semantic gap between encoder and decoder. Second, a gating mechanism: attention modules attending to feature channels and to spatial regions are densely embedded in the upsampling and downsampling structures, respectively. For the asphalt concrete dataset, the former attends to the relative importance of the feature channels within a structure of the same size, while the latter attends to the importance weights of different elements at each location of the same feature map.

Common preprocessing includes operations such as denoising and dehazing. Since the region of interest must be extracted from the CT image samples, the images are preprocessed first, and the results serve as the data samples used for training and testing the network. First, contrast-limited adaptive histogram equalization (CLAHE) is applied to the asphalt mixture CT images: local histograms are computed and brightness is redistributed to adjust image contrast, which improves local contrast and recovers more image detail. The images then pass through a bilateral filter (BF). The bilateral filter combines two Gaussian kernels, one weighting spatial proximity and the other weighting pixel-value similarity; by combining spatial information with neighboring-pixel information, it filters noise and smooths the image while preserving edges.


💡 5 Experimental Platform and Dataset

  • AI Studio online platform; compute: an A100 on the AI Studio backend cloud server

  • A private dataset of asphalt mixture CT scan slices is used to validate the algorithm
    (since the dataset is private, only a few images are uploaded with this project for demonstration)

💡 6 Code Walkthrough

❀ 6.1 Project Setup

In [6]
# download PaddleSeg
! git clone --depth=1 https://gitee.com/paddlepaddle/PaddleSeg.git  

import sys
sys.path.append('PaddleSeg')  # make paddleseg importable

# data inspection
# import os
# import numpy as np
# from PIL import Image

# check image sizes
# img_size = []
# imgs_folder_path = 'datas'
# imgs_name = os.listdir(imgs_folder_path)
# for name in imgs_name:
#     img_path = os.path.join(imgs_folder_path, name)
#     img = np.asarray(Image.open(img_path))
#     img_size.append(img.shape)
# print(set(img_size))

# check label values
# label = np.asarray(Image.open('datas/lable/1068.tif'))
# print(set(label.flatten()))
fatal: destination path 'PaddleSeg' already exists and is not an empty directory.

❀ 6.2 The Classic U-Net Model in Code

In [7]
import paddle
from paddle import nn
class Encoder(nn.Layer):            # downsampling block (5 layers): two conv + two batch norm + one max pool
    def __init__(self, num_channels, num_filters):
        super(Encoder,self).__init__()   # initialize the parent class
        self.conv1 = nn.Conv2D(in_channels=num_channels,
                              out_channels=num_filters,
                              kernel_size=3,    # 3x3 kernel, stride 1, padding 1: keeps the spatial size
                              stride=1,
                              padding=1)
        self.bn1   = nn.BatchNorm(num_filters,act="relu")      # batch norm with ReLU activation
        
        self.conv2 = nn.Conv2D(in_channels=num_filters,
                              out_channels=num_filters,
                              kernel_size=3,
                              stride=1,
                              padding=1)
        self.bn2   = nn.BatchNorm(num_filters,act="relu")
        
        self.pool  = nn.MaxPool2D(kernel_size=2,stride=2,padding="SAME")  # pooling layer
        
    def forward(self,inputs):
        x = self.conv1(inputs)
        x = self.bn1(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x_conv = x              
        x_pool = self.pool(x)   
        return x_conv, x_pool
    
    
class Decoder(nn.Layer):   # upsampling block: one transposed conv + two conv + two batch norm
    def __init__(self, num_channels, num_filters):
        super(Decoder,self).__init__()
        self.up = nn.Conv2DTranspose(in_channels=num_channels,
                                    out_channels=num_filters,
                                    kernel_size=2,
                                    stride=2,
                                    padding=0)   # doubles the spatial size

        self.conv1 = nn.Conv2D(in_channels=num_filters*2,
                              out_channels=num_filters,
                              kernel_size=3,
                              stride=1,
                              padding=1)
        self.bn1   = nn.BatchNorm(num_filters,act="relu")
        
        self.conv2 = nn.Conv2D(in_channels=num_filters,
                              out_channels=num_filters,
                              kernel_size=3,
                              stride=1,
                              padding=1)
        self.bn2   = nn.BatchNorm(num_filters,act="relu")
        
    def forward(self,input_conv,input_pool):
        x = self.up(input_pool)
        h_diff = (input_conv.shape[2]-x.shape[2])
        w_diff = (input_conv.shape[3]-x.shape[3])
        pad = nn.Pad2D(padding=[w_diff//2, w_diff-w_diff//2, h_diff//2, h_diff-h_diff//2])  # Pad2D order is [left, right, top, bottom]
        x = pad(x)                   # pad the upsampled feature map to match the encoder feature map
        x = paddle.concat(x=[input_conv,x],axis=1)   # merge skip-connection context; channel count doubles
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.conv2(x)
        x = self.bn2(x)
        return x
    
class UNet(nn.Layer):
    def __init__(self,num_classes=59):
        super(UNet,self).__init__()
        self.down1 = Encoder(num_channels=  3, num_filters=64) # downsampling path
        self.down2 = Encoder(num_channels= 64, num_filters=128)
        self.down3 = Encoder(num_channels=128, num_filters=256)
        self.down4 = Encoder(num_channels=256, num_filters=512)
        
        self.mid_conv1 = nn.Conv2D(512,1024,1)       # bottleneck
        self.mid_bn1   = nn.BatchNorm(1024,act="relu")
        self.mid_conv2 = nn.Conv2D(1024,1024,1)
        self.mid_bn2   = nn.BatchNorm(1024,act="relu")

        self.up4 = Decoder(1024,512)     # upsampling path
        self.up3 = Decoder(512,256)
        self.up2 = Decoder(256,128)
        self.up1 = Decoder(128,64)
        
        self.last_conv = nn.Conv2D(64,num_classes,1)   # 1x1 conv mapping to per-class logits
        
    def forward(self,inputs):
        x1, x = self.down1(inputs)
        x2, x = self.down2(x)
        x3, x = self.down3(x)
        x4, x = self.down4(x)
        
        x = self.mid_conv1(x)
        x = self.mid_bn1(x)
        x = self.mid_conv2(x)
        x = self.mid_bn2(x)
        
        x = self.up4(x4, x)
        x = self.up3(x3, x)
        x = self.up2(x2, x)
        x = self.up1(x1, x)
        
        x = self.last_conv(x)
        
        return x
paddle.summary(UNet(), (1, 3, 600, 600))
----------------------------------------------------------------------------------------------------------------------
   Layer (type)                  Input Shape                              Output Shape                   Param #    
======================================================================================================================
    Conv2D-54                 [[1, 3, 600, 600]]                       [1, 64, 600, 600]                  1,792     
   BatchNorm-19              [[1, 64, 600, 600]]                       [1, 64, 600, 600]                   256      
    Conv2D-55                [[1, 64, 600, 600]]                       [1, 64, 600, 600]                 36,928     
   BatchNorm-20              [[1, 64, 600, 600]]                       [1, 64, 600, 600]                   256      
   MaxPool2D-6               [[1, 64, 600, 600]]                       [1, 64, 300, 300]                    0       
    Encoder-5                 [[1, 3, 600, 600]]             [[1, 64, 600, 600], [1, 64, 300, 300]]         0       
    Conv2D-56                [[1, 64, 300, 300]]                       [1, 128, 300, 300]                73,856     
   BatchNorm-21              [[1, 128, 300, 300]]                      [1, 128, 300, 300]                  512      
    Conv2D-57                [[1, 128, 300, 300]]                      [1, 128, 300, 300]                147,584    
   BatchNorm-22              [[1, 128, 300, 300]]                      [1, 128, 300, 300]                  512      
   MaxPool2D-7               [[1, 128, 300, 300]]                      [1, 128, 150, 150]                   0       
    Encoder-6                [[1, 64, 300, 300]]            [[1, 128, 300, 300], [1, 128, 150, 150]]        0       
    Conv2D-58                [[1, 128, 150, 150]]                      [1, 256, 150, 150]                295,168    
   BatchNorm-23              [[1, 256, 150, 150]]                      [1, 256, 150, 150]                 1,024     
    Conv2D-59                [[1, 256, 150, 150]]                      [1, 256, 150, 150]                590,080    
   BatchNorm-24              [[1, 256, 150, 150]]                      [1, 256, 150, 150]                 1,024     
   MaxPool2D-8               [[1, 256, 150, 150]]                       [1, 256, 75, 75]                    0       
    Encoder-7                [[1, 128, 150, 150]]            [[1, 256, 150, 150], [1, 256, 75, 75]]         0       
    Conv2D-60                 [[1, 256, 75, 75]]                        [1, 512, 75, 75]                1,180,160   
   BatchNorm-25               [[1, 512, 75, 75]]                        [1, 512, 75, 75]                  2,048     
    Conv2D-61                 [[1, 512, 75, 75]]                        [1, 512, 75, 75]                2,359,808   
   BatchNorm-26               [[1, 512, 75, 75]]                        [1, 512, 75, 75]                  2,048     
   MaxPool2D-9                [[1, 512, 75, 75]]                        [1, 512, 38, 38]                    0       
    Encoder-8                 [[1, 256, 75, 75]]              [[1, 512, 75, 75], [1, 512, 38, 38]]          0       
    Conv2D-62                 [[1, 512, 38, 38]]                       [1, 1024, 38, 38]                 525,312    
   BatchNorm-27              [[1, 1024, 38, 38]]                       [1, 1024, 38, 38]                  4,096     
    Conv2D-63                [[1, 1024, 38, 38]]                       [1, 1024, 38, 38]                1,049,600   
   BatchNorm-28              [[1, 1024, 38, 38]]                       [1, 1024, 38, 38]                  4,096     
Conv2DTranspose-15           [[1, 1024, 38, 38]]                        [1, 512, 76, 76]                2,097,664   
    Conv2D-64                [[1, 1024, 75, 75]]                        [1, 512, 75, 75]                4,719,104   
   BatchNorm-29               [[1, 512, 75, 75]]                        [1, 512, 75, 75]                  2,048     
    Conv2D-65                 [[1, 512, 75, 75]]                        [1, 512, 75, 75]                2,359,808   
   BatchNorm-30               [[1, 512, 75, 75]]                        [1, 512, 75, 75]                  2,048     
    Decoder-5       [[1, 512, 75, 75], [1, 1024, 38, 38]]               [1, 512, 75, 75]                    0       
Conv2DTranspose-16            [[1, 512, 75, 75]]                       [1, 256, 150, 150]                524,544    
    Conv2D-66                [[1, 512, 150, 150]]                      [1, 256, 150, 150]               1,179,904   
   BatchNorm-31              [[1, 256, 150, 150]]                      [1, 256, 150, 150]                 1,024     
    Conv2D-67                [[1, 256, 150, 150]]                      [1, 256, 150, 150]                590,080    
   BatchNorm-32              [[1, 256, 150, 150]]                      [1, 256, 150, 150]                 1,024     
    Decoder-6       [[1, 256, 150, 150], [1, 512, 75, 75]]             [1, 256, 150, 150]                   0       
Conv2DTranspose-17           [[1, 256, 150, 150]]                      [1, 128, 300, 300]                131,200    
    Conv2D-68                [[1, 256, 300, 300]]                      [1, 128, 300, 300]                295,040    
   BatchNorm-33              [[1, 128, 300, 300]]                      [1, 128, 300, 300]                  512      
    Conv2D-69                [[1, 128, 300, 300]]                      [1, 128, 300, 300]                147,584    
   BatchNorm-34              [[1, 128, 300, 300]]                      [1, 128, 300, 300]                  512      
    Decoder-7      [[1, 128, 300, 300], [1, 256, 150, 150]]            [1, 128, 300, 300]                   0       
Conv2DTranspose-18           [[1, 128, 300, 300]]                      [1, 64, 600, 600]                 32,832     
    Conv2D-70                [[1, 128, 600, 600]]                      [1, 64, 600, 600]                 73,792     
   BatchNorm-35              [[1, 64, 600, 600]]                       [1, 64, 600, 600]                   256      
    Conv2D-71                [[1, 64, 600, 600]]                       [1, 64, 600, 600]                 36,928     
   BatchNorm-36              [[1, 64, 600, 600]]                       [1, 64, 600, 600]                   256      
    Decoder-8      [[1, 64, 600, 600], [1, 128, 300, 300]]             [1, 64, 600, 600]                    0       
    Conv2D-72                [[1, 64, 600, 600]]                       [1, 59, 600, 600]                  3,835     
======================================================================================================================
Total params: 18,476,155
Trainable params: 18,452,603
Non-trainable params: 23,552
----------------------------------------------------------------------------------------------------------------------
Input size (MB): 4.12
Forward/backward pass size (MB): 3998.34
Params size (MB): 70.48
Estimated Total Size (MB): 4072.94
----------------------------------------------------------------------------------------------------------------------
{'total_params': 18476155, 'trainable_params': 18452603}

❀ 6.3 Building the Improved U-Net Model

❀ 6.4 Building the Dataset

In [3]
! pip install paddleseg

import os
import random
from PIL import Image
import matplotlib.pyplot as plt

%matplotlib inline

def create_list(data_path):
    image_path = os.path.join(data_path, 'image')
    label_path = os.path.join(data_path, 'label')
    data_names = os.listdir(image_path)
    random.shuffle(data_names)  # shuffle the data
    with open(os.path.join(data_path, 'train_list.txt'), 'w') as tf:
        with open(os.path.join(data_path, 'val_list.txt'), 'w') as vf:
            for idx, data_name in enumerate(data_names):
                img = os.path.join('image', data_name)
                lab = os.path.join('label', data_name.replace('jpg', 'png'))
                if idx % 9 == 0:  # every 9th sample goes to validation, ~89% for training
                    vf.write(img + ' ' + lab + '\n')
                else:
                    tf.write(img + ' ' + lab + '\n')
    print('Data lists generated')

data_path = 'DataSet'
create_list(data_path)  # generate the data lists

# visualize one sample
vis_img = Image.open('DataSet/image/1066.tif')
vis_lab = Image.open('DataSet/label/1066.tif')
plt.figure(figsize=(10, 20))
plt.subplot(121);plt.imshow(vis_img);plt.xticks([]);plt.yticks([]);plt.title('Image')
plt.subplot(122);plt.imshow(vis_lab);plt.xticks([]);plt.yticks([]);plt.title('Label')
plt.show()
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: paddleseg in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (2.5.0)

❀ 6.5 Model Training

In [4]
import paddle
from paddleseg.models import UNetPlusPlus
import paddleseg.transforms as T
from paddleseg.datasets import Dataset
from paddleseg.models.losses import BCELoss
from paddleseg.core import train

def train_model(base_lr=0.00001, iters=10000, batch_size=8, save_interval=1000, model_path=None):
    # model definition
    model = UNetPlusPlus(in_channels=3, num_classes=2, use_deconv=True)
    if model_path is not None:
        para_state_dict = paddle.load(model_path)
        model.set_dict(para_state_dict)
    # build the training set
    train_transforms = [
        T.RandomHorizontalFlip(),
        T.RandomVerticalFlip(),
        T.RandomRotation(),
        T.RandomScaleAspect(),
        T.RandomDistort(),
        T.Resize(target_size=(512, 512)),
        T.Normalize()
    ]
    train_dataset = Dataset(
        transforms=train_transforms,
        dataset_root='DataSet',
        num_classes=2,
        mode='train',
        train_path='DataSet/train_list.txt',
        separator=' ',
    )
    # build the validation set
    val_transforms = [
        T.Resize(target_size=(512, 512)),
        T.Normalize()
    ]
    val_dataset = Dataset(
        transforms=val_transforms,
        dataset_root='DataSet',
        num_classes=2,
        mode='val',
        val_path='DataSet/val_list.txt',
        separator=' ',
    )
    # optimizer and loss settings
    lr = paddle.optimizer.lr.CosineAnnealingDecay(base_lr, T_max=2000, last_epoch=-1)  # last_epoch expects an int; -1 starts from the beginning
    optimizer = paddle.optimizer.Adam(lr, parameters=model.parameters())
    losses = {}
    losses['types'] = [BCELoss()]
    losses['coef'] = [1]
    # train
    train(
        model=model,
        train_dataset=train_dataset,
        val_dataset=val_dataset,
        optimizer=optimizer,
        save_dir='output',
        iters=iters,
        batch_size=batch_size,
        save_interval=save_interval,
        log_iters=10,
        num_workers=0,
        losses=losses,
        use_vdl=False)

Let the "machine learning" begin!

In [5]
train_model(base_lr=0.00001, iters=10000, batch_size=1)
In [ ]
# resume training
#train_model(base_lr=0.0001, iters=20000, batch_size=2)
In [ ]
# set the model path
ModelPath='output/iter_20000/model.pdparams'

❀ 6.6 Model Evaluation

In [ ]
import paddle
from paddleseg.models import UNetPlusPlus
import paddleseg.transforms as T
from paddleseg.datasets import Dataset
from paddleseg.core import evaluate

def eval_model(model_path=None):
    # model definition
    model = UNetPlusPlus(in_channels=3, num_classes=2, use_deconv=True)
    if model_path is not None:
        para_state_dict = paddle.load(model_path)
        model.set_dict(para_state_dict)
    # build the validation set
    val_transforms = [
        T.Resize(target_size=(512, 512)),
        T.Normalize()
    ]
    val_dataset = Dataset(
        transforms=val_transforms,
        dataset_root='DataSet',
        num_classes=2,
        mode='val',
        val_path='DataSet/val_list.txt',
        separator=' ',
    )
    evaluate(
        model,  
        val_dataset
    )

eval_model(model_path=ModelPath)

❀ 6.7 Model Prediction

In [ ]
#import numpy as np
#import paddle
#from PIL import Image
#from paddleseg.models import UNetPlusPlus
#import paddleseg.transforms as T
#from paddleseg.core import infer
#import matplotlib.pyplot as plt

# ModelPath='output/model_kp0.9085/model.pdparams'
# ModelPath='output/iter_4000/model.pdparams'

#def nn_infer(img_path, lab_path, model_path=ModelPath, show=True):
    # model definition
#    model = UNetPlusPlus(in_channels=3, num_classes=2, use_deconv=True)
#    if model_path is not None:
#       para_state_dict = paddle.load(model_path)
#        model.set_dict(para_state_dict)
    # run inference
#    transforms = T.Compose([
#        T.Resize(target_size=(512, 512)),
#        T.Normalize()
#    ])
#    img, lab = transforms(img_path, lab_path)
#    img = paddle.to_tensor(img[np.newaxis, :])
#    pre = infer.inference(model, img)
#    pred = paddle.argmax(pre, axis=1).numpy().reshape((512, 512))
#    if show:
#        plt.figure(figsize=(15, 45))
#        plt.subplot(131);plt.imshow(Image.open(img_path));plt.xticks([]);plt.yticks([]);plt.title('Image')
#        plt.subplot(132);plt.imshow(lab.astype('uint8'));plt.xticks([]);plt.yticks([]);plt.title('Label')
#        plt.subplot(133);plt.imshow(pred.astype('uint8'));plt.xticks([]);plt.yticks([]);plt.title('Prediction')
#        plt.show()
#    return pred.astype('uint8')

#name = '1066'
#img_path = 'DataSet/image//' + name + '.jpg'
#lab_path = 'DataSet/label//' + name + '.png'
#_ = nn_infer(img_path, lab_path)

💡 7 Evaluation Metrics

Segmenting an asphalt mixture CT scan means dividing its pixels into aggregate particles and background. To judge whether the CT segmentation method used here is effective, the following evaluation metrics are adopted:

❀ 7.1 PA and MPA

Pixel Accuracy (PA): the simplest metric, the ratio of the number of correctly classified pixels to the total number of pixels.

Mean Pixel Accuracy (MPA): a slight refinement of PA, the average over classes of the per-class pixel classification accuracy.
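A small NumPy sketch of both metrics (the tiny prediction and label arrays are made-up examples, not project results):

```python
import numpy as np

def pixel_accuracy(pred, label):
    # PA: correctly classified pixels / all pixels
    return float((pred == label).mean())

def mean_pixel_accuracy(pred, label, num_classes=2):
    # MPA: average, over classes, of the per-class pixel accuracy
    accs = [float((pred[label == c] == c).mean())
            for c in range(num_classes) if (label == c).any()]
    return float(np.mean(accs))

# made-up 2x4 label and prediction maps (0 = background, 1 = aggregate)
label = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1]])
pred  = np.array([[0, 1, 1, 1],
                  [0, 1, 0, 1]])
print(pixel_accuracy(pred, label))        # 0.75
print(mean_pixel_accuracy(pred, label))   # (2/3 + 4/5) / 2
```

Note that PA can look deceptively high when one class (here, background) dominates the image, which is why MPA is reported alongside it.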

💡 8 Experimental Results

The segmentation results are essentially consistent with the manually annotated label images. Compared with the original U-Net algorithm, the improved U-Net model with attention mechanisms better overcomes the difficulties of asphalt mixture CT image segmentation and achieves better results in locally complex regions.

🔣 9 Method Summary

To address the low local segmentation accuracy of existing algorithms, caused by densely packed mixture regions and uneven illumination in asphalt mixture CT scans, a U-Net model with attention mechanisms can be applied to asphalt mixture CT image segmentation. Attention modules attending to feature channels and to spatial regions are embedded in the upsampling and downsampling structures, respectively, enhancing the channels of locally complex mixture regions and improving the network's ability to segment them. Testing on the asphalt concrete dataset shows that transplanting models from medical image segmentation into transportation and related fields achieves good efficiency and segmentation accuracy. However, the interior of the asphalt mixture slices is not segmented completely; in future research we should combine the spatial features inside the CT images with edge features so that the aggregate can be segmented more precisely.

🐱 10 Project Conclusions

  • Testing on the asphalt concrete dataset shows that models from medical image segmentation transfer well to transportation and related cross-domain applications, with good efficiency and segmentation accuracy.

  • Embedding attention modules that attend to feature channels and to spatial regions in the upsampling and downsampling structures is a standard technique in computer vision. Cross-applied here to pavement materials research, it also performs channel enhancement well on the locally complex regions of asphalt mixture scans, improving the network's ability to segment and recognize those regions.

  • However, the interior of the asphalt mixture slices is not segmented completely. In future research, we can consider combining the spatial features inside the CT images with edge features so that the aggregate can be segmented more precisely.

  • Students working on cross-domain applications are welcome to point out issues, make suggestions, and exchange ideas!



Questions and comments are welcome below.