Reposted from AI Studio. Original article: [AI Talent Creation Camp, Season 2] COVID-19 CT Image Classification Based on PaddleClas - PaddlePaddle AI Studio

I. Project Background

The recently emerged novel coronavirus (SARS-CoV-2) is highly contagious, and its main clinical manifestation is novel coronavirus pneumonia (COVID-19). CT is one of the first-choice modalities for clinical screening and diagnosis of COVID-19, and a correct understanding of the CT findings of COVID-19 is important for establishing a definitive diagnosis.

1. Early-stage CT findings of COVID-19: Lesions are localized, mainly patchy with a subsegmental or segmental distribution, and typically lie in the outer third of the lung fields and in subpleural regions. They appear as single or multiple ground-glass opacity (GGO) nodules or patchy shadows, with or without interlobular septal thickening, and may show air bronchograms and vascular enlargement within them.

2. Progressive-stage CT findings of COVID-19: As the disease progresses, lesions increase in number and extent and may involve multiple lobes, most often the lower lobes. Lesions become denser with ill-defined margins; a solid nodule may be surrounded by ground-glass exudation, producing a "halo sign", and some lesions show a "reversed halo sign". Interlobular septa around the lesions may thicken due to interstitial edema and, superimposed on the ground-glass opacities, produce a "crazy-paving" pattern.

3. Severe-stage CT findings of COVID-19: Both lungs are diffusely involved, with extensive parenchymal exudation and consolidation dominated by consolidative opacities, distortion of the lung architecture, bronchiectasis, and subsegmental atelectasis; in the most severe cases the appearance is that of a "white lung".

4. Recovery stage: Lesions in COVID-19 patients change rapidly during treatment, and the CT findings correlate closely with the clinical course. In some cases, after 1-2 weeks of active antiviral and anti-inflammatory treatment, the lesions shrink and their density decreases, while new patchy opacities may appear in other lung fields. During resorption, irregular long linear (stripe-like) opacities may appear, some of which resolve completely, and bronchial wall thickening diminishes. In cases that respond poorly to treatment or develop a superimposed infection with other pathogens, the pulmonary lesions progress from localized to multifocal, diffuse disease, with a markedly larger extent, increased density, atelectasis, and pleural effusion.

II. Dataset Introduction

The COVID-CT dataset contains 349 CT images with clinical findings of COVID-19, collected from 216 patients; they are provided in Images-processed/CT_COVID.zip, and non-COVID CT scans are in Images-processed/CT_NonCOVID.zip. A data split is provided under Data-split; see the DenseNet_predict README.md for details of the split. Meta information (e.g., patient ID, patient information, DOI, image caption) is given in COVID-CT-MetaInfo.xlsx. The COVID-19 images were collected from COVID-19-related papers in JAMA, medRxiv, bioRxiv, NEJM, The Lancet, and other sources, and CT slices containing COVID-19 abnormalities were selected by reading the figure captions in those papers.

COVID-19 CT image dataset

In [3]

cd PaddleClas/
/home/aistudio/PaddleClas

In [34]

# Download the PaddleClas code (release/2.1 branch)
!git clone https://gitee.com/paddlepaddle/PaddleClas.git -b release/2.1
%cd PaddleClas/

III. Model Introduction

To make it easier for users to train and apply image classification models, PaddlePaddle open-sourced the PaddleClas image classification suite, which covers the full workflow of model development, training, compression, and deployment, helping developers build and apply image classification models more effectively. PaddleClas has the following features:

  • A rich model zoo with up to 29 model families, together with training configurations and pretrained weights for 134 models on the ImageNet1k dataset.
  • 8 data augmentation methods, making it easy to augment the training data and improve model robustness.
  • The self-developed SSLD (Simple Semi-supervised Label Distillation) knowledge distillation scheme, which typically improves model accuracy by more than 3%; with it, ResNet50_vd reaches 84.0% accuracy on ImageNet1k.
  • A self-developed 100,000-class image classification pretrained model, which can improve recognition accuracy by up to 30%.
  • Industrial-grade deployment and inference solutions such as PaddleLite, HubServing, and TensorRT, so models can be deployed easily on servers, mobile devices, and embedded hardware.
ResNet and its vd variants
The ResNet family was proposed in 2015 and won the ILSVRC 2015 competition with a top-5 error rate of 3.57%. Its key innovation is the residual structure: a ResNet is built by stacking many residual blocks, and experiments show that residual blocks effectively improve both convergence speed and accuracy. Because of ResNet's excellent performance, researchers and engineers from academia and industry have proposed many improved variants. Among them, ResNet-vd has almost the same number of parameters and the same computational cost as ResNet, yet with an appropriate training strategy its final accuracy is up to 2.5% higher.
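As a rough illustration (a minimal sketch, not the PaddleClas implementation; the class name VdShortcut is made up for this example), the key "vd" change in the downsampling block is on the shortcut branch: instead of a stride-2 1x1 convolution, which skips three quarters of the feature map, it applies average pooling first and then a stride-1 1x1 convolution:

import paddle
import paddle.nn as nn

class VdShortcut(nn.Layer):
    """Illustrative shortcut branch of a ResNet-vd downsampling block."""
    def __init__(self, in_channels, out_channels, stride=2):
        super().__init__()
        # Plain ResNet would use: nn.Conv2D(in_channels, out_channels, 1, stride=stride)
        # ResNet-vd lets the average pooling do the downsampling, so the 1x1 conv
        # (stride 1) only changes the channel count and no activations are discarded.
        self.pool = nn.AvgPool2D(kernel_size=stride, stride=stride, ceil_mode=True)
        self.conv = nn.Conv2D(in_channels, out_channels, kernel_size=1, stride=1)
        self.bn = nn.BatchNorm2D(out_channels)

    def forward(self, x):
        return self.bn(self.conv(self.pool(x)))

x = paddle.randn([1, 64, 56, 56])
print(VdShortcut(64, 256, stride=2)(x).shape)  # [1, 256, 28, 28]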

Unzip the dataset

In [1]

!unzip -oq /home/aistudio/data/data27732/CT_NonCOVID.zip -d /home/aistudio/data/
!unzip -oq /home/aistudio/data/data27732/CT_COVID.zip -d /home/aistudio/data/
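A quick optional sanity check after unzipping (same paths as above); the two counts should add up to the 746 images reported in the visualization step below:

import os
for d in ["CT_COVID", "CT_NonCOVID"]:
    n = len(os.listdir(os.path.join("/home/aistudio/data", d)))
    print(d, n, "images")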

IV. Sample Visualization

In [36]

import os
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

image_path_1 = '/home/aistudio/data/CT_COVID'
image_path_2 = '/home/aistudio/data/CT_NonCOVID'
image_path_list_1 = sorted(os.listdir(image_path_1))
image_path_list_2 = sorted(os.listdir(image_path_2))
image_path_list_1 = [os.path.join(image_path_1, path) for path in image_path_list_1]
image_path_list_2 = [os.path.join(image_path_2, path) for path in image_path_list_2]
image_path_list = image_path_list_1 + image_path_list_2
sample_image_path_list = ['/home/aistudio/data/CT_COVID/2020.03.01.20029769-p21-73_3.png', '/home/aistudio/data/CT_COVID/2020.03.04.20031047-p13-84%4.png']

plt.figure(figsize=(12, 3))
for i in range(len(sample_image_path_list)):
    plt.subplot(1, len(sample_image_path_list), i+1)  # subplot(rows, cols, index of this panel)
    #plt.title(image_path_list[i])
    pic = cv2.imread(sample_image_path_list[i])
    pic = cv2.cvtColor(pic, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR; convert for matplotlib display
    plt.imshow(pic)

print("There are {} images in total".format(len(image_path_list)))
plt.tight_layout()
plt.show()
There are 746 images in total

<Figure size 864x216 with 2 Axes>

V. Consolidating the CT Image Files and Splitting the Dataset

In [37]

# Create the ct and ct/jpg folders under PaddleClas/dataset (-p avoids errors if they already exist)
!mkdir -p /home/aistudio/PaddleClas/dataset/ct
!mkdir -p /home/aistudio/PaddleClas/dataset/ct/jpg

In [38]

import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import cv2
import os

root_path = "/home/aistudio/data"
lung_path = ["CT_COVID", "CT_NonCOVID"]   # the folder index is used as the class id: 0 = COVID, 1 = NonCOVID
jpg_path = "jpg"
save_root_path = "/home/aistudio/PaddleClas/dataset/ct"
f_train = open(os.path.join(save_root_path, 'train_list.txt'), 'w', encoding='utf-8')
f_val = open(os.path.join(save_root_path, 'val_list.txt'), 'w', encoding='utf-8')
for ind, path_ind in enumerate(lung_path):
    image_path = os.listdir(os.path.join(root_path, path_ind))
    for index, img_path in enumerate(image_path):
        # Re-save every image into a single flat folder so PaddleClas can read
        # all samples from dataset/ct/jpg via the list files.
        scan = cv2.imread(os.path.join(root_path, path_ind, img_path))
        #print(scan.shape)
        save_path = os.path.join(save_root_path, jpg_path, img_path)
        print(save_path)
        cv2.imwrite(save_path, scan)
        # Per class, the first 90% of files go to the training list and the
        # remaining 10% to the validation list, as "<image path> <label>" lines.
        if index < int(0.9 * len(image_path)):
            print("{} {}".format(os.path.join(save_root_path, jpg_path, img_path), ind), file=f_train)
        else:
            print("{} {}".format(os.path.join(save_root_path, jpg_path, img_path), ind), file=f_val)
f_train.close()
f_val.close()
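An optional check, reusing the save_root_path variable from the cell above, to confirm the split sizes and preview the "<image path> <label>" line format that PaddleClas expects in its list files:

for name in ["train_list.txt", "val_list.txt"]:
    with open(os.path.join(save_root_path, name), encoding="utf-8") as f:
        lines = f.readlines()
    print(name, len(lines), "samples, e.g.:", lines[0].strip() if lines else "(empty)")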

VI. Add Environment Variables and Start Training Without Loading a Pretrained Model

In [53]

!python tools/train.py -c /home/aistudio/PaddleClas/configs/quick_start/ResNet50_vd.yaml
# The line "The best top1 acc 0.72000, in epoch: 9" at the end of the log reports the model's best accuracy on the validation set.
# top1 is the plain accuracy, the metric we care about most; this model reaches 72%.

Configuration file

mode: 'train'
ARCHITECTURE:
    name: 'ResNet50_vd'

checkpoints: ""
pretrained_model: ""
use_gpu: True
model_save_dir: "./output/"
classes_num: 2
total_images: 746
save_interval: 1
validate: True
valid_interval: 1
epochs: 20
topk: 2
image_shape: [3, 224, 224]

LEARNING_RATE:
    function: 'Cosine'          
    params:                   
        lr: 0.0125

OPTIMIZER:
    function: 'Momentum'
    params:
        momentum: 0.9
    regularizer:
        function: 'L2'
        factor: 0.00001

TRAIN:
    batch_size: 4
    num_workers: 0
    file_list: "./dataset/ct/train_list.txt"
    data_dir: "./dataset/ct/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - RandCropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - NormalizeImage:
            scale: 1./255.
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:

VALID:
    batch_size: 20
    num_workers: 2
    file_list: "./dataset/ct/val_list.txt"
    data_dir: "./dataset/ct/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:
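A rough arithmetic check of this configuration (assuming the roughly 90/10 split produced above): with 746 images and batch_size 4, a training epoch has about 746 × 0.9 / 4 ≈ 167 steps, which is consistent with the log below reaching train step 160 (steps are logged every 10).

total_images, train_frac, batch_size = 746, 0.9, 4
print(int(total_images * train_frac / batch_size))  # ≈ 167 steps per epoch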

Output

2022-05-12 20:31:16 INFO: epoch:19 , train step:0   , top1: 0.75000, top2: 1.00000, loss: 0.54429, lr: 0.000655, batch_cost: 0.07180 s, reader_cost: 0.02649 s, ips: 55.70680 images/sec, eta: 0:00:11
2022-05-12 20:31:16 INFO: epoch:19 , train step:10  , top1: 0.50000, top2: 1.00000, loss: 0.93867, lr: 0.000632, batch_cost: 0.04564 s, reader_cost: 0.00018 s, ips: 87.63511 images/sec, eta: 0:00:07
2022-05-12 20:31:17 INFO: epoch:19 , train step:20  , top1: 0.50000, top2: 1.00000, loss: 0.73728, lr: 0.000609, batch_cost: 0.04648 s, reader_cost: 0.00025 s, ips: 86.06224 images/sec, eta: 0:00:06
2022-05-12 20:31:17 INFO: epoch:19 , train step:30  , top1: 0.75000, top2: 1.00000, loss: 0.43915, lr: 0.000586, batch_cost: 0.04397 s, reader_cost: 0.00023 s, ips: 90.97230 images/sec, eta: 0:00:06
2022-05-12 20:31:17 INFO: epoch:19 , train step:40  , top1: 0.75000, top2: 1.00000, loss: 0.66203, lr: 0.000564, batch_cost: 0.04334 s, reader_cost: 0.00023 s, ips: 92.28924 images/sec, eta: 0:00:05
2022-05-12 20:31:18 INFO: epoch:19 , train step:50  , top1: 0.75000, top2: 1.00000, loss: 0.71424, lr: 0.000542, batch_cost: 0.04282 s, reader_cost: 0.00023 s, ips: 93.42359 images/sec, eta: 0:00:05
2022-05-12 20:31:18 INFO: epoch:19 , train step:60  , top1: 0.50000, top2: 1.00000, loss: 0.79321, lr: 0.000521, batch_cost: 0.04187 s, reader_cost: 0.00022 s, ips: 95.53529 images/sec, eta: 0:00:04
2022-05-12 20:31:19 INFO: epoch:19 , train step:70  , top1: 0.75000, top2: 1.00000, loss: 0.50051, lr: 0.000500, batch_cost: 0.04096 s, reader_cost: 0.00022 s, ips: 97.65287 images/sec, eta: 0:00:03
2022-05-12 20:31:19 INFO: epoch:19 , train step:80  , top1: 0.75000, top2: 1.00000, loss: 0.57756, lr: 0.000480, batch_cost: 0.04030 s, reader_cost: 0.00021 s, ips: 99.24896 images/sec, eta: 0:00:03
2022-05-12 20:31:19 INFO: epoch:19 , train step:90  , top1: 0.50000, top2: 1.00000, loss: 0.67360, lr: 0.000460, batch_cost: 0.03976 s, reader_cost: 0.00021 s, ips: 100.60338 images/sec, eta: 0:00:03
2022-05-12 20:31:20 INFO: epoch:19 , train step:100 , top1: 0.25000, top2: 1.00000, loss: 0.80680, lr: 0.000440, batch_cost: 0.03956 s, reader_cost: 0.00021 s, ips: 101.10506 images/sec, eta: 0:00:02
2022-05-12 20:31:20 INFO: epoch:19 , train step:110 , top1: 0.75000, top2: 1.00000, loss: 0.39250, lr: 0.000421, batch_cost: 0.03931 s, reader_cost: 0.00021 s, ips: 101.76695 images/sec, eta: 0:00:02
2022-05-12 20:31:20 INFO: epoch:19 , train step:120 , top1: 0.50000, top2: 1.00000, loss: 0.68426, lr: 0.000402, batch_cost: 0.03912 s, reader_cost: 0.00021 s, ips: 102.25257 images/sec, eta: 0:00:01
2022-05-12 20:31:21 INFO: epoch:19 , train step:130 , top1: 0.50000, top2: 1.00000, loss: 0.71473, lr: 0.000384, batch_cost: 0.03890 s, reader_cost: 0.00021 s, ips: 102.84071 images/sec, eta: 0:00:01
2022-05-12 20:31:21 INFO: epoch:19 , train step:140 , top1: 0.75000, top2: 1.00000, loss: 0.57981, lr: 0.000366, batch_cost: 0.03871 s, reader_cost: 0.00021 s, ips: 103.32368 images/sec, eta: 0:00:01
2022-05-12 20:31:21 INFO: epoch:19 , train step:150 , top1: 0.75000, top2: 1.00000, loss: 0.59369, lr: 0.000348, batch_cost: 0.03861 s, reader_cost: 0.00020 s, ips: 103.58886 images/sec, eta: 0:00:00
2022-05-12 20:31:22 INFO: epoch:19 , train step:160 , top1: 0.50000, top2: 1.00000, loss: 0.84634, lr: 0.000331, batch_cost: 0.03852 s, reader_cost: 0.00020 s, ips: 103.84844 images/sec, eta: 0:00:00
2022-05-12 20:31:22 INFO: END epoch:19  train top1: 0.65569, top2: 1.00000, loss: 0.63072,  batch_cost: 0.03836 s, reader_cost: 0.00020 s, batch_cost_sum: 6.02269 s, ips: 104.27231 images/sec.
2022-05-12 20:31:22 INFO: valid step:0   , top1: 0.45000, top2: 1.00000, loss: 1.29959, lr: 0.000000, batch_cost: 0.37006 s, reader_cost: 0.34073 s, ips: 54.04460 images/sec
2022-05-12 20:31:23 INFO: END epoch:19  valid top1: 0.61333, top2: 1.00000, loss: 0.91289,  batch_cost: 0.12841 s, reader_cost: 0.10396 s, batch_cost_sum: 0.51362 s, ips: 116.81747 images/sec.
2022-05-12 20:31:23 INFO: The best top1 acc 0.72000, in epoch: 9
2022-05-12 20:31:23 INFO: Already save model in ./output/ResNet50_vd/19

VII. Model Conversion (Export)

In [54]

!python tools/export_model.py --model "ResNet50_vd" \
--p=output/ResNet50_vd/best_model/ppcls \
--o=output/inference
W0512 20:40:20.724928 12478 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0512 20:40:20.731453 12478 device_context.cc:465] device: 0, cuDNN Version: 7.6.
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1441: UserWarning: Skip loading for out.weight. out.weight receives a shape [2048, 2], but the expected shape is [2048, 1000].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1441: UserWarning: Skip loading for out.bias. out.bias receives a shape [2], but the expected shape is [1000].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  return (isinstance(seq, collections.Sequence) and
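The two "Skip loading for out.weight/out.bias" warnings above indicate that the exported graph was built with the default 1000-class head, so the fine-tuned 2-class head in best_model was not loaded; this is consistent with the 1000-class-style class id seen in the inference result below. The export script may need to be told the number of classes (e.g., via a class-dimension argument, if this release provides one). A minimal sketch, assuming Paddle's standard inference API, to check the output dimension of the exported model:

import numpy as np
import paddle.inference as paddle_infer

# Load the exported inference model (paths relative to the PaddleClas directory, as above).
config = paddle_infer.Config("output/inference/inference.pdmodel",
                             "output/inference/inference.pdiparams")
predictor = paddle_infer.create_predictor(config)

# Feed one dummy 224x224 image and inspect the output shape.
inp = predictor.get_input_handle(predictor.get_input_names()[0])
inp.copy_from_cpu(np.random.rand(1, 3, 224, 224).astype("float32"))
predictor.run()
out = predictor.get_output_handle(predictor.get_output_names()[0])
print(out.copy_to_cpu().shape)  # (1, 2) is expected here; (1, 1000) means the 2-class head was not exported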

VIII. Model Inference

In [55]

!python tools/infer/predict.py --use_gpu=0 \
--image_file=dataset/ct/jpg/709.png \
--model_file=output/inference/inference.pdmodel \
--params_file=output/inference/inference.pdiparams
File:709.png, Top-1 result: class id(s): [579], score(s): [0.00]

IX. Personal Summary

I am a 2021-intake master's student at Yanshan University. My research direction is intelligent manufacturing, exploring the intersection of CNC machining and deep learning.

I previously took Andrew Ng's introductory ML and DL courses and have just entered the field of deep learning. I have taken part in the "Huawei Cup" mathematical modeling contest and the Hebei Province mathematical modeling contest; there is still a long way to go.

Feel free to fork, like, and comment; if you are interested, let's follow each other.
