A Pneumonia Diagnosis Model Based on Swin Transformer

1. Project Introduction

  • As a respiratory infection, pneumonia draws close attention worldwide because of its high transmissibility and relatively high mortality. Early detection and treatment greatly reduce the death rate. At present, X-ray examination is considered a relatively effective diagnostic method, but visual analysis of a patient's chest X-ray by an experienced physician takes roughly 5 to 15 minutes. When cases pile up, this puts enormous pressure on clinical diagnosis, and relying on radiologists' eyes alone is very inefficient. Applying artificial intelligence to the image-based clinical diagnosis of pneumonia is therefore necessary.

  • Given the growing number of pneumonia patients and the mounting diagnostic workload on doctors, we plan to use a set of already labeled CT images to train a model that assists diagnosis by judging whether an image is normal or shows disease, which can greatly reduce doctors' diagnostic burden.

  • Project task: binary classification. Given a CT image as input, the model outputs one of two labels: pneumonia or normal.

2. Dataset

  • This is a public dataset; every image is a chest CT scan of a child, and the dataset contains two classes: normal and abnormal.

  • The dataset contains 4975 CT images in total: 1344 normal images and 3631 images with pneumonia.

3. Model Selection

Swin Transformer is a vision Transformer proposed by Microsoft in the paper "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows"; it can serve as a general-purpose backbone for computer vision tasks.


The Transformer was originally proposed for natural language processing (NLP), and much recent work has carried it over to vision. Applying the NLP Transformer to computer vision runs into two main domain differences:

1) Visual entities vary greatly in scale, whereas the tokens in a Transformer mostly have a fixed size;

2) Compared with words in text, images contain far more pixels. Global self-attention makes the computation expensive: its cost grows quadratically with image size.

For classic vision tasks such as object detection and semantic segmentation, where object scales vary widely, models like ViT and DeiT do not reach state-of-the-art results.

To bridge these differences between NLP and vision, the authors propose a new Transformer model, the Swin Transformer. The paper's main innovations are:

1) a hierarchical construction, similar to CNNs, for building the Transformer;

2) a locality prior: self-attention is computed independently within non-overlapping windows.

Concretely, shifted windows are used to obtain a multi-scale feature representation. The shifted-window scheme restricts self-attention to non-overlapping local windows while still allowing cross-window connections, which improves efficiency. The resulting hierarchical architecture can model features at various scales and has computational complexity that is linear in image size. These properties make the Swin Transformer suitable for a wide range of vision tasks, from image classification to dense prediction such as object detection and semantic segmentation.
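
To make the complexity point concrete, the paper gives roughly 4hwC^2 + 2(hw)^2C operations for global self-attention over h x w tokens with C channels, and 4hwC^2 + 2M^2hwC for attention inside M x M windows. The short sketch below is illustrative only (not part of the original notebook) and plugs in the first-stage resolution of the tiny model used later:

# Rough per-layer attention cost at the first-stage resolution of swin_tiny
# (h = w = 56 tokens, C = 96 channels, window size M = 7). Illustrative sketch only.
def msa_flops(h, w, C):
    return 4 * h * w * C ** 2 + 2 * (h * w) ** 2 * C      # global self-attention

def wmsa_flops(h, w, C, M=7):
    return 4 * h * w * C ** 2 + 2 * M ** 2 * h * w * C    # window self-attention

print(msa_flops(56, 56, 96) / 1e9)    # ~2.0 GFLOPs, quadratic in the number of tokens
print(wmsa_flops(56, 56, 96) / 1e9)   # ~0.15 GFLOPs, linear in the number of tokens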

!unzip -o data/data122287/data.zip -d /home/aistudio/work  # unzip the dataset

4. Data Preprocessing

To make the dataset fit the model better, we apply the following operations to the images:

  • Random rotation of 0 to 10 degrees
  • Random horizontal flip
  • Random contrast jitter
  • Random brightness jitter
  • Resize to 240 x 240
  • Random 224 x 224 crop from the 240 x 240 image
  • Finally, normalization
# Generate the data list files for the train/val/test splits
label = {'pneumonia_':1,'normal_':0}

import os
from PIL import Image
import random
random.seed(2021)
dataset_path = '/home/aistudio/work/data'
trainf = open(os.path.join(dataset_path, 'train_list.txt'), 'w')
valf = open(os.path.join(dataset_path, 'val_list.txt'), 'w')
testf = open(os.path.join(dataset_path, 'test_list.txt'), 'w')

for key,value in label.items():
    img_dir = os.path.join(dataset_path, key)
    imgs_name = os.listdir(img_dir)
    random.shuffle(imgs_name)
    for idx, name in enumerate(imgs_name):
        img_path = os.path.join(img_dir, name)
        if idx % 10 == 0:     # ~10% of each class goes to the validation set
            valf.write((img_path + ' ' + str(value) + '\n'))
        elif idx % 9 == 0:    # roughly another 10% goes to the test set
            testf.write((img_path + ' ' + str(value) + '\n'))
        else:                 # the remaining ~80% is used for training
            trainf.write((img_path + ' ' + str(value) + '\n'))

trainf.close()
valf.close()
testf.close()
print('finished!')
finished!
# Preprocess the CT images and build the Dataset
from paddle.vision.transforms import Compose,Transpose, BrightnessTransform,Resize,Normalize,RandomHorizontalFlip,RandomRotation,ContrastTransform,RandomCrop
from paddle.io import DataLoader, Dataset
import cv2
import numpy as np
train_transform = Compose([RandomRotation(degrees=10),   # random rotation of 0-10 degrees
                    RandomHorizontalFlip(),               # random horizontal flip
                    ContrastTransform(0.1),               # random contrast jitter
                    BrightnessTransform(0.1),             # random brightness jitter
                    Resize(size=(240,240)),               # resize to 240x240
                    RandomCrop(size=(224,224)),           # random 224x224 crop from the 240x240 image
                    Normalize(mean=[127.5, 127.5, 127.5],std=[127.5, 127.5, 127.5],data_format='HWC'),   # normalize to roughly [-1, 1]
                    Transpose()])                         # HWC -> CHW

val_transform = Compose([
                    Resize(size=(224,224)),
                    Normalize(mean=[127.5, 127.5, 127.5],std=[127.5, 127.5, 127.5],data_format='HWC'),
                    Transpose()])

# Define the Dataset class
class XChestDateset(Dataset):
    def __init__(self, txt_path, transform=None,mode='train'):
        super(XChestDateset, self).__init__()
        self.mode = mode
        self.data_list = []
        self.transform = transform

        # the train / valid / test lists share the same "path label" format
        self.data_list = np.loadtxt(txt_path, dtype='str')

    def __getitem__(self, idx):
        img_path = self.data_list[idx][0]
        img = cv2.imread(img_path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        if self.transform:
            img = self.transform(img)
        return img, int(self.data_list[idx][1])

    def __len__(self):
        return self.data_list.shape[0]

train_txt = 'work/data/train_list.txt'
val_txt = 'work/data/val_list.txt'
BATCH_SIZE = 16
trn_dateset = XChestDateset(train_txt,train_transform, 'train')
train_loader = DataLoader(trn_dateset, shuffle=True, batch_size=BATCH_SIZE  )
val_dateset = XChestDateset(val_txt, val_transform,'valid')
valid_loader = DataLoader(val_dateset, shuffle=False, batch_size=BATCH_SIZE)
len(trn_dateset),len(val_dateset)
(3980, 499)
# Visualize a batch of CT images
import matplotlib.pyplot as plt 
def imshow(img):
    img = np.transpose(img, (1,2,0))
    img = img*127.5 + 127.5   # un-normalize to recover the original pixel values
    img = img.astype(np.int32)
    plt.imshow(img)

dataiter = iter(train_loader)
images, labels = next(dataiter)   # fetch one batch of images and labels
num = images.shape[0]
row = 4
fig = plt.figure(figsize=(14,14))
for idx in range(num):
    ax = fig.add_subplot(row,int(num/row), idx+1, xticks=[], yticks=[])
    imshow(images[idx])
    if labels[idx]:
        ax.set_title('pneumonia')
    else:
        ax.set_title('normal')

(Output figure: a 4x4 grid of sample images from the batch, each titled "pneumonia" or "normal".)

5. Building the Swin Transformer Model

import paddle
import paddle.nn.functional as F
import numpy as np
from paddle.vision.transforms import Compose, Resize, Transpose, Normalize
import paddle.nn as nn
paddle.device.set_device('gpu:0') # run on the GPU
CUDAPlace(0)
# Define a few helper functions
from itertools import repeat
def masked_fill(tensor, mask, value):
    # Note: unlike torch.Tensor.masked_fill, this helper keeps `tensor` where mask is True
    # and fills `value` where mask is False; the call sites below are written accordingly.
    cover = paddle.full_like(tensor, value)
    out = paddle.where(mask, tensor, cover)

    return out

def swapdim(x, num1, num2):
    # swap two axes of a tensor (the Paddle equivalent of torch.transpose(x, num1, num2))
    a = list(range(len(x.shape)))
    a[num1], a[num2] = a[num2], a[num1]

    return x.transpose(a)


def to_2tuple(x):
    # expand a scalar into a pair, e.g. 7 -> (7, 7)
    return tuple(repeat(x, 2))


def drop_path(x, drop_prob=0., training=False):
    # Stochastic depth: during training, drop whole samples of a residual branch with
    # probability drop_prob and rescale the survivors by 1/keep_prob.
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    random_tensor = paddle.to_tensor(keep_prob) + paddle.rand(shape)
    random_tensor = paddle.floor(random_tensor)   # binarize: 1 keeps the sample, 0 drops it
    output = x / keep_prob * random_tensor
    return output

class DropPath(nn.Layer):

    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)


class Identity(nn.Layer):                      

    def __init__(self, *args, **kwargs):
        super(Identity, self).__init__()
 
    def forward(self, input):
        return input
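
A quick sanity check of these helpers (a minimal sketch, not part of the original notebook; the train-mode values depend on the random state):

# DropPath is a no-op at inference; during training it zeroes whole samples with
# probability drop_prob and rescales the survivors by 1/keep_prob.
dp = DropPath(drop_prob=0.5)
x = paddle.ones([4, 8])

dp.eval()
print(paddle.allclose(dp(x), x).numpy())   # [True]: identity at inference time

dp.train()
print(dp(x)[:, 0].numpy())                 # each sample is either 0.0 (dropped) or 2.0 (kept and rescaled)

print(to_2tuple(7))                        # (7, 7): expands a scalar size to an (H, W) pair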

Defining PatchEmbed


class PatchEmbed(nn.Layer):
    """ Image to Patch Embedding

    Args:
        img_size (int): Image size.  Default: 224.
        patch_size (int): Patch token size. Default: 4.
        in_chans (int): Number of input image channels. Default: 3.
        embed_dim (int): Number of linear projection output channels. Default: 96.
        norm_layer (nn.Module, optional): Normalization layer. Default: None
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
        self.img_size = img_size
        self.patch_size = patch_size
        self.patches_resolution = patches_resolution
        self.num_patches = patches_resolution[0] * patches_resolution[1]

        self.in_chans = in_chans
        self.embed_dim = embed_dim

        self.proj = nn.Conv2D(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        if norm_layer is not None:
            self.norm = norm_layer(embed_dim)
        else:
            self.norm = None

    def forward(self, x):
        B, C, H, W = x.shape
        # FIXME look at relaxing size constraints
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
        x = swapdim(self.proj(x).flatten(2), 1, 2)  # B Ph*Pw C
        if self.norm is not None:
            x = self.norm(x)
        return x
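
As a quick shape check (an illustrative sketch, not part of the original notebook): a 224x224 RGB image is cut into 4x4 patches, giving 56 x 56 = 3136 tokens of dimension 96.

patch_embed = PatchEmbed(img_size=224, patch_size=4, in_chans=3, embed_dim=96)
dummy = paddle.randn([1, 3, 224, 224])
tokens = patch_embed(dummy)
print(tokens.shape)                    # [1, 3136, 96]: (224/4) * (224/4) = 3136 patch tokens
print(patch_embed.patches_resolution)  # [56, 56]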

Building the Mlp layer

class Mlp(nn.Layer):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x
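
A one-line shape check (illustrative sketch): the MLP expands the channel dimension by mlp_ratio, applies GELU, and projects back.

mlp = Mlp(in_features=96, hidden_features=384)   # 384 = 96 * 4, i.e. mlp_ratio = 4
print(mlp(paddle.randn([1, 3136, 96])).shape)    # [1, 3136, 96]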

Window Attention (W-MSA / SW-MSA)


class WindowAttention(nn.Layer):
    """ Window based multi-head self attention (W-MSA) module with relative position bias.
    It supports both of shifted and non-shifted window.

    Args:
        dim (int): Number of input channels.
        window_size (tuple[int]): The height and width of the window.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional):  If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
        proj_drop (float, optional): Dropout ratio of output. Default: 0.0
    """

    def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):

        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5

        # define a parameter table of relative position bias
        relative_position_bias_table = self.create_parameter(
            shape=((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads), default_initializer=nn.initializer.Constant(value=0))  # 2*Wh-1 * 2*Ww-1, nH
        self.add_parameter("relative_position_bias_table", relative_position_bias_table)

        # get pair-wise relative position index for each token inside the window
        coords_h = paddle.arange(self.window_size[0])
        coords_w = paddle.arange(self.window_size[1])
        coords = paddle.stack(paddle.meshgrid([coords_h, coords_w]))                   # 2, Wh, Ww
        coords_flatten = paddle.flatten(coords, 1)                                     # 2, Wh*Ww
        relative_coords = coords_flatten.unsqueeze(-1) - coords_flatten.unsqueeze(1)   # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.transpose([1, 2, 0])                         # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size[0] - 1                            # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        self.relative_position_index = relative_coords.sum(-1)                         # Wh*Ww, Wh*Ww
        self.register_buffer("relative_position_index", self.relative_position_index)

        self.qkv = nn.Linear(dim, dim * 3, bias_attr=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        self.softmax = nn.Softmax(axis=-1)

    def forward(self, x, mask=None):
        """
        Args:
            x: input features with shape of (num_windows*B, N, C)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        B_, N, C = x.shape
        qkv = self.qkv(x).reshape([B_, N, 3, self.num_heads, C // self.num_heads]).transpose([2, 0, 3, 1, 4])
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        q = q * self.scale
        attn = q @ swapdim(k ,-2, -1)

        relative_position_bias = paddle.index_select(self.relative_position_bias_table,
                                                     self.relative_position_index.reshape((-1,)),axis=0).reshape((self.window_size[0] * self.window_size[1],self.window_size[0] * self.window_size[1], -1))

        relative_position_bias = relative_position_bias.transpose([2, 0, 1])  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            nW = mask.shape[0]
            attn = attn.reshape([B_ // nW, nW, self.num_heads, N, N]) + mask.unsqueeze(1).unsqueeze(0)
            attn = attn.reshape([-1, self.num_heads, N, N])
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)

        x = swapdim((attn @ v),1, 2).reshape([B_, N, C])
        x = self.proj(x)
        x = self.proj_drop(x)
        return x
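
A minimal shape check (illustrative sketch, not part of the original notebook): attention is computed independently inside each 7x7 window, so the input and output keep the shape (num_windows*B, N, C).

win_attn = WindowAttention(dim=96, window_size=(7, 7), num_heads=3)
windows = paddle.randn([64, 49, 96])   # 64 windows of 7*7 = 49 tokens, 96 channels each
print(win_attn(windows).shape)         # [64, 49, 96]: same shape, attention restricted to each window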

W-MSA schematic


SW-MSA schematic


Building the SwinTransformerBlock


def window_partition(x, window_size):
    """
    Args:
        x: (B, H, W, C)
        window_size (int): window size

    Returns:
        windows: (num_windows*B, window_size, window_size, C)
    """
    B, H, W, C = x.shape
    x = x.reshape([B, H // window_size, window_size, W // window_size, window_size, C])
    windows = x.transpose([0, 1, 3, 2, 4, 5]).reshape([-1, window_size, window_size, C])
    return windows


def window_reverse(windows, window_size, H, W):
    """
    Args:
        windows: (num_windows*B, window_size, window_size, C)
        window_size (int): Window size
        H (int): Height of image
        W (int): Width of image

    Returns:
        x: (B, H, W, C)
    """
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.reshape([B, H // window_size, W // window_size, window_size, window_size, -1])
    x = x.transpose([0, 1, 3, 2, 4, 5]).reshape([B, H, W, -1])
    return x


class SwinTransformerBlock(nn.Layer):
    """ Swin Transformer Block.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resulotion.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm
    """

    def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
                 act_layer=nn.GELU, norm_layer=nn.LayerNorm):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        if min(self.input_resolution) <= self.window_size:
            # if window size is larger than input resolution, we don't partition windows
            self.shift_size = 0
            self.window_size = min(self.input_resolution)
        assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
            qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)

        self.drop_path = DropPath(drop_path) if drop_path > 0. else Identity() 


        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

        if self.shift_size > 0:
            # calculate attention mask for SW-MSA
            H, W = self.input_resolution
            img_mask = paddle.zeros((1, H, W, 1))  # 1 H W 1

            h_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            w_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            cnt = 0
            for h in h_slices:
                for w in w_slices:
                    img_mask[:, h, w, :] = cnt
                    cnt += 1

            mask_windows = window_partition(img_mask, self.window_size)  # nW, window_size, window_size, 1
            mask_windows = mask_windows.reshape([-1, self.window_size * self.window_size])
            attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)

            attn_mask = masked_fill(attn_mask, attn_mask == 0, float(-100.0))
            attn_mask = masked_fill(attn_mask, attn_mask != 0, float(0.0))

        else:
            attn_mask = None

        self.register_buffer("attn_mask", attn_mask)

    def forward(self, x):
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"

        shortcut = x
        x = self.norm1(x)
        x = x.reshape([B, H, W, C])

        # cyclic shift
        if self.shift_size > 0:
            shifted_x = paddle.roll(x, shifts=(-self.shift_size, -self.shift_size), axis=(1, 2))
        else:
            shifted_x = x

        # partition windows
        x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C
        x_windows = x_windows.reshape([-1, self.window_size * self.window_size, C])  # nW*B, window_size*window_size, C

        # W-MSA/SW-MSA
        attn_windows = self.attn(x_windows, mask=self.attn_mask)  # nW*B, window_size*window_size, C

        # merge windows
        attn_windows = attn_windows.reshape([-1, self.window_size, self.window_size, C])
        shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C

        # reverse cyclic shift
        if self.shift_size > 0:
            x = paddle.roll(shifted_x, shifts=(self.shift_size, self.shift_size), axis=(1, 2))
        else:
            x = shifted_x
        x = x.reshape([B, H * W, C])

        # FFN
        x = shortcut + self.drop_path(x)
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x
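
The two window helpers and the block can be sanity-checked with a small shape round trip (an illustrative sketch, not part of the original notebook; shift_size=3 exercises the SW-MSA branch, shift_size=0 would be plain W-MSA):

feat = paddle.randn([2, 56, 56, 96])          # B, H, W, C
wins = window_partition(feat, 7)              # [128, 7, 7, 96]: 2 * (56/7) * (56/7) = 128 windows
restored = window_reverse(wins, 7, 56, 56)    # back to [2, 56, 56, 96]
print(wins.shape, restored.shape)

blk = SwinTransformerBlock(dim=96, input_resolution=(56, 56), num_heads=3,
                           window_size=7, shift_size=3)
tokens = paddle.randn([2, 56 * 56, 96])       # B, H*W, C
print(blk(tokens).shape)                      # [2, 3136, 96]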

Patch Merging


class PatchMerging(nn.Layer):
    """ Patch Merging Layer.

    Args:
        input_resolution (tuple[int]): Resolution of input feature.
        dim (int): Number of input channels.
        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm
    """

    def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
        super().__init__()
        self.input_resolution = input_resolution
        self.dim = dim
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias_attr=False)
        self.norm = norm_layer(4 * dim)

    def forward(self, x):
        """
        x: B, H*W, C
        """
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"
        assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even."

        x = x.reshape([B, H, W, C])

        x0 = x[:, 0::2, 0::2, :]  # B H/2 W/2 C
        x1 = x[:, 1::2, 0::2, :]  # B H/2 W/2 C
        x2 = x[:, 0::2, 1::2, :]  # B H/2 W/2 C
        x3 = x[:, 1::2, 1::2, :]  # B H/2 W/2 C
        x = paddle.concat([x0, x1, x2, x3], -1)  # B H/2 W/2 4*C
        x = x.reshape([B, -1, 4 * C])  # B H/2*W/2 4*C
        x = self.norm(x)
        x = self.reduction(x)
        return x
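
A quick shape check (illustrative sketch): patch merging halves the spatial resolution and doubles the channel dimension, which is what builds the feature hierarchy.

merge = PatchMerging(input_resolution=(56, 56), dim=96)
tokens = paddle.randn([1, 56 * 56, 96])
print(merge(tokens).shape)   # [1, 784, 192]: 28*28 tokens with 2*96 channels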

Combining SwinTransformerBlock and PatchMerging into one stage: BasicLayer

class BasicLayer(nn.Layer):
    """ A basic Swin Transformer layer for one stage.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        depth (int): Number of blocks.
        num_heads (int): Number of attention heads.
        window_size (int): Local window size.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
    """

    def __init__(self, dim, input_resolution, depth, num_heads, window_size,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
                 drop_path=0., norm_layer=nn.LayerNorm, downsample=None):

        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.depth = depth
        

        # build blocks
        self.blocks = nn.LayerList([
            SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
                                 num_heads=num_heads, window_size=window_size,
                                 shift_size=0 if (i % 2 == 0) else window_size // 2,
                                 mlp_ratio=mlp_ratio,
                                 qkv_bias=qkv_bias, qk_scale=qk_scale,
                                 drop=drop, attn_drop=attn_drop,
                                 drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
                                 norm_layer=norm_layer) 
                                 for i in range(depth)])

        # patch merging layer
        if downsample is not None:
            self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
        else:
            self.downsample = None

    def forward(self, x):
        for blk in self.blocks:
            x = blk(x)
        if self.downsample is not None:
            x = self.downsample(x)
        return x
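
Inside a stage the blocks alternate between W-MSA and SW-MSA through shift_size, and the optional PatchMerging downsamples at the end; a minimal check (illustrative sketch, not part of the original notebook):

stage = BasicLayer(dim=96, input_resolution=(56, 56), depth=2, num_heads=3,
                   window_size=7, downsample=PatchMerging)
print([blk.shift_size for blk in stage.blocks])   # [0, 3]: a W-MSA block followed by an SW-MSA block
x = paddle.randn([1, 56 * 56, 96])
print(stage(x).shape)                             # [1, 784, 192] after the PatchMerging downsample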

Assembling the full SwinTransformer model

# Build the full SwinTransformer
class SwinTransformer(nn.Layer):
    """ Swin Transformer
        A PaddlePaddle implementation of: `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows`  -
          https://arxiv.org/pdf/2103.14030

    Args:
        img_size (int | tuple(int)): Input image size. Default 224
        patch_size (int | tuple(int)): Patch size. Default: 4
        in_chans (int): Number of input image channels. Default: 3
        num_classes (int): Number of classes for classification head. Default: 1000
        embed_dim (int): Patch embedding dimension. Default: 96
        depths (tuple(int)): Depth of each Swin Transformer layer.
        num_heads (tuple(int)): Number of attention heads in different layers.
        window_size (int): Window size. Default: 7
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
        drop_rate (float): Dropout rate. Default: 0
        attn_drop_rate (float): Attention dropout rate. Default: 0
        drop_path_rate (float): Stochastic depth rate. Default: 0.1
        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
        patch_norm (bool): If True, add normalization after patch embedding. Default: True
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, num_classes=1000,
                 embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24],
                 window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
                 drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
                 norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
                 **kwargs):
        super().__init__()

        self.num_classes = num_classes
        self.num_layers = len(depths)
        self.embed_dim = embed_dim
        self.ape = ape
        self.patch_norm = patch_norm
        self.num_features = int(embed_dim * 2 ** (self.num_layers - 1))
        self.mlp_ratio = mlp_ratio

        # split image into non-overlapping patches
        self.patch_embed = PatchEmbed(
            img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
            norm_layer=norm_layer if self.patch_norm else None)
        num_patches = self.patch_embed.num_patches
        patches_resolution = self.patch_embed.patches_resolution
        self.patches_resolution = patches_resolution

        # absolute position embedding
        if self.ape:
            self.absolute_pos_embed = self.create_parameter(shape=(1, num_patches, embed_dim),default_initializer=nn.initializer.Constant(value=0))

            self.add_parameter("absolute_pos_embed", self.absolute_pos_embed)

        self.pos_drop = nn.Dropout(p=drop_rate)

        # stochastic depth
        dpr = [x for x in paddle.linspace(0, drop_path_rate, sum(depths))]  # stochastic depth decay rule

        # build layers
        self.layers = nn.LayerList()
        for i_layer in range(self.num_layers):
            layer = BasicLayer(dim=int(embed_dim * 2 ** i_layer),
                               input_resolution=(patches_resolution[0] // (2 ** i_layer),
                                                 patches_resolution[1] // (2 ** i_layer)),
                               depth=depths[i_layer],
                               num_heads=num_heads[i_layer],
                               window_size=window_size,
                               mlp_ratio=self.mlp_ratio,
                               qkv_bias=qkv_bias, qk_scale=qk_scale,
                               drop=drop_rate, attn_drop=attn_drop_rate,
                               drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
                               norm_layer=norm_layer,
                               downsample=PatchMerging if (i_layer < self.num_layers - 1) else None
                               )
            self.layers.append(layer)

        self.norm = norm_layer(self.num_features)
        self.avgpool = nn.AdaptiveAvgPool1D(1)
        self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else Identity()



    def forward_features(self, x):
        x = self.patch_embed(x)
        if self.ape:
            x = x + self.absolute_pos_embed
        x = self.pos_drop(x)

        for layer in self.layers:
            x = layer(x)

        x = self.norm(x)  # B L C
        x = self.avgpool(swapdim(x,1, 2))  # B C 1
        x = paddle.flatten(x, 1)
        return x

    def forward(self, x):
        x = self.forward_features(x)
        x = self.head(x)
        return x

6. Instantiating the Model and Checking the Parameter Count

def swin_tiny(**kwargs):
    model = SwinTransformer(img_size = 224,
                            embed_dim = 96,
                            depths = [ 2, 2, 6, 2 ],
                            num_heads = [ 3, 6, 12, 24 ],
                            window_size = 7,
                            drop_path_rate=0.2,
                            **kwargs)
    return model
model = swin_tiny(num_classes = 2)
model = paddle.Model(model)
model.summary((1,3,224,224))
-----------------------------------------------------------------------------------
     Layer (type)           Input Shape          Output Shape         Param #    
===================================================================================
       Conv2D-1          [[1, 3, 224, 224]]    [1, 96, 56, 56]         4,704     
      LayerNorm-1         [[1, 3136, 96]]       [1, 3136, 96]           192      
     PatchEmbed-1        [[1, 3, 224, 224]]     [1, 3136, 96]            0       
       Dropout-1          [[1, 3136, 96]]       [1, 3136, 96]            0       
      LayerNorm-2         [[1, 3136, 96]]       [1, 3136, 96]           192      
       Linear-1            [[64, 49, 96]]       [64, 49, 288]         27,936     
       Softmax-1         [[64, 3, 49, 49]]     [64, 3, 49, 49]           0       
       Dropout-2         [[64, 3, 49, 49]]     [64, 3, 49, 49]           0       
       Linear-2            [[64, 49, 96]]        [64, 49, 96]          9,312     
       Dropout-3           [[64, 49, 96]]        [64, 49, 96]            0       
   WindowAttention-1       [[64, 49, 96]]        [64, 49, 96]           507      
      Identity-1          [[1, 3136, 96]]       [1, 3136, 96]            0       
      LayerNorm-3         [[1, 3136, 96]]       [1, 3136, 96]           192      
       Linear-3           [[1, 3136, 96]]       [1, 3136, 384]        37,248     
        GELU-1            [[1, 3136, 384]]      [1, 3136, 384]           0       
       Dropout-4          [[1, 3136, 96]]       [1, 3136, 96]            0       
       Linear-4           [[1, 3136, 384]]      [1, 3136, 96]         36,960     
         Mlp-1            [[1, 3136, 96]]       [1, 3136, 96]            0       
SwinTransformerBlock-1    [[1, 3136, 96]]       [1, 3136, 96]            0       
      LayerNorm-4         [[1, 3136, 96]]       [1, 3136, 96]           192      
       Linear-5            [[64, 49, 96]]       [64, 49, 288]         27,936     
       Softmax-2         [[64, 3, 49, 49]]     [64, 3, 49, 49]           0       
       Dropout-5         [[64, 3, 49, 49]]     [64, 3, 49, 49]           0       
       Linear-6            [[64, 49, 96]]        [64, 49, 96]          9,312     
       Dropout-6           [[64, 49, 96]]        [64, 49, 96]            0       
   WindowAttention-2       [[64, 49, 96]]        [64, 49, 96]           507      
      DropPath-1          [[1, 3136, 96]]       [1, 3136, 96]            0       
      LayerNorm-5         [[1, 3136, 96]]       [1, 3136, 96]           192      
       Linear-7           [[1, 3136, 96]]       [1, 3136, 384]        37,248     
        GELU-2            [[1, 3136, 384]]      [1, 3136, 384]           0       
       Dropout-7          [[1, 3136, 96]]       [1, 3136, 96]            0       
       Linear-8           [[1, 3136, 384]]      [1, 3136, 96]         36,960     
         Mlp-2            [[1, 3136, 96]]       [1, 3136, 96]            0       
SwinTransformerBlock-2    [[1, 3136, 96]]       [1, 3136, 96]            0       
      LayerNorm-6         [[1, 784, 384]]       [1, 784, 384]           768      
       Linear-9           [[1, 784, 384]]       [1, 784, 192]         73,728     
    PatchMerging-1        [[1, 3136, 96]]       [1, 784, 192]            0       
     BasicLayer-1         [[1, 3136, 96]]       [1, 784, 192]            0       
      LayerNorm-7         [[1, 784, 192]]       [1, 784, 192]           384      
       Linear-10          [[16, 49, 192]]       [16, 49, 576]         111,168    
       Softmax-3         [[16, 6, 49, 49]]     [16, 6, 49, 49]           0       
       Dropout-8         [[16, 6, 49, 49]]     [16, 6, 49, 49]           0       
       Linear-11          [[16, 49, 192]]       [16, 49, 192]         37,056     
       Dropout-9          [[16, 49, 192]]       [16, 49, 192]            0       
   WindowAttention-3      [[16, 49, 192]]       [16, 49, 192]          1,014     
      DropPath-2          [[1, 784, 192]]       [1, 784, 192]            0       
      LayerNorm-8         [[1, 784, 192]]       [1, 784, 192]           384      
       Linear-12          [[1, 784, 192]]       [1, 784, 768]         148,224    
        GELU-3            [[1, 784, 768]]       [1, 784, 768]            0       
      Dropout-10          [[1, 784, 192]]       [1, 784, 192]            0       
       Linear-13          [[1, 784, 768]]       [1, 784, 192]         147,648    
         Mlp-3            [[1, 784, 192]]       [1, 784, 192]            0       
SwinTransformerBlock-3    [[1, 784, 192]]       [1, 784, 192]            0       
      LayerNorm-9         [[1, 784, 192]]       [1, 784, 192]           384      
       Linear-14          [[16, 49, 192]]       [16, 49, 576]         111,168    
       Softmax-4         [[16, 6, 49, 49]]     [16, 6, 49, 49]           0       
      Dropout-11         [[16, 6, 49, 49]]     [16, 6, 49, 49]           0       
       Linear-15          [[16, 49, 192]]       [16, 49, 192]         37,056     
      Dropout-12          [[16, 49, 192]]       [16, 49, 192]            0       
   WindowAttention-4      [[16, 49, 192]]       [16, 49, 192]          1,014     
      DropPath-3          [[1, 784, 192]]       [1, 784, 192]            0       
     LayerNorm-10         [[1, 784, 192]]       [1, 784, 192]           384      
       Linear-16          [[1, 784, 192]]       [1, 784, 768]         148,224    
        GELU-4            [[1, 784, 768]]       [1, 784, 768]            0       
      Dropout-13          [[1, 784, 192]]       [1, 784, 192]            0       
       Linear-17          [[1, 784, 768]]       [1, 784, 192]         147,648    
         Mlp-4            [[1, 784, 192]]       [1, 784, 192]            0       
SwinTransformerBlock-4    [[1, 784, 192]]       [1, 784, 192]            0       
     LayerNorm-11         [[1, 196, 768]]       [1, 196, 768]          1,536     
       Linear-18          [[1, 196, 768]]       [1, 196, 384]         294,912    
    PatchMerging-2        [[1, 784, 192]]       [1, 196, 384]            0       
     BasicLayer-2         [[1, 784, 192]]       [1, 196, 384]            0       
     LayerNorm-12         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-19           [[4, 49, 384]]       [4, 49, 1152]         443,520    
       Softmax-5         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
      Dropout-14         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
       Linear-20           [[4, 49, 384]]        [4, 49, 384]         147,840    
      Dropout-15           [[4, 49, 384]]        [4, 49, 384]            0       
   WindowAttention-5       [[4, 49, 384]]        [4, 49, 384]          2,028     
      DropPath-4          [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-13         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-21          [[1, 196, 384]]       [1, 196, 1536]        591,360    
        GELU-5            [[1, 196, 1536]]      [1, 196, 1536]           0       
      Dropout-16          [[1, 196, 384]]       [1, 196, 384]            0       
       Linear-22          [[1, 196, 1536]]      [1, 196, 384]         590,208    
         Mlp-5            [[1, 196, 384]]       [1, 196, 384]            0       
SwinTransformerBlock-5    [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-14         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-23           [[4, 49, 384]]       [4, 49, 1152]         443,520    
       Softmax-6         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
      Dropout-17         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
       Linear-24           [[4, 49, 384]]        [4, 49, 384]         147,840    
      Dropout-18           [[4, 49, 384]]        [4, 49, 384]            0       
   WindowAttention-6       [[4, 49, 384]]        [4, 49, 384]          2,028     
      DropPath-5          [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-15         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-25          [[1, 196, 384]]       [1, 196, 1536]        591,360    
        GELU-6            [[1, 196, 1536]]      [1, 196, 1536]           0       
      Dropout-19          [[1, 196, 384]]       [1, 196, 384]            0       
       Linear-26          [[1, 196, 1536]]      [1, 196, 384]         590,208    
         Mlp-6            [[1, 196, 384]]       [1, 196, 384]            0       
SwinTransformerBlock-6    [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-16         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-27           [[4, 49, 384]]       [4, 49, 1152]         443,520    
       Softmax-7         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
      Dropout-20         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
       Linear-28           [[4, 49, 384]]        [4, 49, 384]         147,840    
      Dropout-21           [[4, 49, 384]]        [4, 49, 384]            0       
   WindowAttention-7       [[4, 49, 384]]        [4, 49, 384]          2,028     
      DropPath-6          [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-17         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-29          [[1, 196, 384]]       [1, 196, 1536]        591,360    
        GELU-7            [[1, 196, 1536]]      [1, 196, 1536]           0       
      Dropout-22          [[1, 196, 384]]       [1, 196, 384]            0       
       Linear-30          [[1, 196, 1536]]      [1, 196, 384]         590,208    
         Mlp-7            [[1, 196, 384]]       [1, 196, 384]            0       
SwinTransformerBlock-7    [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-18         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-31           [[4, 49, 384]]       [4, 49, 1152]         443,520    
       Softmax-8         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
      Dropout-23         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
       Linear-32           [[4, 49, 384]]        [4, 49, 384]         147,840    
      Dropout-24           [[4, 49, 384]]        [4, 49, 384]            0       
   WindowAttention-8       [[4, 49, 384]]        [4, 49, 384]          2,028     
      DropPath-7          [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-19         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-33          [[1, 196, 384]]       [1, 196, 1536]        591,360    
        GELU-8            [[1, 196, 1536]]      [1, 196, 1536]           0       
      Dropout-25          [[1, 196, 384]]       [1, 196, 384]            0       
       Linear-34          [[1, 196, 1536]]      [1, 196, 384]         590,208    
         Mlp-8            [[1, 196, 384]]       [1, 196, 384]            0       
SwinTransformerBlock-8    [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-20         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-35           [[4, 49, 384]]       [4, 49, 1152]         443,520    
       Softmax-9         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
      Dropout-26         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
       Linear-36           [[4, 49, 384]]        [4, 49, 384]         147,840    
      Dropout-27           [[4, 49, 384]]        [4, 49, 384]            0       
   WindowAttention-9       [[4, 49, 384]]        [4, 49, 384]          2,028     
      DropPath-8          [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-21         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-37          [[1, 196, 384]]       [1, 196, 1536]        591,360    
        GELU-9            [[1, 196, 1536]]      [1, 196, 1536]           0       
      Dropout-28          [[1, 196, 384]]       [1, 196, 384]            0       
       Linear-38          [[1, 196, 1536]]      [1, 196, 384]         590,208    
         Mlp-9            [[1, 196, 384]]       [1, 196, 384]            0       
SwinTransformerBlock-9    [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-22         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-39           [[4, 49, 384]]       [4, 49, 1152]         443,520    
      Softmax-10         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
      Dropout-29         [[4, 12, 49, 49]]     [4, 12, 49, 49]           0       
       Linear-40           [[4, 49, 384]]        [4, 49, 384]         147,840    
      Dropout-30           [[4, 49, 384]]        [4, 49, 384]            0       
  WindowAttention-10       [[4, 49, 384]]        [4, 49, 384]          2,028     
      DropPath-9          [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-23         [[1, 196, 384]]       [1, 196, 384]           768      
       Linear-41          [[1, 196, 384]]       [1, 196, 1536]        591,360    
        GELU-10           [[1, 196, 1536]]      [1, 196, 1536]           0       
      Dropout-31          [[1, 196, 384]]       [1, 196, 384]            0       
       Linear-42          [[1, 196, 1536]]      [1, 196, 384]         590,208    
        Mlp-10            [[1, 196, 384]]       [1, 196, 384]            0       
SwinTransformerBlock-10   [[1, 196, 384]]       [1, 196, 384]            0       
     LayerNorm-24         [[1, 49, 1536]]       [1, 49, 1536]          3,072     
       Linear-43          [[1, 49, 1536]]        [1, 49, 768]        1,179,648   
    PatchMerging-3        [[1, 196, 384]]        [1, 49, 768]            0       
     BasicLayer-3         [[1, 196, 384]]        [1, 49, 768]            0       
     LayerNorm-25          [[1, 49, 768]]        [1, 49, 768]          1,536     
       Linear-44           [[1, 49, 768]]       [1, 49, 2304]        1,771,776   
      Softmax-11         [[1, 24, 49, 49]]     [1, 24, 49, 49]           0       
      Dropout-32         [[1, 24, 49, 49]]     [1, 24, 49, 49]           0       
       Linear-45           [[1, 49, 768]]        [1, 49, 768]         590,592    
      Dropout-33           [[1, 49, 768]]        [1, 49, 768]            0       
  WindowAttention-11       [[1, 49, 768]]        [1, 49, 768]          4,056     
      DropPath-10          [[1, 49, 768]]        [1, 49, 768]            0       
     LayerNorm-26          [[1, 49, 768]]        [1, 49, 768]          1,536     
       Linear-46           [[1, 49, 768]]       [1, 49, 3072]        2,362,368   
        GELU-11           [[1, 49, 3072]]       [1, 49, 3072]            0       
      Dropout-34           [[1, 49, 768]]        [1, 49, 768]            0       
       Linear-47          [[1, 49, 3072]]        [1, 49, 768]        2,360,064   
        Mlp-11             [[1, 49, 768]]        [1, 49, 768]            0       
SwinTransformerBlock-11    [[1, 49, 768]]        [1, 49, 768]            0       
     LayerNorm-27          [[1, 49, 768]]        [1, 49, 768]          1,536     
       Linear-48           [[1, 49, 768]]       [1, 49, 2304]        1,771,776   
      Softmax-12         [[1, 24, 49, 49]]     [1, 24, 49, 49]           0       
      Dropout-35         [[1, 24, 49, 49]]     [1, 24, 49, 49]           0       
       Linear-49           [[1, 49, 768]]        [1, 49, 768]         590,592    
      Dropout-36           [[1, 49, 768]]        [1, 49, 768]            0       
  WindowAttention-12       [[1, 49, 768]]        [1, 49, 768]          4,056     
      DropPath-11          [[1, 49, 768]]        [1, 49, 768]            0       
     LayerNorm-28          [[1, 49, 768]]        [1, 49, 768]          1,536     
       Linear-50           [[1, 49, 768]]       [1, 49, 3072]        2,362,368   
        GELU-12           [[1, 49, 3072]]       [1, 49, 3072]            0       
      Dropout-37           [[1, 49, 768]]        [1, 49, 768]            0       
       Linear-51          [[1, 49, 3072]]        [1, 49, 768]        2,360,064   
        Mlp-12             [[1, 49, 768]]        [1, 49, 768]            0       
SwinTransformerBlock-12    [[1, 49, 768]]        [1, 49, 768]            0       
     BasicLayer-4          [[1, 49, 768]]        [1, 49, 768]            0       
     LayerNorm-29          [[1, 49, 768]]        [1, 49, 768]          1,536     
  AdaptiveAvgPool1D-1      [[1, 768, 49]]        [1, 768, 1]             0       
       Linear-52             [[1, 768]]             [1, 2]             1,538     
===================================================================================
Total params: 27,520,892
Trainable params: 27,520,892
Non-trainable params: 0
-----------------------------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 282.34
Params size (MB): 104.98
Estimated Total Size (MB): 387.90
-----------------------------------------------------------------------------------






{'total_params': 27520892, 'trainable_params': 27520892}

7. Model Training

from paddle.regularizer import L2Decay
from paddle.nn import CrossEntropyLoss
from paddle.metric import Accuracy

BATCH_SIZE = 12 
EPOCHS = 10  # number of training epochs
decay_steps = int(len(trn_dateset)/BATCH_SIZE * EPOCHS)

train_loader = DataLoader(trn_dateset, shuffle=True, batch_size=BATCH_SIZE)
valid_loader = DataLoader(val_dateset, shuffle=False, batch_size=BATCH_SIZE)

model = paddle.Model(swin_tiny(num_classes = 2))
base_lr = 0.0125
lr = paddle.optimizer.lr.PolynomialDecay(base_lr, power=0.9, decay_steps=decay_steps, end_lr=0.0)
# Define the optimizer
optimizer = paddle.optimizer.Momentum(learning_rate=lr,
                     momentum=0.9,
                     weight_decay=L2Decay(1e-4),
                     parameters=model.parameters())
model.prepare(optimizer, CrossEntropyLoss(), Accuracy(topk=(1, 5)))  # note: top-5 accuracy is trivially 1.0 with only 2 classes
# Start training
model.fit(train_loader,
          valid_loader,
          epochs=EPOCHS,
          batch_size=BATCH_SIZE,
          eval_freq=5,    # run validation every 5 epochs
          save_freq=5,    # save a checkpoint every 5 epochs
          log_freq=100,   # print a log line every 100 steps
          save_dir='/home/aistudio/checkpoint')
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/10


step 100/332 - loss: 1.0937 - acc_top1: 0.6333 - acc_top5: 1.0000 - 160ms/step
step 200/332 - loss: 0.6372 - acc_top1: 0.6550 - acc_top5: 1.0000 - 161ms/step
step 300/332 - loss: 0.4926 - acc_top1: 0.6475 - acc_top5: 1.0000 - 158ms/step
step 332/332 - loss: 0.9784 - acc_top1: 0.6447 - acc_top5: 1.0000 - 157ms/step
save checkpoint at /home/aistudio/checkpoint/0
Eval begin...
step 42/42 - loss: 1.6944 - acc_top1: 0.6072 - acc_top5: 1.0000 - 65ms/step
Eval samples: 499
Epoch 2/10
step 100/332 - loss: 0.3584 - acc_top1: 0.7025 - acc_top5: 1.0000 - 162ms/step
step 200/332 - loss: 0.3872 - acc_top1: 0.7096 - acc_top5: 1.0000 - 154ms/step
step 300/332 - loss: 0.3513 - acc_top1: 0.7067 - acc_top5: 1.0000 - 157ms/step
step 332/332 - loss: 1.0030 - acc_top1: 0.7098 - acc_top5: 1.0000 - 156ms/step
Epoch 3/10

8. Evaluation on the Validation Set

model.evaluate(valid_loader, log_freq=30, verbose=2)
Eval begin...
step 30/42 - loss: 0.1236 - acc_top1: 0.9278 - acc_top5: 1.0000 - 56ms/step
step 42/42 - loss: 0.8242 - acc_top1: 0.8537 - acc_top5: 1.0000 - 53ms/step
Eval samples: 499





{'loss': [0.8241846], 'acc_top1': 0.8537074148296593, 'acc_top5': 1.0}
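
For completeness, a saved checkpoint can be reloaded to classify a single image. Below is a minimal inference sketch (not part of the original notebook): the checkpoint prefix and the image path are placeholders, so point them at whichever files model.fit actually saved under /home/aistudio/checkpoint (e.g. 0.pdparams or final.pdparams).

# Minimal single-image inference sketch; the paths below are hypothetical placeholders.
network = swin_tiny(num_classes=2)
state = paddle.load('/home/aistudio/checkpoint/final.pdparams')     # placeholder checkpoint file
network.set_state_dict(state)
network.eval()

img = cv2.imread('/home/aistudio/work/data/normal_/example.jpeg')   # placeholder sample image
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
x = paddle.to_tensor(val_transform(img).astype('float32')).unsqueeze(0)  # same preprocessing as validation
logits = network(x)
pred = int(paddle.argmax(logits, axis=1).numpy()[0])
print('pneumonia' if pred == 1 else 'normal')   # label 1 = pneumonia_, label 0 = normal_ (see the label dict above)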

9. Project Summary

In this binary classification project on pneumonia CT images, we trained a Swin Transformer on 4975 CT images and reached 85.4% accuracy on the validation set. Although there is still some gap compared with a ResNet50 baseline, the model can be improved further, for example by enlarging the training set.
