Reposted from AI Studio
Project link: https://aistudio.baidu.com/aistudio/projectdetail/3202434

Competition Introduction

Background

Image classification is the simplest and most fundamental task in computer vision. Studying it is a rite of passage for every computer vision researcher, and image classification networks are the foundation of algorithms for many more complex tasks such as object detection and semantic segmentation. This practice competition aims to let participants use an image classification task to practice by competing and to become familiar with deep learning frameworks and the competition workflow.

In image classification education, the MNIST dataset is often used as an introductory teaching dataset. However, MNIST has some problems. First, it is too easy for modern convolutional neural networks: SOTA models reach 99.84% classification accuracy, and even traditional machine learning methods reach 97%, so accuracy has saturated and there is almost no room for improvement. Second, some experts have questioned the dataset itself. For example, François Chollet, deep learning expert at Google and author of Keras, once said: "MNIST has many problems, but most importantly, it is really not representative of computer vision tasks." He added: "Many good ideas (such as batch norm) work poorly on MNIST, while, conversely, some bad methods may work well on MNIST yet fail to transfer to real computer vision tasks."

Data Description

This practice competition uses Fashion-MNIST (GitHub link: https://github.com/zalandoresearch/fashion-mnist), a dataset of the same scale as MNIST but more difficult. Fashion-MNIST consists of 60,000 training images, 10,000 test images, and their corresponding labels. Each image is a 28x28 grayscale image belonging to one of 10 classes: T-shirt, trousers, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot.
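In the CSV files each class is stored as an integer label from 0 to 9. A small lookup table is convenient when inspecting images and predictions later; the English names below follow the labeling order of the Fashion-MNIST repository:

CLASS_NAMES = [
    'T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
    'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'
]
# e.g. CLASS_NAMES[9] -> 'Ankle boot'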

Participants may use open-source deep learning frameworks such as TensorFlow, Keras, PyTorch, or PaddlePaddle to build, train, and run inference with their models.

import pandas as pd
import numpy as np

# Read the data
train_df = pd.read_csv('fashion-mnist_train.csv')
test_df = pd.read_csv('fashion-mnist_test_data.csv')
train_df.head()
   label  pixel1  pixel2  pixel3  pixel4  pixel5  pixel6  pixel7  pixel8  pixel9  ...  pixel775  pixel776  pixel777  pixel778  pixel779  pixel780  pixel781  pixel782  pixel783  pixel784
0      2       0       0       0       0       0       0       0       0       0  ...         0         0         0         0         0         0         0         0         0         0
1      9       0       0       0       0       0       0       0       0       0  ...         0         0         0         0         0         0         0         0         0         0
2      6       0       0       0       0       0       0       0       5       0  ...         0         0         0        30        43         0         0         0         0         0
3      0       0       0       0       1       2       0       0       0       0  ...         3         0         0         0         0         1         0         0         0         0
4      3       0       0       0       0       0       0       0       0       0  ...         0         0         0         0         0         0         0         0         0         0

5 rows × 785 columns
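Each row stores one flattened image: a label followed by 784 pixel intensities in the range 0-255. The loading code below also skips the first column of test_df, so that column is assumed to hold a non-pixel value (e.g. an id) rather than a label:

print(train_df.shape)  # (60000, 785): 1 label column + 28*28 pixel columns
print(test_df.shape)   # (10000, 785): first column is skipped when loading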

数据可视化

# %pylab imports numpy and matplotlib.pyplot (as plt) into the namespace
%pylab inline

# Plot a 10x10 grid of sample images
plt.figure(figsize=(10, 10))
for idx in range(100):
    # Reshape the 784 pixel values back to 28 x 28 (column 0 is the label)
    xy = train_df.iloc[idx].values[1:].reshape(28, 28)
    plt.subplot(10, 10, idx + 1)
    # Show the image and hide the axis ticks
    plt.imshow(xy, cmap='gray')
    plt.xticks([]); plt.yticks([])
Populating the interactive namespace from numpy and matplotlib

[Figure: 10x10 grid of Fashion-MNIST training images (output_4_1.png)]

import paddle
paddle.__version__
'2.2.0'

Data Loading

from paddle.io import DataLoader, Dataset
from PIL import Image

# Custom Dataset that serves images and labels held in memory
class MyDataset(Dataset):
    def __init__(self, img, label):
        super(MyDataset, self).__init__()
        self.img = img
        self.label = label
    
    def __getitem__(self, index):
        img = self.img[index]
        # Normalize pixel values to [0, 1]
        return img/255, int(self.label[index])

    def __len__(self):
        return len(self.label)

# Use the first 59,000 samples for training
train_dataset = MyDataset(
    train_df.iloc[:-1000, 1:].values.reshape(59000, 28, 28).astype(np.float32), 
    paddle.to_tensor(train_df.label.iloc[:-1000].values.astype(np.float32))
)
train_loader = DataLoader(train_dataset, batch_size=300, shuffle=True)

# Hold out the last 1,000 samples as the validation set
val_dataset = MyDataset(
    train_df.iloc[-1000:, 1:].values.reshape(1000, 28, 28).astype(np.float32), 
    paddle.to_tensor(train_df.label.iloc[-1000:].values.astype(np.float32))
)
val_loader = DataLoader(val_dataset, batch_size=300, shuffle=False)

# Final test set (true labels unknown; zeros used as placeholders)
test_dataset = MyDataset(
    test_df.iloc[:, 1:].values.reshape(10000, 28, 28).astype(np.float32),
    paddle.to_tensor(np.zeros((test_df.shape[0])))
)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
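A quick sanity check (a minimal sketch, not part of the original notebook) confirms what each loader yields: normalized float32 images, and the per-sample Python int labels collated into an integer tensor:

x, y = next(iter(train_loader))
print(x.shape, x.dtype)  # [300, 28, 28], paddle.float32
print(y.shape, y.dtype)  # [300], paddle.int64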

Fully Connected Model

# Build the fully connected model; the 2D images must first be flattened to vectors
model = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(28*28,128),
    paddle.nn.LeakyReLU(),
    paddle.nn.Linear(128, 10)
)

paddle.summary(model, (64, 28, 28))
W1207 20:53:04.353837   104 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W1207 20:53:04.359086   104 device_context.cc:465] device: 0, cuDNN Version: 7.6.


---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #    
===========================================================================
   Flatten-1       [[64, 28, 28]]         [64, 784]              0       
   Linear-1         [[64, 784]]           [64, 128]           100,480    
  LeakyReLU-1       [[64, 128]]           [64, 128]              0       
   Linear-2         [[64, 128]]            [64, 10]            1,290     
===========================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.19
Forward/backward pass size (MB): 0.51
Params size (MB): 0.39
Estimated Total Size (MB): 1.09
---------------------------------------------------------------------------

{'total_params': 101770, 'trainable_params': 101770}
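The parameter counts in the summary can be verified by hand: a Linear layer has in_features x out_features weights plus out_features biases.

784 * 128 + 128  # Linear-1: 100,480 parameters
128 * 10 + 10    # Linear-2: 1,290 parameters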
# Define the optimizer and loss function
optimizer = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=0.0001)
criterion = paddle.nn.CrossEntropyLoss()

# Iterate over epochs:
#     for each batch -> forward pass -> compute loss -> update parameters
for epoch in range(0, 5):
    Train_Loss, Val_Loss = [], []
    Train_ACC, Val_ACC = [], []

    # Training phase
    model.train()
    for i, (x, y) in enumerate(train_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Train_Loss.append(loss.item())
        loss.backward()
        optimizer.step()
        optimizer.clear_grad()
        Train_ACC.append((pred.numpy().argmax(1) == y.numpy()).mean())
        
        if i % 100 == 0:
            print(f'{i}/{len(train_loader)}\t Loss {np.mean(Train_Loss):3.5f} {np.mean(Train_ACC):3.5f}')

    # Validation phase
    model.eval()
    for i, (x, y) in enumerate(val_loader):
        pred = model(x)
        loss = criterion(pred, y)
        Val_Loss.append(loss.item())
        Val_ACC.append((pred.numpy().argmax(1) == y.numpy()).mean())
    
    if epoch % 1 == 0:
        print(f'\nEpoch: {epoch}')
        print(f'Loss {np.mean(Train_Loss):3.5f}/{np.mean(Val_Loss):3.5f}')
        print(f'ACC {np.mean(Train_ACC):3.5f}/{np.mean(Val_ACC):3.5f}')
0/197	 Loss 2.33518 0.17000
100/197	 Loss 1.50995 0.55713

Epoch: 0
Loss 1.20884/0.83340
ACC 0.63190/0.71083
0/197	 Loss 0.77190 0.74667
100/197	 Loss 0.73694 0.76122

Epoch: 1
Loss 0.69631/0.65332
ACC 0.77584/0.77250
0/197	 Loss 0.61251 0.80333
100/197	 Loss 0.60704 0.80515

Epoch: 2
Loss 0.59082/0.57583
ACC 0.80878/0.80167
0/197	 Loss 0.52229 0.84333
100/197	 Loss 0.55234 0.81921

Epoch: 3
Loss 0.53963/0.53572
ACC 0.82233/0.80417
0/197	 Loss 0.54005 0.81667
100/197	 Loss 0.51667 0.82782

Epoch: 4
Loss 0.50867/0.50455
ACC 0.83046/0.82083
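After five epochs the fully connected model reaches about 82% validation accuracy. Since the next cell rebinds `model` to the convolutional network, a minimal sketch (not in the original notebook) for persisting the trained weights first:

paddle.save(model.state_dict(), 'fc_model.pdparams')
# Restore later with:
# model.set_state_dict(paddle.load('fc_model.pdparams'))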

Convolutional Model

# Convolutional model: two 5x5 conv layers, 2x2 max pooling, and a linear classifier
model = paddle.nn.Sequential(
    paddle.nn.Conv2D(1, 10, (5, 5)),
    paddle.nn.ReLU(),
    paddle.nn.Conv2D(10, 20, (5, 5)),
    paddle.nn.ReLU(),
    paddle.nn.MaxPool2D((2, 2)),

    paddle.nn.Flatten(),
    paddle.nn.Linear(2000, 10),
)

paddle.summary(model, (64, 1, 28, 28))
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #    
===========================================================================
   Conv2D-5      [[64, 1, 28, 28]]     [64, 10, 24, 24]         260      
    ReLU-5       [[64, 10, 24, 24]]    [64, 10, 24, 24]          0       
   Conv2D-6      [[64, 10, 24, 24]]    [64, 20, 20, 20]        5,020     
    ReLU-6       [[64, 20, 20, 20]]    [64, 20, 20, 20]          0       
  MaxPool2D-4    [[64, 20, 20, 20]]    [64, 20, 10, 10]          0       
   Flatten-4     [[64, 20, 10, 10]]       [64, 2000]             0       
   Linear-5         [[64, 2000]]           [64, 10]           20,010     
===========================================================================
Total params: 25,290
Trainable params: 25,290
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.19
Forward/backward pass size (MB): 15.40
Params size (MB): 0.10
Estimated Total Size (MB): 15.68
---------------------------------------------------------------------------

{'total_params': 25290, 'trainable_params': 25290}
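The shapes in this summary follow directly from the layer hyperparameters: a 5x5 convolution without padding shrinks each spatial dimension by 4 (28 -> 24 -> 20), the 2x2 max pool halves it (20 -> 10), and flattening 20 channels of 10x10 feature maps yields the 2000 input features of the final Linear layer. The parameter counts check out the same way:

# Spatial sizes: 28 - 5 + 1 = 24, then 24 - 5 + 1 = 20, then 20 // 2 = 10
20 * 10 * 10             # Flatten output: 2,000 features
10 * (1 * 5 * 5) + 10    # Conv2D-5 params: 260
20 * (10 * 5 * 5) + 20   # Conv2D-6 params: 5,020
2000 * 10 + 10           # Linear-5 params: 20,010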
# Optimizer and loss function
optimizer = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=0.0001)
criterion = paddle.nn.CrossEntropyLoss()

# Training loop (same as before, but inputs are reshaped to NCHW for the conv layers)
for epoch in range(0, 5):
    Train_Loss, Val_Loss = [], []
    Train_ACC, Val_ACC = [], []

    model.train()
    for i, (x, y) in enumerate(train_loader):
        pred = model(x.reshape((-1, 1, 28, 28)))
        loss = criterion(pred, y)
        Train_Loss.append(loss.item())
        loss.backward()
        optimizer.step()
        optimizer.clear_grad()
        Train_ACC.append((pred.numpy().argmax(1) == y.numpy()).mean())
        
        if i % 100 == 0:
            print(f'{i}/{len(train_loader)}\t Loss {np.mean(Train_Loss):3.5f} {np.mean(Train_ACC):3.5f}')

    model.eval()
    for i, (x, y) in enumerate(val_loader):
        pred = model(x.reshape((-1, 1, 28, 28)))
        loss = criterion(pred, y)
        Val_Loss.append(loss.item())
        Val_ACC.append((pred.numpy().argmax(1) == y.numpy()).mean())
    
    if epoch % 1 == 0:
        print(f'\nEpoch: {epoch}')
        print(f'Loss {np.mean(Train_Loss):3.5f}/{np.mean(Val_Loss):3.5f}')
        print(f'ACC {np.mean(Train_ACC):3.5f}/{np.mean(Val_ACC):3.5f}')
0/197	 Loss 2.29863 0.15333
100/197	 Loss 1.61717 0.53125

Epoch: 0
Loss 1.23357/0.74845
ACC 0.62601/0.74167
0/197	 Loss 0.73336 0.76000
100/197	 Loss 0.66099 0.76261

Epoch: 1
Loss 0.63249/0.61879
ACC 0.77187/0.77333
0/197	 Loss 0.56483 0.80000
100/197	 Loss 0.55956 0.79855

Epoch: 2
Loss 0.54548/0.56240
ACC 0.80436/0.79167
0/197	 Loss 0.58850 0.77333
100/197	 Loss 0.51179 0.81703

Epoch: 3
Loss 0.50036/0.52460
ACC 0.82221/0.80583
0/197	 Loss 0.41658 0.86667
100/197	 Loss 0.47553 0.83274

Epoch: 4
Loss 0.47199/0.50074
ACC 0.83300/0.82333
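The convolutional model also ends at roughly 82% validation accuracy. The test_loader defined earlier is never used above, so as a final step, here is a minimal inference sketch; the submission file name and column layout are assumptions, and the required format should be checked against the competition page:

model.eval()
predictions = []
for x, _ in test_loader:
    pred = model(x.reshape((-1, 1, 28, 28)))
    predictions.append(pred.numpy().argmax(1))
predictions = np.concatenate(predictions)

# Hypothetical submission format: one predicted label per test image
pd.DataFrame({'label': predictions}).to_csv('submission.csv', index=False)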

Summary and Outlook

  1. This project uses a fully connected model and a convolutional model to complete the classification task.
  2. Data augmentation could be used to improve classification accuracy (see the sketch after this list).
  3. Network architectures such as ResNet could replace the current models.
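As a starting point for item 2, here is a minimal augmentation sketch using paddle.vision.transforms; the particular composition is an assumption and has not been tuned for this competition:

import paddle.vision.transforms as T

# Random flips plus small rotations are common choices for Fashion-MNIST
train_transform = T.Compose([
    T.RandomHorizontalFlip(),  # flip left/right with probability 0.5
    T.RandomRotation(10),      # rotate by up to +/- 10 degrees
])

# This could be applied per sample inside MyDataset.__getitem__, e.g. on an
# HWC numpy array before the division by 255:
#     img = train_transform(img[:, :, None])[:, :, 0]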
