Reposted from AI Studio. Project link: https://aistudio.baidu.com/aistudio/projectdetail/3465456

Cat-12 classification with resnext50_64x4d as the backbone, fused with densenet161, resnet50, resnet152, and vgg19, using strong data augmentation from imgaug.

Each model was trained for 100 epochs without careful hyperparameter tuning and reaches 93.3+ accuracy; with tuning and longer training, 95+ is achievable.

A single model reaches roughly 88 to 90 accuracy.

Deep learning is a branch of machine learning: a family of algorithms that use artificial neural networks to learn representations of data.[1][2][3][4][5]

Deep learning is a class of machine learning algorithms based on representation learning of data. An observation (such as an image) can be represented in many ways, for example as a vector of per-pixel intensity values, or more abstractly as a set of edges or regions of particular shapes. Some representations make it easier to learn tasks from examples (e.g., face recognition or facial expression recognition[6]). A benefit of deep learning is that it replaces handcrafted features with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.[7]

The goal of representation learning is to find better representations and to build models that learn them from large-scale unlabeled data. Some of these representations are inspired by neuroscience and are loosely based on the understanding of information processing and communication patterns in nervous systems, such as neural coding, which attempts to characterize the relationship between stimuli and neuronal responses, and between the electrical activities of neurons in the brain.[8]

Several deep learning architectures, such as deep neural networks, convolutional neural networks, deep belief networks, and recurrent neural networks, have been applied to computer vision, speech recognition, natural language processing, audio recognition, and bioinformatics with excellent results.

import os
import pandas as pd
import random
import logging 
import paddle
from paddle.nn import Conv2D, BatchNorm2D, LeakyReLU, MaxPool2D, LSTM, Linear, Dropout
from paddle.io import Dataset
import cv2
import numpy as np
from paddle.vision.transforms import Compose, Resize
from paddle.vision.models import ResNet, resnet34
from PIL import Image

Run this cell only once.

# Unzip the dataset images
# !unzip data/data10954/cat_12_train.zip 
# !unzip data/data10954/cat_12_test.zip 
class cof():
    split_train_val=0.8     # train/val split ratio
    resize_f=240            # resize target before cropping
    crop_size=(224,224)     # final input size for the network
    batch_size_train=40
    batch_size_val=16
    classify_num=12         # number of cat classes

configuration=cof()

Run this cell only once.

# # Randomly split the dataset into train/val by ratio
# train_ratio=configuration.split_train_val

# train=open('train_split_list.txt','w')
# val=open('val_split_list.txt','w')

# with open('data/data10954/train_list.txt','r') as f:
#     lines=f.readlines()
#     for line in lines:
#         if random.uniform(0, 1) <= train_ratio: 
#             train.write(line) 
#         else: 
#             val.write(line)

# train.close()
# val.close()

Install imgaug for image augmentation.

!pip install imgaug
import imgaug as ia
import imgaug.augmenters as iaa
import matplotlib.pyplot as plt

Using imgaug, define a custom BaseTransform for strong data augmentation, which reduces the overfitting caused by an overly complex model.

Some training tips

batch_size: within the limits of GPU memory, larger is better.

shuffle: without shuffling, every epoch reads the data in the same order, so data diversity suffers (shuffling can be skipped at test time).

Class imbalance in the training samples causes problems (it can be mitigated by increasing the loss weights of underrepresented classes).

Pretraining (transfer learning) helps initialize from a good starting point; the initial values matter a lot.

Multi-model fusion (several networks predict independently and their results are combined with weights) plus transfer learning generally yields a strong model.
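As a sketch of the weighted-fusion idea above, here is a minimal soft-voting example; the model probabilities and weights are made-up illustrative numbers, not results from this project:

```python
# Weighted soft voting: average class probabilities from several models,
# each scaled by a trust weight, then take the argmax.
def weighted_vote(prob_lists, weights):
    n_classes = len(prob_lists[0])
    fused = [
        sum(w * probs[c] for probs, w in zip(prob_lists, weights))
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=fused.__getitem__)

# Three hypothetical models voting over 3 classes.
model_probs = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.7, 0.2],
]
label = weighted_vote(model_probs, weights=[0.5, 0.3, 0.2])  # -> 1
```

Even though the first model prefers class 0, the weighted sum favors class 1 because two models agree on it.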

import os
import paddle.vision.transforms as T
from paddle.io import Dataset
import numpy as np
import paddle
import cv2
import imgaug as ia
import imgaug.augmenters as iaa

class CustomTransform(T.BaseTransform):

    def __init__(self, prob=0.5, keys=None):
        super(CustomTransform, self).__init__(keys)
        self.prob = prob
        self.augSeq=iaa.Sequential([
            # Horizontal mirror flip with probability 0.5
            iaa.Fliplr(0.5),
            # Vertical flip with probability 0.3
            iaa.Flipud(0.3),
            # Randomly crop away 0 to 20% of each image side
            iaa.Crop(percent=(0,0.2)),

            # Sometimes applies the wrapped augmenter to 50% of images
            iaa.Sometimes(
                0.5,
                # Gaussian blur
                iaa.GaussianBlur(sigma=(0,0.5))
            ),

            # Strengthen or weaken image contrast
            iaa.LinearContrast((0.75,1.5)),

            # Add Gaussian noise.
            # For 50% of images the noise is sampled once per pixel across all
            # channels; for the other 50% it is sampled per channel, so it
            # changes the pixel colors, not just the brightness.
            iaa.AdditiveGaussianNoise(loc=0,scale=(0.0,0.05*255),per_channel=0.5),

            # Make some images brighter and some darker.
            # For 20% of images this is applied per channel; for the rest, per image.
            iaa.Multiply((0.8,1.2),per_channel=0.2),

            # Affine transformations
            iaa.Affine(
                # Scaling
                scale={"x":(0.7,1.3),"y":(0.7,1.3)},
                # Translation
                translate_percent={"x":(-0.3,0.3),"y":(-0.3,0.3)},
                # Rotation
                rotate=(-25,25),
                # Shear
                shear=(-8,8),
                # order selects the interpolation method; cval and mode
                # control how newly exposed pixels are filled in.
                order=[0, 1],
                cval=(0, 255),
                mode=ia.ALL
            ),

            # iaa.Sometimes(
            #     0.5,
            #     # Emboss effect
            #     iaa.Emboss(alpha=(0, 0.3), strength=(0, 2.0)),
            # ),

            iaa.Sometimes(
                0.5,
                # Sharpen
                iaa.Sharpen(alpha=(0, 0.3), lightness=(0.75, 1.5)),
            ),

            # Shift pixel values by -10..10, per channel for 50% of images
            iaa.Add((-10, 10), per_channel=0.5),

            # Apply the augmenters above in random order
            ],random_order=True
        )

    def _get_params(self, inputs):
        image = inputs[self.keys.index('image')]
        params = {}
        params['trans'] = np.random.random() < self.prob
        #params['size'] = _get_image_size(image)
        return params

    def _apply_image(self, image):
        if self.params['trans']:
            return self.augSeq(image=image)
        return image

Normalization and conversion to tensor

Be careful: if test-set images are read differently from training-set images, accuracy drops sharply.

For example, cv2 reads channels in BGR order, while plt (and PIL) read RGB.

Alternatively, you can explicitly convert between BGR and RGB.
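A minimal illustration of the channel-order pitfall, using plain nested lists in place of real image arrays (on a real array, `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` does the same job):

```python
# A 1x2 "image" as nested [row][col][channel] lists, channels in BGR order.
bgr_image = [[[255, 0, 0], [0, 0, 255]]]  # a blue pixel, then a red pixel

def bgr_to_rgb(image):
    # Reverse the channel axis of every pixel.
    return [[pixel[::-1] for pixel in row] for row in image]

rgb_image = bgr_to_rgb(bgr_image)  # -> [[[0, 0, 255], [255, 0, 0]]]
```

If the training pipeline sees BGR and the test pipeline sees RGB, the network receives swapped red/blue channels at test time, which is exactly the silent accuracy loss warned about above.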

from paddle.vision import transforms as T

def preprocess(img):
    transform = Compose([
        T.Resize((configuration.resize_f)),
        CustomTransform(1),
        T.RandomCrop(configuration.crop_size),
        # T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.1),
        # T.RandomHorizontalFlip(0.5),
        # T.RandomVerticalFlip(0.5),
        # T.RandomRotation(10),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225])
        ])
    return transform(img)

def preprocess_val(img):
    transform = Compose([
        T.Resize(configuration.crop_size), 
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225])
        ])
    return transform(img)
train_data = list()
with open('train_split_list.txt', "r") as f:
    for line in f:
        name, label = line.strip().split('\t')
        img=cv2.imread(name)
        if img is not None:
            train_data.append([name, label])
# print(train_data[:5])


val_data = list()
with open('val_split_list.txt', "r") as f:
    for line in f:
        name, label = line.strip().split('\t')
        img=cv2.imread(name)
        if img is not None:
            val_data.append([name, label])
# print(val_data[:5])

dataset

Be absolutely sure to check that the images you read in are not empty.

It is best to scan the whole dataset beforehand; a file existing at a path does not mean the image is valid.

Training and validation data

import random

class Reader(Dataset):
    def __init__(self, data,if_train):
        super().__init__()
        self.samples = data
        self.flag=if_train

    def __getitem__(self, index):
        # Load and preprocess the image
        img_path = self.samples[index][0]
        
        img = Image.open(img_path).convert('RGB')
        img=cv2.cvtColor(np.asarray(img),cv2.COLOR_RGB2BGR)
        if self.flag==True:
            img = preprocess(img)
        else:
            img = preprocess_val(img)
        # Process the label
        label = self.samples[index][1]
        label = paddle.to_tensor(int(label))
        
        return img, label

    def __len__(self):
        return len(self.samples)

train_loader = paddle.io.DataLoader(Reader(train_data,True), batch_size=configuration.batch_size_train, shuffle=True)
val_loader = paddle.io.DataLoader(Reader(val_data,False), batch_size=configuration.batch_size_val, shuffle=False)

Test data

test_data = list()

for line in open("testpath.txt"): 
    img_name=line.strip()
    test_data.append(img_name)
test_data[:5]

class InferReader(Dataset):
    def __init__(self, data):
        super().__init__()
        self.samples = data

    def __getitem__(self, index):
        img=Image.open(self.samples[index]).convert('RGB')
        img=cv2.cvtColor(np.asarray(img),cv2.COLOR_RGB2BGR)
        img = preprocess_val(img)
        return img

    def __len__(self):
        return len(self.samples)

# print(test_data[:5])
test_loader = paddle.io.DataLoader(InferReader(test_data),batch_size=1, shuffle=False)

Network: load a pretrained network via paddle.hub

Pretraining (transfer learning) helps initialize from a good starting point; the initial values matter a lot.

The available models include those below. The more the models differ from each other, the better the fusion works; see the API docs for details:

https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/hub/Overview_cn.html

alexnet

vgg16

vgg19

resnet50

resnet152

densenet161

resnext50_64x4d

The figure below compares the basic ResNet block (left) with the basic ResNeXt block (right):

[figure: basic ResNet block vs. basic ResNeXt block]

As in ResNet, the authors chose a very simple basic block: each of the C parallel branches in a group applies the same simple transform. In ResNeXt-50 (32x4d), 32 is the cardinality C, i.e. the number of branches in the first ResNeXt block, and 4d means each branch has a width (depth) of 4 channels, so the first block has 32 × 4 = 128 input channels.
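The parameter savings from grouping can be checked with a little arithmetic. This is only a sketch of the counting rule (bias terms are ignored), not Paddle's implementation:

```python
# Parameters of a k x k conv layer: each of the `groups` groups connects
# c_in/groups input channels to c_out/groups output channels.
def conv_params(c_in, c_out, k, groups=1):
    return groups * (c_in // groups) * (c_out // groups) * k * k

dense   = conv_params(128, 128, 3)             # ordinary 3x3 convolution
grouped = conv_params(128, 128, 3, groups=32)  # ResNeXt-style, cardinality 32
# dense / grouped == 32: grouping by C divides the 3x3 layer's parameters by C.
```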

class Net(paddle.nn.Layer):
    def __init__(self):
        super(Net, self).__init__()
        # Swap in a different model name here to use a different backbone
        self.net = paddle.hub.load('PaddlePaddle/PaddleClas:develop', 'resnext50_64x4d', source='gitee', force_reload=False, pretrained=True)
        self.linear = Linear(1000, configuration.classify_num)

    def forward(self, x):
        # print(x.shape)
        # print(x.shape)
        x = self.net(x)
        # print(x.shape)
        x = self.linear(x)
        # print(x.shape)
        return x
# Instantiate the model
model=Net()

Training. You can tune the learning rate or switch to AdamW; squeezing out more accuracy is left as an exercise, and you are welcome to discuss it in the comments.

Early stopping matters.

Paddle's high-level API is not very interesting here,

since it takes away a lot of flexibility.

StepDecay is a commonly used fixed-interval schedule: every step_size epochs, the learning rate is multiplied by gamma. Its first argument, learning_rate, is the initial learning rate; the second, step_size, is the decay period (one decay every step_size epochs); the third, gamma, is the multiplicative decay factor, with a default of 0.1.
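The resulting schedule is easy to verify by hand. A stdlib-only sketch of the decay rule (not Paddle's implementation), using the same values as the training cell below:

```python
# StepDecay rule: lr(epoch) = base_lr * gamma ** (epoch // step_size)
def step_decay(base_lr, step_size, gamma, epoch):
    return base_lr * gamma ** (epoch // step_size)

lr0  = step_decay(1e-4, 10, 0.98, epoch=0)   # 1e-4: no decay yet
lr10 = step_decay(1e-4, 10, 0.98, epoch=10)  # 1e-4 * 0.98: first decay
lr25 = step_decay(1e-4, 10, 0.98, epoch=25)  # 1e-4 * 0.98**2: two decays
```

With gamma=0.98 the decay is very gentle; over 100 epochs the rate only falls to about 0.98**10 ≈ 0.82 of its initial value.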

scheduler = paddle.optimizer.lr.StepDecay(learning_rate=1e-4, step_size=10, gamma=0.98, verbose=False)
optimizer = paddle.optimizer.Adam(learning_rate=scheduler,
                                parameters=model.parameters())
loss_fn = paddle.nn.CrossEntropyLoss()

# Number of epochs; can be increased somewhat
epochs = 100
max_acc=0
for epoch in range(epochs):
    model.train()
    epoch_acc,epoch_loss,count=0,0,0
    for batch_id, data in enumerate(train_loader):

        x_data = data[0]            # training images
        y_data = data[1]            # training labels
        predicts = model(x_data)    # predictions

        # Compute the loss (equivalent to setting loss in prepare)
        loss = loss_fn(predicts, y_data)

        # Compute the accuracy (equivalent to setting metrics in prepare)
        acc = paddle.metric.accuracy(predicts, y_data)

        # Backpropagation
        loss.backward()

        epoch_acc+=acc.numpy()
        epoch_loss+=loss.numpy()
        count+=1
        # Update parameters
        optimizer.step()
        # Clear gradients
        optimizer.clear_grad()
    print("epoch: {}, loss is: {}, acc is: {}".format(epoch, epoch_loss/count, epoch_acc/count))

    model.eval()
    epoch_acc_v,epoch_loss_v,count_v=0,0,0
    for batch_id, data in enumerate(val_loader):
        x_data = data[0]            # validation images
        y_data = data[1]            # validation labels
        predicts = model(x_data)    # predictions
        # Compute loss and accuracy
        loss = loss_fn(predicts, y_data)
        acc = paddle.metric.accuracy(predicts, y_data)
        epoch_acc_v+=acc.numpy()
        epoch_loss_v+=loss.numpy()
        count_v+=1
    print("Val epoch: {}, loss is: {}, acc is: {}".format(epoch, epoch_loss_v/count_v, epoch_acc_v/count_v))

    temp_acc=epoch_acc_v/count_v

    if temp_acc>max_acc:
        max_acc=temp_acc
        with open('model_result.csv', 'w') as f:
            for i,data in enumerate(test_loader):
                x_data = data
                predicts = model(x_data).cpu().numpy().squeeze()
                # print(predicts)
                # print(np.argmax(predicts))
                # print(test_data[i].split('/')[-1])
                f.write(test_data[i].split('/')[-1]+','+str(np.argmax(predicts)) + '\n')
            print("----------write finished----------")

Inference

You must call model.eval() during inference; otherwise the results will be wildly inaccurate.
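One reason eval() matters: layers such as Dropout behave differently at training and inference time. Below is a stdlib-only sketch of inverted dropout (an illustration of the principle, not Paddle's implementation):

```python
import random

def dropout(xs, p, training):
    """Inverted dropout: at train time, zero each activation with probability p
    and scale survivors by 1/(1-p); at eval time, act as the identity."""
    if not training:
        return list(xs)
    return [0.0 if random.random() < p else x / (1 - p) for x in xs]

activations = [0.5, 1.0, 2.0]
eval_out = dropout(activations, p=0.5, training=False)  # unchanged: [0.5, 1.0, 2.0]
train_out = dropout(activations, p=0.5, training=True)  # randomly zeroed/rescaled
```

If the network were left in train mode at test time, predictions would be computed on such randomly perturbed activations, which is why skipping eval() wrecks accuracy.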

model.eval()
with open('model_result_after_end_epoch.csv', 'w') as fe:
    for i,data in enumerate(test_loader):
        x_data = data
        # print(x_data.shape)
        predicts = model(x_data).cpu().numpy().squeeze()
        # print(predicts)
        # print(np.argmax(predicts))
        fe.write(test_data[i].split('/')[-1]+','+str(np.argmax(predicts)) + '\n')

Multi-model fusion: here we use hard (majority) voting. If you have better ideas, please share them in the comments.

import pandas as pd

r1 = pd.read_csv("./result1.csv", names=['img_name', 'result1'])
r2 = pd.read_csv("./result2.csv", names=['img_name', 'result2'])
r3 = pd.read_csv("./result3.csv", names=['img_name', 'result3'])
r4 = pd.read_csv("./result4.csv", names=['img_name', 'result4'])
r5 = pd.read_csv("./result5.csv", names=['img_name', 'result5'])
t=pd.merge(r1, r2, how='left', on='img_name')
t=pd.merge(t, r3, how='left', on='img_name')
t=pd.merge(t, r4, how='left', on='img_name')
t=pd.merge(t, r5, how='left', on='img_name')
t
     img_name                              result1 result2 result3 result4 result5
0    E9j20wT54W3gzhsVev1N6KZpyUSxnMrO.jpg  9       4       4       4       4
1    ZV5KpcoqEl1yFgmRX7QhAJ20uD8BIC4x.jpg  9       9       9       9       9
2    HAJPda7QMtIBCkWZGmo6gUE1KqLDT2Xb.jpg  6       6       6       6       6
3    68r3FljBN0HU1WYoMwsRAOv5CKcpXxyu.jpg  8       7       8       7       8
4    krEVI3eSjO9FMKybdhLQovCYnG2DwlaR.jpg  0       0       0       0       0
..   ...                                   ..      ..      ..      ..      ..
235  YTpMHX8Edt7o34vq0CmlxrIGiegkhfsn.jpg  0       0       0       0       0
236  S9Dpt3OBuPk1dM2578UYsTn4ZVoxilzE.jpg  0       0       0       0       0
237  E4kKFP7heD3gu1wcGY5JTSd9n0ibryLZ.jpg  9       4       4       4       4
238  3ZlItXUDgHEJLPr0bSAzisp8YfvxuGBW.jpg  4       4       4       4       4
239  s27hCRpL5yam1iztKTDVOlAPXb4InBEr.jpg  0       0       0       0       0

240 rows × 6 columns

import collections
# For each row, access the elements by column name
series_end_result=[]
for index, row in t.iterrows():
    list_temp=[row['result1'], row['result2'],row['result3'], row['result4'],row['result5']]
    m=collections.Counter(list_temp)
    temp_max,key_max=0,0
    for k,v in m.items():
        if v>temp_max:
            temp_max=v
            key_max=k
    # print(key_max)
    series_end_result.append(key_max)
series_end_result[:5]
[4, 9, 6, 8, 0]
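The hand-rolled max loop above is equivalent to `collections.Counter.most_common`, which also breaks ties by first occurrence:

```python
from collections import Counter

votes = [4, 9, 4, 4, 0]  # hypothetical predictions from five models
winner = Counter(votes).most_common(1)[0][0]  # -> 4
```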
from pandas import DataFrame
from pandas import Series
t['img_name']
0      E9j20wT54W3gzhsVev1N6KZpyUSxnMrO.jpg
1      ZV5KpcoqEl1yFgmRX7QhAJ20uD8BIC4x.jpg
2      HAJPda7QMtIBCkWZGmo6gUE1KqLDT2Xb.jpg
3      68r3FljBN0HU1WYoMwsRAOv5CKcpXxyu.jpg
4      krEVI3eSjO9FMKybdhLQovCYnG2DwlaR.jpg
                       ...                 
235    YTpMHX8Edt7o34vq0CmlxrIGiegkhfsn.jpg
236    S9Dpt3OBuPk1dM2578UYsTn4ZVoxilzE.jpg
237    E4kKFP7heD3gu1wcGY5JTSd9n0ibryLZ.jpg
238    3ZlItXUDgHEJLPr0bSAzisp8YfvxuGBW.jpg
239    s27hCRpL5yam1iztKTDVOlAPXb4InBEr.jpg
Name: img_name, Length: 240, dtype: object
cc = {'w':t['img_name'],'ww':series_end_result}
cc = pd.DataFrame(cc)
cc
     w                                     ww
0    E9j20wT54W3gzhsVev1N6KZpyUSxnMrO.jpg  4
1    ZV5KpcoqEl1yFgmRX7QhAJ20uD8BIC4x.jpg  9
2    HAJPda7QMtIBCkWZGmo6gUE1KqLDT2Xb.jpg  6
3    68r3FljBN0HU1WYoMwsRAOv5CKcpXxyu.jpg  8
4    krEVI3eSjO9FMKybdhLQovCYnG2DwlaR.jpg  0
..   ...                                   ..
235  YTpMHX8Edt7o34vq0CmlxrIGiegkhfsn.jpg  0
236  S9Dpt3OBuPk1dM2578UYsTn4ZVoxilzE.jpg  0
237  E4kKFP7heD3gu1wcGY5JTSd9n0ibryLZ.jpg  4
238  3ZlItXUDgHEJLPr0bSAzisp8YfvxuGBW.jpg  4
239  s27hCRpL5yam1iztKTDVOlAPXb4InBEr.jpg  0

240 rows × 2 columns

cc.to_csv('./end_result.csv',header=False,index =False)

Comments and discussion are welcome!

