Project Background

  • Fault diagnosis of rotating equipment falls under Prognostics and Health Management (PHM). PHM is an equipment health management approach built on modern information technology and artificial intelligence, focusing on monitoring, diagnosing, predicting, and managing the fault states of complex equipment.
  • Rotating equipment is extremely common in industry: generators, compressors, pumps, and similar machines all play vital roles in day-to-day production, so predictive maintenance for them is well worth the effort. Rolling bearings are key components of rotating equipment; during prolonged operation they suffer wear, deformation, and spalling, and studies indicate that roughly 30% of rotating-equipment failures are caused by bearing faults. Monitoring rolling bearings is therefore an effective route to predictive maintenance for the whole machine.
  • Several methods exist for rolling-bearing fault diagnosis, including oil analysis, temperature monitoring, acoustic-emission analysis, and vibration analysis. Vibration analysis is the most widely used and the most cost-effective, so this tutorial takes it as the example and shows how to build a bearing fault diagnosis model with the PaddlePaddle deep learning framework.
!ls /home/aistudio/data/
!ls /home/aistudio/data/data149212
!rm -f *.mat
!unzip /home/aistudio/data/data149212/cwru.zip

Import Libraries

  • The standard library, the core Paddle packages, and other supporting libraries
# coding=utf8
# standard-library imports
import sys
import os

# Paddle imports
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.metric import Accuracy

# other third-party imports
import numpy as np
import pandas as pd
import random
from scipy.io import loadmat
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from sklearn import preprocessing
# fix random seeds so that results are reproducible across runs
seed = 102
paddle.seed(seed)
np.random.seed(seed)
random.seed(seed)

Data Preparation

  • The data come from the Case Western Reserve University (CWRU) bearing dataset, a benchmark dataset in the bearing fault diagnosis field. Download link: https://engineering.case.edu/bearingdatacenter/download-data-file

  • The dataset contains both normal and fault data; the fault data are further divided by fault location into inner-race faults, outer-race faults, and rolling-element faults.


  • In this tutorial we set up a 10-class bearing fault diagnosis task:

  • Comparison plot of normal vs. fault vibration signals

# labels, keyed by CWRU data-file ID (file 97 is the normal baseline; the rest are fault recordings)
FAULT_LABEL_DICT = {'97': 0,
                    '105': 1,
                    '118': 2,
                    '130': 3,
                    '169': 4,
                    '185': 5,
                    '197': 6,
                    '209': 7,
                    '222': 8,
                    '234': 9}
# model on the drive-end (DE) accelerometer signal
AXIS = '_DE_time'
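Inside each CWRU .mat file the signal is stored under a variable named after the file ID and channel, e.g. X097_DE_time for file 97 (IDs shorter than three digits are zero-padded). A tiny sketch of the key construction, generalizing with str.zfill the '97' special case that CWRUDataset.transform handles explicitly:

```python
# Build the MATLAB variable name for a given CWRU file ID.
# zfill(3) pads short IDs with zeros, e.g. '97' -> '097'.
def mat_key(fault_type, axis='_DE_time'):
    return 'X' + fault_type.zfill(3) + axis

print(mat_key('97'))   # X097_DE_time
print(mat_key('105'))  # X105_DE_time
```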

Subclass paddle.io.Dataset to implement CWRUDataset. The key parameters are:

  • time_steps: length of each sample
  • window: stride between the start points of adjacent samples (adjacent samples therefore overlap by time_steps - window points)
  • mode: which split to return, one of train/val/test
  • val_rate: fraction of all samples held out for validation (1 - val_rate is the training fraction)
  • test_rate: fraction of the validation split that is further held out as the test set
  • noise: whether to add noise to the samples
  • snr: signal-to-noise ratio (in dB) of the added noise
  • network: network architecture; different architectures expect different input formats
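The windowing these parameters control can be sketched in plain NumPy before diving into the class: window is the stride between sample start points, so adjacent samples share time_steps - window points (a standalone illustration, not the dataset code itself):

```python
import numpy as np

signal = np.arange(5000)           # stand-in for one vibration channel
time_steps, window = 1024, 128     # sample length and stride

# same loop shape as CWRUDataset.transform: start a new sample every `window` points
samples = [signal[i: i + time_steps]
           for i in range(0, len(signal) - time_steps, window)]

print(len(samples))                                         # 32 samples
print((samples[0][window:] == samples[1][:-window]).all())  # True: 896-point overlap
```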
class CWRUDataset(paddle.io.Dataset):
    """
    继承paddle.io.Dataset类
    """
    def __init__(self, data_dir, time_steps=1024, window=128, mode='train', val_rate=0.3, test_rate=0.5, \
                 noise=False, snr=None, network='MLP'):
        """
        Constructor: define how the data are read and split into train/val/test sets.
        time_steps: length of each sample
        window: stride between the start points of adjacent samples
        mode: which split to return ('train', 'val' or 'test')
        val_rate: fraction of all samples held out for validation
        test_rate: fraction of the validation split held out as the test set
        noise: whether to add noise to the samples
        snr: signal-to-noise ratio (in dB) of the added noise
        network: network type (determines the shape of the generated samples)
        """
        super(CWRUDataset, self).__init__()
        self.time_steps = time_steps
        self.window = window
        self.mode = mode
        self.noise = noise
        self.snr = snr
        self.network = network
        self.feature_all, self.label_all = self.transform(data_dir)
        # split into training and validation sets
        train_feature, val_feature, train_label, val_label = \
        train_test_split(self.feature_all, self.label_all, test_size=val_rate, random_state=seed)
        # standardize with statistics fitted on the training set only
        train_feature, val_feature = self.standardization(train_feature, val_feature)
        # split the validation set into validation and test sets
        val_feature, test_feature, val_label, test_label = \
        train_test_split(val_feature, val_label, test_size=test_rate, random_state=seed)
        if self.mode == 'train':
            self.feature = train_feature
            self.label = train_label
        elif self.mode == 'val':
            self.feature = val_feature
            self.label = val_label
        elif self.mode == 'test':
            self.feature = test_feature
            self.label = test_label
        else:
            raise Exception("mode can only be one of ['train', 'val', 'test']")


    def transform(self, data_dir):
        """
        Load the raw .mat files and build the sample set.
        """
        feature, label = [], []
        for fault_type in FAULT_LABEL_DICT:
            lab = FAULT_LABEL_DICT[fault_type]
            totalaxis = 'X' + fault_type + AXIS
            if fault_type == '97':
                totalaxis = 'X0' + fault_type + AXIS
            # load and parse the .mat file
            mat_data = loadmat(data_dir + fault_type + '.mat')[totalaxis]
            # slide a window of length time_steps over the signal; the stride
            # self.window determines how much adjacent samples overlap
            for i in range(0, len(mat_data) - self.time_steps, self.window):
                sub_mat_data = mat_data[i: (i+self.time_steps)].reshape(-1,)
                # optionally add noise to the sample
                if self.noise:
                    sub_mat_data = self.awgn(sub_mat_data, self.snr)
                feature.append(sub_mat_data)
                label.append(lab)

        return np.array(feature, dtype='float32'), np.array(label, dtype="int64")

    def __getitem__(self, index):
        """
        Return one sample (feature, label) for the given index.
        """
        feature = self.feature[index]
        if self.network == 'CNNNet':
            # add a channel dimension to match the 1-D CNN input format
            feature = feature[np.newaxis,:]
        elif self.network == 'ResNet':
            # reshape to a square 2-D map and replicate it across three
            # channels to match the ResNet input format
            n = int(np.sqrt(len(feature)))
            feature = np.reshape(feature, (n, n))
            feature = feature[np.newaxis,:]
            feature = np.concatenate((feature, feature, feature), axis=0)
        label = self.label[index]
        feature = feature.astype('float32')
        label = np.array([label], dtype="int64")
        return feature, label

    def __len__(self):
        """
        实现__len__方法,返回数据集总数目
        """
        return len(self.feature)

    def awgn(self, data, snr, seed=seed):
        """
        添加高斯白噪声
        """
        np.random.seed(seed)
        snr = 10 ** (snr / 10.0)
        xpower = np.sum(data ** 2) / len(data)
        npower = xpower / snr
        noise = np.random.randn(len(data)) * np.sqrt(npower)
        return np.array(data + noise)
    
    def standardization(self, train_data, val_data):
        """
        Standardize features with a scaler fitted on the training data only,
        so no information leaks from the validation/test data.
        """
        scaler = preprocessing.StandardScaler().fit(train_data)
        train_data = scaler.transform(train_data)
        val_data = scaler.transform(val_data)
        return train_data, val_data
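The awgn method can be sanity-checked numerically: after adding noise at a target SNR, the SNR measured from the signal and noise powers should land near that target. A standalone re-implementation of the same formula (using numpy's newer Generator API instead of re-seeding the global state):

```python
import numpy as np

def awgn(data, snr_db, seed=102):
    # scale white Gaussian noise so that signal power / noise power == 10**(snr_db / 10)
    rng = np.random.default_rng(seed)
    xpower = np.mean(data ** 2)
    npower = xpower / (10 ** (snr_db / 10.0))
    return data + rng.standard_normal(len(data)) * np.sqrt(npower)

signal = np.sin(np.linspace(0, 100, 10000))
noise = awgn(signal, snr_db=-10) - signal
measured = 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
print(round(measured, 1))   # close to the -10 dB target
```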

Building the Networks

Three architectures are implemented: MLP, CNN, and ResNet.

MLP

  • A classic MLP structure: an input layer, several hidden layers, and an output layer
class MLPNet(nn.Layer):
    """
    定义网络结构
    """
    def __init__(self, num_classes):
        super(MLPNet, self).__init__()
        #定义全连接层
        self.fc1 = nn.Sequential(nn.Linear(1024, 512), nn.BatchNorm1D(512), nn.ReLU())
        self.fc2 = nn.Sequential(nn.Linear(512, 256), nn.BatchNorm1D(256), nn.ReLU())
        self.fc3 = nn.Sequential(nn.Linear(256, 128), nn.BatchNorm1D(128), nn.ReLU())
        self.fc4 = nn.Sequential(nn.Linear(128, 64), nn.BatchNorm1D(64), nn.ReLU())
        self.fc5 = nn.Sequential(nn.Linear(64, 10))
        self.dropout = nn.Dropout(p=0.5)
        

    def forward(self, inputs):
        """
        定义网络的前向计算过程
        """
        outputs = self.fc1(inputs)
        outputs = self.fc2(outputs)
        outputs = self.fc3(outputs)
        outputs = self.fc4(outputs)
        outputs = self.dropout(outputs)
        outputs = self.fc5(outputs)
        if not self.training:
            outputs = paddle.nn.functional.softmax(outputs)

        return outputs

CNN

  • Compared with the MLP, a convolutional network has a structure better suited to extracting key features from the raw signal, which improves diagnostic performance.
  • A typical CNN consists of convolutional layers, pooling layers, fully connected layers, etc.
class CNNNet(nn.Layer):
    """
    """
    def __init__(self, num_classes):
        super(CNNNet, self).__init__()
        self.layer1 = nn.Sequential(nn.Conv1D(1,32,kernel_size=3,padding=1), nn.BatchNorm1D(32), nn.ReLU(), nn.MaxPool1D(kernel_size=2,padding=0))
        self.layer2 = nn.Sequential(nn.Conv1D(32,64,kernel_size=3,padding=1), nn.BatchNorm1D(64), nn.ReLU(), nn.MaxPool1D(kernel_size=2,padding=0))
        self.layer3 = nn.Sequential(nn.Conv1D(64,64,kernel_size=3,padding=1), nn.BatchNorm1D(64), nn.ReLU(), nn.MaxPool1D(kernel_size=2,padding=0))

        self.fc1 = nn.Linear(8192, 100)
        self.fc2 = nn.Linear(100, num_classes)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p=0.5)

    def forward(self, inputs):
        outputs = self.layer1(inputs)
        outputs = self.layer2(outputs)
        outputs = self.layer3(outputs)

        outputs = paddle.flatten(outputs,1)
        outputs = self.fc1(outputs)
        outputs = self.dropout(outputs)
        outputs = self.fc2(outputs)
        if not self.training:
            outputs = paddle.nn.functional.softmax(outputs)

        return outputs
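The 8192 in fc1 is not arbitrary: Conv1D with kernel 3 and padding 1 preserves the sequence length, while each MaxPool1D(kernel_size=2) halves it, so a 1024-point input shrinks to 128 after the three blocks, and with layer3's 64 output channels the flattened vector has 64 * 128 = 8192 elements:

```python
# trace the sequence length through CNNNet's three blocks
length = 1024             # input sample length (time_steps)
for _ in range(3):        # Conv1D(kernel 3, padding 1) keeps the length;
    length //= 2          # MaxPool1D(kernel 2) halves it
channels = 64             # output channels of layer3
print(channels * length)  # 8192 -> in_features of fc1
```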

ResNet

  • ResNet (Residual Network) won the 2015 ImageNet image classification, object localization, and object detection competitions. To counter the accuracy degradation that appears as networks grow deeper, ResNet proposed residual learning, which eases the training of very deep networks; it adds residual blocks on top of established design ideas (batch normalization, small convolution kernels, fully convolutional networks).
  • Paddle ships built-in ResNet implementations, so we only need to import one. Note that the built-in ResNets expect 2-D, three-channel input, so the vibration data must be converted to that format; see the CWRUDataset implementation above for the conversion.
  • ResNet50 network structure diagram
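The format conversion mentioned above can be checked in isolation: each 1024-point sample becomes a 32x32 single-channel map, replicated three times along the channel axis (the same operations CWRUDataset.__getitem__ performs for the ResNet case):

```python
import numpy as np

feature = np.random.randn(1024).astype('float32')   # one vibration sample
n = int(np.sqrt(len(feature)))                      # 32, since 32 * 32 == 1024
image = np.reshape(feature, (n, n))[np.newaxis, :]  # shape (1, 32, 32)
image = np.concatenate((image, image, image), axis=0)
print(image.shape)                                  # (3, 32, 32): what resnet18 expects
```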
from paddle.vision.models import resnet18

class ResNet(nn.Layer):
    def __init__(self, num_classes):
        super(ResNet, self).__init__()
        self.backbone = resnet18()
        self.fc1 = nn.Sequential(nn.Linear(1000, 512), nn.ReLU(), nn.Dropout(0.1))
        self.fc2 = nn.Linear(512, num_classes)
        
    def forward(self, inputs):
        outputs = self.backbone(inputs)
        outputs = self.fc1(outputs)
        outputs = self.fc2(outputs)
        if not self.training:
            outputs = paddle.nn.functional.softmax(outputs)

        return outputs

Loading the Data

Read the raw MATLAB files and generate the sample sets used for model training.

# sample length (number of points per sample)
time_steps = 1024
# stride between adjacent samples (they overlap by time_steps - window points)
window = 128
# whether to add noise
noise = True
# SNR (in dB) of the added noise
snr = -10
# validation fraction (share of all samples used for validation)
val_rate = 0.3
# test fraction (share of the validation split used for testing)
test_rate = 0.5
# network type
#network = 'MLPNet'
#network = 'CNNNet'
network = 'ResNet'
train_dataset = CWRUDataset('./', time_steps=time_steps, window=window, mode='train', \
                            val_rate=val_rate, test_rate=test_rate, noise=noise, snr=snr, network=network)
val_dataset = CWRUDataset('./', time_steps=time_steps, window=window, mode='val', \
                            val_rate=val_rate, test_rate=test_rate, noise=noise, snr=snr, network=network)
test_dataset = CWRUDataset('./', time_steps=time_steps, window=window, mode='test', \
                            val_rate=val_rate, test_rate=test_rate, noise=noise, snr=snr, network=network)

print (train_dataset.__len__())
print (val_dataset.__len__())
print (test_dataset.__len__())

print (train_dataset.feature.shape)
print (train_dataset.label.shape)
print (val_dataset.feature.shape)
print (val_dataset.label.shape)
print (test_dataset.feature.shape)
print (test_dataset.label.shape)
7285
1561
1562
(7285, 1024)
(7285,)
(1561, 1024)
(1561,)
(1562, 1024)
(1562,)

Model Training

Training is driven by Paddle's high-level paddle.Model() API. The main steps are:

  • initialize the model (paddle.Model)
  • define the optimizer
  • compile the model (model.prepare)
  • define the callbacks
  • call model.fit() to train
def train_model(lr, batch_size, epoch, num_classes, network):
    """
    模型训练
    """
    model = paddle.Model(eval(network)(num_classes))
    optim = paddle.optimizer.Adam(learning_rate=lr, parameters=model.parameters(),\
                                  weight_decay=paddle.regularizer.L2Decay(coeff=1e-5))
    model.prepare(optim, nn.CrossEntropyLoss(), Accuracy())
    callbacks = paddle.callbacks.EarlyStopping(monitor='acc', mode='max', patience=100, verbose=1, save_best_model=True)
    model.fit(train_dataset, val_dataset, epochs=epoch, \
              batch_size=batch_size,callbacks=[callbacks], save_dir=network+'_checkpoints', save_freq=20)
    
lr = 1e-3
epoch = 100
batch_size = 16
num_classes = 10
train_model(lr, batch_size, epoch, num_classes, network)

W0527 20:44:42.622202   173 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0527 20:44:42.627154   173 gpu_context.cc:306] device: 0, cuDNN Version: 7.6.


The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/100


/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/norm.py:654: UserWarning: When training, we now always track global mean and variance.
  "When training, we now always track global mean and variance.")



step 370/456 - loss: 0.4899 - acc: 0.9927 - 26ms/step
step 380/456 - loss: 3.8843e-05 - acc: 0.9924 - 26ms/step
step 390/456 - loss: 0.0041 - acc: 0.9925 - 26ms/step
step 400/456 - loss: 0.0057 - acc: 0.9925 - 26ms/step
step 410/456 - loss: 0.0980 - acc: 0.9919 - 26ms/step
step 420/456 - loss: 0.1424 - acc: 0.9914 - 26ms/step
step 430/456 - loss: 0.0010 - acc: 0.9916 - 26ms/step
step 440/456 - loss: 0.0043 - acc: 0.9918 - 26ms/step
step 450/456 - loss: 6.5709e-05 - acc: 0.9918 - 26ms/step
step 456/456 - loss: 0.0535 - acc: 0.9916 - 26ms/step
Eval begin...
step 10/98 - loss: 1.4612 - acc: 0.9938 - 10ms/step
step 20/98 - loss: 1.4638 - acc: 0.9938 - 10ms/step
step 30/98 - loss: 1.4653 - acc: 0.9917 - 9ms/step
step 40/98 - loss: 1.4642 - acc: 0.9922 - 9ms/step
step 50/98 - loss: 1.4730 - acc: 0.9912 - 10ms/step
step 60/98 - loss: 1.4615 - acc: 0.9917 - 10ms/step
step 70/98 - loss: 1.4708 - acc: 0.9929 - 11ms/step
step 80/98 - loss: 1.4612 - acc: 0.9930 - 11ms/step
step 90/98 - loss: 1.4627 - acc: 0.9938 - 10ms/step
step 98/98 - loss: 1.4612 - acc: 0.9942 - 10ms/step
Eval samples: 1561
Epoch 71/100
step  10/456 - loss: 8.4332e-05 - acc: 1.0000 - 25ms/step
step  20/456 - loss: 3.7188e-05 - acc: 0.9906 - 26ms/step
step  30/456 - loss: 0.1210 - acc: 0.9875 - 26ms/step
step  40/456 - loss: 0.1604 - acc: 0.9828 - 26ms/step
step  50/456 - loss: 0.0011 - acc: 0.9862 - 26ms/step
step  60/456 - loss: 6.9705e-05 - acc: 0.9875 - 26ms/step
step  70/456 - loss: 0.0120 - acc: 0.9848 - 26ms/step
step  80/456 - loss: 4.7603e-05 - acc: 0.9844 - 26ms/step
step  90/456 - loss: 2.2352e-06 - acc: 0.9854 - 26ms/step
step 100/456 - loss: 0.0173 - acc: 0.9856 - 28ms/step
step 110/456 - loss: 0.0078 - acc: 0.9869 - 28ms/step
step 120/456 - loss: 0.4686 - acc: 0.9865 - 29ms/step
step 130/456 - loss: 9.7841e-04 - acc: 0.9870 - 29ms/step
step 140/456 - loss: 0.0769 - acc: 0.9862 - 29ms/step
step 150/456 - loss: 2.7637e-04 - acc: 0.9871 - 29ms/step
step 160/456 - loss: 0.0110 - acc: 0.9871 - 28ms/step
step 170/456 - loss: 0.0020 - acc: 0.9879 - 28ms/step
step 180/456 - loss: 1.0952e-06 - acc: 0.9885 - 29ms/step
step 190/456 - loss: 4.8629e-05 - acc: 0.9891 - 29ms/step
step 200/456 - loss: 8.8436e-05 - acc: 0.9894 - 28ms/step
step 210/456 - loss: 0.0012 - acc: 0.9899 - 28ms/step
step 220/456 - loss: 4.9275e-04 - acc: 0.9901 - 28ms/step
step 230/456 - loss: 9.1379e-04 - acc: 0.9902 - 28ms/step
step 240/456 - loss: 0.0050 - acc: 0.9904 - 29ms/step
step 250/456 - loss: 8.7237e-04 - acc: 0.9902 - 28ms/step
step 260/456 - loss: 3.9695e-04 - acc: 0.9901 - 28ms/step
step 270/456 - loss: 1.9371e-06 - acc: 0.9905 - 29ms/step
step 280/456 - loss: 0.0838 - acc: 0.9904 - 29ms/step
step 290/456 - loss: 1.1679e-04 - acc: 0.9905 - 29ms/step
step 300/456 - loss: 0.0021 - acc: 0.9908 - 29ms/step
step 310/456 - loss: 0.0040 - acc: 0.9911 - 29ms/step
step 320/456 - loss: 0.0018 - acc: 0.9914 - 29ms/step
step 330/456 - loss: 0.0016 - acc: 0.9915 - 29ms/step
step 340/456 - loss: 0.0163 - acc: 0.9915 - 29ms/step
step 350/456 - loss: 3.7444e-05 - acc: 0.9914 - 28ms/step
step 360/456 - loss: 0.0035 - acc: 0.9915 - 28ms/step
step 370/456 - loss: 1.8453e-04 - acc: 0.9916 - 28ms/step
step 380/456 - loss: 4.8600e-05 - acc: 0.9916 - 28ms/step
step 390/456 - loss: 0.1975 - acc: 0.9917 - 28ms/step
step 400/456 - loss: 0.0232 - acc: 0.9919 - 28ms/step
step 410/456 - loss: 0.0018 - acc: 0.9918 - 28ms/step
step 420/456 - loss: 0.0107 - acc: 0.9917 - 28ms/step
step 430/456 - loss: 1.2011e-04 - acc: 0.9917 - 28ms/step
step 440/456 - loss: 8.9453e-05 - acc: 0.9916 - 28ms/step
step 450/456 - loss: 2.1294e-04 - acc: 0.9918 - 28ms/step
step 456/456 - loss: 0.2423 - acc: 0.9919 - 28ms/step
Eval begin...
step 10/98 - loss: 1.4650 - acc: 1.0000 - 11ms/step
step 20/98 - loss: 1.4623 - acc: 1.0000 - 10ms/step
step 30/98 - loss: 1.4618 - acc: 1.0000 - 10ms/step
step 40/98 - loss: 1.4631 - acc: 0.9984 - 10ms/step
step 50/98 - loss: 1.4914 - acc: 0.9950 - 10ms/step
step 60/98 - loss: 1.4731 - acc: 0.9958 - 10ms/step
step 70/98 - loss: 1.4625 - acc: 0.9964 - 10ms/step
step 80/98 - loss: 1.4720 - acc: 0.9945 - 10ms/step
step 90/98 - loss: 1.4620 - acc: 0.9951 - 10ms/step
step 98/98 - loss: 1.4649 - acc: 0.9955 - 10ms/step
Eval samples: 1561
Epoch 72/100
step  10/456 - loss: 2.3483e-04 - acc: 1.0000 - 27ms/step
step  20/456 - loss: 0.0209 - acc: 0.9875 - 26ms/step
step  30/456 - loss: 1.5668e-05 - acc: 0.9792 - 26ms/step
step  40/456 - loss: 2.7577e-04 - acc: 0.9828 - 26ms/step
step  50/456 - loss: 0.0111 - acc: 0.9800 - 25ms/step
step  60/456 - loss: 0.1166 - acc: 0.9771 - 25ms/step
step  70/456 - loss: 0.1249 - acc: 0.9777 - 25ms/step
step  80/456 - loss: 0.0057 - acc: 0.9789 - 26ms/step
step  90/456 - loss: 3.7883e-05 - acc: 0.9806 - 26ms/step
step 100/456 - loss: 1.6837 - acc: 0.9812 - 26ms/step
step 110/456 - loss: 1.8850e-06 - acc: 0.9830 - 26ms/step
step 120/456 - loss: 8.2624e-06 - acc: 0.9812 - 26ms/step
step 130/456 - loss: 0.0162 - acc: 0.9779 - 27ms/step
step 140/456 - loss: 0.4030 - acc: 0.9763 - 28ms/step
step 150/456 - loss: 0.0743 - acc: 0.9767 - 28ms/step
step 160/456 - loss: 0.1429 - acc: 0.9762 - 28ms/step

Open-source project: https://aistudio.baidu.com/aistudio/projectdetail/4123335?contributionType=1
