PGL Graph Learning in Practice: UniMP for Paper Node Classification and a COVID-19 Vaccine Project [Series 9]

1. Graph Learning: Technology and Applications

Graphs are a universal language for a complex world: connections between people in social networks, protein molecules, links between users and items in recommender systems — all of these can be expressed as graphs. Graph neural networks apply neural networks to graph structures and can be described in terms of a message-passing paradigm. Baidu developed PGL 2.2 on top of the PaddlePaddle deep learning framework, exposing programming interfaces for building graph networks. Baidu has also applied state-of-the-art graph neural network techniques to land models and algorithms in real applications. This article introduces Baidu's PGL graph learning technology and its applications.

1.1 Where Graphs Come From and How to Model Them

Let us first look at the mainstream ways of modeling graph neural networks.

Around 2014, techniques based on spectral graph decomposition began to appear in academia: transform the graph into the frequency domain, process it there, then transform the result back to the spatial domain to obtain node representations. Later, spatial convolution, which borrows from 2D image convolution, gradually replaced the spectral methods. Convolution on a graph structure is an aggregation over a node's neighbors.

Spatial graph neural networks mainly need to answer two questions:

  • how to represent node features;

  • how to represent an entire graph.

The first question is solved by neighbor aggregation, the second by node aggregation.

Most mainstream graph neural networks today can be described in message-passing form: a node decides how to send its message toward target nodes, and each target node decides how to receive and combine the node features it is sent.
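The send/receive pattern can be sketched in a few lines of plain numpy (a hypothetical minimal illustration, not the PGL API): every edge carries the source node's feature to its destination, and every node sums the messages arriving on its in-edges.

```python
import numpy as np

# A tiny graph: 3 nodes, directed edges written as [src, dst]
edges = np.array([[0, 1], [1, 2], [0, 2]])
feat = np.array([[1.0], [2.0], [3.0]])  # one feature per node

# Send: each edge emits the source node's feature as a message
messages = feat[edges[:, 0]]

# Receive: each node sums the messages arriving on its in-edges
out = np.zeros_like(feat)
np.add.at(out, edges[:, 1], messages)

print(out)  # node 1 receives feat[0]; node 2 receives feat[1] + feat[0]
```

Real frameworks replace the sum with learnable, possibly attention-weighted reducers, but the send/receive skeleton is the same.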

1.2 A Review of PGL 2.2

PGL 2.2 builds its overall framework around this message-passing idea. At the bottom sits the core PaddlePaddle deep learning framework. On top of it, PGL provides a CPU graph engine and a tensorized GPU graph engine that handle graph partitioning, storage, sampling, and random-walk algorithms. Above that, PGL exposes programming interfaces: low-level message-passing and graph-network interfaces, plus high-level interfaces for homogeneous and heterogeneous graphs. The top of the stack supports several families of graph models — walk-based models from traditional graph representation learning, message-passing models, knowledge-embedding models, and more — to serve downstream applications.

The original PGL was developed on Paddle 1.x and therefore used a TensorFlow-style static graph mode. Paddle 2.0 has since moved fully to dynamic graphs, and PGL has been upgraded accordingly. Defining a graph neural network now only requires specifying the number of nodes, the number of edges, and the node features, then tensorizing the graph; users can customize how messages are sent and how target nodes receive them.

As an example, consider building a GAT network with PGL. First the node weights are computed; when sending messages, GAT sums the source and target node features and applies a nonlinear activation. On the receiving side, reduce_softmax normalizes the weights on the edges, which then multiply the hidden states in a weighted sum. In this way a GAT network can be implemented very conveniently.
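The per-destination softmax that reduce_softmax performs can be sketched in plain numpy (a hypothetical illustration, not the PGL API): edge logits are exponentiated, summed per destination node, and each edge weight is divided by its destination's sum before the weighted aggregation.

```python
import numpy as np

def edge_softmax(edges, logits, num_nodes):
    """Normalize edge logits with a softmax grouped by destination node."""
    exp = np.exp(logits - logits.max())      # numerically stabilized exp
    denom = np.zeros(num_nodes)
    np.add.at(denom, edges[:, 1], exp)       # per-destination sum
    return exp / denom[edges[:, 1]]

edges = np.array([[0, 2], [1, 2], [2, 2]])   # three edges into node 2
logits = np.array([1.0, 1.0, 1.0])
alpha = edge_softmax(edges, logits, num_nodes=3)
print(alpha)  # equal logits -> equal attention weights of 1/3

# Attention-weighted aggregation of source hidden states
h = np.array([[3.0], [6.0], [9.0]])
out = np.zeros_like(h)
np.add.at(out, edges[:, 1], alpha[:, None] * h[edges[:, 0]])
print(out[2])  # mean of the three source states -> [6.0]
```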

Once a graph neural network has been built, it must be trained. Training differs from ordinary machine learning: the appropriate scheme depends on the scale of the graph.

For small graphs — graphs that fit within GPU memory — full-batch training is used: all nodes of the graph are placed on the GPU, and one graph network produces the features of every node. The advantage is that very deep graph networks can be run. This scheme is used for small and medium datasets such as Cora, Pubmed, Citeseer, and ogbn-arxiv. A recent ICML paper even stacked graph neural networks up to 1000 layers, also evaluated on such datasets.

For medium graphs — larger than the memory of a single GPU — training can be sharded, feeding one subgraph into the GPU at a time. PGL offers another option: sharding the computation itself to lower the peak memory use. For a complex graph, the cost is dominated by the peak memory used during edge computation; with multiple GPUs, the edge computation can be split into blocks, each device handling only a small part. This greatly reduces the computation peak of the graph neural network and allows deeper models to be trained. After the sharded blocks are computed, node features are synchronized via NCCL.

In PGL, a single DistGPUGraph call adds this capability to existing full-batch training code, so a deep graph network can run across multiple GPUs. For example, on ogbn-arxiv with a fairly complex TransformerConv network, training a three-layer model on one card occupies close to 30 GB of GPU memory, whereas sharded training lowers the memory peak considerably. It also brings parallel speedup: a run of 100 epochs that used to take ten minutes now takes about 200 seconds.
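The memory-saving idea behind this sharding can be illustrated with a single-process numpy sketch (a hypothetical toy, not DistGPUGraph itself): process the edge set chunk by chunk so that only one chunk of messages is live at a time, then accumulate — the result matches the all-at-once computation. The NCCL synchronization step is not modeled here.

```python
import numpy as np

edges = np.array([[i, (i + 1) % 6] for i in range(6)])  # a 6-node ring
feat = np.arange(6, dtype=np.float64)[:, None]

def aggregate_chunked(edges, feat, num_chunks):
    """Process edges in chunks; peak memory ~ chunk size, not total edges."""
    out = np.zeros_like(feat)
    for chunk in np.array_split(edges, num_chunks):
        msgs = feat[chunk[:, 0]]          # only this chunk's messages are live
        np.add.at(out, chunk[:, 1], msgs)
    return out

# The chunked result matches the all-at-once computation
full = np.zeros_like(feat)
np.add.at(full, edges[:, 1], feat[edges[:, 0]])
assert np.allclose(aggregate_chunked(edges, feat, 3), full)
```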

For large graphs, training returns to the familiar data-parallel mini-batch mode. Compared with full batch, the main issue with mini-batch training is that it requires neighbor sampling, and the growth in the number of neighbors limits the depth of the model. This mode suits huge datasets such as ogbn-products and ogbn-papers100m.
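Neighbor sampling for a mini-batch can be sketched as follows (a hypothetical minimal example using only the standard library; real engines sample multiple hops and run distributed): each seed node keeps at most `fanout` uniformly sampled neighbors.

```python
import random

# Adjacency lists of a tiny star graph: node 0 is connected to 1..4
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}

def sample_neighbors(adj, seeds, fanout, rng):
    """Uniformly sample at most `fanout` neighbors for each seed node."""
    sampled = {}
    for node in seeds:
        nbrs = adj[node]
        k = min(fanout, len(nbrs))
        sampled[node] = rng.sample(nbrs, k)
    return sampled

rng = random.Random(0)
batch = sample_neighbors(adj, seeds=[0, 1], fanout=2, rng=rng)
print(batch)  # node 0 keeps 2 of its 4 neighbors; node 1 keeps its only one
```

Stacking L such sampling rounds produces the L-hop computation subgraph, which is why neighbor count constrains model depth.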

GNNAutoScale, recent work by the authors of PyG, can automatically scale a graph neural network in depth. Its main idea is CPU caching: neighbor node features are cached in CPU memory, and during training the network does not fetch the latest representation of every neighbor but instead aggregates their historical embeddings. Experiments show this works quite well.
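The historical-embedding trick can be sketched with a tiny cache class (a hypothetical illustration in the spirit of GNNAutoScale, not its actual API): in-batch nodes push fresh embeddings, while out-of-batch neighbors are read from the possibly stale store.

```python
import numpy as np

class HistoricalCache:
    """Keep a (possibly stale) copy of every node's embedding in host memory."""
    def __init__(self, num_nodes, dim):
        self.store = np.zeros((num_nodes, dim))

    def push(self, node_ids, emb):
        # Refresh entries for nodes computed in the current batch
        self.store[node_ids] = emb

    def pull(self, node_ids):
        # Out-of-batch neighbors read whatever was stored last time
        return self.store[node_ids]

cache = HistoricalCache(num_nodes=4, dim=2)
cache.push([0, 1], np.ones((2, 2)))   # batch 1 writes fresh embeddings
neigh = cache.pull([1, 2])            # node 1 is fresh; node 2 is still stale
print(neigh)
```

The aggregation then mixes fresh in-batch states with these cached histories, decoupling depth from the neighbor explosion at the cost of some staleness.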

Industrial settings can involve graphs so large that even a single CPU machine cannot hold the data, which calls for distributed multi-machine storage and sampling. PGL ships a set of distributed graph-engine interfaces, so a distributed graph engine can be deployed with one command through the PGL launch interface on MPI or K8S clusters. It currently supports various kinds of neighbor sampling, node traversal, and graph-walk algorithms.

The overall large-scale training stack consists of a large distributed graph engine, with graph-sampling operators and neural-network operators in the middle. At the top, industrial-scale scenarios usually need a parameter server to store sparse features at the scale of hundreds of millions; PGL relies on PaddleFleet's large-scale parameter server to support storage of extremely large embeddings.

1.3 Graph Neural Network Techniques

1.3.1 Node Classification

We have also done some algorithmic research. Graph neural networks differ markedly from ordinary machine learning settings: ordinary machine learning assumes data are independent and identically distributed, whereas in graphs the samples are correlated, and test samples sometimes share edges with training samples. Such tasks are usually called semi-supervised node classification.

The traditional approach to node classification is LPA, the label propagation algorithm, which considers the link structure together with the relations among labels. The other family, represented by GCN, propagates features and considers only the relation between features and links.
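Classic label propagation can be written in a few lines of numpy (a minimal sketch of the standard algorithm, not a PGL routine): repeatedly average labels over neighbors while clamping the known labels.

```python
import numpy as np

# Row-normalized adjacency of a 4-node chain 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A /= A.sum(axis=1, keepdims=True)

# Nodes 0 and 3 are labeled (classes 0 and 1); nodes 1 and 2 are unknown
labels = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)
is_labeled = np.array([True, False, False, True])

Y = labels.copy()
for _ in range(50):
    Y = A @ Y                              # propagate labels along edges
    Y[is_labeled] = labels[is_labeled]     # clamp the known labels

pred = Y.argmax(axis=1)
print(pred)  # nodes near 0 vote class 0, nodes near 3 vote class 1 -> [0 0 1 1]
```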

Experiments show that on many datasets the training set is hard to overfit even to 99% classification accuracy — that is, the training features carry substantial noise, leaving the network without the capacity to overfit. This motivated us to feed the training labels into the model explicitly, since labels can resolve much of the ambiguity. To avoid label leakage during training, we proposed the UniMP algorithm, which unifies label propagation and feature propagation. This method achieved SOTA results on three Open Graph Benchmark datasets.
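The anti-leakage trick of feeding labels as input can be sketched as masked label training (a hypothetical simplification of the UniMP idea, not its exact implementation): each step, hide a random part of the training labels, feed the remaining labels as one-hot input features, and supervise only on the hidden part.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, num_classes = 10, 3
labels = rng.integers(0, num_classes, size=num_nodes)
train_idx = np.arange(8)                  # the first 8 nodes carry training labels

# Each step, mask half of the training labels; the rest become input features
masked = rng.choice(train_idx, size=4, replace=False)
visible = np.setdiff1d(train_idx, masked)

label_feat = np.zeros((num_nodes, num_classes))
label_feat[visible, labels[visible]] = 1.0   # only visible labels are fed in

# Supervision comes only from the masked nodes, so their labels never leak
print("supervised on:", masked)
```

The model thus learns to reconstruct a node's label from its neighbors' visible labels plus features, which is exactly what inference requires.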

We later applied UniMP to the larger-scale KDD Cup 21 competition, extending the homogeneous UniMP algorithm to heterogeneous graphs so it could perform classification in that setting. Concretely, the relation types between nodes are taken into account in neighbor sampling, batch normalization, and the attention mechanism.

1.3.2 Link Prediction

The second classic task is link prediction. Many works try to combine GNNs with link prediction, but two bottlenecks remain. First, GNN depth is tied to the number of sampled neighbors. Second, for tasks like knowledge graphs, every epoch iterates over the training triples, and the cost scales linearly with the number of neighbors — with many neighbors, a single epoch takes a very long time.
Drawing on recent pure feature-propagation methods, such as SGC-style simplifications of graph neural networks, we proposed relation-based embedding propagation. We found that propagating embeddings alone does not work on knowledge graphs, because of their complex edge relations; so we designed different score functions for feature propagation under different relations. We also noted an earlier paper proposing the OTE algorithm, which trains the graph neural network in two stages.

Training an OTE model on the OGBL-WikiKG2 dataset takes over 100 hours, whereas switching to our feature-propagation scheme — run OTE once, then apply REP feature propagation — brings the model to convergence in only 1.7 hours. REP therefore delivers a nearly 50x gain in training efficiency. We also found that, with a properly chosen score function, most knowledge-graph algorithms gain accuracy from this feature propagation, and REP accelerates the convergence of different algorithms as well.

We applied this method in the KDD Cup 21 Wiki90M competition. To handle the extremely large knowledge graph the contest required, we built Graph4KG, a large-scale knowledge-representation toolkit, and ultimately won the KDD Cup.

1.4 Landing the Algorithms in Applications

PGL is already widely used inside Baidu. In Baidu Search it powers web-page quality assessment, where pages form a dynamic graph and a graph-classification task runs on it; Baidu Search also uses PGL for anti-spam, detecting abnormal nodes at large scale. In text retrieval, we have tried combining graph neural networks with language models from NLP. Other deployment scenarios include recommender systems, risk control, traffic prediction in Baidu Maps, and POI retrieval.

Taking recommender systems as an example, here is how we usually land graph neural networks in an application.

Recommenders commonly rely on item-based and user-based collaborative filtering: item-based CF recommends content similar to a given item, while user-based CF recommends what similar users liked. The crux is how to measure the similarity between items and between users.

This is where graph learning comes in: click logs are used to build graph relations (social ties, user behavior, item associations), and representation learning then constructs a joint user-item vector space. In that space we can measure item-item and user-user similarity and use it for recommendation.
Common methods include classical matrix factorization and Alibaba's EGES algorithm based on random walks plus Word2Vec. In recent years, graph contrastive learning has also become popular for obtaining node representations.

In recommendation, the main requirements are support for complex structures, large-scale implementation, and low-cost, fast experimentation. We wanted one toolkit to cover GNN + representation learning, so we abstracted existing graph representation learning algorithms into four parts. First, the graph type: homogeneous, heterogeneous, and bipartite graphs, with multiple relations defined on the graph, such as clicks and follows. Second, different sampling methods, including node2vec, common for homogeneous graphs, and user-defined meta-path sampling for heterogeneous graphs. Third, node representation: a node can be represented by its id, or by a subgraph obtained via graph sampling. Fourth, we built four kinds of GNN aggregation.

We found that model quality varies a lot across scenarios and across training schemes for graph representations. Our toolkit therefore supports large-scale sparse side-info features for richer feature combinations. Users may have many different fields, some of which can be missing; a single configuration table specifies which features and fields each node carries. The toolkit also supports automatic heterogeneous-graph extension of a GNN: define the edge relations — click, purchase, follow, and so on — pick a suitable aggregator such as LightGCN, and the GNN is automatically extended to the heterogeneous setting, turning LightGCN into a relation-wise LightGCN.

Profiling the toolkit showed that the bottlenecks lie mainly in graph sampling and negative-sample construction during distributed training. Using the In-Batch Negative method — drawing negatives from within the batch — reduces communication overhead and speeds training up four to five times with almost no loss in quality. In graph sampling, restructuring the samples reduces the number of sampling calls and yields roughly a 2x speedup, again with quality essentially unchanged. Compared with existing distributed graph-representation tools on the market, ours also scales from one machine to two, four, or more.
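The In-Batch Negative idea can be sketched in numpy (a hypothetical toy with random embeddings, not the toolkit's code): score every user against every item in the batch, treat the diagonal pairs as positives, and let all other in-batch items serve as negatives — no extra negative sampling or network traffic is needed.

```python
import numpy as np

def in_batch_logits(user_emb, item_emb):
    """Each user's own item is the positive; every other item in the
    batch serves as a negative, so no extra sampling traffic is needed."""
    return user_emb @ item_emb.T          # [batch, batch] score matrix

rng = np.random.default_rng(0)
user = rng.normal(size=(4, 8))
item = rng.normal(size=(4, 8))

logits = in_batch_logits(user, item)

# Softmax cross-entropy where the diagonal entries are the positives
logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.diag(logp).mean()
print(round(loss, 4))
```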

We also found that walk-based models train quickly, which makes them good warm starts: run metapath2vec once, then feed the learned embeddings into the GNN as its initial node representations. This gives a measurable quality gain.

1.5 Q&A

Q1: In the training mode where features are passed between GPUs, what fraction of time does push/pull communication take?

A: A large fraction. For a very simple model such as GCN, the communication time can even exceed the model's own training time. So the method suits complex models, where computation dominates and the feature traffic is comparatively small — that is where this kind of distributed computation pays off.

Q2: In graph learning, does a large number of neighbors cause feature over-smoothing?

A: Our approach here is often quite blunt: use attention plus multiple heads, which greatly alleviates over-smoothing. The attention mechanism lets softmax activate only a few features, and the multiple heads let each head learn different activated features. Doing this is always better than plain GCN aggregation.

Q3: Does Baidu have success stories applying graph learning in NLP?

A: Yes — see the ERNIESage paper, which combines graph networks with pretrained language models. We have also deployed graph neural networks in scenarios such as search and recommendation. A language model by itself struggles to model the click relations contained in user logs; a graph neural network can inject those posterior click-log relations into the language model, yielding a sizeable improvement.

Q4: Can you detail the UniMP extension from homogeneous to heterogeneous graphs used in the KDD competition?

A: First, each relation type should have its own neighbor sampling: for a paper-to-author relation, neighbors are sampled specifically along that relation. Sampling as if the graph were homogeneous would leave a target node's neighborhood unevenly mixed with papers, authors, and institutions. Second, batch normalization is done per relation channel: normalizing paper nodes and author nodes together would bias both statistics, since their means and variances differ. Likewise, in aggregation, different relations affect a pair of nodes differently, so features are aggregated with relation-specific attention weights.

2. Node Classification on a Paper-Citation Network with UniMP

Graph learning: citation-network node classification with PGL-UniMP: https://aistudio.baidu.com/aistudio/projectdetail/5116458?contributionType=1

Because of this article's length, and to give learners a better experience, that task is completed in a separate project.

Epoch 987 Train Acc 0.7554459 Valid Acc 0.7546095
Epoch 988 Train Acc 0.7537374 Valid Acc 0.75717235
Epoch 989 Train Acc 0.75497127 Valid Acc 0.7573859
Epoch 990 Train Acc 0.7611409 Valid Acc 0.75653166
Epoch 991 Train Acc 0.75316787 Valid Acc 0.75489426
Epoch 992 Train Acc 0.749561 Valid Acc 0.7547519
Epoch 993 Train Acc 0.7571544 Valid Acc 0.7551079
Epoch 994 Train Acc 0.7516492 Valid Acc 0.75581974
Epoch 995 Train Acc 0.7563476 Valid Acc 0.7563181
Epoch 996 Train Acc 0.7504627 Valid Acc 0.7538976
Epoch 997 Train Acc 0.7476152 Valid Acc 0.75439596
Epoch 998 Train Acc 0.7539272 Valid Acc 0.7528298
Epoch 999 Train Acc 0.7532153 Valid Acc 0.75396883

3. COVID-19 Vaccine Project in Practice

Kaggle COVID-19 vaccine competition: https://www.kaggle.com/c/stanford-covid-vaccine/overview

mRNA vaccines became the fastest vaccine candidates against COVID-19, but they face key potential limitations. One of the biggest current challenges is how to design ultra-stable RNA molecules (mRNA). Conventional vaccines are shipped refrigerated in syringes around the world, but this is not yet possible for mRNA vaccines.

Researchers have observed that RNA molecules tend to degrade — a serious limitation, since degradation renders an mRNA vaccine useless. Little is currently known about which sites in the backbone of a given RNA are most vulnerable. Without that knowledge, current COVID-19 mRNA vaccines must be prepared and shipped under intense refrigeration; unless they can be stabilized, they are unlikely to reach everyone on Earth.

The Eterna community, led by Professor Rhiju Das, a computational biologist at Stanford's School of Medicine, brings scientists and competitive gamers together to solve puzzles and invent medicine. Eterna is an online competition platform that challenges players to solve scientific problems such as mRNA design; promising designs are synthesized and experimentally tested by Stanford researchers to gain new insights about RNA molecules. The Eterna community has previously unlocked new scientific principles, produced new diagnostics for deadly diseases, and drawn on some of the world's strongest intellectual resources to improve public life, advancing biotechnology — including RNA biotechnology — through contributions to more than 20 publications.

In this competition, we hope to leverage the Kaggle community's data-science expertise to develop models and design rules for RNA degradation. The model will predict the likely degradation rate at each base of an RNA molecule, trained on a subset of the Eterna dataset comprising over 3000 RNA molecules (spanning a diverse set of sequences and structures) together with their degradation rates at each position. We will then score the models on a second generation of RNA sequences that Eterna players have just designed for COVID-19 mRNA vaccines. These final test sequences are being synthesized and experimentally characterized at Stanford in parallel with the modeling effort — nature will score the models!

Improving mRNA vaccine stability is already being explored, and we must solve this deep scientific challenge to accelerate mRNA vaccine research and deliver a refrigerator-stable vaccine against SARS-CoV-2, the virus behind COVID-19. The problem we are trying to solve welcomes help from academic labs, industrial R&D teams, and supercomputers; you can join the team of gamers, scientists, and developers on Eterna to fight this devastating virus.

3.1 Case Overview

Encoded DNA is delivered into cells, and the cells use mRNA (messenger RNA) to assemble the protein. Once the immune system detects the assembled protein, it uses the gene encoding the viral protein to activate itself and produce antibodies, strengthening resistance against the coronavirus.

Different mRNA sequences can produce the same protein.

mRNA degrades over time and with changes in temperature.

How do we find a more stable mRNA structure? We use a graph neural network to find more stable mRNA; in the visualizations, darker color means more stable.

3.2 COVID-19 Vaccine Project: Advanced Practice

Data distribution

View the currently mounted dataset directory:

# View the mounted dataset directory; changes under it are reverted when the environment restarts
# Here we can see the dataset is named data179441
!ls /home/aistudio/data
data179441
# Inspect the train.json dataset; the data format looks like:
# {"index":401,"id":"id_2a983d026","sequence":"GGAAAAAGGCUCAAAAACUGUACGAAGGUACAGAAAAACCAUAGCGAAAGCUAUGGAAAAAGAGCCAACUACUGGUUCGCCAGUAGAAAAGAAACAACAACAACAAC","structure":".......(((((.....((((((....)))))).....(((((((....))))))).....)))))..(((((((....))))))).....................","predicted_loop_type":"EEEEEEESSSSSMMMMMSSSSSSHHHHSSSSSSMMMMMSSSSSSSHHHHSSSSSSSMMMMMSSSSSXXSSSSSSSHHHHSSSSSSSEEEEEEEEEEEEEEEEEEEEE","signal_to_noise":8.157,"SN_filter":1.0,"seq_length":107,"seq_scored":68,"reactivity_error":[0.1423,0.2177,0.139,0.0994,0.1153,0.0995,0.0582,0.0237,0.0226,0.0263,0.0235,0.0692,0.1025,0.0635,0.0713,0.0749,0.0542,0.0218,0.0075,0.0208,0.0213,0.018,0.024,0.0736,0.0713,0.0391,0.0696,0.0423,0.0273,0.0198,0.0203,0.0093,0.0508,0.0871,0.0622,0.0625,0.0623,0.0473,0.0159,0.0217,0.0155,0.0119,0.0145,0.0128,0.0133,0.0319,0.0558,0.0359,0.0346,0.0085,0.0096,0.0161,0.0129,0.0113,0.0137,0.0434,0.0588,0.0595,0.0624,0.0525,0.0378,0.0177,0.0141,0.016,0.0094,0.0228,0.0578,0.0383],"deg_error_Mg_pH10":[0.1878,0.3274,0.1631,0.0812,0.1629,0.1502,0.1275,0.0633,0.0685,0.0775,0.0695,0.1889,0.1619,0.1326,0.0742,0.0613,0.0547,0.0485,0.0289,0.0515,0.0497,0.0304,0.0233,0.0528,0.0466,0.03,0.0513,0.0561,0.0261,0.0387,0.0316,0.0289,0.112,0.1137,0.0846,0.0631,0.0433,0.0464,0.0265,0.0315,0.0346,0.0218,0.0254,0.0223,0.0176,0.0327,0.0335,0.0297,0.0262,0.03,0.0331,0.0201,0.0329,0.0186,0.0232,0.073,0.0625,0.0585,0.0593,0.0471,0.0453,0.0317,0.0195,0.0337,0.0311,0.0333,0.036,0.0562],"deg_error_pH10":[0.232,0.3104,0.1631,0.0778,0.1532,0.1399,0.1284,0.0564,0.0634,0.099,0.0594,0.1322,0.1365,0.136,0.1118,0.1049,0.0986,0.0473,0.0267,0.0433,0.0478,0.0266,0.0375,0.0597,0.0657,0.0551,0.0952,0.0624,0.0561,0.0417,0.0404,0.0317,0.1204,0.1383,0.1066,0.1015,0.0807,0.0884,0.0359,0.0497,0.0424,0.033,0.0313,0.0364,0.021,0.0476,0.0495,0.037,0.047,0.0428,0.0448,0.0425,0.0335,0.0269,0.0401,0.1032,0.0864,0.0977,0.0974,0.0821,0.0959,0.0556,0.033,0.0517,0.0453,0.0626,0.0841,0.1283],"deg_error_Mg_50C":[0.1342,0.2586,0.1547,0.072
4,0.1516,0.1302,0.0857,0.0411,0.0349,0.0471,0.0448,0.13,0.1312,0.1216,0.0827,0.0713,0.0502,0.0332,0.0184,0.0269,0.0275,0.0183,0.0237,0.045,0.0522,0.0391,0.0611,0.0413,0.0269,0.021,0.0308,0.0218,0.1118,0.1188,0.0898,0.0648,0.0521,0.0458,0.0247,0.0272,0.0238,0.0166,0.0178,0.019,0.0136,0.0278,0.0366,0.0291,0.0282,0.0167,0.0221,0.0135,0.0189,0.0067,0.0156,0.0818,0.0718,0.0752,0.0815,0.0573,0.0617,0.0326,0.024,0.0299,0.0305,0.0389,0.0441,0.054],"deg_error_50C":[0.1858,0.2902,0.1741,0.0976,0.1655,0.1298,0.1092,0.0595,0.0464,0.0776,0.0601,0.1411,0.1319,0.1292,0.1263,0.1279,0.0998,0.0498,0.0386,0.0481,0.0635,0.0383,0.0499,0.0737,0.0802,0.0752,0.1019,0.0777,0.0529,0.0381,0.055,0.0631,0.1288,0.138,0.0833,0.1019,0.0992,0.081,0.0284,0.045,0.0326,0.0341,0.0316,0.0371,0.0257,0.0677,0.0606,0.0618,0.0519,0.0423,0.033,0.0504,0.0463,0.021,0.0474,0.107,0.0997,0.099,0.0964,0.0838,0.0769,0.0439,0.0315,0.0475,0.0379,0.0719,0.0805,0.099],"reactivity":[1.123,3.8721,1.713,0.8734,1.3266,0.9945,0.2319,0.0312,0.0196,0.0122,0.0234,0.3576,1.1503,0.31,0.5168,0.6628,0.3396,-0.0029,0.0,0.0221,0.0184,0.0193,0.0381,0.6968,0.676,0.1654,0.6669,0.2018,0.0571,0.0247,0.0079,-0.0037,0.228,1.1223,0.5402,0.6254,0.6763,0.3724,0.0151,0.0073,0.0068,0.0094,0.0187,-0.0026,0.014,0.1307,0.5515,0.219,0.1912,-0.0036,0.0044,0.0185,0.0052,0.0088,0.0177,0.2602,0.5248,0.7127,0.7374,0.548,0.2271,0.0377,0.0152,0.0374,-0.0072,0.0419,0.7803,0.2772],"deg_Mg_pH10":[0.712,4.2396,0.9996,0.1747,1.1575,1.0471,0.7494,0.1471,0.1808,0.242,0.1974,2.4576,2.1187,1.5409,0.4156,0.2857,0.2373,0.1608,0.0473,0.2133,0.1952,0.0582,0.0223,0.2264,0.167,0.051,0.2164,0.2843,0.0308,0.1201,0.0582,0.0544,1.5511,1.8827,1.0921,0.617,0.2783,0.3367,0.0867,0.1096,0.1697,0.0581,0.0866,0.0523,0.0335,0.1483,0.1606,0.1344,0.0941,0.1342,0.1802,0.0468,0.1748,0.0442,0.0792,1.112,0.7844,0.8065,0.8327,0.5404,0.5097,0.2545,0.0775,0.3106,0.2497,0.278,0.362,1.045],"deg_pH10":[2.3831,5.385,1.4281,0.1975,1.3957,1.1176,0.8533,0.1538,0.1879,0.5337,0.1605,0.8779,1.0647,1.
0386,0.7012,0.714,0.7048,0.0258,0.0211,0.0761,0.0889,0.021,0.0444,0.1558,0.217,0.1494,0.6119,0.1911,0.1477,0.0726,0.0185,0.0222,0.9114,1.3969,0.7886,0.8342,0.5404,0.6823,0.054,0.0596,0.081,0.0615,0.0526,0.0412,0.0131,0.1074,0.1244,0.0869,0.1393,0.1044,0.1463,0.1038,0.0436,0.0343,0.1111,0.8634,0.3376,0.8998,0.7099,0.5414,0.8286,0.267,0.0532,0.2501,0.1356,0.276,0.7102,1.9362],"deg_Mg_50C":[0.6751,4.3933,1.6426,0.2694,1.8023,1.3557,0.4751,0.0981,0.0543,0.1044,0.1166,1.4446,1.6431,1.4634,0.6241,0.4909,0.2331,0.0405,0.0148,0.0399,0.0355,0.0147,0.0251,0.1578,0.2427,0.1206,0.3671,0.1302,0.0291,0.0167,0.0456,0.0201,1.5332,1.951,1.1253,0.5821,0.3826,0.2864,0.0548,0.0236,0.0396,0.0214,0.0267,0.0128,0.0107,0.0616,0.1508,0.1051,0.0847,0.0114,0.0522,0.0009,0.0218,0.0,0.0182,1.0762,0.7187,1.035,1.2014,0.5732,0.7027,0.1796,0.0792,0.1617,0.1442,0.235,0.3647,0.5791],"deg_50C":[1.0915,3.7795,1.3767,0.3335,1.3792,0.7563,0.4283,0.1329,0.052,0.2067,0.1244,0.8698,0.7834,0.7217,0.7968,0.9452,0.5773,0.0347,0.0506,0.0815,0.1747,0.0502,0.0933,0.2459,0.3174,0.2917,0.6025,0.3061,0.0982,0.0406,0.1026,0.2047,0.9596,1.2014,0.3152,0.7039,0.7273,0.4594,0.0139,0.025,0.0175,0.0516,0.0412,0.0379,0.0206,0.2756,0.2029,0.2619,0.1538,0.0833,0.05,0.1426,0.1124,0.0099,0.1384,0.8286,0.561,0.8028,0.6205,0.5066,0.3916,0.1169,0.0374,0.1668,0.0611,0.3675,0.5446,0.8819]}
# ........
# Here we only need to install the PGL graph learning framework
!pip install pgl==1.2.1   # install PGL
# The main code files are in the ./src directory
%cd ./src
/home/aistudio/src
# Load the modules we need and fix the random seeds
import json
import random
import numpy as np
import pandas as pd

import matplotlib.pyplot as plt
import networkx as nx

from utils.config import prepare_config, make_dir
from utils.logger import prepare_logger, log_to_file
from data_parser import GraphParser

seed = 123
np.random.seed(seed)
random.seed(seed)
# https://www.kaggle.com/c/stanford-covid-vaccine/data
# Load the training data
df = pd.read_json('../data/data179441/train.json', lines=True)
# Take a look at the dataset's contents
sample = df.loc[0]
print(sample)
index                                                                400
id                                                          id_2a7a4496f
sequence               GGAAAGCCCGCGGCGCCGGGCGCCGCGGCCGCCCAGGCCGCCCGGC...
structure              .....(((...)))((((((((((((((((((((.((((....)))...
predicted_loop_type    EEEEESSSHHHSSSSSSSSSSSSSSSSSSSSSSSISSSSHHHHSSS...
signal_to_noise                                                        0
SN_filter                                                              0
seq_length                                                           107
seq_scored                                                            68
reactivity_error       [146151.225, 146151.225, 146151.225, 146151.22...
deg_error_Mg_pH10      [104235.1742, 104235.1742, 104235.1742, 104235...
deg_error_pH10         [222620.9531, 222620.9531, 222620.9531, 222620...
deg_error_Mg_50C       [171525.3217, 171525.3217, 171525.3217, 171525...
deg_error_50C          [191738.0886, 191738.0886, 191738.0886, 191738...
reactivity             [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
deg_Mg_pH10            [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
deg_pH10               [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
deg_Mg_50C             [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
deg_50C                [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
Name: 0, dtype: object

Rows whose values are all zero, such as deg_50C and deg_Mg_50C, are what we need to predict.

In the structure row, the parentheses in the data are used to build edges.

This case predicts the degradation rate at different positions of the RNA sequence. The training data provides several ground-truth values; the labels are: reactivity, deg_Mg_pH10, and deg_Mg_50C.

  • reactivity - (1x68 vector in train, 1x91 in test) An array of floats, the same length as seq_scored, giving reactivity values for the first 68 bases in order; used to determine the likely secondary structure of the RNA sample.

  • deg_Mg_pH10 - (1x68 vector in train, 1x91 in test) An array of floats, the same length as seq_scored, giving degradation-likelihood values for the first 68 bases in order, measured at high pH (pH 10).

  • deg_Mg_50C - (1x68 vector in train, 1x91 in test) An array of floats, the same length as seq_scored, giving degradation-likelihood values for the first 68 bases in order, measured at high temperature (50 degrees Celsius).

# Build graph-structured data with GraphParser
args = prepare_config("./config.yaml", isCreate=False, isSave=False)
parser = GraphParser(args) # the GraphParser class comes from data_parser.py
gdata = parser.parse(sample) # the key method of GraphParser is parse(self, sample)
# Inspect the constructed graph data
gdata

Data format:

{'nfeat': array([[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 1., 0., ..., 0., 0., 0.],
        ...,
        [1., 0., 0., ..., 0., 0., 0.],
        [1., 0., 0., ..., 0., 0., 0.],
        [1., 0., 0., ..., 0., 0., 0.]], dtype=float32),
 'edges': array([[  0,   1],
        [  1,   0],
        [  1,   2],
        ...,
        [142, 105],
        [106, 142],
        [142, 106]]),
 'efeat': array([[ 0.,  0.,  0.,  1.,  1.],
        [ 0.,  0.,  0., -1.,  1.],
        [ 0.,  0.,  0.,  1.,  1.],
        ...,
        [ 0.,  1.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.]], dtype=float32),
 'labels': array([[ 0.    ,  0.    ,  0.    ],
        [ 0.    ,  0.    ,  0.    ],
        ...,
        [ 0.    ,  0.9213,  0.    ],
        [ 6.8894,  3.5097,  5.7754],
        [ 0.    ,  1.8426,  6.0642],
          ...,        
        [ 0.    ,  0.    ,  0.    ],
        [ 0.    ,  0.    ,  0.    ]], dtype=float32),
 'mask': array([[ True],
        [ True],
     ......
       [False]])}
print(gdata['nfeat'].shape)
print(gdata['edges'].shape)
print(gdata['efeat'].shape)
print(gdata['labels'].shape)
print(gdata['mask'].shape)
# nfeat —— node features

# edges —— edges

# efeat —— edge features

# labels —— each node has three label values, so this can be viewed as a multi-target regression task
(143, 14)
(564, 2)
(564, 5)
(143, 3)
(143, 1)
# Visualize the graph data
fig = plt.figure(figsize=(24, 12))
nx_G = nx.Graph()
nx_G.add_nodes_from([i for i in range(len(gdata['nfeat']))])

nx_G.add_edges_from(gdata['edges'])
node_color = ['g' for _ in range(sample['seq_length'])] + \
['y' for _ in range(len(gdata['nfeat']) - sample['seq_length'])]
options = {
    "node_color": node_color,
}
pos = nx.spring_layout(nx_G, iterations=400, k=0.2)
nx.draw(nx_G, pos, **options)

plt.show()

In the resulting plot, the green nodes are bases and the yellow nodes are codons.

Evaluation returns the MCRMSE metric and the loss:

{'MCRMSE': 0.5496759, 'loss': 0.3025484172316889}

[DEBUG] 2022-11-25 17:50:42,468 [  trainer.py:   66]:	{'MCRMSE': 0.5496759, 'loss': 0.3025484172316889}
[DEBUG] 2022-11-25 17:50:42,468 [  trainer.py:   73]:	write to tensorboard ../checkpoints/covid19/eval_history/eval
[DEBUG] 2022-11-25 17:50:42,469 [  trainer.py:   73]:	write to tensorboard ../checkpoints/covid19/eval_history/eval
[INFO] 2022-11-25 17:50:42,469 [  trainer.py:   76]:	[Eval:eval]:MCRMSE:0.5496758818626404	loss:0.3025484172316889
[INFO] 2022-11-25 17:50:42,602 [monitored_executor.py:  606]:	********** Stop Loop ************
[DEBUG] 2022-11-25 17:50:42,607 [monitored_executor.py:  199]:	saving step 12500 to ../checkpoints/covid19/model_12500
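MCRMSE, the competition metric logged above, is the mean column-wise root mean squared error over the scored target columns. A minimal numpy sketch with made-up numbers (not the trainer's code):

```python
import numpy as np

def mcrmse(y_true, y_pred):
    """Mean column-wise RMSE over the scored target columns."""
    col_rmse = np.sqrt(((y_true - y_pred) ** 2).mean(axis=0))
    return col_rmse.mean()

y_true = np.array([[0.0, 1.0, 2.0],
                   [1.0, 0.0, 2.0]])
y_pred = np.array([[0.0, 1.0, 1.0],
                   [1.0, 0.0, 1.0]])
print(mcrmse(y_true, y_pred))  # columns 0,1 exact, column 2 off by 1 -> 1/3
```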

This part of the implementation follows the project: PGL graph learning with a GNN model for the COVID-19 vaccine task [Series 9]

# In layer.py we define a new GNN model (my_gnn) that adds edge features (edge_feat) to the message-passing step
# Then we modify GNNModel in model.py
# Run main.py with the modified model; to save time, set epochs = 100

# !python main.py --config config.yaml # train
# !python main.py --mode infer # predict

4. Summary

This project covered the paper node-classification task and the COVID-19 vaccine task, with a detailed code walkthrough for the node-classification task. The projects in PGL series eight and nine are tightly coupled, and they took quite some time to work through — I hope this helps.

A larger wrap-up is planned next, oriented toward how to land these methods on the business side, along with a summary of graph algorithms; graph-related algorithm posts will follow at irregular intervals!

  • The easydict and collections libraries!
  • From the official data-processing code: how to add self-loop edges with np.vstack, and how to add reverse edges for a directed graph — a neat way to implement edge-data transformation in code!
  • From model loading: how to run multiple programs, and how programs relate to namespaces!
  • From model training: the executor must be fed the correct program and feed_dict; in PGL, the Graph's own to_feed method returns a feed_dict as initial data, to which new data can be added as needed!
  • From model.py: how to assemble the network, and how to call the network models under PGL's conv module for convenient model building!
  • Key point: from build_model.py, how model parameters are loaded and combined, giving unified handling and returning unified operators and parameters!

References:

https://cdn.modb.pro/db/253226

https://aistudio.baidu.com/aistudio/projectdetail/1285193?channelType=0&channel=0

Graph neural networks in practice: the COVID-19 vaccine project!

OGB, a million-scale benchmark dataset for graph learning: Open Graph Benchmark https://zhuanlan.zhihu.com/p/165996331

Baidu graph learning technology and applications: https://cdn.modb.pro/db/253226

UniMP: a unified message passing model for semi-supervised classification: https://zhuanlan.zhihu.com/p/370357388



This article is a repost; original project link
