Implementing TransE Knowledge Representation with PaddlePaddle PGL

1. Introduction to PaddlePaddle PGL

Paddle Graph Learning (PGL) is an efficient and easy-to-use graph learning framework built on PaddlePaddle.

The latest PGL release introduces heterogeneous graph support: MetaPath sampling for heterogeneous graph representation learning, and a heterogeneous-graph message-passing mechanism for algorithms built on message passing. With these new interfaces, state-of-the-art heterogeneous graph learning algorithms can be assembled easily. The release also adds distributed graph storage and several distributed graph learning algorithms, such as distributed DeepWalk and distributed GraphSAGE. Combined with the PaddlePaddle deep learning framework, PGL covers most graph network applications, including graph representation learning and graph neural networks.

Knowledge representation models abound in industry, e.g. TransE and RotatE. Building on its large-scale knowledge representation library PGL-KE, PGL upgrades existing algorithms with the Normalized Orthogonal Transforms Embedding (NOTE) model, which models relations along multiple dimensions while remaining numerically stable at large scale.

PGL has also undergone a major upgrade with the release of a trillion-scale distributed graph engine, built so that graph learning algorithms can be applied to industrial workloads at a much larger scale. Baidu has already used PGL to land dozens of applications in search, feed recommendation, financial risk control, smart maps, knowledge graphs, and other scenarios.
PGL also works with external partners: after surveying many open-source options, NetEase Cloud Music chose PGL, which is friendlier to large-scale graph training, as the graph neural network foundation of its music recommendation. PGL likewise supports the OpenKS knowledge computing engine, a major project under the Science and Technology Innovation 2030 "New Generation Artificial Intelligence" program.

Thanks to the convenience and expressive power of graph neural networks for modeling complex data, PGL also explores combining GNNs with other disciplines, including a big-data epidemic forecasting system and, in cooperation with PaddleHelix, compound property prediction, reaching SOTA on several compound prediction leaderboards.

As a general-purpose AI method, graph learning is poised to become a foundational capability of the intelligent era, empowering every industry and boosting the intelligent economy. The current wave of graph learning is only beginning; deeper technology and larger industrial opportunities lie ahead, so taking root in the field and powering intelligent upgrades across industries needs to start now.

PGL repository:

https://github.com/PaddlePaddle/PGL

Bilibili 7-day course on graph neural networks:

https://www.bilibili.com/video/BV1rf4y1v7cU

PGL introductory tutorial on graph learning:

https://aistudio.baidu.com/aistudio/projectdetail/413386

2. Knowledge Graph Embedding: An Introduction to TransE

TransE optimizes embeddings by computing an "energy" (a distance score) for each given triple. Negative examples are constructed automatically by replacing the head or the tail entity of a true triple. The objective is to separate positive and negative samples by as large a margin as possible: minimizing the energy of positive triples while pushing up (within the margin) the energy of negative triples optimizes the embedding representation.
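Concretely, writing d(h + r, t) for the distance (the L1 norm in this implementation) between h + r and t, the margin loss from the TransE paper is:

    L = \sum_{(h,r,t)\in S} \; \sum_{(h',r,t')\in S'_{(h,r,t)}} \big[\gamma + d(h+r,\,t) - d(h'+r,\,t')\big]_+

where [x]_+ = max(0, x), γ is the margin, S is the set of true triples, and S'_{(h,r,t)} contains the corrupted triples obtained by replacing h or t.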

(1) Algorithm flowchart

The flowchart of the TransE algorithm is shown in the figure below:

(2) Algorithm pseudocode

Line by line, the pseudocode means (the numbers below refer to lines of the pseudocode, not of this article):

input: the training set of triples, the entity set E, the relation set L, the margin, and the embedding dimension k

1: initialization: relation vectors are initialized as written in line 1 (uniformly in [-6/√k, 6/√k])

2: the relation vectors are L2-normalized, i.e. divided by their own L2 norm

3: likewise, the entity vectors are initialized, but they are not divided by their L2 norm here

4: within the training loop:

5: first, the entity vectors are L2-normalized

6: sample a batch; Sbatch denotes the positive samples, i.e. the correct triples

7: initialize the set of triple pairs, essentially creating a list to store them in

8, 9, 10: for each positive triple in Sbatch, replace its head or tail entity to construct a negative triple, then put the positive triple and its negative counterpart together into Tbatch

11: the extraction of positive and negative samples is complete

12: update the vectors by gradient descent

13: end of the loop (a toy sketch of the whole procedure follows)
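To make the pseudocode concrete, here is a toy NumPy sketch of the procedure (an illustration only, not PGL-KE's implementation; it assumes triples is an integer array of shape [N, 3] holding (head, relation, tail) ids, and the hyperparameter defaults are made up):

import numpy as np

def train_transe(triples, n_ent, n_rel, k=50, margin=1.0, lr=0.01,
                 epochs=100, batch=128):
    """Toy TransE trainer following the paper's pseudocode (L1 distance)."""
    bound = 6.0 / np.sqrt(k)
    ent = np.random.uniform(-bound, bound, (n_ent, k))  # line 3: init entities
    rel = np.random.uniform(-bound, bound, (n_rel, k))  # line 1: init relations
    rel /= np.linalg.norm(rel, axis=1, keepdims=True)   # line 2: normalize relations
    for _ in range(epochs):                             # line 4: training loop
        ent /= np.linalg.norm(ent, axis=1, keepdims=True)         # line 5
        sbatch = triples[np.random.choice(len(triples), batch)]   # line 6
        for h, r, t in sbatch:                # lines 7-10: corrupt head or tail
            h2, t2 = h, np.random.randint(n_ent)
            if np.random.rand() < 0.5:
                h2, t2 = np.random.randint(n_ent), t
            pos = ent[h] + rel[r] - ent[t]
            neg = ent[h2] + rel[r] - ent[t2]
            if margin + np.abs(pos).sum() - np.abs(neg).sum() > 0:  # hinge active
                g_pos, g_neg = np.sign(pos), np.sign(neg)  # line 12: subgradient step
                ent[h] -= lr * g_pos
                ent[t] += lr * g_pos
                rel[r] -= lr * (g_pos - g_neg)
                ent[h2] += lr * g_neg
                ent[t2] -= lr * g_neg
    return ent, rel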

3. Experiments

3.1 Data Preparation

Data preparation (preprocessing): read data from disk or a URL and run preprocessing (validation, format conversion, and so on) so that the model can consume it.

The experiment uses the WN18 dataset, split into a training set and a test set. A TransE model is built on the training set, learning a vector for every entity and relation; the MeanRank and Hits@10 metrics are then computed on the test set to check the result.

WN18 is a subset of WordNet containing 18 relation types and roughly 40k entities (40,943, per the loader output below).

WN18: https://drive.google.com/open?id=1MXy257ZsjeXQHZScHLeQeVnUTPjltlwD
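For orientation, a loader might turn the raw triple files into integer ids roughly as follows (a hedged sketch, not PGL-KE's KGLoader; it assumes each line of train.txt is a tab-separated head, relation and tail token, which should be checked against the downloaded files):

def load_triples(path, ent2id=None, rel2id=None):
    """Read tab-separated triples and map each token to an integer id (sketch)."""
    ent2id = {} if ent2id is None else ent2id
    rel2id = {} if rel2id is None else rel2id
    triples = []
    with open(path) as f:
        for line in f:
            h, r, t = line.strip().split("\t")  # assumed column order
            triples.append((ent2id.setdefault(h, len(ent2id)),
                            rel2id.setdefault(r, len(rel2id)),
                            ent2id.setdefault(t, len(ent2id))))
    return triples, ent2id, rel2id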

# Download the data (already present in this environment, so this step can be skipped)
%pwd
%cd /home/aistudio/
!sh download.sh

# Unpack the downloaded data and move it to the expected locations
# %pwd
# %cd /home/aistudio/data/
# !tar xvf WN18RR.tar.gz -C WN18RR
# !tar xvf FB15k-237.tar.gz -C FB15k-237
# !tar xvf fb15k.tgz -C FB15k
# !mv FB15k/FB15k/freebase_mtr100_mte100-train.txt FB15k/train.txt
# !mv FB15k/FB15k/freebase_mtr100_mte100-valid.txt FB15k/valid.txt
# !mv FB15k/FB15k/freebase_mtr100_mte100-test.txt FB15k/test.txt
# Switch back to the workspace directory
%cd /home/aistudio/work
%pwd

3.2 Environment Setup (Initialization)

In the working environment, import paddle, pgl, and the other dependencies used by the script that runs these models. Note that paddle.enable_static() at the end switches Paddle to static-graph mode, which the fluid APIs used below require.

"""
The script to run these models.
"""
import argparse
import timeit
import os
import numpy as np
import paddle.fluid as fluid
import paddle
from data_loader import KGLoader
from evalutate import Evaluate
from model import model_dict
from model.utils import load_var
from mp_mapper import mp_reader_mapper
from pgl.utils.logger import log
paddle.enable_static()

3.3 Model Design

Model design: the network architecture, which amounts to the model's hypothesis space, i.e. the set of relations the model is able to express.

Training is run for 5,000 iterations with a learning rate of 0.01. The loss curve over training is shown in the figure below:

3.3.1 Evaluation Metrics

3.3.1.1 Mean Rank

For each triple in the test set, take tail prediction as the example: replace the t of (h, r, t) with every entity in the knowledge graph in turn and compute distance(h, r, t) for each candidate. This yields a list of distances, which are then sorted in ascending order.

Since a smaller distance(h, r, t) is better, ranking nearer the front of that list is better.

Here is the key point: for each test triple, look up where the correct answer, the true t, lands in the sorted list. Suppose t1 ranks 100th, t2 ranks 200th, t3 ranks 60th, and so on; averaging these ranks gives the mean rank.

3.3.1.2 Hits@10

Sort the scores in the same way, then check whether each test triple's correct answer appears in the top ten of the list; if it does, increment a counter by 1. Dividing the final count by the number of test triples yields Hits@10 (see the sketch below).
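A compact NumPy sketch of both metrics, assuming scores is a [num_test, num_entities] array of distances and true_ids holds the index of the correct entity for each test triple (illustrative only; PGL-KE computes the metrics in evalutate.py):

import numpy as np

def mean_rank_and_hits(scores, true_ids, k=10):
    """Mean 1-based rank of the true entity under ascending distance, plus Hits@k."""
    order = np.argsort(scores, axis=1)                         # best candidates first
    ranks = np.argmax(order == true_ids[:, None], axis=1) + 1  # position of true id
    return ranks.mean(), (ranks <= k).mean()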

The TransE model used in this experiment is defined in /home/aistudio/work/model/TransE.py; the core of the class is excerpted below.

 """
    The TransE Model.
    """

    def __init__(self,
                 data_reader,
                 hidden_size,
                 margin,
                 learning_rate,
                 args,
                 optimizer="adam"):
        self._neg_times = args.neg_times
        super(TransE, self).__init__(
            model_name="TransE",
            data_reader=data_reader,
            hidden_size=hidden_size,
            margin=margin,
            learning_rate=learning_rate,
            args=args,
            optimizer=optimizer)
        self.construct()

    def creat_share_variables(self):
        """
        Share variables for train and test programs.
        创建共享变量
        """
        # fluid.layers.create_parameter creates a learnable parameter: a variable
        # that has a gradient and can be updated by the optimizer
        entity_embedding = fluid.layers.create_parameter(
            shape=self._ent_shape, dtype="float32", name=self.ent_name)
        relation_embedding = fluid.layers.create_parameter(
            shape=self._rel_shape, dtype="float32", name=self.rel_name)
        return entity_embedding, relation_embedding

    @staticmethod
    def score_with_l2_normalize(head, rel, tail):
        """
        定义一个计算首尾实体以及关系的特征函数,用l2标准化评分
        Score function of TransE
        TransE的得分函数
        """
        head = fluid.layers.l2_normalize(head, axis=-1)
        rel = fluid.layers.l2_normalize(rel, axis=-1)
        tail = fluid.layers.l2_normalize(tail, axis=-1)
        score = head + rel - tail  # head + relation - tail; its L1 norm is taken later via abs + reduce_sum
        return score

    def construct_train_program(self):
        """
        Construct train program.
        构建训练程序
        """
        # 初始化定义好相关参数,通过计算最小化loss来优化更新ent_embeddings,rel_embeddings两个矩阵
        # pos代表好的三元组里面的首实体尾实体和关系,neg则代表不相关的
        entity_embedding, relation_embedding = self.creat_share_variables()
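        # Note: lookup_table is a small helper defined elsewhere in this model
        # package (assumption: it gathers embedding rows by integer id, similar
        # to fluid.layers.gather(embedding, index)).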
        pos_head = lookup_table(self.train_pos_input[:, 0], entity_embedding)
        pos_tail = lookup_table(self.train_pos_input[:, 2], entity_embedding)
        pos_rel = lookup_table(self.train_pos_input[:, 1], relation_embedding)
        neg_head = lookup_table(self.train_neg_input[:, 0], entity_embedding)
        neg_tail = lookup_table(self.train_neg_input[:, 2], entity_embedding)
        neg_rel = lookup_table(self.train_neg_input[:, 1], relation_embedding)

        pos_score = self.score_with_l2_normalize(pos_head, pos_rel, pos_tail)
        neg_score = self.score_with_l2_normalize(neg_head, neg_rel, neg_tail)

        pos = fluid.layers.reduce_sum(
            fluid.layers.abs(pos_score), 1, keep_dim=False)
        neg = fluid.layers.reduce_sum(
            fluid.layers.abs(neg_score), 1, keep_dim=False)
        neg = fluid.layers.reshape(
            neg, shape=[-1, self._neg_times], inplace=True)

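        # Margin ranking (hinge) loss from the TransE paper:
        # mean over the batch of max(0, margin + d_pos - d_neg).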
        loss = fluid.layers.reduce_mean(
            fluid.layers.relu(pos - neg + self._margin))
        return [loss]

    def construct_test_program(self):
        """
        Construct test program
        """
        entity_embedding, relation_embedding = self.creat_share_variables()
        entity_embedding = fluid.layers.l2_normalize(entity_embedding, axis=-1)
        relation_embedding = fluid.layers.l2_normalize(
            relation_embedding, axis=-1)
        head_vec = lookup_table(self.test_input[0], entity_embedding)
        rel_vec = lookup_table(self.test_input[1], relation_embedding)
        tail_vec = lookup_table(self.test_input[2], entity_embedding)
        # The paddle fluid.layers.topk GPU OP is very inefficient, so the
        # sorting is done in the evaluation step using multiprocessing.
        id_replace_head = fluid.layers.reduce_sum(
            fluid.layers.abs(entity_embedding + rel_vec - tail_vec), dim=1)
        id_replace_tail = fluid.layers.reduce_sum(
            fluid.layers.abs(entity_embedding - rel_vec - head_vec), dim=1)

        return [id_replace_head, id_replace_tail]
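At test time the two returned tensors hold, for every entity in the graph, the distance obtained when that entity is substituted for the head or for the tail. A hedged sketch of how a rank could then be derived from one of them (the real logic lives in evalutate.py and runs in worker processes):

import numpy as np

def rank_of(scores, true_id):
    """1-based rank of the true entity under ascending distance (sketch)."""
    return int(np.sum(scores < scores[true_id])) + 1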

3.4 Training Configuration

Training configuration: choose the optimization algorithm (the optimizer) the model will use, and specify the compute resources.
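In static-graph fluid, "specifying compute resources" amounts to picking the places for the executor, mirroring what train() does further below (a minimal sketch; use_cuda is a stand-in flag):

import paddle
import paddle.fluid as fluid

paddle.enable_static()
use_cuda = False  # stand-in flag: pick GPU places when True, CPU places otherwise
places = fluid.cuda_places() if use_cuda else fluid.cpu_places()
exe = fluid.Executor(places[0])  # the executor that runs the compiled programs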

The run_round helper below runs a given program (train or test) for one epoch over a batch iterator; its parameters are documented in its docstring.
def run_round(batch_iter,
              program,
              exe,
              fetch_list,
              epoch,
              prefix="train",
              log_per_step=1000):
    """
    Run the program for one epoch.
    :param batch_iter: the batch_iter of prepared data.
    :param program: the running program, train_program or test program.
    :param exe: the executor of paddle.
    :param fetch_list: the variables to fetch.
    :param epoch: the epoch number of train process.
    :param prefix: the prefix name, type `string`.
    :param log_per_step: log per step.
    :return: None
    """
    batch = 0
    tmp_epoch = 0
    loss = 0
    tmp_loss = 0
    run_time = 0
    data_time = 0
    t2 = timeit.default_timer()
    start_epoch_time = timeit.default_timer()
    for batch_feed_dict in batch_iter():
        batch += 1
        t1 = timeit.default_timer()
        data_time += (t1 - t2)
        batch_fetch = exe.run(program,
                              fetch_list=fetch_list,
                              feed=batch_feed_dict)
        if prefix == "train":
            loss += batch_fetch[0]
            tmp_loss += batch_fetch[0]
        if batch % log_per_step == 0:
            tmp_epoch += 1
            if prefix == "train":
                log.info("Epoch %s (%.7f sec) Train Loss: %.7f" %
                         (epoch + tmp_epoch,
                          timeit.default_timer() - start_epoch_time,
                          tmp_loss[0] / batch))
                start_epoch_time = timeit.default_timer()
            else:
                log.info("Batch %s" % batch)
            batch = 0
            tmp_loss = 0

        t2 = timeit.default_timer()
        run_time += (t2 - t1)

    if prefix == "train":
        log.info("GPU run time {}, Data prepare extra time {}".format(
            run_time, data_time))
        log.info("Epoch %s \t All Loss %s" % (epoch + tmp_epoch, loss))


3.5 Training Process

Training process: the training loop is invoked repeatedly, and every round consists of three steps: the forward pass, the loss function (the optimization objective), and backpropagation.
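In static-graph fluid these three steps are wired into the program when the optimizer is attached to the loss. The base model class is not excerpted here, so the following is only a sketch of the usual pattern (assumed, not copied from PGL-KE):

import paddle
import paddle.fluid as fluid

paddle.enable_static()
main_prog, startup_prog = fluid.Program(), fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.data(name="x", shape=[None, 4], dtype="float32")        # forward pass ...
    w = fluid.layers.create_parameter(shape=[4, 1], dtype="float32", name="w")
    loss = fluid.layers.reduce_mean(fluid.layers.matmul(x, w))        # ... and loss
    # minimize() appends the backward pass and the update ops to the program.
    fluid.optimizer.Adam(learning_rate=0.001).minimize(loss)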

def train(args):
    """
    Train the knowledge graph embedding model.
    :param args: all args.
    :return: None
    """
    kgreader = KGLoader(
        batch_size=args.batch_size,
        data_dir=args.data_dir,
        neg_mode=args.neg_mode,
        neg_times=args.neg_times)
    if args.model in model_dict:
        Model = model_dict[args.model]
    else:
        raise ValueError("No model for name {}".format(args.model))
    model = Model(
        data_reader=kgreader,  # the data loader
        hidden_size=args.hidden_size,  # embedding dimension (number of hidden units)
        margin=args.margin,  # the margin gamma (γ) in the loss: the required gap between positive and negative scores, a hyperparameter
        learning_rate=args.learning_rate,  # learning rate, i.e. the step size of gradient descent
        args=args,
        optimizer=args.optimizer)  # the optimizer, usually adam

    def iter_map_wrapper(data_batch, repeat=1):
        """
        wrapper for multiprocess reader
        :param data_batch: the source data iter.
        :param repeat: repeat data for multi epoch
        :return: iterator of feed data
        """

        def data_repeat():
            """repeat data for multi epoch"""
            for i in range(repeat):
                for d in data_batch():
                    yield d

        reader = mp_reader_mapper(
            data_repeat,
            func=kgreader.training_data_no_filter
            if args.nofilter else kgreader.training_data_map,
            num_works=args.sample_workers)

        return reader

    def iter_wrapper(data_batch, feed_list):
        """
        Decorator of make up the feed dict
        :param data_batch: the source data iter.
        :param feed_list: the feed list (names of variables).
        :return: iterator of feed data.
        """

        def work():
            """work"""
            for batch in data_batch():
                feed_dict = {}
                for k, v in zip(feed_list, batch):
                    feed_dict[k] = v
                yield feed_dict

        return work

    loader = fluid.io.DataLoader.from_generator(
        feed_list=model.train_feed_vars, capacity=20, iterable=True)

    places = fluid.cuda_places() if args.use_cuda else fluid.cpu_places()
    exe = fluid.Executor(places[0])
    exe.run(model.startup_program)
    exe.run(fluid.default_startup_program())
    if args.pretrain and model.model_name in ["TransR", "transr"]:
        pretrain_ent = os.path.join(args.checkpoint,
                                    model.ent_name.replace("TransR", "TransE"))
        pretrain_rel = os.path.join(args.checkpoint,
                                    model.rel_name.replace("TransR", "TransE"))
        if os.path.exists(pretrain_ent):
            print("loading pretrain!")
            #var = fluid.global_scope().find_var(model.ent_name)
            load_var(exe, model.train_program, model.ent_name, pretrain_ent)
            #var = fluid.global_scope().find_var(model.rel_name)
            load_var(exe, model.train_program, model.rel_name, pretrain_rel)
        else:
            raise ValueError("pretrain file {} not exists!".format(
                pretrain_ent))

    prog = fluid.CompiledProgram(model.train_program).with_data_parallel(
        loss_name=model.train_fetch_vars[0].name)

    if args.only_evaluate:
        s = timeit.default_timer()
        fluid.io.load_params(
            exe, dirname=args.checkpoint, main_program=model.train_program)
        Evaluate(kgreader).launch_evaluation(
            exe=exe,
            reader=iter_wrapper(kgreader.test_data_batch,
                                model.test_feed_list),
            fetch_list=model.test_fetch_vars,
            program=model.test_program,
            num_workers=10)
        log.info(timeit.default_timer() - s)
        return None

    batch_iter = iter_map_wrapper(
        kgreader.training_data_batch,
        repeat=args.evaluate_per_iteration, )
    loader.set_batch_generator(batch_iter, places=places)

    for epoch in range(0, args.epoch // args.evaluate_per_iteration):
        run_round(
            batch_iter=loader,
            exe=exe,
            prefix="train",
            # program=model.train_program,
            program=prog,
            fetch_list=model.train_fetch_vars,
            log_per_step=kgreader.train_num // args.batch_size,
            epoch=epoch * args.evaluate_per_iteration)
        log.info("epoch\t%s" % ((1 + epoch) * args.evaluate_per_iteration))
        fluid.io.save_params(
            exe, dirname=args.checkpoint, main_program=model.train_program)
        if not args.noeval:
            eva = Evaluate(kgreader)
            eva.launch_evaluation(
                exe=exe,
                reader=iter_wrapper(kgreader.test_data_batch,
                                    model.test_feed_list),
                fetch_list=model.test_fetch_vars,
                program=model.test_program,
                num_workers=10)

3.6 Model Prediction
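The main entry point below parses the hyperparameters and launches training (evaluation runs every evaluate_per_iteration epochs). If this cell were saved as a standalone script, say main.py (a hypothetical name), the equivalent invocation would be:

!python main.py --model TransE --hidden_size 50 --margin 4.0 --learning_rate 0.001 --epoch 400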

def main():
    """
    The main entry of all.
    :return: None
    """
    parser = argparse.ArgumentParser(
        description="Knowledge Graph Embedding for PGL")
    parser.add_argument('--use_cuda', action='store_true', help="use_cuda")
    parser.add_argument(
        '--data_dir',
        dest='data_dir',
        type=str,
        help='the directory of dataset',
        default='/home/aistudio/data/data176054/')
    parser.add_argument(
        '--model',
        dest='model',
        type=str,
        help="model to run",
        default="TransE")
    parser.add_argument(
        '--learning_rate',
        dest='learning_rate',
        type=float,
        help='learning rate',
        default=0.001)
    parser.add_argument(
        '--epoch', dest='epoch', type=int, help='epoch to run', default=400)
    parser.add_argument(
        '--sample_workers',
        dest='sample_workers',
        type=int,
        help='sample workers',
        default=4)
    parser.add_argument(
        '--batch_size',
        dest='batch_size',
        type=int,
        help="batch size",
        default=1000)
    parser.add_argument(
        '--optimizer',
        dest='optimizer',
        type=str,
        help='optimizer',
        default='adam')
    parser.add_argument(
        '--hidden_size',
        dest='hidden_size',
        type=int,
        help='embedding dimension',
        default=50)
    parser.add_argument(
        '--margin', dest='margin', type=float, help='margin', default=4.0)  # the margin gamma (γ) in the loss
    parser.add_argument(
        '--checkpoint',
        dest='checkpoint',
        type=str,
        help='directory to save checkpoint directory',
        default='output/')
    parser.add_argument(
        '--evaluate_per_iteration',
        dest='evaluate_per_iteration',
        type=int,
        help='evaluate the training result per x iteration',
        default=50)
    parser.add_argument(
        '--only_evaluate',
        dest='only_evaluate',
        action='store_true',
        help='only do the evaluate program',
        default=False)
    parser.add_argument(
        '--adv_temp_value', type=float, help='adv_temp_value', default=2.0)
    parser.add_argument('--neg_times', type=int, help='neg_times', default=1)
    parser.add_argument(
        '--neg_mode', type=bool, help='return neg mode flag', default=False)

    parser.add_argument(
        '--nofilter',
        type=bool,
        help='don\'t filter invalid examples',
        default=False)
    parser.add_argument(
        '--pretrain',
        type=bool,
        help='pretrain for TransR model',
        default=False)
    parser.add_argument(
        '--noeval',
        type=bool,
        help='whether to evaluate the result',
        default=False)

    # args = parser.parse_args()
    args = parser.parse_known_args()[0]
    log.info(args)
    print('--- argument parsing complete ---')
    train(args)


if __name__ == '__main__':
    main()
[INFO] 2022-12-06 13:43:13,210 [4070309804.py:   97]:	Namespace(adv_temp_value=2.0, batch_size=1000, checkpoint='output/', data_dir='/home/aistudio/data/data176054/', epoch=400, evaluate_per_iteration=50, hidden_size=50, learning_rate=0.001, margin=4.0, model='TransE', neg_mode=False, neg_times=1, noeval=False, nofilter=False, only_evaluate=False, optimizer='adam', pretrain=False, sample_workers=4, use_cuda=False)
[INFO] 2022-12-06 13:43:13,211 [data_loader.py:  154]:	Start loading the  dataset


--- argument parsing complete ---


[INFO] 2022-12-06 13:43:14,387 [data_loader.py:  189]:	entity number: 40943
[INFO] 2022-12-06 13:43:14,388 [data_loader.py:  190]:	relation number: 18
[INFO] 2022-12-06 13:43:14,389 [data_loader.py:  191]:	training triple number: 141442
[INFO] 2022-12-06 13:43:14,389 [data_loader.py:  192]:	testing triple number: 5000
[INFO] 2022-12-06 13:43:14,390 [data_loader.py:  193]:	valid triple number: 5000
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/math_op_patch.py:341: UserWarning: /home/aistudio/work/model/TransE.py:66
The behavior of expression A + B has been unified with elementwise_add(X, Y, axis=-1) from Paddle 2.0. If your code works well in the older versions but crashes in this version, try to use elementwise_add(X, Y, axis=0) instead of A + B. This transitional warning will be dropped in the future.
  op_type, op_type, EXPRESSION_MAP[method_name]))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/math_op_patch.py:341: UserWarning: /home/aistudio/work/model/TransE.py:66
The behavior of expression A - B has been unified with elementwise_sub(X, Y, axis=-1) from Paddle 2.0. If your code works well in the older versions but crashes in this version, try to use elementwise_sub(X, Y, axis=0) instead of A - B. This transitional warning will be dropped in the future.
  op_type, op_type, EXPRESSION_MAP[method_name]))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/math_op_patch.py:341: UserWarning: /home/aistudio/work/model/TransE.py:92
The behavior of expression A - B has been unified with elementwise_sub(X, Y, axis=-1) from Paddle 2.0. If your code works well in the older versions but crashes in this version, try to use elementwise_sub(X, Y, axis=0) instead of A - B. This transitional warning will be dropped in the future.
  op_type, op_type, EXPRESSION_MAP[method_name]))
!!! The CPU_NUM is not specified, you should set CPU_NUM in the environment variable list.
CPU_NUM indicates that how many CPUPlace are used in the current task.
And if this parameter are set as N (equal to the number of physical CPU core) the program may be faster.

export CPU_NUM=64 # for example, set CPU_NUM as number of physical CPU core which is 64.

!!! The default number of CPU_NUM=1.
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py:1350: UserWarning: There are no operators in the program to be executed. If you pass Program manually, please use fluid.program_guard to ensure the current Program is being used.
  warnings.warn(error_info)
[INFO] 2022-12-06 13:43:20,055 [428529760.py:   43]:	Epoch 1 (5.4638239 sec) Train Loss: 3.3904142
[INFO] 2022-12-06 13:43:25,098 [428529760.py:   43]:	Epoch 2 (5.0384955 sec) Train Loss: 2.0507836
[INFO] 2022-12-06 13:43:30,604 [428529760.py:   43]:	Epoch 3 (5.5042218 sec) Train Loss: 1.3682744
[INFO] 2022-12-06 13:43:35,641 [428529760.py:   43]:	Epoch 4 (5.0355141 sec) Train Loss: 0.9566964
[INFO] 2022-12-06 13:43:40,561 [428529760.py:   43]:	Epoch 5 (4.9183161 sec) Train Loss: 0.6764126
[INFO] 2022-12-06 13:43:45,589 [428529760.py:   43]:	Epoch 6 (5.0269899 sec) Train Loss: 0.4839673
[INFO] 2022-12-06 13:43:50,554 [428529760.py:   43]:	Epoch 7 (4.9636348 sec) Train Loss: 0.3572380
[INFO] 2022-12-06 13:43:55,604 [428529760.py:   43]:	Epoch 8 (5.0481690 sec) Train Loss: 0.2726556
[INFO] 2022-12-06 13:44:00,740 [428529760.py:   43]:	Epoch 9 (5.1348923 sec) Train Loss: 0.2185346
[INFO] 2022-12-06 13:44:06,164 [428529760.py:   43]:	Epoch 10 (5.4226106 sec) Train Loss: 0.1785618
[INFO] 2022-12-06 13:44:11,155 [428529760.py:   43]:	Epoch 11 (4.9899144 sec) Train Loss: 0.1523592
[INFO] 2022-12-06 13:44:16,128 [428529760.py:   43]:	Epoch 12 (4.9704318 sec) Train Loss: 0.1330784
[INFO] 2022-12-06 13:44:21,062 [428529760.py:   43]:	Epoch 13 (4.9331696 sec) Train Loss: 0.1165288
[INFO] 2022-12-06 13:44:26,337 [428529760.py:   43]:	Epoch 14 (5.2730006 sec) Train Loss: 0.1056620
[INFO] 2022-12-06 13:44:31,513 [428529760.py:   43]:	Epoch 15 (5.1749869 sec) Train Loss: 0.0969450
[INFO] 2022-12-06 13:44:37,269 [428529760.py:   43]:	Epoch 16 (5.7548259 sec) Train Loss: 0.0916834
[INFO] 2022-12-06 13:44:42,284 [428529760.py:   43]:	Epoch 17 (5.0128260 sec) Train Loss: 0.0845696
[INFO] 2022-12-06 13:44:47,220 [428529760.py:   43]:	Epoch 18 (4.9349826 sec) Train Loss: 0.0783846
[INFO] 2022-12-06 13:44:52,082 [428529760.py:   43]:	Epoch 19 (4.8602244 sec) Train Loss: 0.0747965
[INFO] 2022-12-06 13:44:56,988 [428529760.py:   43]:	Epoch 20 (4.9047669 sec) Train Loss: 0.0709205
[INFO] 2022-12-06 13:45:01,916 [428529760.py:   43]:	Epoch 21 (4.9263447 sec) Train Loss: 0.0683614
[INFO] 2022-12-06 13:45:07,377 [428529760.py:   43]:	Epoch 22 (5.4598152 sec) Train Loss: 0.0653081
[INFO] 2022-12-06 13:45:12,294 [428529760.py:   43]:	Epoch 23 (4.9157793 sec) Train Loss: 0.0633284
[INFO] 2022-12-06 13:45:17,204 [428529760.py:   43]:	Epoch 24 (4.9081433 sec) Train Loss: 0.0627550
[INFO] 2022-12-06 13:45:22,111 [428529760.py:   43]:	Epoch 25 (4.9060906 sec) Train Loss: 0.0602098
[INFO] 2022-12-06 13:45:27,080 [428529760.py:   43]:	Epoch 26 (4.9676229 sec) Train Loss: 0.0587177
[INFO] 2022-12-06 13:45:32,166 [428529760.py:   43]:	Epoch 27 (5.0844083 sec) Train Loss: 0.0560029
[INFO] 2022-12-06 13:45:37,049 [428529760.py:   43]:	Epoch 28 (4.8816401 sec) Train Loss: 0.0553185
[INFO] 2022-12-06 13:45:42,694 [428529760.py:   43]:	Epoch 29 (5.6432405 sec) Train Loss: 0.0546600
[INFO] 2022-12-06 13:45:47,874 [428529760.py:   43]:	Epoch 30 (5.1783156 sec) Train Loss: 0.0535423
[INFO] 2022-12-06 13:45:52,938 [428529760.py:   43]:	Epoch 31 (5.0628335 sec) Train Loss: 0.0534795
[INFO] 2022-12-06 13:45:57,955 [428529760.py:   43]:	Epoch 32 (5.0157709 sec) Train Loss: 0.0529087
[INFO] 2022-12-06 13:46:02,873 [428529760.py:   43]:	Epoch 33 (4.9159907 sec) Train Loss: 0.0500903
[INFO] 2022-12-06 13:46:07,934 [428529760.py:   43]:	Epoch 34 (5.0594325 sec) Train Loss: 0.0502141
[INFO] 2022-12-06 13:46:13,360 [428529760.py:   43]:	Epoch 35 (5.4245299 sec) Train Loss: 0.0485228
[INFO] 2022-12-06 13:46:18,230 [428529760.py:   43]:	Epoch 36 (4.8689978 sec) Train Loss: 0.0487547
[INFO] 2022-12-06 13:46:23,120 [428529760.py:   43]:	Epoch 37 (4.8885430 sec) Train Loss: 0.0473931
[INFO] 2022-12-06 13:46:27,926 [428529760.py:   43]:	Epoch 38 (4.8040541 sec) Train Loss: 0.0468729
[INFO] 2022-12-06 13:46:32,837 [428529760.py:   43]:	Epoch 39 (4.9095472 sec) Train Loss: 0.0455435
[INFO] 2022-12-06 13:46:37,754 [428529760.py:   43]:	Epoch 40 (4.9158688 sec) Train Loss: 0.0447434
[INFO] 2022-12-06 13:46:43,116 [428529760.py:   43]:	Epoch 41 (5.3604870 sec) Train Loss: 0.0446136
[INFO] 2022-12-06 13:46:48,298 [428529760.py:   43]:	Epoch 42 (5.1806323 sec) Train Loss: 0.0434222
[INFO] 2022-12-06 13:46:53,307 [428529760.py:   43]:	Epoch 43 (5.0070449 sec) Train Loss: 0.0428572
[INFO] 2022-12-06 13:46:58,293 [428529760.py:   43]:	Epoch 44 (4.9852949 sec) Train Loss: 0.0417620
[INFO] 2022-12-06 13:47:03,546 [428529760.py:   43]:	Epoch 45 (5.2510994 sec) Train Loss: 0.0407941
[INFO] 2022-12-06 13:47:08,575 [428529760.py:   43]:	Epoch 46 (5.0280359 sec) Train Loss: 0.0416141
[INFO] 2022-12-06 13:47:13,568 [428529760.py:   43]:	Epoch 47 (4.9922183 sec) Train Loss: 0.0407951
[INFO] 2022-12-06 13:47:19,173 [428529760.py:   43]:	Epoch 48 (5.6027198 sec) Train Loss: 0.0406169
[INFO] 2022-12-06 13:47:24,009 [428529760.py:   43]:	Epoch 49 (4.8349077 sec) Train Loss: 0.0397049
[INFO] 2022-12-06 13:47:28,912 [428529760.py:   43]:	Epoch 50 (4.9015913 sec) Train Loss: 0.0380231
[INFO] 2022-12-06 13:47:30,664 [428529760.py:   55]:	GPU run time 255.44718721322715, Data prepare extra time 0.6254030801355839
[INFO] 2022-12-06 13:47:30,666 [428529760.py:   56]:	Epoch 50 	 All Loss [1758.8983]
[INFO] 2022-12-06 13:47:30,666 [2594160307.py:  118]:	epoch	50


[9.357s] #evaluation triple: 500/5000
[19.831s] #evaluation triple: 1000/5000
[29.233s] #evaluation triple: 1500/5000
[38.463s] #evaluation triple: 2000/5000
[48.111s] #evaluation triple: 2500/5000
[58.739s] #evaluation triple: 3000/5000
[68.274s] #evaluation triple: 3500/5000
[77.828s] #evaluation triple: 4000/5000
[88.045s] #evaluation triple: 4500/5000


[INFO] 2022-12-06 13:49:08,238 [evalutate.py:  127]:	-----Raw-Average-Results
[INFO] 2022-12-06 13:49:08,240 [evalutate.py:  131]:	MeanRank: 303.18, MRR: 0.3034, Hits@1: 0.0443, Hits@3: 0.4901, Hits@10: 0.7770
[INFO] 2022-12-06 13:49:08,283 [evalutate.py:  132]:	-----Filter-Average-Results
[INFO] 2022-12-06 13:49:08,285 [evalutate.py:  137]:	MeanRank: 291.80, MRR: 0.3964, Hits@1: 0.0721, Hits@3: 0.6835, Hits@10: 0.8964


[97.347s] #evaluation triple: 5000/5000


[INFO] 2022-12-06 13:49:13,687 [428529760.py:   43]:	Epoch 51 (5.4002516 sec) Train Loss: 0.0371012
[INFO] 2022-12-06 13:49:18,774 [428529760.py:   43]:	Epoch 52 (5.0822815 sec) Train Loss: 0.0367098
[INFO] 2022-12-06 13:49:23,928 [428529760.py:   43]:	Epoch 53 (5.1529047 sec) Train Loss: 0.0375297
[INFO] 2022-12-06 13:49:29,538 [428529760.py:   43]:	Epoch 54 (5.6085019 sec) Train Loss: 0.0364166
[INFO] 2022-12-06 13:49:34,751 [428529760.py:   43]:	Epoch 55 (5.2114417 sec) Train Loss: 0.0350702
[INFO] 2022-12-06 13:49:40,140 [428529760.py:   43]:	Epoch 56 (5.3882163 sec) Train Loss: 0.0348688
[INFO] 2022-12-06 13:49:45,612 [428529760.py:   43]:	Epoch 57 (5.4704254 sec) Train Loss: 0.0359056
[INFO] 2022-12-06 13:49:50,954 [428529760.py:   43]:	Epoch 58 (5.3400839 sec) Train Loss: 0.0349069
[INFO] 2022-12-06 13:49:56,099 [428529760.py:   43]:	Epoch 59 (5.1433429 sec) Train Loss: 0.0341026
[INFO] 2022-12-06 13:50:01,683 [428529760.py:   43]:	Epoch 60 (5.5830107 sec) Train Loss: 0.0327229
[INFO] 2022-12-06 13:50:06,701 [428529760.py:   43]:	Epoch 61 (5.0171092 sec) Train Loss: 0.0325510
[INFO] 2022-12-06 13:50:11,756 [428529760.py:   43]:	Epoch 62 (5.0536300 sec) Train Loss: 0.0320574
[INFO] 2022-12-06 13:50:16,804 [428529760.py:   43]:	Epoch 63 (5.0470951 sec) Train Loss: 0.0323992
[INFO] 2022-12-06 13:50:21,880 [428529760.py:   43]:	Epoch 64 (5.0739700 sec) Train Loss: 0.0319603
[INFO] 2022-12-06 13:50:26,988 [428529760.py:   43]:	Epoch 65 (5.1075177 sec) Train Loss: 0.0322082
[INFO] 2022-12-06 13:50:32,672 [428529760.py:   43]:	Epoch 66 (5.6826631 sec) Train Loss: 0.0310558
[INFO] 2022-12-06 13:50:37,853 [428529760.py:   43]:	Epoch 67 (5.1795347 sec) Train Loss: 0.0310251
[INFO] 2022-12-06 13:50:42,963 [428529760.py:   43]:	Epoch 68 (5.1081007 sec) Train Loss: 0.0305722
[INFO] 2022-12-06 13:50:48,118 [428529760.py:   43]:	Epoch 69 (5.1530536 sec) Train Loss: 0.0294034
[INFO] 2022-12-06 13:50:53,233 [428529760.py:   43]:	Epoch 70 (5.1130715 sec) Train Loss: 0.0287701
[INFO] 2022-12-06 13:50:58,600 [428529760.py:   43]:	Epoch 71 (5.3665334 sec) Train Loss: 0.0284615
[INFO] 2022-12-06 13:51:04,633 [428529760.py:   43]:	Epoch 72 (6.0311235 sec) Train Loss: 0.0289763
[INFO] 2022-12-06 13:51:09,928 [428529760.py:   43]:	Epoch 73 (5.2929466 sec) Train Loss: 0.0291605
[INFO] 2022-12-06 13:51:15,017 [428529760.py:   43]:	Epoch 74 (5.0885031 sec) Train Loss: 0.0285382
[INFO] 2022-12-06 13:51:20,020 [428529760.py:   43]:	Epoch 75 (5.0007245 sec) Train Loss: 0.0279563
[INFO] 2022-12-06 13:51:25,016 [428529760.py:   43]:	Epoch 76 (4.9947251 sec) Train Loss: 0.0283066
[INFO] 2022-12-06 13:51:30,091 [428529760.py:   43]:	Epoch 77 (5.0732034 sec) Train Loss: 0.0271705
[INFO] 2022-12-06 13:51:35,755 [428529760.py:   43]:	Epoch 78 (5.6630881 sec) Train Loss: 0.0275734
[INFO] 2022-12-06 13:51:41,067 [428529760.py:   43]:	Epoch 79 (5.3101965 sec) Train Loss: 0.0262595
[INFO] 2022-12-06 13:51:46,214 [428529760.py:   43]:	Epoch 80 (5.1453882 sec) Train Loss: 0.0270439
[INFO] 2022-12-06 13:51:51,315 [428529760.py:   43]:	Epoch 81 (5.1002489 sec) Train Loss: 0.0264053
[INFO] 2022-12-06 13:51:56,395 [428529760.py:   43]:	Epoch 82 (5.0778470 sec) Train Loss: 0.0269806
[INFO] 2022-12-06 13:52:01,573 [428529760.py:   43]:	Epoch 83 (5.1769991 sec) Train Loss: 0.0260972
[INFO] 2022-12-06 13:52:06,846 [428529760.py:   43]:	Epoch 84 (5.2706695 sec) Train Loss: 0.0263455
[INFO] 2022-12-06 13:52:12,683 [428529760.py:   43]:	Epoch 85 (5.8364071 sec) Train Loss: 0.0254442
[INFO] 2022-12-06 13:52:17,999 [428529760.py:   43]:	Epoch 86 (5.3137185 sec) Train Loss: 0.0254218
[INFO] 2022-12-06 13:52:23,271 [428529760.py:   43]:	Epoch 87 (5.2713958 sec) Train Loss: 0.0255177
[INFO] 2022-12-06 13:52:28,495 [428529760.py:   43]:	Epoch 88 (5.2220312 sec) Train Loss: 0.0252741
[INFO] 2022-12-06 13:52:33,524 [428529760.py:   43]:	Epoch 89 (5.0275463 sec) Train Loss: 0.0246056
[INFO] 2022-12-06 13:52:38,716 [428529760.py:   43]:	Epoch 90 (5.1912719 sec) Train Loss: 0.0255990
[INFO] 2022-12-06 13:52:44,506 [428529760.py:   43]:	Epoch 91 (5.7886728 sec) Train Loss: 0.0246083
[INFO] 2022-12-06 13:52:49,673 [428529760.py:   43]:	Epoch 92 (5.1651727 sec) Train Loss: 0.0240886
[INFO] 2022-12-06 13:52:54,797 [428529760.py:   43]:	Epoch 93 (5.1222641 sec) Train Loss: 0.0234190
[INFO] 2022-12-06 13:52:59,822 [428529760.py:   43]:	Epoch 94 (5.0233171 sec) Train Loss: 0.0230209
[INFO] 2022-12-06 13:53:04,941 [428529760.py:   43]:	Epoch 95 (5.1181496 sec) Train Loss: 0.0241110
[INFO] 2022-12-06 13:53:09,954 [428529760.py:   43]:	Epoch 96 (5.0107970 sec) Train Loss: 0.0231939
[INFO] 2022-12-06 13:53:15,659 [428529760.py:   43]:	Epoch 97 (5.7042502 sec) Train Loss: 0.0226674
[INFO] 2022-12-06 13:53:20,745 [428529760.py:   43]:	Epoch 98 (5.0840192 sec) Train Loss: 0.0231384
[INFO] 2022-12-06 13:53:25,714 [428529760.py:   43]:	Epoch 99 (4.9676126 sec) Train Loss: 0.0231350
[INFO] 2022-12-06 13:53:31,019 [428529760.py:   43]:	Epoch 100 (5.3032174 sec) Train Loss: 0.0227960
[INFO] 2022-12-06 13:53:32,908 [428529760.py:   55]:	GPU run time 263.94817079789937, Data prepare extra time 0.6734625436365604
[INFO] 2022-12-06 13:53:32,910 [428529760.py:   56]:	Epoch 100 	 All Loss [204.08678]
[INFO] 2022-12-06 13:53:32,911 [2594160307.py:  118]:	epoch	100


[9.909s] #evaluation triple: 500/5000
[20.061s] #evaluation triple: 1000/5000
[29.396s] #evaluation triple: 1500/5000
[38.499s] #evaluation triple: 2000/5000
[48.496s] #evaluation triple: 2500/5000
[57.714s] #evaluation triple: 3000/5000
[66.846s] #evaluation triple: 3500/5000
[77.383s] #evaluation triple: 4000/5000
[87.415s] #evaluation triple: 4500/5000


[INFO] 2022-12-06 13:55:10,184 [evalutate.py:  127]:	-----Raw-Average-Results
[INFO] 2022-12-06 13:55:10,186 [evalutate.py:  131]:	MeanRank: 261.21, MRR: 0.3091, Hits@1: 0.0525, Hits@3: 0.4924, Hits@10: 0.7848
[INFO] 2022-12-06 13:55:10,188 [evalutate.py:  132]:	-----Filter-Average-Results
[INFO] 2022-12-06 13:55:10,189 [evalutate.py:  137]:	MeanRank: 249.29, MRR: 0.4118, Hits@1: 0.0891, Hits@3: 0.7026, Hits@10: 0.9060


[96.954s] #evaluation triple: 5000/5000


[INFO] 2022-12-06 13:55:15,674 [428529760.py:   43]:	Epoch 101 (5.4828282 sec) Train Loss: 0.0230619
[INFO] 2022-12-06 13:55:20,815 [428529760.py:   43]:	Epoch 102 (5.1367022 sec) Train Loss: 0.0219430
[INFO] 2022-12-06 13:55:26,560 [428529760.py:   43]:	Epoch 103 (5.7442551 sec) Train Loss: 0.0222132
[INFO] 2022-12-06 13:55:31,746 [428529760.py:   43]:	Epoch 104 (5.1849575 sec) Train Loss: 0.0221578
[INFO] 2022-12-06 13:55:36,902 [428529760.py:   43]:	Epoch 105 (5.1548973 sec) Train Loss: 0.0220807
[INFO] 2022-12-06 13:55:42,098 [428529760.py:   43]:	Epoch 106 (5.1935004 sec) Train Loss: 0.0214499
[INFO] 2022-12-06 13:55:47,287 [428529760.py:   43]:	Epoch 107 (5.1874246 sec) Train Loss: 0.0215036
[INFO] 2022-12-06 13:55:52,319 [428529760.py:   43]:	Epoch 108 (5.0308274 sec) Train Loss: 0.0211480
[INFO] 2022-12-06 13:55:57,938 [428529760.py:   43]:	Epoch 109 (5.6176782 sec) Train Loss: 0.0223646
[INFO] 2022-12-06 13:56:03,067 [428529760.py:   43]:	Epoch 110 (5.1268767 sec) Train Loss: 0.0212580
[INFO] 2022-12-06 13:56:08,464 [428529760.py:   43]:	Epoch 111 (5.3955648 sec) Train Loss: 0.0209714
[INFO] 2022-12-06 13:56:13,781 [428529760.py:   43]:	Epoch 112 (5.3156384 sec) Train Loss: 0.0204633
[INFO] 2022-12-06 13:56:19,163 [428529760.py:   43]:	Epoch 113 (5.3808708 sec) Train Loss: 0.0206893
[INFO] 2022-12-06 13:56:24,381 [428529760.py:   43]:	Epoch 114 (5.2161021 sec) Train Loss: 0.0207985
[INFO] 2022-12-06 13:56:30,182 [428529760.py:   43]:	Epoch 115 (5.7994624 sec) Train Loss: 0.0194320
[INFO] 2022-12-06 13:56:35,214 [428529760.py:   43]:	Epoch 116 (5.0305015 sec) Train Loss: 0.0201423
[INFO] 2022-12-06 13:56:40,365 [428529760.py:   43]:	Epoch 117 (5.1504943 sec) Train Loss: 0.0204539
[INFO] 2022-12-06 13:56:45,513 [428529760.py:   43]:	Epoch 118 (5.1457694 sec) Train Loss: 0.0198392
[INFO] 2022-12-06 13:56:50,637 [428529760.py:   43]:	Epoch 119 (5.1231706 sec) Train Loss: 0.0203209
[INFO] 2022-12-06 13:56:55,745 [428529760.py:   43]:	Epoch 120 (5.1069117 sec) Train Loss: 0.0200447
[INFO] 2022-12-06 13:57:01,476 [428529760.py:   43]:	Epoch 121 (5.7289428 sec) Train Loss: 0.0196649
[INFO] 2022-12-06 13:57:06,769 [428529760.py:   43]:	Epoch 122 (5.2917225 sec) Train Loss: 0.0192582
[INFO] 2022-12-06 13:57:11,773 [428529760.py:   43]:	Epoch 123 (5.0034807 sec) Train Loss: 0.0195108
[INFO] 2022-12-06 13:57:16,920 [428529760.py:   43]:	Epoch 124 (5.1453051 sec) Train Loss: 0.0197790
[INFO] 2022-12-06 13:57:21,985 [428529760.py:   43]:	Epoch 125 (5.0633465 sec) Train Loss: 0.0189460
[INFO] 2022-12-06 13:57:27,269 [428529760.py:   43]:	Epoch 126 (5.2824940 sec) Train Loss: 0.0185997
[INFO] 2022-12-06 13:57:33,246 [428529760.py:   43]:	Epoch 127 (5.9754968 sec) Train Loss: 0.0183853
[INFO] 2022-12-06 13:57:38,485 [428529760.py:   43]:	Epoch 128 (5.2381382 sec) Train Loss: 0.0199356
[INFO] 2022-12-06 13:57:43,628 [428529760.py:   43]:	Epoch 129 (5.1416392 sec) Train Loss: 0.0187044
[INFO] 2022-12-06 13:57:48,781 [428529760.py:   43]:	Epoch 130 (5.1516999 sec) Train Loss: 0.0191069
[INFO] 2022-12-06 13:57:53,821 [428529760.py:   43]:	Epoch 131 (5.0377883 sec) Train Loss: 0.0187784
[INFO] 2022-12-06 13:57:58,812 [428529760.py:   43]:	Epoch 132 (4.9896284 sec) Train Loss: 0.0185528
[INFO] 2022-12-06 13:58:04,341 [428529760.py:   43]:	Epoch 133 (5.5278596 sec) Train Loss: 0.0183107
[INFO] 2022-12-06 13:58:09,709 [428529760.py:   43]:	Epoch 134 (5.3671046 sec) Train Loss: 0.0180064
[INFO] 2022-12-06 13:58:14,977 [428529760.py:   43]:	Epoch 135 (5.2663960 sec) Train Loss: 0.0181655
[INFO] 2022-12-06 13:58:20,041 [428529760.py:   43]:	Epoch 136 (5.0623502 sec) Train Loss: 0.0174502
[INFO] 2022-12-06 13:58:25,114 [428529760.py:   43]:	Epoch 137 (5.0713555 sec) Train Loss: 0.0180155
[INFO] 2022-12-06 13:58:30,206 [428529760.py:   43]:	Epoch 138 (5.0909743 sec) Train Loss: 0.0182103
[INFO] 2022-12-06 13:58:35,616 [428529760.py:   43]:	Epoch 139 (5.4081072 sec) Train Loss: 0.0188365
[INFO] 2022-12-06 13:58:41,828 [428529760.py:   43]:	Epoch 140 (6.2115343 sec) Train Loss: 0.0178686
[INFO] 2022-12-06 13:58:47,673 [428529760.py:   43]:	Epoch 141 (5.8429155 sec) Train Loss: 0.0185002
[INFO] 2022-12-06 13:58:53,431 [428529760.py:   43]:	Epoch 142 (5.7561059 sec) Train Loss: 0.0178951
[INFO] 2022-12-06 13:58:59,298 [428529760.py:   43]:	Epoch 143 (5.8647802 sec) Train Loss: 0.0182900
[INFO] 2022-12-06 13:59:04,540 [428529760.py:   43]:	Epoch 144 (5.2393870 sec) Train Loss: 0.0176227
[INFO] 2022-12-06 13:59:10,219 [428529760.py:   43]:	Epoch 145 (5.6775959 sec) Train Loss: 0.0176968
[INFO] 2022-12-06 13:59:15,305 [428529760.py:   43]:	Epoch 146 (5.0851366 sec) Train Loss: 0.0171949
[INFO] 2022-12-06 13:59:20,324 [428529760.py:   43]:	Epoch 147 (5.0176372 sec) Train Loss: 0.0172300
[INFO] 2022-12-06 13:59:25,386 [428529760.py:   43]:	Epoch 148 (5.0596618 sec) Train Loss: 0.0173201
[INFO] 2022-12-06 13:59:30,351 [428529760.py:   43]:	Epoch 149 (4.9643242 sec) Train Loss: 0.0183097
[INFO] 2022-12-06 13:59:35,416 [428529760.py:   43]:	Epoch 150 (5.0633665 sec) Train Loss: 0.0170047
[INFO] 2022-12-06 13:59:37,288 [428529760.py:   55]:	GPU run time 266.37851434201, Data prepare extra time 0.7186713516712189
[INFO] 2022-12-06 13:59:37,290 [428529760.py:   56]:	Epoch 150 	 All Loss [138.17612]
[INFO] 2022-12-06 13:59:37,291 [2594160307.py:  118]:	epoch	150


[10.499s] #evaluation triple: 500/5000
[19.837s] #evaluation triple: 1000/5000
[30.135s] #evaluation triple: 1500/5000
[41.059s] #evaluation triple: 2000/5000
[50.433s] #evaluation triple: 2500/5000
[60.015s] #evaluation triple: 3000/5000
[70.306s] #evaluation triple: 3500/5000
[79.734s] #evaluation triple: 4000/5000
[89.226s] #evaluation triple: 4500/5000


[INFO] 2022-12-06 14:01:16,687 [evalutate.py:  127]:	-----Raw-Average-Results
[INFO] 2022-12-06 14:01:16,689 [evalutate.py:  131]:	MeanRank: 261.73, MRR: 0.3214, Hits@1: 0.0709, Hits@3: 0.5004, Hits@10: 0.7883
[INFO] 2022-12-06 14:01:16,690 [evalutate.py:  132]:	-----Filter-Average-Results
[INFO] 2022-12-06 14:01:16,692 [evalutate.py:  137]:	MeanRank: 249.96, MRR: 0.4344, Hits@1: 0.1130, Hits@3: 0.7293, Hits@10: 0.9230


[98.830s] #evaluation triple: 5000/5000


[INFO] 2022-12-06 14:01:23,055 [428529760.py:   43]:	Epoch 151 (6.3618148 sec) Train Loss: 0.0174155
[INFO] 2022-12-06 14:01:28,372 [428529760.py:   43]:	Epoch 152 (5.3126577 sec) Train Loss: 0.0167870
[INFO] 2022-12-06 14:01:33,639 [428529760.py:   43]:	Epoch 153 (5.2648587 sec) Train Loss: 0.0175284
[INFO] 2022-12-06 14:01:38,883 [428529760.py:   43]:	Epoch 154 (5.2429139 sec) Train Loss: 0.0174355
[INFO] 2022-12-06 14:01:43,939 [428529760.py:   43]:	Epoch 155 (5.0541574 sec) Train Loss: 0.0168346
[INFO] 2022-12-06 14:01:48,997 [428529760.py:   43]:	Epoch 156 (5.0567193 sec) Train Loss: 0.0168401
[INFO] 2022-12-06 14:01:54,659 [428529760.py:   43]:	Epoch 157 (5.6614996 sec) Train Loss: 0.0173274
[INFO] 2022-12-06 14:01:59,726 [428529760.py:   43]:	Epoch 158 (5.0644454 sec) Train Loss: 0.0172629
[INFO] 2022-12-06 14:02:04,833 [428529760.py:   43]:	Epoch 159 (5.1056106 sec) Train Loss: 0.0161370
[INFO] 2022-12-06 14:02:09,834 [428529760.py:   43]:	Epoch 160 (4.9996350 sec) Train Loss: 0.0165054
[INFO] 2022-12-06 14:02:14,878 [428529760.py:   43]:	Epoch 161 (5.0431368 sec) Train Loss: 0.0162929
[INFO] 2022-12-06 14:02:20,095 [428529760.py:   43]:	Epoch 162 (5.2152514 sec) Train Loss: 0.0161138
[INFO] 2022-12-06 14:02:25,824 [428529760.py:   43]:	Epoch 163 (5.7278195 sec) Train Loss: 0.0166447
[INFO] 2022-12-06 14:02:30,869 [428529760.py:   43]:	Epoch 164 (5.0428162 sec) Train Loss: 0.0161616
[INFO] 2022-12-06 14:02:36,060 [428529760.py:   43]:	Epoch 165 (5.1901312 sec) Train Loss: 0.0163184
[INFO] 2022-12-06 14:02:41,431 [428529760.py:   43]:	Epoch 166 (5.3698074 sec) Train Loss: 0.0159139
[INFO] 2022-12-06 14:02:46,880 [428529760.py:   43]:	Epoch 167 (5.4471227 sec) Train Loss: 0.0165502
[INFO] 2022-12-06 14:02:52,211 [428529760.py:   43]:	Epoch 168 (5.3297676 sec) Train Loss: 0.0159275
[INFO] 2022-12-06 14:02:57,991 [428529760.py:   43]:	Epoch 169 (5.7784885 sec) Train Loss: 0.0153945
[INFO] 2022-12-06 14:03:03,130 [428529760.py:   43]:	Epoch 170 (5.1382128 sec) Train Loss: 0.0151808
[INFO] 2022-12-06 14:03:08,170 [428529760.py:   43]:	Epoch 171 (5.0385400 sec) Train Loss: 0.0157319
[INFO] 2022-12-06 14:03:13,161 [428529760.py:   43]:	Epoch 172 (4.9894963 sec) Train Loss: 0.0154022
[INFO] 2022-12-06 14:03:18,215 [428529760.py:   43]:	Epoch 173 (5.0529352 sec) Train Loss: 0.0152791
[INFO] 2022-12-06 14:03:23,107 [428529760.py:   43]:	Epoch 174 (4.8906519 sec) Train Loss: 0.0156454
[INFO] 2022-12-06 14:03:28,552 [428529760.py:   43]:	Epoch 175 (5.4433895 sec) Train Loss: 0.0154638
[INFO] 2022-12-06 14:03:33,938 [428529760.py:   43]:	Epoch 176 (5.3818902 sec) Train Loss: 0.0156946
[INFO] 2022-12-06 14:03:39,113 [428529760.py:   43]:	Epoch 177 (5.1737107 sec) Train Loss: 0.0151310
[INFO] 2022-12-06 14:03:44,092 [428529760.py:   43]:	Epoch 178 (4.9770066 sec) Train Loss: 0.0157168
[INFO] 2022-12-06 14:03:49,005 [428529760.py:   43]:	Epoch 179 (4.9116112 sec) Train Loss: 0.0159113
[INFO] 2022-12-06 14:03:54,062 [428529760.py:   43]:	Epoch 180 (5.0554208 sec) Train Loss: 0.0152291
[INFO] 2022-12-06 14:03:59,454 [428529760.py:   43]:	Epoch 181 (5.3910607 sec) Train Loss: 0.0153713
[INFO] 2022-12-06 14:04:05,442 [428529760.py:   43]:	Epoch 182 (5.9864343 sec) Train Loss: 0.0153255
[INFO] 2022-12-06 14:04:10,657 [428529760.py:   43]:	Epoch 183 (5.2130815 sec) Train Loss: 0.0151434
[INFO] 2022-12-06 14:04:15,734 [428529760.py:   43]:	Epoch 184 (5.0758540 sec) Train Loss: 0.0155610
[INFO] 2022-12-06 14:04:20,797 [428529760.py:   43]:	Epoch 185 (5.0619502 sec) Train Loss: 0.0147455
[INFO] 2022-12-06 14:04:25,828 [428529760.py:   43]:	Epoch 186 (5.0284896 sec) Train Loss: 0.0151667
[INFO] 2022-12-06 14:04:30,909 [428529760.py:   43]:	Epoch 187 (5.0802074 sec) Train Loss: 0.0154017
[INFO] 2022-12-06 14:04:36,735 [428529760.py:   43]:	Epoch 188 (5.8248173 sec) Train Loss: 0.0152935
[INFO] 2022-12-06 14:04:41,737 [428529760.py:   43]:	Epoch 189 (5.0005584 sec) Train Loss: 0.0146928
[INFO] 2022-12-06 14:04:46,834 [428529760.py:   43]:	Epoch 190 (5.0955126 sec) Train Loss: 0.0148585
[INFO] 2022-12-06 14:04:51,891 [428529760.py:   43]:	Epoch 191 (5.0549302 sec) Train Loss: 0.0146670
[INFO] 2022-12-06 14:04:57,046 [428529760.py:   43]:	Epoch 192 (5.1540342 sec) Train Loss: 0.0146429
[INFO] 2022-12-06 14:05:02,210 [428529760.py:   43]:	Epoch 193 (5.1620965 sec) Train Loss: 0.0149815
[INFO] 2022-12-06 14:05:07,834 [428529760.py:   43]:	Epoch 194 (5.6226151 sec) Train Loss: 0.0153472
[INFO] 2022-12-06 14:05:12,800 [428529760.py:   43]:	Epoch 195 (4.9647347 sec) Train Loss: 0.0153386
[INFO] 2022-12-06 14:05:18,205 [428529760.py:   43]:	Epoch 196 (5.4032588 sec) Train Loss: 0.0142858
[INFO] 2022-12-06 14:05:23,422 [428529760.py:   43]:	Epoch 197 (5.2160440 sec) Train Loss: 0.0144347
[INFO] 2022-12-06 14:05:28,597 [428529760.py:   43]:	Epoch 198 (5.1731777 sec) Train Loss: 0.0145736
[INFO] 2022-12-06 14:05:33,568 [428529760.py:   43]:	Epoch 199 (4.9700399 sec) Train Loss: 0.0149177
[INFO] 2022-12-06 14:05:39,139 [428529760.py:   43]:	Epoch 200 (5.5698652 sec) Train Loss: 0.0147685
[INFO] 2022-12-06 14:05:40,985 [428529760.py:   55]:	GPU run time 263.5835297368467, Data prepare extra time 0.7084652911871672
[INFO] 2022-12-06 14:05:40,987 [428529760.py:   56]:	Epoch 200 	 All Loss [111.46738]
[INFO] 2022-12-06 14:05:40,988 [2594160307.py:  118]:	epoch	200


[9.198s] #evaluation triple: 500/5000
[18.275s] #evaluation triple: 1000/5000
[27.386s] #evaluation triple: 1500/5000
[37.275s] #evaluation triple: 2000/5000
[46.301s] #evaluation triple: 2500/5000
[56.120s] #evaluation triple: 3000/5000
[66.710s] #evaluation triple: 3500/5000
[75.951s] #evaluation triple: 4000/5000
[84.968s] #evaluation triple: 4500/5000


[INFO] 2022-12-06 14:07:15,990 [evalutate.py:  127]:	-----Raw-Average-Results
[INFO] 2022-12-06 14:07:15,993 [evalutate.py:  131]:	MeanRank: 244.01, MRR: 0.3276, Hits@1: 0.0784, Hits@3: 0.5060, Hits@10: 0.7917
[INFO] 2022-12-06 14:07:15,993 [evalutate.py:  132]:	-----Filter-Average-Results
[INFO] 2022-12-06 14:07:15,995 [evalutate.py:  137]:	MeanRank: 232.04, MRR: 0.4513, Hits@1: 0.1306, Hits@3: 0.7488, Hits@10: 0.9285


[94.704s] #evaluation triple: 5000/5000


[INFO] 2022-12-06 14:07:21,586 [428529760.py:   43]:	Epoch 201 (5.5901504 sec) Train Loss: 0.0145231
[INFO] 2022-12-06 14:07:26,602 [428529760.py:   43]:	Epoch 202 (5.0116644 sec) Train Loss: 0.0141905
[INFO] 2022-12-06 14:07:31,516 [428529760.py:   43]:	Epoch 203 (4.9128935 sec) Train Loss: 0.0142230
[INFO] 2022-12-06 14:07:36,490 [428529760.py:   43]:	Epoch 204 (4.9722089 sec) Train Loss: 0.0138566
[INFO] 2022-12-06 14:07:41,447 [428529760.py:   43]:	Epoch 205 (4.9566509 sec) Train Loss: 0.0138524
[INFO] 2022-12-06 14:07:46,548 [428529760.py:   43]:	Epoch 206 (5.0989635 sec) Train Loss: 0.0139954
[INFO] 2022-12-06 14:07:52,512 [428529760.py:   43]:	Epoch 207 (5.9632766 sec) Train Loss: 0.0146913
[INFO] 2022-12-06 14:07:57,830 [428529760.py:   43]:	Epoch 208 (5.3165924 sec) Train Loss: 0.0139343
[INFO] 2022-12-06 14:08:03,140 [428529760.py:   43]:	Epoch 209 (5.3077017 sec) Train Loss: 0.0137547
[INFO] 2022-12-06 14:08:08,215 [428529760.py:   43]:	Epoch 210 (5.0741033 sec) Train Loss: 0.0135561
[INFO] 2022-12-06 14:08:13,281 [428529760.py:   43]:	Epoch 211 (5.0644807 sec) Train Loss: 0.0144586
[INFO] 2022-12-06 14:08:18,277 [428529760.py:   43]:	Epoch 212 (4.9946574 sec) Train Loss: 0.0141250
[INFO] 2022-12-06 14:08:23,731 [428529760.py:   43]:	Epoch 213 (5.4525754 sec) Train Loss: 0.0142614
[INFO] 2022-12-06 14:08:28,667 [428529760.py:   43]:	Epoch 214 (4.9346108 sec) Train Loss: 0.0134664
[INFO] 2022-12-06 14:08:33,509 [428529760.py:   43]:	Epoch 215 (4.8412733 sec) Train Loss: 0.0137170
[INFO] 2022-12-06 14:08:38,481 [428529760.py:   43]:	Epoch 216 (4.9704180 sec) Train Loss: 0.0143813
[INFO] 2022-12-06 14:08:43,373 [428529760.py:   43]:	Epoch 217 (4.8911592 sec) Train Loss: 0.0137557
[INFO] 2022-12-06 14:08:48,443 [428529760.py:   43]:	Epoch 218 (5.0683762 sec) Train Loss: 0.0136498
[INFO] 2022-12-06 14:08:54,076 [428529760.py:   43]:	Epoch 219 (5.6312961 sec) Train Loss: 0.0136360
[INFO] 2022-12-06 14:08:58,988 [428529760.py:   43]:	Epoch 220 (4.9113144 sec) Train Loss: 0.0133655
[INFO] 2022-12-06 14:09:04,022 [428529760.py:   43]:	Epoch 221 (5.0324300 sec) Train Loss: 0.0139919
[INFO] 2022-12-06 14:09:09,270 [428529760.py:   43]:	Epoch 222 (5.2470913 sec) Train Loss: 0.0139030
[INFO] 2022-12-06 14:09:14,474 [428529760.py:   43]:	Epoch 223 (5.2023110 sec) Train Loss: 0.0130767
[INFO] 2022-12-06 14:09:19,680 [428529760.py:   43]:	Epoch 224 (5.2050102 sec) Train Loss: 0.0139174
[INFO] 2022-12-06 14:09:25,130 [428529760.py:   43]:	Epoch 225 (5.4484624 sec) Train Loss: 0.0134756
[INFO] 2022-12-06 14:09:30,348 [428529760.py:   43]:	Epoch 226 (5.2171903 sec) Train Loss: 0.0135605
[INFO] 2022-12-06 14:09:35,285 [428529760.py:   43]:	Epoch 227 (4.9352039 sec) Train Loss: 0.0131975
[INFO] 2022-12-06 14:09:40,309 [428529760.py:   43]:	Epoch 228 (5.0230079 sec) Train Loss: 0.0137813
[INFO] 2022-12-06 14:09:45,385 [428529760.py:   43]:	Epoch 229 (5.0741614 sec) Train Loss: 0.0134310
[INFO] 2022-12-06 14:09:50,357 [428529760.py:   43]:	Epoch 230 (4.9705934 sec) Train Loss: 0.0133356
[INFO] 2022-12-06 14:09:55,410 [428529760.py:   43]:	Epoch 231 (5.0520061 sec) Train Loss: 0.0133074
[INFO] 2022-12-06 14:10:01,035 [428529760.py:   43]:	Epoch 232 (5.6239179 sec) Train Loss: 0.0138981
[INFO] 2022-12-06 14:10:06,096 [428529760.py:   43]:	Epoch 233 (5.0594978 sec) Train Loss: 0.0135330
[INFO] 2022-12-06 14:10:11,024 [428529760.py:   43]:	Epoch 234 (4.9264966 sec) Train Loss: 0.0138438
[INFO] 2022-12-06 14:10:16,157 [428529760.py:   43]:	Epoch 235 (5.1313870 sec) Train Loss: 0.0139259
[INFO] 2022-12-06 14:10:21,186 [428529760.py:   43]:	Epoch 236 (5.0283191 sec) Train Loss: 0.0133156
[INFO] 2022-12-06 14:10:26,386 [428529760.py:   43]:	Epoch 237 (5.1984452 sec) Train Loss: 0.0127190
[INFO] 2022-12-06 14:10:32,281 [428529760.py:   43]:	Epoch 238 (5.8932661 sec) Train Loss: 0.0128574
[INFO] 2022-12-06 14:10:37,505 [428529760.py:   43]:	Epoch 239 (5.2223399 sec) Train Loss: 0.0126133
[INFO] 2022-12-06 14:10:42,620 [428529760.py:   43]:	Epoch 240 (5.1135950 sec) Train Loss: 0.0128673
[INFO] 2022-12-06 14:10:47,609 [428529760.py:   43]:	Epoch 241 (4.9884750 sec) Train Loss: 0.0132589
[INFO] 2022-12-06 14:10:52,480 [428529760.py:   43]:	Epoch 242 (4.8692685 sec) Train Loss: 0.0134043
[INFO] 2022-12-06 14:10:57,403 [428529760.py:   43]:	Epoch 243 (4.9214836 sec) Train Loss: 0.0129047
[INFO] 2022-12-06 14:11:02,854 [428529760.py:   43]:	Epoch 244 (5.4501316 sec) Train Loss: 0.0134988
[INFO] 2022-12-06 14:11:07,899 [428529760.py:   43]:	Epoch 245 (5.0439210 sec) Train Loss: 0.0130015
[INFO] 2022-12-06 14:11:12,994 [428529760.py:   43]:	Epoch 246 (5.0931918 sec) Train Loss: 0.0134518
[INFO] 2022-12-06 14:11:18,133 [428529760.py:   43]:	Epoch 247 (5.1371061 sec) Train Loss: 0.0130832
[INFO] 2022-12-06 14:11:23,141 [428529760.py:   43]:	Epoch 248 (5.0069373 sec) Train Loss: 0.0134366
[INFO] 2022-12-06 14:11:28,084 [428529760.py:   43]:	Epoch 249 (4.9416824 sec) Train Loss: 0.0125928
[INFO] 2022-12-06 14:11:33,243 [428529760.py:   43]:	Epoch 250 (5.1572441 sec) Train Loss: 0.0126592
[INFO] 2022-12-06 14:11:35,521 [428529760.py:   55]:	GPU run time 258.7828383781016, Data prepare extra time 0.7429611850529909
[INFO] 2022-12-06 14:11:35,523 [428529760.py:   56]:	Epoch 250 	 All Loss [96.42074]
[INFO] 2022-12-06 14:11:35,524 [2594160307.py:  118]:	epoch	250


[9.782s] #evaluation triple: 500/5000
[19.431s] #evaluation triple: 1000/5000
[28.880s] #evaluation triple: 1500/5000
[38.710s] #evaluation triple: 2000/5000
[47.830s] #evaluation triple: 2500/5000
[57.094s] #evaluation triple: 3000/5000
[67.062s] #evaluation triple: 3500/5000
[76.338s] #evaluation triple: 4000/5000
[85.880s] #evaluation triple: 4500/5000


[INFO] 2022-12-06 14:13:11,987 [evalutate.py:  127]:	-----Raw-Average-Results
[INFO] 2022-12-06 14:13:11,990 [evalutate.py:  131]:	MeanRank: 241.59, MRR: 0.3363, Hits@1: 0.0893, Hits@3: 0.5112, Hits@10: 0.7946
[INFO] 2022-12-06 14:13:11,991 [evalutate.py:  132]:	-----Filter-Average-Results
[INFO] 2022-12-06 14:13:11,993 [evalutate.py:  137]:	MeanRank: 229.83, MRR: 0.4658, Hits@1: 0.1480, Hits@3: 0.7600, Hits@10: 0.9324


[96.184s] #evaluation triple: 5000/5000


[INFO] 2022-12-06 14:13:17,997 [428529760.py:   43]:	Epoch 251 (6.0029500 sec) Train Loss: 0.0131853
[INFO] 2022-12-06 14:13:22,936 [428529760.py:   43]:	Epoch 252 (4.9348029 sec) Train Loss: 0.0128932
[INFO] 2022-12-06 14:13:27,822 [428529760.py:   43]:	Epoch 253 (4.8842197 sec) Train Loss: 0.0128224
[INFO] 2022-12-06 14:13:32,887 [428529760.py:   43]:	Epoch 254 (5.0637011 sec) Train Loss: 0.0128376
[INFO] 2022-12-06 14:13:37,887 [428529760.py:   43]:	Epoch 255 (4.9981010 sec) Train Loss: 0.0124257
[INFO] 2022-12-06 14:13:42,881 [428529760.py:   43]:	Epoch 256 (4.9921014 sec) Train Loss: 0.0130728
[INFO] 2022-12-06 14:13:48,408 [428529760.py:   43]:	Epoch 257 (5.5259801 sec) Train Loss: 0.0126911
[INFO] 2022-12-06 14:13:53,408 [428529760.py:   43]:	Epoch 258 (4.9983272 sec) Train Loss: 0.0132162
[INFO] 2022-12-06 14:13:58,401 [428529760.py:   43]:	Epoch 259 (4.9920685 sec) Train Loss: 0.0126512
[INFO] 2022-12-06 14:14:03,500 [428529760.py:   43]:	Epoch 260 (5.0980546 sec) Train Loss: 0.0128621
[INFO] 2022-12-06 14:14:08,542 [428529760.py:   43]:	Epoch 261 (5.0405573 sec) Train Loss: 0.0134137
[INFO] 2022-12-06 14:14:13,481 [428529760.py:   43]:	Epoch 262 (4.9370435 sec) Train Loss: 0.0129841
[INFO] 2022-12-06 14:14:19,170 [428529760.py:   43]:	Epoch 263 (5.6876735 sec) Train Loss: 0.0128204
[INFO] 2022-12-06 14:14:24,488 [428529760.py:   43]:	Epoch 264 (5.3163809 sec) Train Loss: 0.0123899
[INFO] 2022-12-06 14:14:29,762 [428529760.py:   43]:	Epoch 265 (5.2727621 sec) Train Loss: 0.0127483
[INFO] 2022-12-06 14:14:35,020 [428529760.py:   43]:	Epoch 266 (5.2562684 sec) Train Loss: 0.0122436
[INFO] 2022-12-06 14:14:40,078 [428529760.py:   43]:	Epoch 267 (5.0560404 sec) Train Loss: 0.0131197
[INFO] 2022-12-06 14:14:45,208 [428529760.py:   43]:	Epoch 268 (5.1293908 sec) Train Loss: 0.0128812
[INFO] 2022-12-06 14:14:50,902 [428529760.py:   43]:	Epoch 269 (5.6917796 sec) Train Loss: 0.0122906
[INFO] 2022-12-06 14:14:55,932 [428529760.py:   43]:	Epoch 270 (5.0291177 sec) Train Loss: 0.0124749
[INFO] 2022-12-06 14:15:00,943 [428529760.py:   43]:	Epoch 271 (5.0092362 sec) Train Loss: 0.0126110
[INFO] 2022-12-06 14:15:06,011 [428529760.py:   43]:	Epoch 272 (5.0667784 sec) Train Loss: 0.0125545
[INFO] 2022-12-06 14:15:11,135 [428529760.py:   43]:	Epoch 273 (5.1224967 sec) Train Loss: 0.0129326
[INFO] 2022-12-06 14:15:16,304 [428529760.py:   43]:	Epoch 274 (5.1675000 sec) Train Loss: 0.0119309
[INFO] 2022-12-06 14:15:21,738 [428529760.py:   43]:	Epoch 275 (5.4333519 sec) Train Loss: 0.0129868
[INFO] 2022-12-06 14:15:26,957 [428529760.py:   43]:	Epoch 276 (5.2167786 sec) Train Loss: 0.0125157
[INFO] 2022-12-06 14:15:32,150 [428529760.py:   43]:	Epoch 277 (5.1918520 sec) Train Loss: 0.0131286
[INFO] 2022-12-06 14:15:37,498 [428529760.py:   43]:	Epoch 278 (5.3463025 sec) Train Loss: 0.0123888
[INFO] 2022-12-06 14:15:43,074 [428529760.py:   43]:	Epoch 279 (5.5750952 sec) Train Loss: 0.0121060
[INFO] 2022-12-06 14:15:48,439 [428529760.py:   43]:	Epoch 280 (5.3635594 sec) Train Loss: 0.0124194
[INFO] 2022-12-06 14:15:54,114 [428529760.py:   43]:	Epoch 281 (5.6734317 sec) Train Loss: 0.0131171
[INFO] 2022-12-06 14:15:59,215 [428529760.py:   43]:	Epoch 282 (5.0990349 sec) Train Loss: 0.0123402
[INFO] 2022-12-06 14:16:04,219 [428529760.py:   43]:	Epoch 283 (5.0035260 sec) Train Loss: 0.0123993
[INFO] 2022-12-06 14:16:09,222 [428529760.py:   43]:	Epoch 284 (5.0009875 sec) Train Loss: 0.0123148
[INFO] 2022-12-06 14:16:14,138 [428529760.py:   43]:	Epoch 285 (4.9147838 sec) Train Loss: 0.0125778
[INFO] 2022-12-06 14:16:19,116 [428529760.py:   43]:	Epoch 286 (4.9773035 sec) Train Loss: 0.0119516
[INFO] 2022-12-06 14:16:24,134 [428529760.py:   43]:	Epoch 287 (5.0160707 sec) Train Loss: 0.0120533
[INFO] 2022-12-06 14:16:29,675 [428529760.py:   43]:	Epoch 288 (5.5394283 sec) Train Loss: 0.0123068
[INFO] 2022-12-06 14:16:34,641 [428529760.py:   43]:	Epoch 289 (4.9649174 sec) Train Loss: 0.0122370
[INFO] 2022-12-06 14:16:39,644 [428529760.py:   43]:	Epoch 290 (5.0021755 sec) Train Loss: 0.0117906
[INFO] 2022-12-06 14:16:44,679 [428529760.py:   43]:	Epoch 291 (5.0336244 sec) Train Loss: 0.0116917
[INFO] 2022-12-06 14:16:49,734 [428529760.py:   43]:	Epoch 292 (5.0535673 sec) Train Loss: 0.0117060
[INFO] 2022-12-06 14:16:54,855 [428529760.py:   43]:	Epoch 293 (5.1187586 sec) Train Loss: 0.0125124
[INFO] 2022-12-06 14:17:00,895 [428529760.py:   43]:	Epoch 294 (6.0390965 sec) Train Loss: 0.0113565
[INFO] 2022-12-06 14:17:06,076 [428529760.py:   43]:	Epoch 295 (5.1791770 sec) Train Loss: 0.0122484
[INFO] 2022-12-06 14:17:11,219 [428529760.py:   43]:	Epoch 296 (5.1416250 sec) Train Loss: 0.0121895
[INFO] 2022-12-06 14:17:16,303 [428529760.py:   43]:	Epoch 297 (5.0826004 sec) Train Loss: 0.0117610
[INFO] 2022-12-06 14:17:21,318 [428529760.py:   43]:	Epoch 298 (5.0134910 sec) Train Loss: 0.0117925
[INFO] 2022-12-06 14:17:26,285 [428529760.py:   43]:	Epoch 299 (4.9654877 sec) Train Loss: 0.0117016
[INFO] 2022-12-06 14:17:31,890 [428529760.py:   43]:	Epoch 300 (5.6040248 sec) Train Loss: 0.0120291
[INFO] 2022-12-06 14:17:33,624 [428529760.py:   55]:	GPU run time 260.8392007295042, Data prepare extra time 0.7912144865840673
[INFO] 2022-12-06 14:17:33,626 [428529760.py:   56]:	Epoch 300 	 All Loss [88.55295]
[INFO] 2022-12-06 14:17:33,627 [2594160307.py:  118]:	epoch	300


[9.200s] #evaluation triple: 500/5000
[18.552s] #evaluation triple: 1000/5000
[28.126s] #evaluation triple: 1500/5000
[38.631s] #evaluation triple: 2000/5000
[48.238s] #evaluation triple: 2500/5000
[57.616s] #evaluation triple: 3000/5000
[67.691s] #evaluation triple: 3500/5000
[76.975s] #evaluation triple: 4000/5000
[86.052s] #evaluation triple: 4500/5000


[INFO] 2022-12-06 14:19:10,018 [evalutate.py:  127]:	-----Raw-Average-Results
[INFO] 2022-12-06 14:19:10,021 [evalutate.py:  131]:	MeanRank: 242.54, MRR: 0.3383, Hits@1: 0.0945, Hits@3: 0.5132, Hits@10: 0.7926
[INFO] 2022-12-06 14:19:10,021 [evalutate.py:  132]:	-----Filter-Average-Results
[INFO] 2022-12-06 14:19:10,023 [evalutate.py:  137]:	MeanRank: 230.63, MRR: 0.4725, Hits@1: 0.1572, Hits@3: 0.7670, Hits@10: 0.9331


[96.115s] #evaluation triple: 5000/5000


[INFO] 2022-12-06 14:19:15,692 [428529760.py:   43]:	Epoch 301 (5.6681375 sec) Train Loss: 0.0121311
[INFO] 2022-12-06 14:19:20,720 [428529760.py:   43]:	Epoch 302 (5.0240809 sec) Train Loss: 0.0113676
[INFO] 2022-12-06 14:19:25,661 [428529760.py:   43]:	Epoch 303 (4.9399808 sec) Train Loss: 0.0113864
[INFO] 2022-12-06 14:19:30,882 [428529760.py:   43]:	Epoch 304 (5.2193733 sec) Train Loss: 0.0113158
[INFO] 2022-12-06 14:19:36,095 [428529760.py:   43]:	Epoch 305 (5.2118627 sec) Train Loss: 0.0117796
[INFO] 2022-12-06 14:19:41,848 [428529760.py:   43]:	Epoch 306 (5.7516978 sec) Train Loss: 0.0126295
[INFO] 2022-12-06 14:19:47,211 [428529760.py:   43]:	Epoch 307 (5.3615992 sec) Train Loss: 0.0114976
[INFO] 2022-12-06 14:19:52,308 [428529760.py:   43]:	Epoch 308 (5.0953898 sec) Train Loss: 0.0119166
[INFO] 2022-12-06 14:19:57,290 [428529760.py:   43]:	Epoch 309 (4.9804269 sec) Train Loss: 0.0114572
[INFO] 2022-12-06 14:20:02,340 [428529760.py:   43]:	Epoch 310 (5.0487220 sec) Train Loss: 0.0112977
[INFO] 2022-12-06 14:20:07,323 [428529760.py:   43]:	Epoch 311 (4.9816449 sec) Train Loss: 0.0119916
[INFO] 2022-12-06 14:20:12,316 [428529760.py:   43]:	Epoch 312 (4.9920624 sec) Train Loss: 0.0115118
[INFO] 2022-12-06 14:20:18,008 [428529760.py:   43]:	Epoch 313 (5.6910729 sec) Train Loss: 0.0119434
[INFO] 2022-12-06 14:20:23,075 [428529760.py:   43]:	Epoch 314 (5.0654367 sec) Train Loss: 0.0114161
[INFO] 2022-12-06 14:20:28,054 [428529760.py:   43]:	Epoch 315 (4.9777077 sec) Train Loss: 0.0113199
[INFO] 2022-12-06 14:20:33,009 [428529760.py:   43]:	Epoch 316 (4.9529605 sec) Train Loss: 0.0115645
[INFO] 2022-12-06 14:20:38,132 [428529760.py:   43]:	Epoch 317 (5.1221737 sec) Train Loss: 0.0116928
[INFO] 2022-12-06 14:20:43,251 [428529760.py:   43]:	Epoch 318 (5.1172485 sec) Train Loss: 0.0116295
[INFO] 2022-12-06 14:20:49,262 [428529760.py:   43]:	Epoch 319 (6.0100891 sec) Train Loss: 0.0120287
[INFO] 2022-12-06 14:20:54,529 [428529760.py:   43]:	Epoch 320 (5.2654253 sec) Train Loss: 0.0112940
[INFO] 2022-12-06 14:20:59,740 [428529760.py:   43]:	Epoch 321 (5.2097408 sec) Train Loss: 0.0116382
[INFO] 2022-12-06 14:21:04,974 [428529760.py:   43]:	Epoch 322 (5.2329123 sec) Train Loss: 0.0115877
[INFO] 2022-12-06 14:21:09,976 [428529760.py:   43]:	Epoch 323 (5.0002572 sec) Train Loss: 0.0111960
[INFO] 2022-12-06 14:21:14,963 [428529760.py:   43]:	Epoch 324 (4.9861467 sec) Train Loss: 0.0113620
[INFO] 2022-12-06 14:21:20,477 [428529760.py:   43]:	Epoch 325 (5.5127044 sec) Train Loss: 0.0114585
[INFO] 2022-12-06 14:21:25,525 [428529760.py:   43]:	Epoch 326 (5.0465739 sec) Train Loss: 0.0111758
[INFO] 2022-12-06 14:21:30,595 [428529760.py:   43]:	Epoch 327 (5.0688685 sec) Train Loss: 0.0115351
[INFO] 2022-12-06 14:21:35,583 [428529760.py:   43]:	Epoch 328 (4.9865029 sec) Train Loss: 0.0112677
[INFO] 2022-12-06 14:21:40,470 [428529760.py:   43]:	Epoch 329 (4.8859240 sec) Train Loss: 0.0114232
[INFO] 2022-12-06 14:21:45,457 [428529760.py:   43]:	Epoch 330 (4.9853072 sec) Train Loss: 0.0113422
[INFO] 2022-12-06 14:21:50,832 [428529760.py:   43]:	Epoch 331 (5.3723345 sec) Train Loss: 0.0117017
[INFO] 2022-12-06 14:21:56,075 [428529760.py:   43]:	Epoch 332 (5.2420035 sec) Train Loss: 0.0113053
[INFO] 2022-12-06 14:22:01,100 [428529760.py:   43]:	Epoch 333 (5.0238883 sec) Train Loss: 0.0113124
[INFO] 2022-12-06 14:22:06,450 [428529760.py:   43]:	Epoch 334 (5.3483767 sec) Train Loss: 0.0116078
[INFO] 2022-12-06 14:22:11,643 [428529760.py:   43]:	Epoch 335 (5.1917900 sec) Train Loss: 0.0115174
[INFO] 2022-12-06 14:22:16,940 [428529760.py:   43]:	Epoch 336 (5.2954499 sec) Train Loss: 0.0108507
[INFO] 2022-12-06 14:22:22,340 [428529760.py:   43]:	Epoch 337 (5.3983726 sec) Train Loss: 0.0117727
[INFO] 2022-12-06 14:22:27,810 [428529760.py:   43]:	Epoch 338 (5.4691672 sec) Train Loss: 0.0110713
[INFO] 2022-12-06 14:22:32,878 [428529760.py:   43]:	Epoch 339 (5.0665189 sec) Train Loss: 0.0109254
[INFO] 2022-12-06 14:22:37,977 [428529760.py:   43]:	Epoch 340 (5.0968570 sec) Train Loss: 0.0109877
[INFO] 2022-12-06 14:22:42,984 [428529760.py:   43]:	Epoch 341 (5.0063376 sec) Train Loss: 0.0115714
[INFO] 2022-12-06 14:22:48,128 [428529760.py:   43]:	Epoch 342 (5.1417569 sec) Train Loss: 0.0114094
[INFO] 2022-12-06 14:22:53,141 [428529760.py:   43]:	Epoch 343 (5.0126427 sec) Train Loss: 0.0112634
[INFO] 2022-12-06 14:22:58,737 [428529760.py:   43]:	Epoch 344 (5.5943814 sec) Train Loss: 0.0109739
[INFO] 2022-12-06 14:23:03,817 [428529760.py:   43]:	Epoch 345 (5.0780006 sec) Train Loss: 0.0110770
[INFO] 2022-12-06 14:23:08,951 [428529760.py:   43]:	Epoch 346 (5.1334581 sec) Train Loss: 0.0107661
[INFO] 2022-12-06 14:23:13,918 [428529760.py:   43]:	Epoch 347 (4.9656716 sec) Train Loss: 0.0111399
[INFO] 2022-12-06 14:23:18,809 [428529760.py:   43]:	Epoch 348 (4.8893229 sec) Train Loss: 0.0106420
[INFO] 2022-12-06 14:23:23,813 [428529760.py:   43]:	Epoch 349 (5.0027711 sec) Train Loss: 0.0114590
[INFO] 2022-12-06 14:23:29,642 [428529760.py:   43]:	Epoch 350 (5.8280777 sec) Train Loss: 0.0110585
[INFO] 2022-12-06 14:23:31,486 [428529760.py:   55]:	GPU run time 260.7401882056147, Data prepare extra time 0.7218807395547628
[INFO] 2022-12-06 14:23:31,488 [428529760.py:   56]:	Epoch 350 	 All Loss [81.14848]
[INFO] 2022-12-06 14:23:31,489 [2594160307.py:  118]:	epoch	350


[9.533s] #evaluation triple: 500/5000
[18.739s] #evaluation triple: 1000/5000
[28.526s] #evaluation triple: 1500/5000
[37.886s] #evaluation triple: 2000/5000
[46.856s] #evaluation triple: 2500/5000
[55.929s] #evaluation triple: 3000/5000
[66.267s] #evaluation triple: 3500/5000
[76.187s] #evaluation triple: 4000/5000
[85.571s] #evaluation triple: 4500/5000


[INFO] 2022-12-06 14:25:07,492 [evalutate.py:  127]:	-----Raw-Average-Results
[INFO] 2022-12-06 14:25:07,496 [evalutate.py:  131]:	MeanRank: 241.35, MRR: 0.3431, Hits@1: 0.0994, Hits@3: 0.5185, Hits@10: 0.7937
[INFO] 2022-12-06 14:25:07,496 [evalutate.py:  132]:	-----Filter-Average-Results
[INFO] 2022-12-06 14:25:07,498 [evalutate.py:  137]:	MeanRank: 229.24, MRR: 0.4814, Hits@1: 0.1667, Hits@3: 0.7767, Hits@10: 0.9345


[95.709s] #evaluation triple: 5000/5000


[INFO] 2022-12-06 14:25:12,954 [428529760.py:   43]:	Epoch 351 (5.4553820 sec) Train Loss: 0.0110731
[INFO] 2022-12-06 14:25:17,940 [428529760.py:   43]:	Epoch 352 (4.9821215 sec) Train Loss: 0.0113844
[INFO] 2022-12-06 14:25:22,983 [428529760.py:   43]:	Epoch 353 (5.0409892 sec) Train Loss: 0.0110057
[INFO] 2022-12-06 14:25:27,886 [428529760.py:   43]:	Epoch 354 (4.9019986 sec) Train Loss: 0.0110216
[INFO] 2022-12-06 14:25:32,836 [428529760.py:   43]:	Epoch 355 (4.9486015 sec) Train Loss: 0.0112751
[INFO] 2022-12-06 14:25:38,155 [428529760.py:   43]:	Epoch 356 (5.3176671 sec) Train Loss: 0.0111027
[INFO] 2022-12-06 14:25:43,386 [428529760.py:   43]:	Epoch 357 (5.2299256 sec) Train Loss: 0.0110664
[INFO] 2022-12-06 14:25:48,402 [428529760.py:   43]:	Epoch 358 (5.0139296 sec) Train Loss: 0.0105313
[INFO] 2022-12-06 14:25:53,582 [428529760.py:   43]:	Epoch 359 (5.1788955 sec) Train Loss: 0.0110419
[INFO] 2022-12-06 14:25:58,890 [428529760.py:   43]:	Epoch 360 (5.3062439 sec) Train Loss: 0.0108828
[INFO] 2022-12-06 14:26:04,234 [428529760.py:   43]:	Epoch 361 (5.3425035 sec) Train Loss: 0.0113178
[INFO] 2022-12-06 14:26:09,456 [428529760.py:   43]:	Epoch 362 (5.2212943 sec) Train Loss: 0.0108751
[INFO] 2022-12-06 14:26:14,989 [428529760.py:   43]:	Epoch 363 (5.5072578 sec) Train Loss: 0.0106593
[INFO] 2022-12-06 14:26:20,013 [428529760.py:   43]:	Epoch 364 (5.0225282 sec) Train Loss: 0.0110162
[INFO] 2022-12-06 14:26:25,010 [428529760.py:   43]:	Epoch 365 (4.9957636 sec) Train Loss: 0.0105444
[INFO] 2022-12-06 14:26:29,990 [428529760.py:   43]:	Epoch 366 (4.9785773 sec) Train Loss: 0.0113366
[INFO] 2022-12-06 14:26:34,968 [428529760.py:   43]:	Epoch 367 (4.9762742 sec) Train Loss: 0.0105689
[INFO] 2022-12-06 14:26:40,052 [428529760.py:   43]:	Epoch 368 (5.0829293 sec) Train Loss: 0.0108596
[INFO] 2022-12-06 14:26:45,927 [428529760.py:   43]:	Epoch 369 (5.8742789 sec) Train Loss: 0.0105815
[INFO] 2022-12-06 14:26:50,949 [428529760.py:   43]:	Epoch 370 (5.0192748 sec) Train Loss: 0.0108027
[INFO] 2022-12-06 14:26:55,964 [428529760.py:   43]:	Epoch 371 (5.0142948 sec) Train Loss: 0.0109028
[INFO] 2022-12-06 14:27:00,897 [428529760.py:   43]:	Epoch 372 (4.9314201 sec) Train Loss: 0.0108108
[INFO] 2022-12-06 14:27:06,030 [428529760.py:   43]:	Epoch 373 (5.1317280 sec) Train Loss: 0.0114572
[INFO] 2022-12-06 14:27:11,347 [428529760.py:   43]:	Epoch 374 (5.3147405 sec) Train Loss: 0.0106621
[INFO] 2022-12-06 14:27:17,202 [428529760.py:   43]:	Epoch 375 (5.8534048 sec) Train Loss: 0.0107089
[INFO] 2022-12-06 14:27:22,441 [428529760.py:   43]:	Epoch 376 (5.2376577 sec) Train Loss: 0.0111551
[INFO] 2022-12-06 14:27:27,565 [428529760.py:   43]:	Epoch 377 (5.1224879 sec) Train Loss: 0.0107360
[INFO] 2022-12-06 14:27:32,499 [428529760.py:   43]:	Epoch 378 (4.9331037 sec) Train Loss: 0.0108177
[INFO] 2022-12-06 14:27:37,497 [428529760.py:   43]:	Epoch 379 (4.9962078 sec) Train Loss: 0.0108471
[INFO] 2022-12-06 14:27:42,507 [428529760.py:   43]:	Epoch 380 (5.0087466 sec) Train Loss: 0.0103145
[INFO] 2022-12-06 14:27:48,241 [428529760.py:   43]:	Epoch 381 (5.7329487 sec) Train Loss: 0.0107845
[INFO] 2022-12-06 14:27:53,439 [428529760.py:   43]:	Epoch 382 (5.1961709 sec) Train Loss: 0.0105421
[INFO] 2022-12-06 14:27:58,656 [428529760.py:   43]:	Epoch 383 (5.2157656 sec) Train Loss: 0.0108239
[INFO] 2022-12-06 14:28:04,037 [428529760.py:   43]:	Epoch 384 (5.3796127 sec) Train Loss: 0.0108059
[INFO] 2022-12-06 14:28:09,073 [428529760.py:   43]:	Epoch 385 (5.0345648 sec) Train Loss: 0.0105290
[INFO] 2022-12-06 14:28:14,176 [428529760.py:   43]:	Epoch 386 (5.1017577 sec) Train Loss: 0.0106038
[INFO] 2022-12-06 14:28:19,931 [428529760.py:   43]:	Epoch 387 (5.7538544 sec) Train Loss: 0.0112857
[INFO] 2022-12-06 14:28:25,503 [428529760.py:   43]:	Epoch 388 (5.5701230 sec) Train Loss: 0.0108252
[INFO] 2022-12-06 14:28:30,785 [428529760.py:   43]:	Epoch 389 (5.2813737 sec) Train Loss: 0.0104723
[INFO] 2022-12-06 14:28:36,084 [428529760.py:   43]:	Epoch 390 (5.2969607 sec) Train Loss: 0.0107158
[INFO] 2022-12-06 14:28:41,444 [428529760.py:   43]:	Epoch 391 (5.3593256 sec) Train Loss: 0.0101656
[INFO] 2022-12-06 14:28:46,779 [428529760.py:   43]:	Epoch 392 (5.3326526 sec) Train Loss: 0.0104386
[INFO] 2022-12-06 14:28:52,570 [428529760.py:   43]:	Epoch 393 (5.7893734 sec) Train Loss: 0.0107273
[INFO] 2022-12-06 14:28:57,707 [428529760.py:   43]:	Epoch 394 (5.1358231 sec) Train Loss: 0.0099732
[INFO] 2022-12-06 14:29:02,785 [428529760.py:   43]:	Epoch 395 (5.0757371 sec) Train Loss: 0.0108062
[INFO] 2022-12-06 14:29:07,756 [428529760.py:   43]:	Epoch 396 (4.9706663 sec) Train Loss: 0.0110288
[INFO] 2022-12-06 14:29:12,878 [428529760.py:   43]:	Epoch 397 (5.1203472 sec) Train Loss: 0.0103624
[INFO] 2022-12-06 14:29:18,096 [428529760.py:   43]:	Epoch 398 (5.2158717 sec) Train Loss: 0.0103986
[INFO] 2022-12-06 14:29:23,305 [428529760.py:   43]:	Epoch 399 (5.2084237 sec) Train Loss: 0.0112534
[INFO] 2022-12-06 14:29:28,796 [428529760.py:   43]:	Epoch 400 (5.4894753 sec) Train Loss: 0.0103557
[INFO] 2022-12-06 14:29:30,541 [428529760.py:   55]:	GPU run time 262.3300326894969, Data prepare extra time 0.712560310959816
[INFO] 2022-12-06 14:29:30,543 [428529760.py:   56]:	Epoch 400 	 All Loss [76.75364]
[INFO] 2022-12-06 14:29:30,544 [2594160307.py:  118]:	epoch	400


[9.254s] #evaluation triple: 500/5000
[19.227s] #evaluation triple: 1000/5000
[29.777s] #evaluation triple: 1500/5000
[39.097s] #evaluation triple: 2000/5000
[48.198s] #evaluation triple: 2500/5000
[57.337s] #evaluation triple: 3000/5000
[67.265s] #evaluation triple: 3500/5000
[76.435s] #evaluation triple: 4000/5000
[85.329s] #evaluation triple: 4500/5000


[INFO] 2022-12-06 14:31:06,708 [evalutate.py:  127]:	-----Raw-Average-Results
[INFO] 2022-12-06 14:31:06,711 [evalutate.py:  131]:	MeanRank: 242.28, MRR: 0.3463, Hits@1: 0.1036, Hits@3: 0.5181, Hits@10: 0.7926
[INFO] 2022-12-06 14:31:06,782 [evalutate.py:  132]:	-----Filter-Average-Results
[INFO] 2022-12-06 14:31:06,784 [evalutate.py:  137]:	MeanRank: 230.38, MRR: 0.4860, Hits@1: 0.1725, Hits@3: 0.7794, Hits@10: 0.9346


[95.908s] #evaluation triple: 5000/5000


4、Experimental Results

The final filtered metrics (epoch 400) are as follows:

MeanRank: 230.38,MRR: 0.4860,Hits@1: 0.1725,Hits@3: 0.7794,Hits@10: 0.9346

The trained entity and relation embeddings are stored at:

/home/aistudio/work/output/TransE__dim=50_entity_embeddings

/home/aistudio/work/output/TransE__dim=50_relation_embeddings
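
Note that the training logs above report two sets of metrics: Raw, where every entity in the graph is ranked as a candidate, and Filtered, where all other triples already known to be correct are removed from the candidate list first, so a true answer is not penalized for being outranked by a different but equally valid answer. The project's own evaluation logic lives in evalutate.py; the sketch below is only an illustrative, self-contained helper (the function name summarize_ranks is hypothetical) showing how MeanRank, MRR, and Hits@k follow from a list of 1-based ranks:

import numpy as np

def summarize_ranks(ranks, hits_ks=(1, 3, 10)):
    """Hypothetical helper: derive MeanRank / MRR / Hits@k from 1-based ranks."""
    ranks = np.asarray(ranks, dtype=np.float64)
    metrics = {"MeanRank": ranks.mean(), "MRR": (1.0 / ranks).mean()}
    for k in hits_ks:
        # Fraction of test triples whose true answer ranks within the top k.
        metrics["Hits@%d" % k] = float((ranks <= k).mean())
    return metrics

# Toy example: ranks of the true entity for five test triples.
print(summarize_ranks([1, 3, 12, 200, 7]))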

5、Link Prediction (Future Work)

After training the TransE model, we obtain an embedding vector for every entity and relation. Using these embeddings, we can perform link prediction on the knowledge graph.

Writing a triple (head, relation, tail) as (h, r, t), link prediction falls into three categories:

1、Head entity prediction: (?, r, t)

2、Relation prediction: (h, ?, t)

3、Tail entity prediction: (h, r, ?)

Principle: this follows directly from the additive structure of the embeddings. Take tail prediction (h, r, ?) as an example:

Let t' = h + r; among all entities, the one whose embedding is closest to t' is the predicted value of t.
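
A minimal sketch of tail prediction under this idea is shown below. Random arrays stand in for the trained embeddings so that the snippet runs standalone; in practice they would be loaded from the output files listed in Section 4 (the on-disk format depends on how the training script saved them). TransE is commonly trained with either the L1 or L2 distance, so the norm parameter should match the trained model:

import numpy as np

# Stand-in embeddings; replace with the trained arrays from
# /home/aistudio/work/output/TransE__dim=50_*_embeddings in practice.
rng = np.random.default_rng(0)
num_entities, num_relations, dim = 40000, 18, 50
entity_emb = rng.normal(size=(num_entities, dim)).astype("float32")
relation_emb = rng.normal(size=(num_relations, dim)).astype("float32")

def predict_tail(h_id, r_id, k=10, norm=1):
    """Rank all entities as candidates for (h, r, ?) by distance to t' = h + r."""
    t_pred = entity_emb[h_id] + relation_emb[r_id]
    dist = np.linalg.norm(entity_emb - t_pred, ord=norm, axis=1)
    return np.argsort(dist)[:k]  # the k entities closest to t'

# The top-1 entity is the predicted tail; head prediction (?, r, t)
# works the same way with t - r as the query vector.
print(predict_tail(h_id=123, r_id=5))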

This article is a repost.
Original project link
