Reposted from AI Studio. Original article:

Paper Reproduction: DSIN - Alibaba CTR Prediction Trilogy, Part 3 - PaddlePaddle AI Studio

Paper Reproduction: Deep Session Interest Network for Click-Through Rate Prediction

1. Introduction

Deep Session Interest Network for Click-Through Rate Prediction is a classic paper on the click-through rate (CTR) prediction problem. Its predecessors are the well-known DIN and DIEN, both of which focus on user interests, modeling an interest representation from the user's historical behaviors. DSIN observes that a user's interests are highly homogeneous within a session but differ across sessions, as illustrated in the figure below:

Based on this observation, DSIN divides a user's historical interactions into sessions, then models the user's session interests with self-attention and a bidirectional LSTM. The overall framework is similar to DIN and DIEN, as shown in the figure below:

Paper link: Deep Session Interest Network for Click-Through Rate Prediction
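
To make the notion of a session concrete, here is a minimal sketch of the session division rule described in the paper: a behavior sequence is cut into sessions wherever the gap between adjacent behaviors exceeds 30 minutes. The function and variable names are illustrative only and are not part of the reproduced code.

SESSION_GAP = 30 * 60  # seconds; the session boundary threshold used by the paper

def split_sessions(timestamps, items):
    # timestamps: sorted behavior times in seconds; items: the corresponding item ids
    sessions, current = [], [items[0]]
    for prev_t, t, item in zip(timestamps, timestamps[1:], items[1:]):
        if t - prev_t > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(item)
    sessions.append(current)
    return sessions

print(split_sessions([0, 60, 4000, 4100], ['a', 'b', 'c', 'd']))
# [['a', 'b'], ['c', 'd']]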

2. Reproduction Accuracy

Based on the PaddlePaddle deep learning framework, the algorithm from the paper was reproduced; the test accuracy achieved by this project is shown in the table below.

| Model | AUC    | batch_size | epoch_num | Time per epoch |
| ----- | ------ | ---------- | --------- | -------------- |
| DSIN  | 0.6356 | 4096       | 1         | ~10 minutes    |

See the config_bigdata.yaml file for the detailed parameter settings.

3. Dataset

The Ali_Display_Ad_Click dataset used in this project is a Taobao display-advertising CTR estimation dataset provided by Alibaba.

1. The original dataset

  • Raw sample skeleton raw_sample: ad display/click logs of 1.14 million users randomly sampled from Taobao over 8 days (26 million records), forming the raw sample skeleton
  1. user: anonymized user ID;
  2. adgroup_id: anonymized ad unit ID;
  3. time_stamp: timestamp;
  4. pid: ad slot (resource position);
  5. nonclk: 1 means not clicked, 0 means clicked;
  6. clk: 0 means not clicked, 1 means clicked;
user,time_stamp,adgroup_id,pid,nonclk,clk
581738,1494137644,1,430548_1007,1,0
  • Ad feature table ad_feature: basic information for every ad appearing in raw_sample
  1. adgroup_id: anonymized ad ID;
  2. cate_id: anonymized item category ID;
  3. campaign_id: anonymized campaign ID;
  4. customer: anonymized advertiser ID;
  5. brand: anonymized brand ID;
  6. price: item price
adgroup_id,cate_id,campaign_id,customer,brand,price
63133,6406,83237,1,95471,170.0
  • User profile table user_profile: basic information for every user appearing in raw_sample
  1. userid: anonymized user ID;
  2. cms_segid: micro-group ID;
  3. cms_group_id: cms_group_id;
  4. final_gender_code: gender, 1 = male, 2 = female;
  5. age_level: age bracket;
  6. pvalue_level: consumption tier, 1 = low, 2 = medium, 3 = high;
  7. shopping_level: shopping depth, 1 = light user, 2 = medium user, 3 = heavy user;
  8. occupation: whether a college student, 1 = yes, 0 = no;
  9. new_user_class_level: city tier
userid,cms_segid,cms_group_id,final_gender_code,age_level,pvalue_level,shopping_level,occupation,new_user_class_level 
234,0,5,2,5,,3,0,3
  • User behavior log behavior_log: shopping behaviors of every user in raw_sample over 22 days
  1. user: anonymized user ID;
  2. time_stamp: timestamp;
  3. btag: behavior type, one of four kinds: pv (page view), cart (add to cart), fav (favorite), buy (purchase);
  4. cate: anonymized item category ID;
  5. brand: anonymized brand ID;
user,time_stamp,btag,cate,brand
558157,1493741625,pv,6250,91286
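
For quick inspection, each table can be loaded with pandas; the CSV file name below follows the official dataset release and is an assumption here:

import pandas as pd

raw_sample = pd.read_csv('raw_sample.csv')
print(raw_sample.columns.tolist())
# ['user', 'time_stamp', 'adgroup_id', 'pid', 'nonclk', 'clk']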

2. The preprocessed dataset

The four files of the original dataset are processed following the preprocessing procedure of the paper, yielding a dataset that satisfies the DSIN paper's requirements and can be read directly by the reader. The dataset consists of eight pkl files, four each for the training and test sets. Taking the training set as an example, the four files are train_feat_input.pkl, train_sess_input.pkl, train_session_length.pkl, and train_label.pkl; they store, respectively, the user and item feature inputs (sampled at a ratio of 0.25), the user session feature inputs, the user session lengths, and the labels.
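
As a sanity check, the preprocessed files can be inspected directly; the paths assume the extracted model_input directory from the Quick Start section, and the expected shapes follow the reader (see dsin_reader.py below):

import pandas as pd

feat = pd.read_pickle('model_input/train_feat_input.pkl')           # user/item feature table, one row per sample
sess = pd.read_pickle('model_input/train_sess_input.pkl')           # session inputs, expected shape (N, 10, 10)
sess_len = pd.read_pickle('model_input/train_session_length.pkl')   # session lengths, (N,)
label = pd.read_pickle('model_input/train_label.pkl')               # click labels, (N,)
print(len(feat), sess.shape, len(label))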

4. Environment and Dependencies

  • Hardware:

    • x86 CPU
    • NVIDIA GPU
  • Frameworks:

    • PaddlePaddle = 2.2.2
    • Python = 3.7
  • Other dependencies:

    • PaddleRec

5. Quick Start

1. Clone PaddleRec

In [1]

#clone PaddleRec
import os
!ls /home/aistudio/data/
!ls work/
!python --version
!pip list | grep paddlepaddle
if not os.path.isdir('work/PaddleRec'):
    !cd work && git clone https://gitee.com/paddlepaddle/PaddleRec.git
data131207
PaddleRec
Python 3.7.4
paddlepaddle-gpu       2.2.2.post101

2. Extract the dataset and move it into PaddleRec's datasets directory

In [2]

# extract the dataset
!tar -zxvf data/data131207/model_input.tar.gz
!mkdir '/home/aistudio/work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/'
!mkdir '/home/aistudio/work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/'
!mkdir '/home/aistudio/work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/'
!mv model_input/test_feat_input.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/
!mv model_input/test_label.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/
!mv model_input/test_sess_input.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/
!mv model_input/test_session_length.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_test/
!mv model_input/train_feat_input.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/
!mv model_input/train_label.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/
!mv model_input/train_sess_input.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/
!mv model_input/train_session_length.pkl work/PaddleRec/datasets/Ali_Display_Ad_Click_DSIN/big_train/
model_input/
model_input/test_session_length.pkl
model_input/test_sess_input.pkl
model_input/train_sess_input.pkl
model_input/train_feat_input.pkl
model_input/test_feat_input.pkl
model_input/test_label.pkl
model_input/train_label.pkl
model_input/train_session_length.pkl

3. Write the model code

In [3]

!mkdir '/home/aistudio/work/PaddleRec/models/rank/dsin'
%cd '/home/aistudio/work/PaddleRec/models/rank/dsin'
/home/aistudio/work/PaddleRec/models/rank/dsin

In [4]

%%writefile net.py
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import math
import numpy as np
from sequence_layers import PositionalEncoder, AttentionSequencePoolingLayer, MLP

class DSIN_layer(nn.Layer):
    def __init__(self, user_size, adgroup_size, pid_size, cms_segid_size, cms_group_size,
                 final_gender_size, age_level_size, pvalue_level_size, shopping_level_size,
                 occupation_size, new_user_class_level_size, campaign_size, customer_size, cate_size, brand_size,  # all of the above are sparse feature vocabulary sizes
                 sparse_embed_size = 4, att_embedding_size = 8, sess_count = 5, sess_max_length = 10, l2_reg_embedding=1e-6):
        super().__init__()

        # feature size
        self.user_size = user_size
        self.adgroup_size = adgroup_size   
        self.pid_size = pid_size
        self.cms_segid_size = cms_segid_size
        self.cms_group_size = cms_group_size
        self.final_gender_size = final_gender_size
        self.age_level_size = age_level_size
        self.pvalue_level_size = pvalue_level_size
        self.shopping_level_size = shopping_level_size
        self.occupation_size = occupation_size
        self.new_user_class_level_size = new_user_class_level_size
        self.campaign_size = campaign_size
        self.customer_size = customer_size
        self.cate_size = cate_size
        self.brand_size = brand_size

        # sparse embed size
        self.sparse_embed_size = sparse_embed_size

        # transform attention embed size
        self.att_embedding_size = att_embedding_size

        # hyper-parameters (taken from the constructor arguments)
        self.sess_count = sess_count
        self.sess_max_length = sess_max_length

        # sparse embedding layer
        self.userid_embeddings_var = paddle.nn.Embedding(
            self.user_size,
            self.sparse_embed_size,
            sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.adgroup_embeddings_var = paddle.nn.Embedding(
            self.adgroup_size,
            self.sparse_embed_size,
            sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.pid_embeddings_var = paddle.nn.Embedding(
            self.pid_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.cmsid_embeddings_var = paddle.nn.Embedding(
            self.cms_segid_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.cmsgroup_embeddings_var = paddle.nn.Embedding(
            self.cms_group_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.gender_embeddings_var = paddle.nn.Embedding(
            self.final_gender_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.age_embeddings_var = paddle.nn.Embedding(
            self.age_level_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.pvalue_embeddings_var = paddle.nn.Embedding(
            self.pvalue_level_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.shopping_embeddings_var = paddle.nn.Embedding(
            self.shopping_level_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.occupation_embeddings_var = paddle.nn.Embedding(
            self.occupation_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.new_user_class_level_embeddings_var = paddle.nn.Embedding(
            self.new_user_class_level_size,
            self.sparse_embed_size,
            #sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.campaign_embeddings_var = paddle.nn.Embedding(
            self.campaign_size,
            self.sparse_embed_size,
            sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.customer_embeddings_var = paddle.nn.Embedding(
            self.customer_size,
            self.sparse_embed_size,
            sparse=True,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.cate_embeddings_var = paddle.nn.Embedding(
            self.cate_size,
            self.sparse_embed_size,
            sparse=True,
            padding_idx=0,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        self.brand_embeddings_var = paddle.nn.Embedding(
            self.brand_size,
            self.sparse_embed_size,
            sparse=True,
            padding_idx=0,
            weight_attr=paddle.ParamAttr(
                regularizer=paddle.regularizer.L2Decay(l2_reg_embedding),
                initializer=nn.initializer.Normal(mean=0.0, std=0.0001)))

        # sess interest extractor layer
        self.position_encoding = PositionalEncoder(2*self.sparse_embed_size)
        self.transform = nn.TransformerEncoderLayer(
            d_model = self.att_embedding_size, 
            nhead = 8,
            dim_feedforward = 64,
            weight_attr = self._get_weight_attr(),
            bias_attr= False,
            dropout = 0.0)

        # sess interest interacting layer
        self.bilstm = nn.LSTM(2*self.sparse_embed_size, 2*self.sparse_embed_size, num_layers = 2, direction='bidirectional')

        # sess interest activating layer
        self.transform_actpool = AttentionSequencePoolingLayer(weight_normalization=True, name='transform')
        self.lstm_actpool = AttentionSequencePoolingLayer(weight_normalization=True, name='lstm')

        # MLP module
        self.mlp = MLP(mlp_hidden_units=[77, 200, 80])

    def _get_weight_attr(self):
        return paddle.ParamAttr(initializer=nn.initializer.TruncatedNormal(std=0.05))

    def forward(self, inputs):
        '''
        inputs : tuple, (sparse_input, dense_input, sess_input, sess_length)
            sparse_input: (N, 15)
            dense_input: (N,)
            sess_input: (N, 10, 10)
            sess_length: (N,)
        '''
        sparse_input, dense_input, sess_input, sess_length = inputs

        # sparse and dense feature
        self.user = sparse_input[:, 0]
        self.adgroup = sparse_input[:, 1]
        self.pid = sparse_input[:, 2]
        self.cmsid = sparse_input[:, 3]
        self.cmsgroup = sparse_input[:, 4]
        self.gender = sparse_input[:, 5]
        self.age = sparse_input[:, 6]
        self.pvalue = sparse_input[:, 7]
        self.shopping = sparse_input[:, 8]
        self.occupation = sparse_input[:, 9]
        self.new_user_class = sparse_input[:, 10]
        self.campaign = sparse_input[:, 11]
        self.customer = sparse_input[:, 12]
        self.cate = sparse_input[:, 13]
        self.brand = sparse_input[:, 14]
        self.price = dense_input.unsqueeze_(-1)

        # sparse feature embedding
        self.user_embeded = self.userid_embeddings_var(self.user)
        self.adgroup_embeded = self.adgroup_embeddings_var(self.adgroup)
        self.pid_embeded = self.pid_embeddings_var(self.pid)
        self.cmsid_embeded = self.cmsid_embeddings_var(self.cmsid)
        self.cmsgroup_embeded = self.cmsgroup_embeddings_var(self.cmsgroup)
        self.gender_embeded = self.gender_embeddings_var(self.gender)
        self.age_embeded = self.age_embeddings_var(self.age)
        self.pvalue_embeded = self.pvalue_embeddings_var(self.pvalue)
        self.shopping_embeded = self.shopping_embeddings_var(self.shopping)
        self.occupation_embeded = self.occupation_embeddings_var(self.occupation)
        self.new_user_class_embeded = self.new_user_class_level_embeddings_var(self.new_user_class)
        self.campaign_embeded = self.campaign_embeddings_var(self.campaign)
        self.customer_embeded = self.customer_embeddings_var(self.customer)
        self.cate_embeded = self.cate_embeddings_var(self.cate)
        self.brand_embeded = self.brand_embeddings_var(self.brand)

        # concatenate the query embeddings
        # Note: the query features are cate_embeded and brand_embeded
        query_embeded = paddle.concat([self.cate_embeded,self.brand_embeded],-1)

        # concatenate all sparse feature embeddings
        deep_input_embeded = paddle.concat([self.user_embeded, self.adgroup_embeded, self.pid_embeded, self.cmsid_embeded,
                                    self.cmsgroup_embeded, self.gender_embeded, self.age_embeded, self.pvalue_embeded,
                                    self.shopping_embeded, self.occupation_embeded, self.new_user_class_embeded,
                                    self.campaign_embeded, self.customer_embeded, self.cate_embeded, self.brand_embeded], -1)

        # sess_interest_division part
        cate_sess_embeded = self.cate_embeddings_var(sess_input[:, ::2, :])
        brand_sess_embeded = self.brand_embeddings_var(sess_input[:, 1::2, :])

        # tr_input (n,5,10,8)
        tr_input = paddle.concat([cate_sess_embeded,brand_sess_embeded],axis=-1) 

        # sess interest extractor part
        lstm_input = []
        for i in range(self.sess_count):
            tr_sess_input = self.position_encoding( tr_input[:, i, :, :] )
            tr_sess_input = self.transform(tr_sess_input)
            tr_sess_input = paddle.mean(tr_sess_input, axis=1, keepdim=True)
            lstm_input.append(tr_sess_input)

        lstm_input = paddle.concat([lstm_input[0], lstm_input[1], lstm_input[2], lstm_input[3], lstm_input[4]], axis=1)
        lstm_output, _ = self.bilstm(lstm_input)
        lstm_output = (lstm_output[:, :, :2*self.sparse_embed_size] + lstm_output[:, :, 2*self.sparse_embed_size:])/2

        # sess interest activating layer
        lstm_input = self.transform_actpool([query_embeded, lstm_input, sess_length])
        lstm_output = self.lstm_actpool([query_embeded, lstm_output, sess_length])

        # concatenate all module outputs
        mlp_input = paddle.concat([deep_input_embeded, paddle.nn.Flatten()(lstm_input), paddle.nn.Flatten()(lstm_output), self.price], axis=-1)

        out = self.mlp(mlp_input)
        return out
Writing net.py

In [5]

%%writefile sequence_layers.py
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import paddle
import paddle.nn as nn
import numpy as np
import math

class PositionalEncoder(nn.Layer):
    def __init__(self, d_model, max_seq_len=50):
        # d_model is the embedding dimension
        super(PositionalEncoder, self).__init__()
        self.d_model = d_model

        position = np.array([[pos / np.power(10000, 2. * i / self.d_model)
                            for i in range(self.d_model)]
                            for pos in range(max_seq_len)]) 
        # apply sine to the even columns and cosine to the odd columns
        position[:, 0::2] = np.sin(position[:, 0::2])  # dim 2i
        position[:, 1::2] = np.cos(position[:, 1::2])  # dim 2i+1
        self.position = self.create_parameter(shape=[max_seq_len,self.d_model],
                                            default_initializer=paddle.nn.initializer.Assign(value=position))

    def forward(self, x):
        x = x*math.sqrt(self.d_model)
        seq_len = x.shape[1]
        x = x+self.position[:seq_len,:]
        return x

class AttentionSequencePoolingLayer(nn.Layer):
    def __init__(self, dnn_units=[8, 64, 16], dnn_activation='sigmoid', weight_normalization=False, name=None):
        super().__init__()
        self.dnn_units = dnn_units
        self.dnn_activation = dnn_activation
        self.weight_normalization = weight_normalization
        self.name = name
        layer_list = []
        for i in range(len(dnn_units) - 1):
            # the first layer consumes [querys, keys, querys-keys, querys*keys], hence the 4x input width
            dnn_layer = nn.Linear(
                in_features=self.dnn_units[i] if i != 0 else self.dnn_units[i] * 4,
                out_features=self.dnn_units[i + 1],
                weight_attr=self._weight_init())
            self.add_sublayer(self.name + f'linear_{i}', dnn_layer)
            layer_list.append(dnn_layer)
        self.layers = nn.LayerList(layer_list)
        self.dnn = nn.Linear(self.dnn_units[-1], 1, weight_attr=self._weight_init())
        self.activation = nn.Sigmoid()
        self.soft = nn.Softmax()

    def _weight_init(self):
        return paddle.framework.ParamAttr(initializer=paddle.nn.initializer.XavierNormal())

    def forward(self, inputs):
        querys, keys, sess_length = inputs
        keys_length = keys.shape[1]
        key_masks = nn.functional.sequence_mask(sess_length, keys_length) 
        querys = paddle.tile(querys.unsqueeze(1), [1, keys_length, 1])
        att_input = paddle.concat([querys, keys, querys-keys, querys*keys], axis=-1)
        for i, layer in enumerate(self.layers):
            att_input = layer(att_input)
            #att_input = self.bn_layer[i](att_input)  # BatchNomalization
            att_input = self.activation(att_input) # activation 
        att_score = self.dnn(att_input)  # (N, 50, 1)
        att_score = paddle.transpose(att_score, [0, 2, 1]) # (N, 1, 50)
        if self.weight_normalization:
            paddings = paddle.ones_like(att_score) * (-2 ** 32 + 1)
        else:
            paddings = paddle.zeros_like(att_score)
        att_score = paddle.where(key_masks.unsqueeze(1) == 1, att_score, paddings)  # unsqueeze key_masks so it broadcasts against att_score
        att_score = self.soft(att_score)
        out = paddle.matmul(att_score, keys)
        return out

class MLP(nn.Layer):
    def __init__(self, mlp_hidden_units, use_bn=True):
        super().__init__()
        self.mlp_hidden_units = mlp_hidden_units
        self.activation = paddle.nn.Sigmoid()
        layer_list = []
        for i in range(len(mlp_hidden_units)-1):
            dnn_layer = nn.Linear(
                in_features = self.mlp_hidden_units[i],
                out_features = self.mlp_hidden_units[i+1],  
                weight_attr= self._weight_init())
            self.add_sublayer(f'linear_{i}', dnn_layer)
            layer_list.append(dnn_layer)
        self.layers = nn.LayerList(layer_list)
        self.dense = nn.Linear(self.mlp_hidden_units[-1], 1, bias_attr=True, weight_attr= self._weight_init())
        self.predict_layer = nn.Sigmoid()

    def _weight_init(self):
        return paddle.framework.ParamAttr(initializer=paddle.nn.initializer.XavierNormal())

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
            x = self.activation(x)
        x = self.dense(x)
        x = self.predict_layer(x)
        return x
Writing sequence_layers.py
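
With both net.py and sequence_layers.py on disk, a quick smoke test with random tensors confirms the building blocks and the full forward pass. The feature sizes are the ones from config_bigdata.yaml later in this notebook; the batch values are arbitrary:

import paddle
from sequence_layers import PositionalEncoder, AttentionSequencePoolingLayer
from net import DSIN_layer

# building blocks: shapes in, shapes out
pe = PositionalEncoder(d_model=8)
print(pe(paddle.rand([2, 10, 8])).shape)                  # [2, 10, 8]
att = AttentionSequencePoolingLayer(weight_normalization=True, name='demo')
query, keys = paddle.rand([2, 8]), paddle.rand([2, 5, 8])
lengths = paddle.to_tensor([3, 5], dtype='int64')
print(att([query, keys, lengths]).shape)                  # [2, 1, 8]

# full model forward pass
model = DSIN_layer(
    user_size=265442, adgroup_size=512431, pid_size=2, cms_segid_size=97,
    cms_group_size=13, final_gender_size=2, age_level_size=7,
    pvalue_level_size=4, shopping_level_size=3, occupation_size=2,
    new_user_class_level_size=5, campaign_size=309448, customer_size=195841,
    cate_size=11859, brand_size=362855)
sparse_input = paddle.randint(0, 2, [2, 15])              # 15 sparse feature ids per sample
dense_input = paddle.rand([2])                            # price
sess_input = paddle.randint(0, 2, [2, 10, 10])            # interleaved cate/brand rows for 5 sessions
sess_length = paddle.to_tensor([3, 5], dtype='int64')     # valid sessions per user
out = model((sparse_input, dense_input, sess_input, sess_length))
print(out.shape)                                          # [2, 1], the predicted click probability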

In [6]

%%writefile dygraph_model.py
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import math

import net


class DygraphModel():
    # define model
    def create_model(self, config):
        user_size = config.get("hyper_parameters.user_size")
        cms_segid_size = config.get("hyper_parameters.cms_segid_size")
        cms_group_size = config.get("hyper_parameters.cms_group_size")
        final_gender_size = config.get(
            "hyper_parameters.final_gender_size")
        age_level_size = config.get("hyper_parameters.age_level_size")
        pvalue_level_size = config.get("hyper_parameters.pvalue_level_size")
        shopping_level_size = config.get(
            "hyper_parameters.shopping_level_size")
        occupation_size = config.get("hyper_parameters.occupation_size")
        new_user_class_level_size = config.get(
            "hyper_parameters.new_user_class_level_size")
        adgroup_size = config.get("hyper_parameters.adgroup_size")
        cate_size = config.get("hyper_parameters.cate_size")
        campaign_size = config.get("hyper_parameters.campaign_size")
        customer_size = config.get("hyper_parameters.customer_size")
        brand_size = config.get("hyper_parameters.brand_size")
        pid_size = config.get("hyper_parameters.pid_size")
        feat_embed_size = config.get(
            "hyper_parameters.feat_embed_size")

        dsin_model = net.DSIN_layer(
            user_size, adgroup_size, pid_size, cms_segid_size, cms_group_size,
            final_gender_size, age_level_size, pvalue_level_size, shopping_level_size,
            occupation_size, new_user_class_level_size, campaign_size, customer_size,
            cate_size, brand_size, sparse_embed_size=feat_embed_size, l2_reg_embedding=1e-6)

        return dsin_model

    # define loss function by predicts and label
    def create_loss(self, pred, label):
        return paddle.nn.BCELoss()(pred,label)

    # define feeds which convert numpy of batch data to paddle.tensor
    def create_feeds(self, batch_data, config):
        data, label = (batch_data[0], batch_data[1], batch_data[2], batch_data[3]), batch_data[-1]
        label = label.reshape([-1,1])
        return label, data

    # define optimizer
    def create_optimizer(self, dy_model, config):
        lr = config.get("hyper_parameters.optimizer.learning_rate", 0.001)
        optimizer = paddle.optimizer.Adam(
            learning_rate=lr, parameters=dy_model.parameters())
        return optimizer

    # define metrics such as auc/acc
    # multi-task need to define multi metric
    def create_metrics(self):
        metrics_list_name = ["auc"]
        auc_metric = paddle.metric.Auc("ROC")
        metrics_list = [auc_metric]
        return metrics_list, metrics_list_name

    # construct train forward phase
    def train_forward(self, dy_model, metrics_list, batch_data, config):
        label, input_tensor = self.create_feeds(batch_data, config)

        pred = dy_model.forward(input_tensor)
        # update metrics
        predict_2d = paddle.concat(x=[1 - pred, pred], axis=1)
        metrics_list[0].update(preds=predict_2d.numpy(), labels=label.numpy())
        loss = self.create_loss(pred,paddle.cast(label, "float32"))
        print_dict = {'loss': loss}
        return loss, metrics_list, print_dict

    def infer_forward(self, dy_model, metrics_list, batch_data, config):
        label, input_tensor = self.create_feeds(batch_data, config)

        pred = dy_model.forward(input_tensor)
        # update metrics
        predict_2d = paddle.concat(x=[1 - pred, pred], axis=1)
        metrics_list[0].update(preds=predict_2d.numpy(), labels=label.numpy())

        return metrics_list, None
Writing dygraph_model.py
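
One detail worth noting: paddle.metric.Auc expects a two-column probability matrix (one column per class), while the network outputs only the positive-class probability, hence the concat in train_forward and infer_forward. A tiny illustration:

import paddle

pred = paddle.to_tensor([[0.2], [0.7]])               # P(click) from the model
predict_2d = paddle.concat([1 - pred, pred], axis=1)  # [P(no click), P(click)] per row
print(predict_2d.numpy())
# [[0.8 0.2]
#  [0.3 0.7]]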

In [7]

%%writefile dsin_reader.py
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function
import numpy as np

from paddle.io import IterableDataset
import pandas as pd

sparse_features = ['userid', 'adgroup_id', 'pid', 'cms_segid', 'cms_group_id', 'final_gender_code', 'age_level',
                    'pvalue_level', 'shopping_level', 'occupation', 'new_user_class_level ', 'campaign_id',
                    'customer', 'cate_id', 'brand']

dense_features = ['price']

class RecDataset(IterableDataset):
    def __init__(self, file_list, config):
        super().__init__()
        self.file_list = file_list
        data_file = [ f.split('/')[-1] for f in file_list]
        mode = data_file[0].split('_')[0]
        data_dir = file_list[0].split(data_file[0])[0]
        assert(mode == 'train' or mode == 'test' or mode == 'sample'), f"mode must be 'train', 'test' or 'sample', but got '{mode}'"
        feat_input = pd.read_pickle(data_dir + mode + '_feat_input.pkl')
        self.sess_input = pd.read_pickle(data_dir + mode + '_sess_input.pkl')
        self.sess_length = pd.read_pickle(data_dir + mode + '_session_length.pkl')
        self.label = pd.read_pickle(data_dir + mode + '_label.pkl')
        if not isinstance(self.label, np.ndarray):
            self.label = self.label.to_numpy()
        self.label = self.label.astype('int64')
        self.num_samples = self.label.shape[0]
        self.sparse_input = feat_input[sparse_features].to_numpy().astype('int64')
        self.dense_input = feat_input[dense_features].to_numpy().reshape(-1).astype('float32')

    def __iter__(self):
        for i in range(self.num_samples):
            yield [self.sparse_input[i, :], self.dense_input[i], self.sess_input[i, :, :], self.sess_length[i], self.label[i]]
Writing dsin_reader.py
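
The reader can also be exercised on its own (assuming the dataset files from step 2 are in place; RecDataset only uses the file list to locate the data directory and infer the train/test mode):

import paddle
from dsin_reader import RecDataset

files = ['../../../datasets/Ali_Display_Ad_Click_DSIN/big_test/test_feat_input.pkl']
dataset = RecDataset(files, config=None)
loader = paddle.io.DataLoader(dataset, batch_size=4096)
for sparse, dense, sess, sess_len, label in loader:
    print(sparse.shape, dense.shape, sess.shape, sess_len.shape, label.shape)
    break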

In [8]

%%writefile config_bigdata.yaml
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

runner:
  train_data_dir: "../../../datasets/Ali_Display_Ad_Click_DSIN/big_train/"
  train_reader_path: "dsin_reader" # importlib format
  use_gpu: True
  use_auc: True
  train_batch_size: 4096
  epochs: 1
  print_interval: 50

  model_save_path: "output_model_all_dsin"
  test_data_dir: "../../../datasets/Ali_Display_Ad_Click_DSIN/big_test/"
  infer_reader_path: "dsin_reader" # importlib format
  infer_batch_size: 16384 # 2**14
  infer_load_path: "output_model_all_dsin"
  infer_start_epoch: 0
  infer_end_epoch: 1

# hyper parameters of user-defined network
hyper_parameters:
  # optimizer config
  optimizer:
    class: Adam
    learning_rate: 0.00235
  # user feature size
  user_size: 265442
  cms_segid_size: 97
  cms_group_size: 13
  final_gender_size: 2
  age_level_size: 7
  pvalue_level_size: 4
  shopping_level_size: 3
  occupation_size: 2
  new_user_class_level_size: 5

  # item feature size
  adgroup_size: 512431
  cate_size: 11859   #max value + 1
  campaign_size: 309448
  customer_size: 195841
  brand_size: 362855  #max value + 1

  # context feature size
  pid_size: 2

  # embedding size
  feat_embed_size: 4
Writing config_bigdata.yaml

4. Train and evaluate the model with PaddleRec's trainer and infer tools

In [9]

!python ../../../tools/trainer.py -m config_bigdata.yaml
2022-05-11 19:50:56,823 - INFO - **************common.configs**********
2022-05-11 19:50:56,823 - INFO - use_gpu: True, use_xpu: False, use_visual: False, train_batch_size: 4096, train_data_dir: ../../../datasets/Ali_Display_Ad_Click_DSIN/big_train/, epochs: 1, print_interval: 50, model_save_path: output_model_all_dsin
2022-05-11 19:50:56,823 - INFO - **************common.configs**********
W0511 19:50:56.825248  1525 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0511 19:50:56.831076  1525 device_context.cc:465] device: 0, cuDNN Version: 7.6.
2022-05-11 19:51:01,867 - INFO - read data
2022-05-11 19:51:01,867 - INFO - reader path:dsin_reader
2022-05-11 19:51:13,903 - INFO - epoch: 0, batch_id: 0, auc:0.502794, loss:0.85580873, avg_reader_cost: 0.00291 sec, avg_batch_cost: 0.01317 sec, avg_samples: 81.92000, ips: 6220.65504 ins/s
2022-05-11 19:51:33,319 - INFO - epoch: 0, batch_id: 50, auc:0.495701, loss:0.19559237, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.38773 sec, avg_samples: 4096.00000, ips: 10564.02249 ins/s
2022-05-11 19:51:52,451 - INFO - epoch: 0, batch_id: 100, auc:0.499694, loss:0.21434923, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.38206 sec, avg_samples: 4096.00000, ips: 10720.87298 ins/s
2022-05-11 19:52:10,842 - INFO - epoch: 0, batch_id: 150, auc:0.512509, loss:0.19038938, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.36725 sec, avg_samples: 4096.00000, ips: 11153.31692 ins/s
2022-05-11 19:52:28,755 - INFO - epoch: 0, batch_id: 200, auc:0.530944, loss:0.20696387, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.35769 sec, avg_samples: 4096.00000, ips: 11451.33054 ins/s
2022-05-11 19:52:46,030 - INFO - epoch: 0, batch_id: 250, auc:0.545280, loss:0.18852976, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.34493 sec, avg_samples: 4096.00000, ips: 11874.79419 ins/s
2022-05-11 19:53:03,111 - INFO - epoch: 0, batch_id: 300, auc:0.558348, loss:0.20377612, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34106 sec, avg_samples: 4096.00000, ips: 12009.68762 ins/s
2022-05-11 19:53:20,102 - INFO - epoch: 0, batch_id: 350, auc:0.567205, loss:0.2231454, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.33924 sec, avg_samples: 4096.00000, ips: 12073.90980 ins/s
2022-05-11 19:53:36,952 - INFO - epoch: 0, batch_id: 400, auc:0.572662, loss:0.2543741, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.33644 sec, avg_samples: 4096.00000, ips: 12174.55680 ins/s
2022-05-11 19:53:54,328 - INFO - epoch: 0, batch_id: 450, auc:0.577503, loss:0.16823483, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34696 sec, avg_samples: 4096.00000, ips: 11805.51984 ins/s
2022-05-11 19:54:13,481 - INFO - epoch: 0, batch_id: 500, auc:0.580811, loss:0.19309358, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.38248 sec, avg_samples: 4096.00000, ips: 10709.07133 ins/s
2022-05-11 19:54:32,650 - INFO - epoch: 0, batch_id: 550, auc:0.584353, loss:0.19425544, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.38280 sec, avg_samples: 4096.00000, ips: 10700.23452 ins/s
2022-05-11 19:54:51,018 - INFO - epoch: 0, batch_id: 600, auc:0.587535, loss:0.19358435, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.36678 sec, avg_samples: 4096.00000, ips: 11167.49886 ins/s
2022-05-11 19:55:08,682 - INFO - epoch: 0, batch_id: 650, auc:0.590837, loss:0.21790585, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.35272 sec, avg_samples: 4096.00000, ips: 11612.52946 ins/s
2022-05-11 19:55:26,055 - INFO - epoch: 0, batch_id: 700, auc:0.594234, loss:0.19218928, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34689 sec, avg_samples: 4096.00000, ips: 11807.69064 ins/s
2022-05-11 19:55:43,041 - INFO - epoch: 0, batch_id: 750, auc:0.597527, loss:0.20641877, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.33916 sec, avg_samples: 4096.00000, ips: 12076.80625 ins/s
2022-05-11 19:55:59,994 - INFO - epoch: 0, batch_id: 800, auc:0.600670, loss:0.22155708, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.33848 sec, avg_samples: 4096.00000, ips: 12101.22339 ins/s
2022-05-11 19:56:17,091 - INFO - epoch: 0, batch_id: 850, auc:0.603358, loss:0.19764367, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34137 sec, avg_samples: 4096.00000, ips: 11998.85636 ins/s
2022-05-11 19:56:34,397 - INFO - epoch: 0, batch_id: 900, auc:0.605445, loss:0.18218887, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.34556 sec, avg_samples: 4096.00000, ips: 11853.31707 ins/s
2022-05-11 19:56:53,374 - INFO - epoch: 0, batch_id: 950, auc:0.606719, loss:0.20349224, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.37895 sec, avg_samples: 4096.00000, ips: 10808.89367 ins/s
2022-05-11 19:57:12,244 - INFO - epoch: 0, batch_id: 1000, auc:0.608219, loss:0.18338634, avg_reader_cost: 0.00016 sec, avg_batch_cost: 0.37685 sec, avg_samples: 4096.00000, ips: 10868.97179 ins/s
2022-05-11 19:57:30,490 - INFO - epoch: 0, batch_id: 1050, auc:0.610018, loss:0.18991007, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.36437 sec, avg_samples: 4096.00000, ips: 11241.38734 ins/s
2022-05-11 19:57:48,290 - INFO - epoch: 0, batch_id: 1100, auc:0.611764, loss:0.19425409, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.35542 sec, avg_samples: 4096.00000, ips: 11524.47769 ins/s
2022-05-11 19:58:05,738 - INFO - epoch: 0, batch_id: 1150, auc:0.613360, loss:0.18417387, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34839 sec, avg_samples: 4096.00000, ips: 11756.97841 ins/s
2022-05-11 19:58:22,780 - INFO - epoch: 0, batch_id: 1200, auc:0.615447, loss:0.2374034, avg_reader_cost: 0.00018 sec, avg_batch_cost: 0.34027 sec, avg_samples: 4096.00000, ips: 12037.41497 ins/s
2022-05-11 19:58:39,730 - INFO - epoch: 0, batch_id: 1250, auc:0.616718, loss:0.21474466, avg_reader_cost: 0.00017 sec, avg_batch_cost: 0.33845 sec, avg_samples: 4096.00000, ips: 12102.39913 ins/s
2022-05-11 19:58:56,387 - INFO - epoch: 0, batch_id: 1300, auc:0.618325, loss:0.17899244, avg_reader_cost: 0.00016 sec, avg_batch_cost: 0.33259 sec, avg_samples: 4096.00000, ips: 12315.36361 ins/s
2022-05-11 19:59:13,529 - INFO - epoch: 0, batch_id: 1350, auc:0.619961, loss:0.21630415, avg_reader_cost: 0.00015 sec, avg_batch_cost: 0.34231 sec, avg_samples: 4096.00000, ips: 11965.62220 ins/s
2022-05-11 19:59:14,210 - INFO - epoch: 0 done, auc: 0.620026,loss:0.14849854, epoch time: 480.97 s
2022-05-11 19:59:14,386 - INFO - Already save model in output_model_all_dsin/0

In [10]

!python ../../../tools/infer.py -m config_bigdata.yaml
2022-05-11 19:59:48,026 - INFO - **************common.configs**********
2022-05-11 19:59:48,026 - INFO - use_gpu: True, use_xpu: False, use_visual: False, infer_batch_size: 16384, test_data_dir: ../../../datasets/Ali_Display_Ad_Click_DSIN/big_test/, start_epoch: 0, end_epoch: 1, print_interval: 50, model_load_path: output_model_all_dsin
2022-05-11 19:59:48,026 - INFO - **************common.configs**********
W0511 19:59:48.027812  1904 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0511 19:59:48.033318  1904 device_context.cc:465] device: 0, cuDNN Version: 7.6.
2022-05-11 19:59:52,275 - INFO - read data
2022-05-11 19:59:52,276 - INFO - reader path:dsin_reader
2022-05-11 19:59:53,777 - INFO - load model epoch 0
2022-05-11 19:59:53,777 - INFO - start load model from output_model_all_dsin/0
2022-05-11 19:59:54,438 - INFO - epoch: 0, batch_id: 0, auc: 0.628742, avg_reader_cost: 0.00439 sec, avg_batch_cost: 0.01166 sec, avg_samples: 16384.00000, ips: 1239157.77 ins/s
2022-05-11 20:00:02,133 - INFO - epoch: 0 done, auc: 0.635660, epoch time: 8.36 s

6. Code Structure and Details

6.1 Code structure (under work/PaddleRec/models/rank/dsin)

├── config_bigdata.yaml   # configuration for the full dataset
├── net.py                # core model network (dynamic/static unified)
├── sequence_layers.py    # model building-block layers
├── dsin_reader.py        # data reader
└── dygraph_model.py      # dynamic-graph model wrapper

6.2 Parameters

Training and evaluation parameters can be set in config_bigdata.yaml; the main ones are listed below:

| Parameter | Default | Description |
| --------- | ------- | ----------- |
| --runner.train_data_dir | None | path to the training data |
| --runner.train_reader_path | dsin_reader | training data reader (importlib format) |
| --runner.use_gpu | True | whether to use the GPU |
| --runner.use_auc | True | whether to compute AUC |
| --runner.train_batch_size | 4096 | training batch size |
| --runner.epochs | 1 | number of training epochs |
| --runner.print_interval | 50 | print metrics every print_interval batches during training |

The evaluation parameters mirror the training ones and are not expanded here; the model hyperparameters are likewise documented in config_bigdata.yaml.

7. Reproduction Notes

Many problems came up while reproducing this project, in three areas: obtaining the dataset, aligning the model, and aligning the accuracy. Because the reference code is highly modularized, many details only become clear after reading the source carefully; the architecture diagram in the paper provides no more than a rough outline.

(1) Obtaining the dataset: the dataset used in the paper needs heavy preprocessing, and the raw data is over 23 GB, so a great deal of effort went into the data at the start of the reproduction. In the process I learned how to handle data at this scale: reading it in chunks, line ranges at a time, and splitting the user behavior history into multiple sub-datasets, as sketched below.
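
For illustration, the chunked-reading pattern looks like the sketch below; the file name, chunk size, and shard naming are assumptions, not the exact preprocessing script:

import pandas as pd

# Stream the 23GB+ behavior log in fixed-size chunks instead of loading it at once.
for i, chunk in enumerate(pd.read_csv('behavior_log.csv', chunksize=1_000_000)):
    # e.g. filter each chunk down to the sampled users here,
    # then write every slice to its own shard for later per-user aggregation
    chunk.to_csv(f'behavior_log_part_{i:03d}.csv', index=False)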

(2) Aligning the model: as mentioned above, at first I tried to build the model from the architecture diagram alone, and unsurprisingly the results were poor. Careful reading of the reference code is needed to align with the original model. (A model summary is a handy way to inspect the architecture and each layer's input/output shapes; see the sketch below.)
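
For example, paddle.summary prints per-layer output shapes and parameter counts; the toy network below merely demonstrates the call (the real check would be run against both the reproduced and the reference DSIN networks):

import paddle
import paddle.nn as nn

net = nn.Sequential(nn.Linear(77, 200), nn.Sigmoid(), nn.Linear(200, 80))
paddle.summary(net, (1, 77))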

(3) Aligning the accuracy: this was by far the most laborious part. The essentials are to make sure the dataset is identical to the original's and the model is aligned with the original; once both hold, the accuracy basically falls into place.

8. Model Information

After training completes, the model and related logs are saved under the ./output_model_all_dsin directory.

| Item | Description |
| ---- | ----------- |
| Publisher | lfyzzz |
| Date | 2022.5.11 |
| Framework version | Paddle 2.2.2 |
| Application scenario | Recommender systems |
| Supported hardware | GPU, CPU |
