【AI达人特训营】System Authentication Risk Prediction - Anomaly Detection

This project uses Baidu PaddlePaddle to implement system authentication risk prediction (anomaly detection): it engineers features and modifies the model to improve the offline F1 score.

1. Competition Background

As governments and enterprises place ever more weight on security and efficiency, unified Identity and Access Management (IAM), one of the basic security infrastructures, has been drawing growing attention. The main safeguard in IAM is identity authentication, i.e. the familiar password, QR-code, and fingerprint verification. These methods generally fall into three categories: something the user knows (e.g. a password), something the user has (e.g. an ID card), and something the user is (e.g. a face). Each has its own trade-offs: a strong password is hard to remember while a weak one is easily compromised; face recognition with head-movement liveness checks makes for a poor experience, while silent detection backed by a weak algorithm is easily bypassed with photos, videos, or face models. For this reason, China's Classified Protection 2.0 standard (等保2.0) requires systems rated level 3 or above to combine at least two authentication methods to raise the credibility of identity verification, which is known as two-factor authentication.

Although this raises security to a degree, it greatly hurts the user experience. IAM vendors have therefore turned to behavior-analysis techniques such as UEBA and user profiling, looking for approaches that preserve the user experience while strengthening authentication. Among current IAM explorations, the most practical to deploy is rule-based behavior analysis, because it is highly interpretable and integrates easily with authentication mechanisms.

The limitations of rules are just as clear, however: they are experience-driven, follow the principle of "better to wrongly flag a thousand than to miss one", and offer no data-level evidence of whether someone is trying to steal or validate illegally obtained credentials, or is already using stolen ones, which is what early risk warning and response require.

For more details, see the competition website: 系统认证风险预测-异常检测

Competition Task

In this competition, teams build a user authentication behavior feature model and a risk assessment model from user authentication behavior data and risk anomaly labels, then use the risk assessment model to judge whether a given authentication attempt is risky.

  • Build behavioral baselines from user authentication data
  • Train a supervised model on authentication-behavior features to build a risk assessment model that judges whether the current authentication attempt is risky
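
The model is evaluated offline by F1 score, which balances precision and recall on the positive (risky) class. As a minimal sketch with made-up labels, scikit-learn's f1_score (the same function used later in this notebook) computes it directly:

```python
# Hypothetical ground-truth risk labels and model predictions,
# scored the same way the offline evaluation below scores risk_label.
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# precision = 2/2, recall = 2/3, so F1 = 2 * 1.0 * (2/3) / (1.0 + 2/3) = 0.8
print(f1_score(y_true, y_pred))  # 0.8
```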

2. Data Analysis and Processing

The competition data is excerpted from the log store of Bamboocloud's (竹云) risk-analysis product and mainly covers authentication logs and risk logs. It has been desensitized and filtered before release. The authentication logs record user behavior when accessing application systems, including login, single sign-on, and logout events.

The dataset used in this competition is available on AI Studio: https://aistudio.baidu.com/aistudio/datasetdetail/112151

!pip uninstall paddlepaddle -y
!pip install paddlepaddle

Data Overview

import pandas as pd
import numpy as np
import json

train = pd.read_csv('data/data112151/train_dataset.csv', sep='\t')
test = pd.read_csv('data/data112151/test_dataset.csv', sep='\t')
data = pd.concat([train, test])
# Numeric fragment extracted from the tail of session_id
data['ii'] = data['session_id'].apply(lambda x: int(x[-7:-5]))
# Expand the JSON-encoded location field into its three levels
data['location_first_lvl'] = data['location'].astype(str).apply(lambda x: json.loads(x)['first_lvl'])
data['location_sec_lvl'] = data['location'].astype(str).apply(lambda x: json.loads(x)['sec_lvl'])
data['location_third_lvl'] = data['location'].astype(str).apply(lambda x: json.loads(x)['third_lvl'])
# Drop two uninformative columns and fill missing auth_type values
data.drop(['client_type', 'browser_source'], axis=1, inplace=True)
data['auth_type'].fillna('1', inplace=True)
from tqdm import tqdm
from sklearn.preprocessing import LabelEncoder
for col in tqdm(['user_name', 'action', 'auth_type', 'ip',
                 'ip_location_type_keyword', 'ip_risk_level', 'location', 'device_model',
                 'os_type', 'os_version', 'browser_type', 'browser_version',
                 'bus_system_code', 'op_target', 'location_first_lvl', 'location_sec_lvl',
                 'location_third_lvl']):
    lbl = LabelEncoder()
    data[col] = lbl.fit_transform(data[col])
import numpy as np
data['op_date'] = pd.to_datetime(data['op_date'])

data['year'] = data['op_date'].dt.year
data['month'] = data['op_date'].dt.month
data['day'] = data['op_date'].dt.day
data['hour'] = data['op_date'].dt.hour

# Unix timestamp in seconds
data['op_ts'] = data["op_date"].values.astype(np.int64) // 10 ** 9
data = data.sort_values(by=['user_name', 'op_ts']).reset_index(drop=True)
data['last_ts'] = data.groupby(['user_name'])['op_ts'].shift(1)
data['ts_diff1'] = data['op_ts'] - data['last_ts']
for f in ['ip', 'location', 'device_model', 'os_version', 'browser_version']:
    data[f'user_{f}_nunique'] = data.groupby(['user_name'])[f].transform('nunique')
for method in ['mean', 'max', 'min', 'std', 'sum', 'median']:
    for col in ['user_name', 'ip', 'location', 'device_model', 'os_version', 'browser_version']:
        data[f'ts_diff1_{method}_{col}'] = data.groupby(col)['ts_diff1'].transform(method)

group_list = ['user_name','ip', 'location', 'device_model', 'os_version', 'browser_version', 'op_target']
cat_cols = ['action', 'auth_type', 'browser_type',
            'browser_version', 'bus_system_code', 'device_model',
            'ip', 'ip_location_type_keyword', 'ip_risk_level', 'location', 'op_target',
            'os_type', 'os_version', 'user_name'
            ]
data.drop(columns = 'session_id', inplace=True)


data.drop(columns='op_date',inplace=True)
features = [item for item in data.columns if item != 'risk_label']
traindata = data[~data['risk_label'].isnull()].reset_index(drop=True)
traindata = traindata.fillna(0)
testdata = data[data['risk_label'].isnull()].reset_index(drop=True)
for item in features:
    testdata[item] = testdata[item].fillna(0)
# First 12000 rows as the training split, the rest as an offline validation split
data_X = traindata[features].values[:12000]
data_Y = traindata['risk_label'].values[:12000].astype(int).reshape(-1, 1)
data_X_test = traindata[features].values[12000:]
data_Y_test = traindata['risk_label'].values[12000:].astype(int).reshape(-1, 1)
testdata = testdata[features].values
# Normalization: fit the scaler on the training split only, then reuse its statistics
from sklearn.preprocessing import MinMaxScaler
mm = MinMaxScaler()
data_X = mm.fit_transform(data_X)
data_X_test = mm.transform(data_X_test)
testdata = mm.transform(testdata)
testdata.shape
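
One caveat worth noting on the normalization step: if MinMaxScaler is fit separately on each split, every split gets its own min/max and the same raw value maps to different scaled values. Fitting once on the training data and reusing those statistics keeps the mapping consistent. A toy sketch with made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[0.0], [5.0], [10.0]])  # made-up training column
test = np.array([[5.0], [20.0]])          # made-up test column

mm = MinMaxScaler()
mm.fit(train)               # min/max come from the training split only
print(mm.transform(test))   # [[0.5], [2.0]] -- scaled values may exceed 1
```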

3. Model Architecture

The network is built with PaddlePaddle. This baseline stacks five fully connected layers (66, 64, 32, 16, 8, 2 units) with Tanh activations and dropout to perform the binary classification.

import random
import paddle

seed = 1234
# Fix random seeds for reproducible results
def set_seed(seed):
    np.random.seed(seed)
    random.seed(seed)
    paddle.seed(seed)

set_seed(seed)
import paddle.nn as nn

# Define the classification network (dynamic graph)
class Classification(paddle.nn.Layer):
    def __init__(self):
        super(Classification, self).__init__()
        self.drop = paddle.nn.Dropout(p=0.2)
        # Five fully connected layers; the final layer outputs 2 logits with no activation
        self.fc1 = paddle.nn.Linear(66, 64)
        self.fc2 = paddle.nn.Linear(64, 32)
        self.fc3 = paddle.nn.Linear(32, 16)
        self.fc4 = paddle.nn.Linear(16, 8)
        self.fc5 = paddle.nn.Linear(8, 2)
        self.Tanh = nn.Tanh()
    
    # Forward pass
    def forward(self, inputs):
        x = self.Tanh(self.fc1(inputs))
        x = self.drop(x)
        x = self.Tanh(self.fc2(x))
        x = self.drop(x)
        x = self.Tanh(self.fc3(x))
        x = self.drop(x)
        x = self.Tanh(self.fc4(x))
        x = self.drop(x)
        pred = self.fc5(x)
        return pred

4. Configuration and Training

Logging the training process

# draw_train_process plots how the training loss evolves over iterations
import matplotlib.pyplot as plt
train_nums = []
train_costs = []
def draw_train_process(iters,train_costs):
    title="training cost"
    plt.title(title, fontsize=24)
    plt.xlabel("iter", fontsize=14)
    plt.ylabel("cost", fontsize=14)
    plt.plot(iters, train_costs,color='red',label='training cost') 
    plt.grid()
    plt.show()

Defining the loss function

The loss function uses the kl_loss from "R-Drop: the SOTA Dropout regularization strategy":

import paddle
import paddle.nn.functional as F

class kl_loss(paddle.nn.Layer):
    def __init__(self):
       super(kl_loss, self).__init__()

    def forward(self, p, q, label):
        ce_loss = 0.5 * (F.mse_loss(p, label=label) + F.mse_loss(q, label=label))
        kl_loss = self.compute_kl_loss(p, q)

        # carefully choose hyper-parameters
        loss = ce_loss + 0.3 * kl_loss 

        return loss

    def compute_kl_loss(self, p, q):
        
        p_loss = F.kl_div(F.log_softmax(p, axis=-1), F.softmax(q, axis=-1), reduction='none')
        q_loss = F.kl_div(F.log_softmax(q, axis=-1), F.softmax(p, axis=-1), reduction='none')

        # You can choose whether to use function "sum" and "mean" depending on your task
        p_loss = p_loss.sum()
        q_loss = q_loss.sum()

        loss = (p_loss + q_loss) / 2

        return loss
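
The symmetric KL term above can be sanity-checked with a plain NumPy re-implementation (illustrative only, not the Paddle code): it is zero for identical logits and grows as the two dropout-perturbed predictions diverge.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sym_kl(logits_p, logits_q):
    # Symmetric KL between the two softmax distributions,
    # mirroring compute_kl_loss above (sum reduction, averaged both ways).
    p, q = softmax(logits_p), softmax(logits_q)
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return (kl_pq + kl_qp) / 2

a = np.array([1.0, 2.0])
print(sym_kl(a, a))                     # 0.0 for identical predictions
print(sym_kl(a, np.array([2.0, 1.0])))  # positive once predictions diverge
```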

Model training

import paddle.nn.functional as F
y_preds = []
labels_list = []
BATCH_SIZE = 8
train_data = data_X
train_data_y = data_Y
test_data = data_X_test
test_data_y = data_Y_test



compute_kl_loss = kl_loss()

def train(model):
    print('start training ... ')
    # Switch the model to training mode
    model.train()
    EPOCH_NUM = 5
    train_num = 0
    scheduler = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=0.001, T_max=int(len(train_data) / BATCH_SIZE * EPOCH_NUM), verbose=False)
    optimizer = paddle.optimizer.Adam(learning_rate=scheduler, parameters=model.parameters())
    for epoch_id in range(EPOCH_NUM):
        # Shuffle features and labels together at the start of each epoch
        perm = np.random.permutation(len(train_data))
        shuffled_x, shuffled_y = train_data[perm], train_data_y[perm]
        # Split into mini-batches of BATCH_SIZE rows (features and label concatenated)
        mini_batches = [np.append(shuffled_x[k:k + BATCH_SIZE], shuffled_y[k:k + BATCH_SIZE], axis=1)
                        for k in range(0, len(train_data), BATCH_SIZE)]
        for batch_id, data in enumerate(mini_batches):
            features_np = np.array(data[:, :66], np.float32)
            labels_np = np.array(data[:, -1:], np.float32)
        
            features = paddle.to_tensor(features_np)
            labels = paddle.to_tensor(labels_np)
            
            # Two stochastic forward passes (dropout active) for R-Drop
            y_pred1 = model(features)
            y_pred2 = model(features)
            cost = compute_kl_loss(y_pred1, y_pred2, label=labels)
            train_cost = float(cost)
            # Backpropagation
            cost.backward()
            # Update parameters and advance the learning-rate schedule
            optimizer.step()
            scheduler.step()
            # Clear gradients
            optimizer.clear_grad()
            if batch_id % 500 == 0:
                print("Pass:%d, Cost:%0.5f" % (epoch_id, train_cost))

            train_num = train_num + BATCH_SIZE
            train_nums.append(train_num)
            train_costs.append(train_cost)
model = Classification()
train(model)

def predict(model):
    print('start predicting')
    model.eval()
    outputs = []
    mini_batches = [np.append(test_data[k:k+BATCH_SIZE],test_data_y[k:k+BATCH_SIZE],axis=1)for k in range(0,len(test_data),BATCH_SIZE)]
    for data in mini_batches:
        features_np = np.array(data[:,:66],np.float32)
        features = paddle.to_tensor(features_np)
        pred = model(features)
        out = paddle.argmax(pred,axis=1)
        outputs.extend(out.numpy())
    return outputs
outputs = predict(model)
test_data_y = test_data_y.reshape(-1,)
outputs = np.array(outputs)
# Offline accuracy on the held-out split
np.sum(test_data_y == outputs) / test_data_y.shape[0]
from sklearn.metrics import f1_score
# Offline F1 score on the held-out split
x = f1_score(test_data_y, outputs)
x

Plotting the loss curve

import matplotlib
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline

draw_train_process(train_nums, train_costs)

5. Saving Predictions

Model inference:

predict_result = []
for infer_feature in testdata:
    infer_feature = infer_feature.reshape(1,66)
    infer_feature = paddle.to_tensor(np.array(infer_feature, dtype='float32'))
    result = model(infer_feature)
    predict_result.append(result)

Write the results to a CSV file:

import os
import pandas as pd

id_list = [item for item in range(1, 10001)]
label_list = []
csv_file = 'submission.csv'

for item in range(len(id_list)):
    # Convert the Paddle tensor to NumPy before taking the argmax
    label = np.argmax(predict_result[item].numpy())
    label_list.append(label)

data = {'id':id_list, 'ret':label_list}
df = pd.DataFrame(data)
df.to_csv(csv_file, index=False, encoding='utf8')

6. Ideas for Improvement

1. Fixed the original baseline's bug of using the seventh feature column as the label.
2. Expanded feature engineering to 67 features, mainly LabelEncoder encodings and time-series features.
3. Tuned the model hyperparameters: dropout set to 0.2 and a lower learning rate.
4. Deepened the network to five linear layers with Tanh activations.
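
The time-series features mentioned in point 2 reduce to per-user gaps between consecutive authentications. A toy sketch (made-up users and timestamps) of the groupby/shift pattern used earlier:

```python
import pandas as pd

# Made-up authentication timestamps (seconds) for two users
df = pd.DataFrame({
    'user_name': ['a', 'a', 'a', 'b', 'b'],
    'op_ts':     [100, 160, 400, 50, 53],
})
df = df.sort_values(['user_name', 'op_ts']).reset_index(drop=True)
# Gap to each user's previous authentication; NaN marks a user's first event
df['ts_diff1'] = df['op_ts'] - df.groupby('user_name')['op_ts'].shift(1)
print(df['ts_diff1'].tolist())  # [nan, 60.0, 240.0, nan, 3.0]
```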
