1. Project Background

Depth is one of the key signals for perceiving the 3D world, and it plays an important role in currently popular applications such as autonomous driving, automated logistics, AR, and VR. Common depth-sensing devices include LiDAR and ToF cameras.

However, the depth data captured by these devices is often sparse and low-resolution, and high-precision depth sensors tend to be expensive, which has long been a pain point for industrial adoption. To address this, Baidu's Robotics and Autonomous Driving Lab developed a suite of depth-enhancement solutions, including depth completion, depth super-resolution, and depth estimation, to mitigate these problems and facilitate the use of depth information.

This project reproduces WAFP-Net, the adaptive-fusion-attention depth map super-resolution model from the lab's IEEE Transactions on Multimedia 2021 paper, WAFP-Net: Weighted Attention Fusion based Progressive Residual Learning for Depth Map Super-resolution. Targeting the two image degradation types found in real scenes (interval downsampling and noisy bicubic downsampling), it proposes an adaptive fusion attention mechanism that achieves SOTA accuracy on multiple datasets while keeping the parameter count small.

Readers can train the super-resolution model on a custom dataset directly, or after simple data preprocessing.

2. Technical Approach

2.1 Depth-Superresolution (Depth Map Super-Resolution)

To mitigate the low resolution of depth data, WAFP-Net, released in Paddle-Depth, provides depth map super-resolution: given a low-resolution depth map as input, it recovers a high-resolution depth map, providing better depth input for AR/VR 3D reconstruction and similar applications.

A progressive framework fusing channel and spatial attention

WAFP-Net adopts a progressive framework that decomposes depth super-resolution into multiple stages, where each stage takes the previous stage's output as input. The final stage fuses the outputs of all earlier stages, which helps recover the high-resolution depth map more faithfully. In addition, each stage fuses channel and spatial attention, which preserves boundary structure well, so the model recovers high-resolution depth maps without needing a color image as guidance.
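The progressive scheme described above can be sketched in a few lines of NumPy. This is a toy illustration under simplifying assumptions (plain sigmoid gates in place of learned attention; names such as `progressive_sr` are hypothetical), not the actual WAFP-Net code:

```python
import numpy as np

def channel_attention(feat):
    # One weight per channel from global average pooling, squashed by a sigmoid.
    w = feat.mean(axis=(1, 2), keepdims=True)          # (C, 1, 1)
    return 1.0 / (1.0 + np.exp(-w))

def spatial_attention(feat):
    # One weight per pixel from channel-wise averaging, squashed by a sigmoid.
    w = feat.mean(axis=0, keepdims=True)               # (1, H, W)
    return 1.0 / (1.0 + np.exp(-w))

def fuse(feat):
    # Apply both attentions; WAFP-Net learns how to weight them, here we just multiply.
    return feat * channel_attention(feat) * spatial_attention(feat)

def progressive_sr(lr_up, num_stages=3):
    """Each stage refines the previous stage's output with an attended residual;
    the final stage takes a fusion (here: a plain mean) of all earlier outputs."""
    outputs, x = [], lr_up
    for _ in range(num_stages - 1):
        x = x + 0.1 * fuse(x)                          # per-stage residual refinement
        outputs.append(x)
    fused_in = np.mean(outputs, axis=0)                # final stage sees every stage
    return fused_in + 0.1 * fuse(fused_in)

depth = np.random.rand(1, 16, 16).astype(np.float32)   # upsampled LR depth, (C, H, W)
out = progressive_sr(depth)
print(out.shape)  # (1, 16, 16)
```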

Point-cloud visualization of the recovered high-resolution depth maps:

2.2 Further Reading

2.2.1 Depth-completion (Depth Completion)

Obtaining depth is essential for 3D reconstruction and HD-map building in autonomous driving, logistics, campus, and similar scenarios. The sparse depth obtained from sensors such as LiDAR does not meet the needs of building dense scene maps. With the progress of deep learning, end-to-end neural networks can complete the sparse sensor depth into dense depth for scene reconstruction or map building, but the accuracy of end-to-end depth completion remains limited and often falls short of business requirements in high-accuracy scenarios.

FCFR-Net, the newly released depth completion algorithm in Paddle-Depth, provides a solution for completing dense depth and generating high-accuracy scene maps, addressing the accuracy and deployment difficulties in autonomous driving, logistics, and campus applications.

A two-stage completion framework that improves accuracy

Built on a ResNet backbone, FCFR-Net splits depth completion into two stages. Stage one fills the sparse sensor depth into a dense map with an arbitrary interpolation method, which guarantees a lower bound on accuracy. Stage two uses two encoder branches to extract features from the image and the filled depth respectively, and a single decoder to fuse the image and depth features, producing the final high-accuracy completion result. FCFR-Net also optimizes the image-depth feature fusion, with a channel-shuffle scheme for feature extraction and a region-energy-based scheme for feature fusion, further improving completion accuracy.
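The stage-one filling step lends itself to a quick sketch: the snippet below densifies a sparse depth map with nearest-neighbour interpolation via SciPy's distance transform. It illustrates only the "fill with any simple interpolation" idea and is not FCFR-Net's actual implementation (`fill_sparse_depth` is a hypothetical helper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_sparse_depth(sparse):
    """Densify a sparse depth map: every empty pixel (value 0) takes the
    value of its nearest valid pixel (nearest-neighbour interpolation)."""
    invalid = (sparse == 0)
    # For each pixel, indices of the nearest valid pixel (a zero of `invalid`).
    idx = distance_transform_edt(invalid, return_distances=False,
                                 return_indices=True)
    return sparse[tuple(idx)]

sparse = np.zeros((8, 8), dtype=np.float32)
sparse[2, 3], sparse[6, 6] = 1.5, 4.0   # two simulated lidar returns
dense = fill_sparse_depth(sparse)
print(dense.min())                      # no empty pixels remain
```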

FCFR-Net substantially improves completion accuracy while keeping real-time response:

| Approach | Average RMSE |
| --- | --- |
| Single-stage | 814 mm |
| Two-stage | 735 mm |

Examples of completion and mapping:

2.2.2 Depth-estimation (Depth Estimation)

To mitigate the high cost of depth-capture devices, Paddle-Depth released MLDA-Net, which provides depth estimation: taking a low-resolution color image as input, it estimates the corresponding depth in a self-supervised manner, effectively reducing the dependence on depth-capture hardware.

Multi-scale dual-attention mechanism

MLDA-Net takes a color image as input and extracts two types of features at multiple scales, which greatly strengthens the feature representation; an attention mechanism then fuses the two feature types to strengthen it further. In addition, a boundary attention mechanism sharpens the edge regions of the estimated depth, recovering scene depth more faithfully.
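The idea of attention-based fusion of two feature streams can be illustrated with a toy NumPy sketch (hypothetical names, simplified math; the real MLDA-Net learns these weights):

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(feat_a, feat_b):
    """Blend two feature maps of one scale with per-pixel attention weights
    derived from the features themselves; the weights sum to 1 per pixel."""
    stacked = np.stack([feat_a, feat_b])           # (2, C, H, W)
    scores = stacked.mean(axis=1, keepdims=True)   # (2, 1, H, W) per-branch score
    weights = softmax(scores, axis=0)
    return (stacked * weights).sum(axis=0)         # (C, H, W)

a = np.random.rand(4, 8, 8)   # e.g. one feature type at a given scale
b = np.random.rand(4, 8, 8)   # e.g. the other feature type at the same scale
fused = attention_fuse(a, b)
print(fused.shape)  # (4, 8, 8)
```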

3 Installation

3.1 Downloading the Project

# clone the GitHub repo
!git clone https://github.com/HydrogenSulfate/PaddleVideo
%cd PaddleVideo/
# check out the WAFPNet branch
!git checkout remotes/origin/add_WAFPNet
%cd ~

3.2 Environment Setup

# !pip install scipy
# !pip install h5py
!pip install decord==0.4.2
!pip install av==8.0.3
!pip install scikit-image
!pip install SimpleITK
!pip install lmdb
%cd PaddleVideo
!pip install -r requirements.txt
%cd ~

4 Data Preparation

4.1 Dataset

The data used in this document combines three datasets: the Middlebury dataset, the MPI Sintel dataset, and the synthetic New Tsukuba dataset.

  1. Prepare the raw image data

    Download the two dataset archives: WAFP_data.zip and WAFP_test_data.zip.
    Unzip them and place the data_all folder (with 133 depth maps) and the test_data folder (with 4 test samples) as follows:

    data/
    └── depthSR/
        ├── data_all/
        │   ├── alley_1_1.png
        │   ├── ...
        │   └── ...
        ├── test_data/
        │   ├── cones_x4.mat
        │   ├── teddy_x4.mat
        │   ├── tskuba_x4.mat
        │   └── venus_x4.mat
        ├── val.list
        ├── generate_train_noise.m
        └── modcrop.m
    
!wget https://videotag.bj.bcebos.com/Data/WAFP_data.zip
!wget https://videotag.bj.bcebos.com/Data/WAFP_test_data.zip
!unzip WAFP_data.zip -d PaddleVideo/data/depthSR/
!unzip WAFP_test_data.zip -d PaddleVideo/data/depthSR/

In this dataset, the image files are stored in the .mat format.

A .mat file is MATLAB's standard data-storage format. It is a binary file by default, though it can also be saved and loaded in ASCII form; opened in MATLAB, it displays much like a spreadsheet.

A quick look at the data:

import cv2
import scipy.io as scio
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

dataFile = r'PaddleVideo/data/depthSR/test_data/cones_x4.mat'  # a single .mat file
data = scio.loadmat(dataFile)
print(type(data))
print(data)
<class 'dict'>
{'__header__': b'MATLAB 5.0 MAT-file, Platform: PCWIN64, Created on: Mon Feb 14 16:18:33 2022', '__version__': '1.0', '__globals__': [], 'im_gt_y': array([[0.26666667, 0.26666667, 0.26666667, ..., 0.3254902 , 0.3254902 ,
        0.3254902 ],
       [0.26666667, 0.26666667, 0.27058824, ..., 0.3254902 , 0.3254902 ,
        0.3254902 ],
       [0.27058824, 0.27058824, 0.27058824, ..., 0.32156863, 0.3254902 ,
        0.3254902 ],
       ...,
       [0.85098039, 0.85098039, 0.85098039, ..., 0.70588235, 0.69411765,
        0.69411765],
       [0.85098039, 0.85098039, 0.85098039, ..., 0.70196078, 0.69803922,
        0.69411765],
       [0.85490196, 0.85490196, 0.85490196, ..., 0.70196078, 0.70196078,
        0.69803922]]), 'im_b_y': array([[0.28627451, 0.28235294, 0.27843137, ..., 0.3372549 , 0.34117647,
        0.34509804],
       [0.28627451, 0.28235294, 0.27843137, ..., 0.3372549 , 0.34117647,
        0.34509804],
       [0.28627451, 0.28235294, 0.27843137, ..., 0.3372549 , 0.34117647,
        0.34509804],
       ...,
       [0.85490196, 0.85490196, 0.85490196, ..., 0.70196078, 0.70196078,
        0.70196078],
       [0.85882353, 0.85882353, 0.85882353, ..., 0.69803922, 0.69803922,
        0.69803922],
       [0.85882353, 0.85882353, 0.85882353, ..., 0.69411765, 0.69411765,
        0.69411765]])}
# the image data lives in the `im_gt_y` field
print(data['im_gt_y'])
[[0.26666667 0.26666667 0.26666667 ... 0.3254902  0.3254902  0.3254902 ]
 [0.26666667 0.26666667 0.27058824 ... 0.3254902  0.3254902  0.3254902 ]
 [0.27058824 0.27058824 0.27058824 ... 0.32156863 0.3254902  0.3254902 ]
 ...
 [0.85098039 0.85098039 0.85098039 ... 0.70588235 0.69411765 0.69411765]
 [0.85098039 0.85098039 0.85098039 ... 0.70196078 0.69803922 0.69411765]
 [0.85490196 0.85490196 0.85490196 ... 0.70196078 0.70196078 0.69803922]]
The Python script below converts one test-set .mat file into an image for preview.
# loadmat returns a dict of named variables, so pull out the matrix we need
a = data['im_gt_y']

# convert a [0, 1] float matrix to an 8-bit grayscale image
def MatrixToImage(data):
    data = data * 255
    new_im = Image.fromarray(data.astype(np.uint8))
    return new_im

new_im = MatrixToImage(a)
# plt.imshow(a, cmap=plt.cm.gray, interpolation='nearest')
new_im.show()
new_im.save('./cones_x4.png')  # save the preview image



4.2 Dataset Processing

Run the generate_train_noise.m script to generate the training data train_depth_x4_noise.h5.

Since this is a MATLAB script, this step has to be performed in a local MATLAB environment.

For convenience, the generated result, train_depth_x4_noise.h5, is provided directly in the dataset.

!cp data/data154330/train_depth_x4_noise.h5 PaddleVideo/data/depthSR/
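To peek inside an HDF5 training file, h5py can list its datasets. The snippet below builds a toy file with assumed key names (`data`/`label`) and toy patch sizes; the real keys and shapes inside train_depth_x4_noise.h5 may differ, which is exactly why listing them first is useful:

```python
import numpy as np
import h5py

# Build a toy file shaped like a patch-based SR training set (assumed layout).
with h5py.File("toy_train.h5", "w") as f:
    f.create_dataset("data", data=np.random.rand(10, 1, 61, 61))     # LR patches
    f.create_dataset("label", data=np.random.rand(10, 1, 244, 244))  # HR patches

# The same two lines inspect any .h5 file, including train_depth_x4_noise.h5.
with h5py.File("toy_train.h5", "r") as f:
    keys = list(f.keys())
    shapes = {k: f[k].shape for k in keys}
print(keys, shapes)
```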

Generate the test.list path file with ls:

%cd PaddleVideo/data/depthSR/
/home/aistudio/PaddleVideo/data/depthSR
!ls test_data > test.list

Fill the paths of train_depth_x4_noise.h5, test_data, and test.list into the corresponding fields of wafp.yaml:

DATASET: # DATASET field
  batch_size: 64 # Mandatory, batch size
  valid_batch_size: 1
  test_batch_size: 1
  num_workers: 1 # Mandatory, the number of subprocesses on each GPU
  train:
    format: "HDF5Dataset"
    file_path: "data/depthSR/train_depth_x4_noise.h5"  # path of train_depth_x4_noise.h5
  valid:
    format: "MatDataset"
    data_prefix: "data/depthSR/test_data"  # path of test_data
    file_path: "data/depthSR/test.list"  # path of test.list
  test:
    format: "MatDataset"
    data_prefix: "data/depthSR/test_data"  # path of test_data
    file_path: "data/depthSR/test.list"  # path of test.list
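Before launching training, it is worth sanity-checking that the three paths the config points at actually exist. A minimal sketch, run here against a throwaway temp directory instead of the real data/depthSR:

```python
import os
import tempfile

required = ["train_depth_x4_noise.h5", "test_data", "test.list"]

# Stand-in for data/depthSR -- point `root` at the real folder in practice.
root = os.path.join(tempfile.mkdtemp(), "data", "depthSR")
os.makedirs(os.path.join(root, "test_data"))
for name in ("train_depth_x4_noise.h5", "test.list"):
    open(os.path.join(root, name), "w").close()

missing = [n for n in required if not os.path.exists(os.path.join(root, n))]
print("missing:", missing)  # [] when the layout is complete
```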

5 Model Training

  • Train on the mixed dataset with a single GPU; the launch command is:
%cd /home/aistudio/PaddleVideo/
/home/aistudio/PaddleVideo
!python main.py -c configs/resolution/wafp/wafp.yaml --seed 42
[06/29 00:26:09] DALI is not installed, you can improve performance if use DALI
[06/29 00:26:09] DATASET : 
[06/29 00:26:09]     batch_size : 64
[06/29 00:26:09]     num_workers : 1
[06/29 00:26:09]     test : 
[06/29 00:26:09]         data_prefix : data/depthSR/test_data
[06/29 00:26:09]         file_path : data/depthSR/test.list
[06/29 00:26:09]         format : MatDataset
[06/29 00:26:09]     test_batch_size : 1
[06/29 00:26:09]     train : 
[06/29 00:26:09]         file_path : data/depthSR/train_depth_x4_noise.h5
[06/29 00:26:09]         format : HDF5Dataset
[06/29 00:26:09]     valid : 
[06/29 00:26:09]         data_prefix : data/depthSR/test_data
[06/29 00:26:09]         file_path : data/depthSR/test.list
[06/29 00:26:09]         format : MatDataset
[06/29 00:26:09]     valid_batch_size : 1
[06/29 00:26:09] ------------------------------------------------------------
[06/29 00:26:09] INFERENCE : 
[06/29 00:26:09]     height : 368
[06/29 00:26:09]     name : WAFP_Inference_helper
[06/29 00:26:09]     width : 440
[06/29 00:26:09] ------------------------------------------------------------
[06/29 00:26:09] METRIC : 
[06/29 00:26:09]     name : RMSEMetric
[06/29 00:26:09]     scale : 4
[06/29 00:26:09] ------------------------------------------------------------
[06/29 00:26:09] MODEL : 
[06/29 00:26:09]     backbone : 
[06/29 00:26:09]         name : WAFPNet
[06/29 00:26:09]         num_refine_layer : 9
[06/29 00:26:09]         num_residual_layer : 9
[06/29 00:26:09]     framework : Resolver2D
[06/29 00:26:09]     head : 
[06/29 00:26:09]         name : WAFPHead
[06/29 00:26:09]     runtime_cfg : 
[06/29 00:26:09]         infer : 
[06/29 00:26:09]             mode : patch
[06/29 00:26:09]             patch_size : 122
[06/29 00:26:09]             scale : 4
[06/29 00:26:09]         test : 
[06/29 00:26:09]             mode : patch
[06/29 00:26:09]             patch_size : 61
[06/29 00:26:09]             scale : 4
[06/29 00:26:09]         val : 
[06/29 00:26:09]             mode : patch
[06/29 00:26:09]             patch_size : 61
[06/29 00:26:09]             scale : 4
[06/29 00:26:09] ------------------------------------------------------------
[06/29 00:26:09] OPTIMIZER : 
[06/29 00:26:09]     grad_clip : 
[06/29 00:26:09]         name : ClipGradByGlobalNorm
[06/29 00:26:09]         value : 0.4
[06/29 00:26:09]     learning_rate : 
[06/29 00:26:09]         gamma : 0.1
[06/29 00:26:09]         learning_rate : 0.1
[06/29 00:26:09]         name : StepDecay
[06/29 00:26:09]         step_size : 20
[06/29 00:26:09]     momentum : 0.9
[06/29 00:26:09]     name : Momentum
[06/29 00:26:09]     weight_decay : 
[06/29 00:26:09]         name : L2
[06/29 00:26:09]         value : 0.0001
[06/29 00:26:09] ------------------------------------------------------------
[06/29 00:26:09] PIPELINE : 
[06/29 00:26:09]     test : 
[06/29 00:26:09]         decode : 
[06/29 00:26:09]             name : MatDecoder
[06/29 00:26:09]     train : None
[06/29 00:26:09]     valid : 
[06/29 00:26:09]         decode : 
[06/29 00:26:09]             name : MatDecoder
[06/29 00:26:09] ------------------------------------------------------------
[06/29 00:26:09] epochs : 80
[06/29 00:26:09] log_interval : 100
[06/29 00:26:09] log_level : INFO
[06/29 00:26:09] model_name : WAFP
[06/29 00:26:09] save_interval : 10
W0629 00:26:09.666808  1590 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0629 00:26:09.670467  1590 gpu_context.cc:306] device: 0, cuDNN Version: 7.6.
[06/29 00:26:13] HDF5 Data Loaded from data/depthSR/train_depth_x4_noise.h5
[06/29 00:26:13] HDF5 Data Loaded from data/depthSR/train_depth_x4_noise.h5
[06/29 00:26:13] Training in fp32 mode.
[06/29 00:26:13] epoch:[  1/80 ] train step:0    loss: 3489979.75000 lr: 0.100000 rmse: 0.00000 batch_cost: 0.85194 sec, reader_cost: 0.59954 sec, ips: 75.12286 instance/sec.
[06/29 00:26:36] epoch:[  1/80 ] train step:100  loss: 42.24559 lr: 0.100000 rmse: 0.00000 batch_cost: 0.22793 sec, reader_cost: 0.00017 sec, ips: 280.78523 instance/sec.
[06/29 00:26:59] epoch:[  1/80 ] train step:200  loss: 23.69147 lr: 0.100000 rmse: 0.00000 batch_cost: 0.22679 sec, reader_cost: 0.00014 sec, ips: 282.20301 instance/sec.
[06/29 00:27:22] epoch:[  1/80 ] train step:300  loss: 52.65426 lr: 0.100000 rmse: 0.00000 batch_cost: 0.22664 sec, reader_cost: 0.00016 sec, ips: 282.38559 instance/sec.
[06/29 00:27:44] epoch:[  1/80 ] train step:400  loss: 40.86552 lr: 0.100000 rmse: 0.00000 batch_cost: 0.22659 sec, reader_cost: 0.00016 sec, ips: 282.45422 instance/sec.
[06/29 00:28:07] epoch:[  1/80 ] train step:500  loss: 76.45895 lr: 0.100000 rmse: 0.00000 batch_cost: 0.22777 sec, reader_cost: 0.00024 sec, ips: 280.97921 instance/sec.

6 Model Evaluation and Testing

A trained model, WAFP_best.pdparams, is provided
so users can run the test directly.

  • The test command is as follows:

    python3.7 main.py --test -c configs/resolution/wafp/wafp.yaml -w "output/WAFP/WAFP_epoch_00080.pdparams"
    

    The test metrics on the given test set are as follows:

    | version | RMSE | SSIM |
    | --- | --- | --- |
    | ours | 2.5479 | 0.9808 |
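For reference, the RMSE above can be reproduced for any prediction/ground-truth pair with a few lines of NumPy (SSIM is available as `skimage.metrics.structural_similarity`); this is a generic illustration, not PaddleVideo's RMSEMetric:

```python
import numpy as np

def rmse(pred, gt):
    """Root-mean-square error between two depth maps of the same shape."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

gt = np.random.rand(64, 64)
pred = gt + np.random.normal(0.0, 0.01, gt.shape)  # simulated prediction
print(round(rmse(pred, gt), 3))  # close to the simulated noise std (~0.01)
```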
# test command after the full 80 epochs
# !python main.py --test -c configs/resolution/wafp/wafp.yaml -w "output/WAFP/WAFP_epoch_00080.pdparams"
# download the trained model and test it directly
!wget https://videotag.bj.bcebos.com/PaddleVideo-release2.3/WAFP_best.pdparams
!mv WAFP_best.pdparams output/WAFP/
!python main.py --test -c configs/resolution/wafp/wafp.yaml -w "output/WAFP/WAFP_best.pdparams"
[06/29 13:51:18] DALI is not installed, you can improve performance if use DALI
[06/29 13:51:18] DATASET : 
[06/29 13:51:18]     batch_size : 64
[06/29 13:51:18]     num_workers : 1
[06/29 13:51:18]     test : 
[06/29 13:51:18]         data_prefix : data/depthSR/test_data
[06/29 13:51:18]         file_path : data/depthSR/test.list
[06/29 13:51:18]         format : MatDataset
[06/29 13:51:18]     test_batch_size : 1
[06/29 13:51:18]     train : 
[06/29 13:51:18]         file_path : data/depthSR/train_depth_x4_noise.h5
[06/29 13:51:18]         format : HDF5Dataset
[06/29 13:51:18]     valid : 
[06/29 13:51:18]         data_prefix : data/depthSR/test_data
[06/29 13:51:18]         file_path : data/depthSR/test.list
[06/29 13:51:18]         format : MatDataset
[06/29 13:51:18]     valid_batch_size : 1
[06/29 13:51:18] ------------------------------------------------------------
[06/29 13:51:18] INFERENCE : 
[06/29 13:51:18]     height : 368
[06/29 13:51:18]     name : WAFP_Inference_helper
[06/29 13:51:18]     width : 440
[06/29 13:51:18] ------------------------------------------------------------
[06/29 13:51:18] METRIC : 
[06/29 13:51:18]     name : RMSEMetric
[06/29 13:51:18]     scale : 4
[06/29 13:51:18] ------------------------------------------------------------
[06/29 13:51:18] MODEL : 
[06/29 13:51:18]     backbone : 
[06/29 13:51:18]         name : WAFPNet
[06/29 13:51:18]         num_refine_layer : 9
[06/29 13:51:18]         num_residual_layer : 9
[06/29 13:51:18]     framework : Resolver2D
[06/29 13:51:18]     head : 
[06/29 13:51:18]         name : WAFPHead
[06/29 13:51:18]     runtime_cfg : 
[06/29 13:51:18]         infer : 
[06/29 13:51:18]             mode : patch
[06/29 13:51:18]             patch_size : 122
[06/29 13:51:18]             scale : 4
[06/29 13:51:18]         test : 
[06/29 13:51:18]             mode : patch
[06/29 13:51:18]             patch_size : 61
[06/29 13:51:18]             scale : 4
[06/29 13:51:18]         val : 
[06/29 13:51:18]             mode : patch
[06/29 13:51:18]             patch_size : 61
[06/29 13:51:18]             scale : 4
[06/29 13:51:18] ------------------------------------------------------------
[06/29 13:51:18] OPTIMIZER : 
[06/29 13:51:18]     grad_clip : 
[06/29 13:51:18]         name : ClipGradByGlobalNorm
[06/29 13:51:18]         value : 0.4
[06/29 13:51:18]     learning_rate : 
[06/29 13:51:18]         gamma : 0.1
[06/29 13:51:18]         learning_rate : 0.1
[06/29 13:51:18]         name : StepDecay
[06/29 13:51:18]         step_size : 20
[06/29 13:51:18]     momentum : 0.9
[06/29 13:51:18]     name : Momentum
[06/29 13:51:18]     weight_decay : 
[06/29 13:51:18]         name : L2
[06/29 13:51:18]         value : 0.0001
[06/29 13:51:18] ------------------------------------------------------------
[06/29 13:51:18] PIPELINE : 
[06/29 13:51:18]     test : 
[06/29 13:51:18]         decode : 
[06/29 13:51:18]             name : MatDecoder
[06/29 13:51:18]     train : None
[06/29 13:51:18]     valid : 
[06/29 13:51:18]         decode : 
[06/29 13:51:18]             name : MatDecoder
[06/29 13:51:18] ------------------------------------------------------------
[06/29 13:51:18] epochs : 80
[06/29 13:51:18] log_interval : 100
[06/29 13:51:18] log_level : INFO
[06/29 13:51:18] model_name : WAFP
[06/29 13:51:18] save_interval : 10
W0629 13:51:18.993326  7622 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0629 13:51:18.997550  7622 gpu_context.cc:306] device: 0, cuDNN Version: 7.6.
[06/29 13:51:23] [TEST] Processing batch 0/4 ...
[06/29 13:51:24] [TEST] Processing batch 1/4 ...
[06/29 13:51:24] [TEST] Processing batch 2/4 ...
[06/29 13:51:25] [TEST] Processing batch 3/4 ...
[06/29 13:51:25] [TEST] finished, avg_rmse = 2.5479965209960938, avg_ssim = 0.9808323383331299.

7 Model Export

# export the inference model
!python tools/export_model.py -c configs/resolution/wafp/wafp.yaml -p output/WAFP/WAFP_best.pdparams -o inference/WAFP

8 Model Deployment

!python tools/predict.py --input_file data/depthSR/test_data/cones_x4.mat \
--config configs/resolution/wafp/wafp.yaml \
--model_file inference/WAFP/WAFP.pdmodel \
--params_file inference/WAFP/WAFP.pdiparams \
--use_gpu=True \
--use_tensorrt=False
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/__init__.py:107: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import MutableMapping
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/rcsetup.py:20: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Iterable, Mapping
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/colors.py:53: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Sized
[06/29 13:52:42] DALI is not installed, you can improve performance if use DALI
No module named 'ppdet', [paddledet] package and it's dependencies is required for AVA.
Inference model(WAFP)...
W0629 13:52:43.940800  7888 analysis_predictor.cc:1086] The one-time configuration of analysis predictor failed, which may be due to native predictor called first and its configurations taken effect.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
I0629 13:52:51.970193  7888 fuse_pass_base.cc:57] ---  detected 256 subgraphs
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0629 13:53:02.028754  7888 ir_params_sync_among_devices_pass.cc:100] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0629 13:53:02.542623  7888 memory_optimize_pass.cc:216] Cluster name : elementwise_add_50  size: 67600
I0629 13:53:02.542716  7888 memory_optimize_pass.cc:216] Cluster name : relu_229.tmp_0  size: 4326400
I0629 13:53:02.542734  7888 memory_optimize_pass.cc:216] Cluster name : reshape2_435.tmp_1  size: 0
I0629 13:53:02.542744  7888 memory_optimize_pass.cc:216] Cluster name : reshape2_181.tmp_0  size: 4326400
I0629 13:53:02.542757  7888 memory_optimize_pass.cc:216] Cluster name : softmax_41.tmp_0  size: 1142440000
I0629 13:53:02.542762  7888 memory_optimize_pass.cc:216] Cluster name : conv2d_292.tmp_0  size: 4326400
I0629 13:53:02.542773  7888 memory_optimize_pass.cc:216] Cluster name : fill_constant_589.tmp_0  size: 4
I0629 13:53:02.542784  7888 memory_optimize_pass.cc:216] Cluster name : matmul_v2_232.tmp_0  size: 16384
I0629 13:53:02.542794  7888 memory_optimize_pass.cc:216] Cluster name : transpose_129.tmp_0  size: 4326400
I0629 13:53:02.542805  7888 memory_optimize_pass.cc:216] Cluster name : relu_79.tmp_0  size: 4193280
I0629 13:53:02.542815  7888 memory_optimize_pass.cc:216] Cluster name : conv2d_381.tmp_0  size: 4326400
I0629 13:53:02.542825  7888 memory_optimize_pass.cc:216] Cluster name : matmul_v2_86.tmp_0  size: 1142440000
I0629 13:53:02.542835  7888 memory_optimize_pass.cc:216] Cluster name : reshape2_137.tmp_0  size: 4326400
I0629 13:53:02.542846  7888 memory_optimize_pass.cc:216] Cluster name : data_batch_0  size: 647680
I0629 13:53:02.542857  7888 memory_optimize_pass.cc:216] Cluster name : elementwise_add_62  size: 67600
--- Running analysis [ir_graph_to_program_pass]
I0629 13:53:05.996776  7888 analysis_predictor.cc:1007] ======= optimize end =======
I0629 13:53:06.206902  7888 naive_executor.cc:102] ---  skip [feed], feed -> data_batch_0
I0629 13:53:06.311122  7888 naive_executor.cc:102] ---  skip [full_like_0.tmp_0], fetch -> fetch
W0629 13:53:06.321084  7888 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0629 13:53:06.324797  7888 gpu_context.cc:306] device: 0, cuDNN Version: 7.6.
Current input image: data/depthSR/test_data/cones_x4.mat
pred output image saved to: data/cones_x4_wafp_output.png

The super-resolution reconstruction result:


Original project: https://aistudio.baidu.com/aistudio/projectdetail/4247359?contributionType=1

Author: 深渊上的坑

Feel free to fork & follow~
