Reposted from AI Studio. Original article: "视频&图片 超分与动漫化+补帧" (Video & Image Super-Resolution, Anime Stylization, and Frame Interpolation) - PaddlePaddle AI Studio

1. Install Dependencies

Since the videos are not convenient to display inline, they are all placed in the project root directory for easy viewing.

You need to create a piece directory and a piece_pr directory yourself, to hold the extracted original video frames and the anime-stylized versions of those frames, respectively.
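The two directories can be created with a short snippet. The relative paths below are an assumption; on AI Studio they would sit under /home/aistudio/:

```python
import os

# Create the frame directories if they do not exist yet;
# exist_ok=True makes this cell safe to re-run.
for d in ("piece", "piece_pr"):
    os.makedirs(d, exist_ok=True)
```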

In [9]

!python3 -m pip install --upgrade ppgan

!git clone https://gitee.com/paddlepaddle/PaddleGAN.git
%cd PaddleGAN/
!pip install -v -e .

!pip install paddlehub==1.8.0 -U -i https://pypi.tuna.tsinghua.edu.cn/simple

from ppgan.apps import RealSRPredictor
import cv2
import paddlehub as hub
from PIL import Image  
import numpy as np

%env CUDA_VISIBLE_DEVICES=0
%matplotlib inline

2. Single-Image Operations

2.1 Anime Stylization Setup

There are now ready-made models that turn real-world photos into anime style in a single call ("snap a photo, get a Japanese-animation look"). PaddleHub has open-sourced several high-quality street-scene animation models, including animegan_v1_hayao_60, animegan_v2_shinkai_33, and animegan_v2_paprika_74, supporting one-click anime stylization that works for food, landscapes, people, and other scenes.

PaddleHub 1.8 is recommended here: the official examples run on 1.8, and comments on the original project report breakage after version upgrades.

AnimeGAN v2 is used to apply anime stylization to scene images.

The paper is AnimeGAN: A Novel Lightweight GAN for Photo Animation; link: https://link.springer.com/chapter/10.1007/978-981-15-5577-0_18.

Hayao Miyazaki style output: model = hub.Module('animegan_v1_hayao_60', use_gpu=True)

Makoto Shinkai style output: model = hub.Module('animegan_v2_shinkai_33', use_gpu=True)

Satoshi Kon (Paprika) style output: model = hub.Module('animegan_v2_paprika_74', use_gpu=True)

In [10]

sr = RealSRPredictor()
model = hub.Module(name='animegan_v2_shinkai_33', use_gpu=True)
[04/26 22:52:31] ppgan INFO: Found /home/aistudio/.cache/ppgan/DF2K_JPEG.pdparams
[2022-04-26 22:52:31,770] [    INFO] - Installing animegan_v2_shinkai_33 module
[2022-04-26 22:52:31,773] [    INFO] - Module animegan_v2_shinkai_33 already installed in /home/aistudio/.paddlehub/modules/animegan_v2_shinkai_33

2.2 Super-Resolution Setup

Build a RealSR instance. RealSR (Real-World Super-Resolution via Kernel Estimation and Noise Injection, published in the CVPR 2020 Workshops) is a super-resolution model trained on real-world images. This interface applies 4x super-resolution to the input image or video. MP4 is the recommended video format.

Note: RealSR's input images must be smaller than 1000x1000 px.
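A small guard before calling sr.run_image can avoid wasted runs on oversized inputs. The helper name fits_realsr and the exact limit semantics are assumptions for illustration, not part of the ppgan API:

```python
def fits_realsr(width, height, limit=1000):
    """Return True if an image of the given size is small enough for RealSR
    (both sides must be under `limit` pixels)."""
    return width < limit and height < limit

# The 720x960 frames of the demo video are fine; a full-HD frame is not.
print(fits_realsr(720, 960))
print(fits_realsr(1920, 1080))
```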

sr.run can actually process a video directly, but it handles so many frames that it runs far too slowly for large videos.

4x super-resolution result

The official tools also offer direct video super-resolution, but in my tests it needs a great deal of GPU memory: even a low-resolution video of roughly 500x500 and 1 minute exhausts a 16 GB GPU.

The EDVR model proposes a novel video restoration framework with enhanced deformable convolutions: first, a Pyramid, Cascading and Deformable (PCD) alignment module, designed to handle large motions, performs feature-level alignment in a coarse-to-fine manner using deformable convolutions; second, a Temporal and Spatial Attention (TSA) fusion module applies attention in both time and space to strengthen restoration.

ppgan.apps.EDVRPredictor(output='output', weight_path=None)
Parameters:
output_path (str, optional): output folder path. Default: output.
weight_path (None, optional): path of the weights to load; if not set, the default weights are downloaded from the cloud. Default: None.

In [2]

image_sr=sr.run_image("/home/aistudio/butterfly.png")
image_sr.save('/home/aistudio/butterfly_SR.png')

3. Video Processing

3.1 Anime stylization function, with batch support

In [3]

def style_transfer(ori_image_path, target_image_path, w, h):
    # w and h are unused here; they are kept only for interface compatibility

    # read the batch of source frames (BGR, as OpenCV loads them)
    images_ori_batch = []
    for p in ori_image_path:
        ima = cv2.imread(p)
        images_ori_batch.append(ima)

    # run AnimeGAN on the whole batch at once
    np_array = model.style_transfer(images=images_ori_batch)

    # convert BGR -> RGB and save each stylized frame via PIL
    for i in range(len(np_array)):
        t_image = cv2.cvtColor(np_array[i], cv2.COLOR_BGR2RGB)
        pil_image = Image.fromarray(t_image)
        pil_image.save(target_image_path[i])

3.2 Splitting the video into frames

In [12]

timeF = 1  # save one frame every timeF reads
videoFile = '/home/aistudio/test_short.mp4'

outputFile = '/home/aistudio/piece/'
vc = cv2.VideoCapture(videoFile)
c = 1

fps = vc.get(cv2.CAP_PROP_FPS)
print(f'fps={fps}')

if vc.isOpened():
    rval, frame = vc.read()
else:
    print('open error!')
    rval = False

while rval:
    rval, frame = vc.read()
    if not rval:
        # the last read failed, so frame is None; stop instead of writing it
        break
    if c == 1:
        print(frame.shape)
        # frame.shape is (height, width, channels); note that the names are
        # swapped here: w holds the height and h holds the width, and the
        # later cells rely on exactly this (h, w) convention
        w = frame.shape[0]
        h = frame.shape[1]
    if c % timeF == 0:
        print(f'\r{c}', end="")
        # vertical flip, if needed
        # frame = cv2.flip(frame, 0)

        cv2.imwrite(outputFile + str(int(c // timeF)).zfill(7) + '.jpg', frame)

    c += 1
    cv2.waitKey(1)

vc.release()

print()
print(h,w)
fps=25.135135135135137
(960, 720, 3)
341
720 960

3.3 Sort the images and check that each one is intact

Without sorting, the result would be scrambled, because the files are not listed in 1, 2, 3, 4, 5 order by default.
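The zfill(7) used when saving the frames is what makes a plain lexicographic sort line up with numeric frame order; without the padding, sorting goes wrong as soon as the counter reaches double digits. A self-contained illustration:

```python
# Unpadded names sort lexicographically, not numerically:
unpadded = ['2.jpg', '10.jpg', '1.jpg']
print(sorted(unpadded))  # ['1.jpg', '10.jpg', '2.jpg'] -- 10 lands before 2

# Zero-padded names (as produced by str(c).zfill(7)) sort correctly:
padded = [str(c).zfill(7) + '.jpg' for c in (2, 10, 1)]
print(sorted(padded))    # ['0000001.jpg', '0000002.jpg', '0000010.jpg']
```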

In [11]

import os

ori_image_path = []  # will hold the sorted frame file names
images_files = os.listdir("/home/aistudio/piece")
for file in images_files:
    if file.endswith('.jpg'):
        # cv2.imread returns None for unreadable or corrupt files,
        # so this read doubles as an integrity check
        t = cv2.imread("/home/aistudio/piece/" + file)
        if t is not None:
            ori_image_path.append(file)
ori_image_path.sort()
n = len(ori_image_path)
print(n)
print(ori_image_path)

3.4 Frame-by-frame style transfer

In [13]

import time
from time import strftime
from time import gmtime

batch_size = 1
batch_ori = []
batch_tar = []

time_start = time.time()
for i, name in enumerate(ori_image_path):
    ori_p = "/home/aistudio/piece/" + name
    tar_p = "/home/aistudio/piece_pr/" + name
    batch_ori.append(ori_p)
    batch_tar.append(tar_p)
    # flush a full batch, or whatever remains on the last frame
    # (the original condition `i % batch_size == 0` made the very first
    # batch a single image whenever batch_size > 1)
    if (i + 1) % batch_size == 0 or i + 1 == n:
        # print(batch_ori)
        # print(batch_tar)
        style_transfer(batch_ori, batch_tar, w, h)
        batch_ori = []
        batch_tar = []
        epoch_used_time = time.time() - time_start

        # a rather crude timer: elapsed / projected total time,
        # to estimate how long is left
        used_t = strftime("%H:%M:%S", gmtime(epoch_used_time))
        total_t = strftime("%H:%M:%S", gmtime((epoch_used_time / (i + 1)) * n))

        print(f'\r{i+1}/{n} {used_t}/{total_t}', end="")
340/340 00:00:45/00:00:45
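The timer in the loop above extrapolates linearly: total time ≈ elapsed / frames_done × total_frames. Factored out as a standalone helper (the name estimate_total is mine, not from the original code):

```python
from time import strftime, gmtime

def estimate_total(elapsed_s, done, total):
    """Linearly extrapolate total runtime (seconds) from elapsed time and progress."""
    return elapsed_s / done * total

# After 45 s spent on 170 of 340 frames, the projection is 90 s overall.
total = estimate_total(45.0, 170, 340)
print(strftime("%H:%M:%S", gmtime(total)))  # 00:01:30
```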

3.5 Merge the frames back into a video, keeping the fps consistent with the original

Occasionally the downloaded video will not play, and the fps value has to be tweaked by hand; this still feels a bit like black magic.

In [15]

# encoding: UTF-8
import cv2

img_path = []
for i in range(len(ori_image_path)):
    img_path.append("/home/aistudio/piece_pr/" + ori_image_path[i])

# print(img_path)

# If the downloaded video won't play, try adjusting the frame rate.
# cv2.VideoWriter's size argument is (width, height); since h holds the
# width and w the height here, (h, w) is the correct order.
videoWriter = cv2.VideoWriter('/home/aistudio/test_result.mp4', cv2.VideoWriter_fourcc(*'mp4v'), round(fps, 3), (h, w))

for path in img_path:
    img = cv2.imread(path)
    # cv2.resize also expects (width, height)
    img = cv2.resize(img, (h, w))
    videoWriter.write(img)

# release the writer so the MP4 is finalized; leaving this out can
# produce a file that will not play
videoWriter.release()

3.6 Merge in the audio track

A video assembled from still frames has no sound, so the audio has to be carried over from the original video.

In [16]

!pip install moviepy

In [17]

from moviepy.editor import VideoFileClip

def add_mp3(video_src1, video_src2, video_dst):
    '''Embed the audio track of video_src1 into the video video_src2.'''
    video_src1 = VideoFileClip(video_src1)
    video_src2 = VideoFileClip(video_src2)
    audio = video_src1.audio
    videoclip2 = video_src2.set_audio(audio)
    videoclip2.write_videofile(video_dst, codec='libx264')

video_src1 = '/home/aistudio/test_short.mp4'
video_src2 = '/home/aistudio/test_result.mp4'
video_dst = '/home/aistudio/test_result_yinpin.mp4'
add_mp3(video_src1, video_src2, video_dst)
Moviepy - Building video /home/aistudio/test_result_yinpin.mp4.
MoviePy - Writing audio in test_result_yinpinTEMP_MPY_wvf_snd.mp3
 
MoviePy - Done.
Moviepy - Writing video /home/aistudio/test_result_yinpin.mp4

 
Moviepy - Done !
Moviepy - video ready /home/aistudio/test_result_yinpin.mp4

3.7 Video frame interpolation

This step takes quite a while, so it is optional; a 10-second video needs roughly 10 minutes.

Note: the result is written to the output directory.

The DAIN model explicitly detects occlusion by exploiting depth information, and develops a depth-aware flow projection layer to synthesize intermediate flows. It performs well at video frame interpolation.

ppgan.apps.DAINPredictor(
                        output_path='output',
                        weight_path=None,
                        time_step=None,
                        use_gpu=True,
                        remove_duplicates=False)

Parameters:

output_path (str, optional): output folder path. Default: output.

weight_path (None, optional): path of the weights to load; if not set, the default weights are downloaded from the cloud. Default: None.

time_step (float): time coefficient for interpolation; if set to 0.5, a video that was 30 fps becomes 60 fps after interpolation.

remove_duplicates (bool, optional): whether to remove duplicate frames. Default: False.
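The relation between time_step and the output frame rate can be sketched as new_fps = old_fps / time_step; the log further down shows 25.14 fps becoming 50 fps, consistent with a time_step of 0.5 plus rounding. DAIN's exact rounding behavior is not documented here, so this helper is only an approximation:

```python
def interpolated_fps(old_fps, time_step):
    """Approximate the output frame rate after interpolation:
    a time_step of 0.5 doubles the frame rate."""
    return old_fps / time_step

print(interpolated_fps(30, 0.5))            # 60.0
print(round(interpolated_fps(25.14, 0.5)))  # matches the 50 fps seen in the log
```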

To change the defaults for this project, you can edit the tools/video-enhance.py file.

In [18]

from ppgan.apps import DAINPredictor
import paddle

# Frame interpolation with DAIN
# --input: path of the input video
# --output: folder where the processed video is stored
# --process_order: which model(s) to use, and in what order
%cd /home/aistudio/PaddleGAN/applications/
!python tools/video-enhance.py --input /home/aistudio/test_result_yinpin.mp4 \
                               --process_order DAIN \
                               --output /home/aistudio/output
/home/aistudio/PaddleGAN/applications
/home/aistudio/PaddleGAN/PaddleGAN/ppgan/modules/init.py:58: DeprecationWarning: invalid escape sequence \s
  """
/home/aistudio/PaddleGAN/PaddleGAN/ppgan/modules/init.py:122: DeprecationWarning: invalid escape sequence \m
  """
/home/aistudio/PaddleGAN/PaddleGAN/ppgan/modules/init.py:147: DeprecationWarning: invalid escape sequence \m
  """
/home/aistudio/PaddleGAN/PaddleGAN/ppgan/modules/init.py:178: DeprecationWarning: invalid escape sequence \m
  """
/home/aistudio/PaddleGAN/PaddleGAN/ppgan/modules/init.py:215: DeprecationWarning: invalid escape sequence \m
  """
/home/aistudio/PaddleGAN/PaddleGAN/ppgan/modules/dense_motion.py:156: DeprecationWarning: invalid escape sequence \h
  """
Model DAIN process start..
[04/26 22:58:44] ppgan INFO: Downloading DAIN_weight.tar from https://paddlegan.bj.bcebos.com/applications/DAIN_weight.tar to /home/aistudio/.cache/ppgan/DAIN_weight.tar
100%|██████████████████████████████████| 78680/78680 [00:01<00:00, 47540.63it/s]
[04/26 22:58:46] ppgan INFO: Decompressing /home/aistudio/.cache/ppgan/DAIN_weight.tar...
Tue Apr 26 22:58:46-WARNING: The old way to load inference model is deprecated. model path: /home/aistudio/.cache/ppgan/DAIN_weight/model, params path: /home/aistudio/.cache/ppgan/DAIN_weight/params
W0426 22:58:46.768800  1381 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0426 22:58:46.773906  1381 device_context.cc:465] device: 0, cuDNN Version: 7.6.
Old fps (frame rate):  25.14
New fps (frame rate):  50
100%|█████████████████████████████████████████| 341/341 [12:47<00:00,  2.25s/it]
Model DAIN output frames path: /home/aistudio/output/DAIN/frames-combined/test_result_yinpin/%08d.png
Model DAIN output video path: /home/aistudio/output/DAIN/videos-output/test_result_yinpin.mp4
Model DAIN process done!

4. Summary: Pros and Cons

Pros

  1. I originally wanted to super-resolve a video by calling the official library directly, but GPU memory blew up even though the video was not that large (about 500x500 and 2 minutes). Processing a video directly is very memory-hungry; after splitting it into frames like this, even 2K and up to 4K video can be handled with 16 GB of GPU memory.
  2. Once split into frames, many richer operations become possible, such as anime stylization.

Cons

  1. The workflow is more involved; calling the video-processing interface directly is quicker.
  2. The frame rate when merging frames back into a video still feels like black magic; I haven't fully figured it out.
