LoRA (Low-Rank Adaptation of Large Language Models) is a technique introduced by Microsoft researchers to address the cost of fine-tuning large models. Highly capable models with billions of parameters or more (such as GPT-3) are extremely expensive to fine-tune for downstream tasks. LoRA proposes freezing the pretrained model weights and injecting trainable layers (rank-decomposition matrices) into each Transformer block. Because gradients no longer need to be computed for most of the model weights, the number of trainable parameters and the GPU memory requirements drop dramatically. The researchers found that by focusing on the Transformer attention blocks of a large model, LoRA fine-tuning matches the quality of full-model fine-tuning while being faster and requiring less compute.

Paper: https://arxiv.org/abs/2106.09685

Reference code: https://github.com/huggingface/diffusers/tree/main/examples/dreambooth

Chinese introduction: https://mp.weixin.qq.com/s/kEGwA_7qAKhIuoxPJyfNuw
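To make the idea concrete, below is a minimal sketch of a LoRA-wrapped linear layer (an illustration only, not the ppdiffusers implementation): the pretrained weight stays frozen, and only two rank-r factors are trained, scaled by alpha / r.

```python
# Minimal sketch of the LoRA idea (illustration only, not the ppdiffusers implementation).
import paddle
import paddle.nn as nn

class LoRALinear(nn.Layer):
    def __init__(self, in_features, out_features, r=4, alpha=4.0):
        super().__init__()
        # Frozen pretrained projection: no gradients are computed for these weights.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.stop_gradient = True
        self.base.bias.stop_gradient = True
        # Trainable rank-decomposition factors: only r * (in + out) extra parameters.
        self.lora_down = self.create_parameter(shape=[in_features, r])
        self.lora_up = self.create_parameter(
            shape=[r, out_features],
            default_initializer=nn.initializer.Constant(0.0))  # zero init: starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        # y = x W + (x A B) * (alpha / r), with W frozen and A, B trainable
        return self.base(x) + (x @ self.lora_down @ self.lora_up) * self.scale
```

Only the two low-rank factors receive gradients during fine-tuning, which is why the trainable parameter count, the GPU memory footprint, and the size of the saved LoRA weights are all so small.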

This project uses Guochao (Chinese national-trend) style images as the training set and trains a diffusion model that paints in the same style with DreamBooth-LoRA.

This project draws inspiration (and code) from [DreamBooth training with LoRA](https://aistudio.baidu.com/aistudio/projectdetail/5481677) and from "[diffusion] Diffusion models explained in detail: theory + code".
Thanks to the authors for their contributions.

1. Install dependencies

  • Run the cell below to install the dependencies. To make sure the installation takes effect, restart the kernel after it finishes! (Note: this only needs to be run once!)
!pip install -U paddlenlp ppdiffusers safetensors --user
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: paddlenlp in ./.data/webide/pip/lib/python3.7/site-packages (2.5.2)
Requirement already satisfied: ppdiffusers in ./.data/webide/pip/lib/python3.7/site-packages (0.11.1)
Collecting ppdiffusers
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/05/f0/f311fcaa874b1238edd332030680aace7790abe29d8ec3581d4953476475/ppdiffusers-0.14.0-py3-none-any.whl (909 kB)
Requirement already satisfied: safetensors in ./.data/webide/pip/lib/python3.7/site-packages (0.3.0)
Requirement already satisfied: uvicorn in ./.data/webide/pip/lib/python3.7/site-packages (from paddlenlp) (0.21.0)
Requirement already satisfied: huggingface-hub>=0.11.1 in ./.data/webide/pip/lib/python3.7/site-packages (from paddlenlp) (0.13.1)
Requirement already satisfied: colorama in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.4.4)
Requirement already satisfied: tqdm in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (4.64.1)
Requirement already satisfied: rich in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (13.3.2)
Requirement already satisfied: datasets>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (2.10.1)
Requirement already satisfied: fastapi in ./.data/webide/pip/lib/python3.7/site-packages (from paddlenlp) (0.94.0)
Requirement already satisfied: multiprocess<=0.70.12.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.70.11.1)
Requirement already satisfied: colorlog in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (4.1.0)
Requirement already satisfied: visualdl in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (2.4.0)
Requirement already satisfied: paddlefsl in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (1.1.0)
Requirement already satisfied: dill<0.3.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.3.3)
Requirement already satisfied: sentencepiece in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.1.96)
Requirement already satisfied: jieba in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (0.42.1)
Requirement already satisfied: Flask-Babel<3.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (1.0.0)
Requirement already satisfied: seqeval in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (1.2.2)
Requirement already satisfied: paddle2onnx in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp) (1.0.0)
Requirement already satisfied: typer in ./.data/webide/pip/lib/python3.7/site-packages (from paddlenlp) (0.7.0)
Requirement already satisfied: ftfy in ./.data/webide/pip/lib/python3.7/site-packages (from ppdiffusers) (6.1.1)
Requirement already satisfied: regex in ./.data/webide/pip/lib/python3.7/site-packages (from ppdiffusers) (2022.10.31)
Requirement already satisfied: Pillow in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from ppdiffusers) (8.2.0)
Requirement already satisfied: numpy>=1.17 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (1.19.5)
Requirement already satisfied: aiohttp in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (3.8.4)
Requirement already satisfied: pyarrow>=6.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (11.0.0)
Requirement already satisfied: xxhash in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (3.2.0)
Requirement already satisfied: pandas in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (1.1.5)
Requirement already satisfied: importlib-metadata in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (4.2.0)
Requirement already satisfied: packaging in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (21.3)
Requirement already satisfied: pyyaml>=5.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (5.1.2)
Requirement already satisfied: responses<0.19 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (0.18.0)
Requirement already satisfied: fsspec[http]>=2021.11.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (2023.1.0)
Requirement already satisfied: requests>=2.19.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from datasets>=2.0.0->paddlenlp) (2.24.0)
Requirement already satisfied: Babel>=2.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel<3.0.0->paddlenlp) (2.8.0)
Requirement already satisfied: Flask in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel<3.0.0->paddlenlp) (1.1.1)
Requirement already satisfied: pytz in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel<3.0.0->paddlenlp) (2019.3)
Requirement already satisfied: Jinja2>=2.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel<3.0.0->paddlenlp) (3.0.0)
Requirement already satisfied: filelock in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from huggingface-hub>=0.11.1->paddlenlp) (3.0.12)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from huggingface-hub>=0.11.1->paddlenlp) (4.3.0)
Requirement already satisfied: pydantic!=1.7,!=1.7.1,!=1.7.2,!=1.7.3,!=1.8,!=1.8.1,<2.0.0,>=1.6.2 in ./.data/webide/pip/lib/python3.7/site-packages (from fastapi->paddlenlp) (1.10.6)
Requirement already satisfied: starlette<0.27.0,>=0.26.0 in ./.data/webide/pip/lib/python3.7/site-packages (from fastapi->paddlenlp) (0.26.0.post1)
Requirement already satisfied: wcwidth>=0.2.5 in ./.data/webide/pip/lib/python3.7/site-packages (from ftfy->ppdiffusers) (0.2.6)
Requirement already satisfied: markdown-it-py<3.0.0,>=2.2.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from rich->paddlenlp) (2.2.0)
Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from rich->paddlenlp) (2.13.0)
Requirement already satisfied: scikit-learn>=0.21.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from seqeval->paddlenlp) (0.24.2)
Collecting click<9.0.0,>=7.1.1
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/c2/f1/df59e28c642d583f7dacffb1e0965d0e00b218e0186d7858ac5233dce840/click-8.1.3-py3-none-any.whl (96 kB)
Requirement already satisfied: h11>=0.8 in ./.data/webide/pip/lib/python3.7/site-packages (from uvicorn->paddlenlp) (0.14.0)
Requirement already satisfied: bce-python-sdk in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (0.8.53)
Requirement already satisfied: protobuf>=3.11.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (3.20.0)
Requirement already satisfied: six>=1.14.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (1.16.0)
Requirement already satisfied: matplotlib in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp) (2.2.3)
Requirement already satisfied: itsdangerous>=0.24 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask->Flask-Babel<3.0.0->paddlenlp) (1.1.0)
Requirement already satisfied: Werkzeug>=0.15 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask->Flask-Babel<3.0.0->paddlenlp) (0.16.0)
Requirement already satisfied: attrs>=17.3.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from aiohttp->datasets>=2.0.0->paddlenlp) (22.1.0)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from aiohttp->datasets>=2.0.0->paddlenlp) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from aiohttp->datasets>=2.0.0->paddlenlp) (1.8.2)
Requirement already satisfied: aiosignal>=1.1.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from aiohttp->datasets>=2.0.0->paddlenlp) (1.3.1)
Requirement already satisfied: asynctest==0.13.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from aiohttp->datasets>=2.0.0->paddlenlp) (0.13.0)
Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from aiohttp->datasets>=2.0.0->paddlenlp) (3.0.1)
Requirement already satisfied: frozenlist>=1.1.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from aiohttp->datasets>=2.0.0->paddlenlp) (1.3.3)
Requirement already satisfied: multidict<7.0,>=4.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from aiohttp->datasets>=2.0.0->paddlenlp) (6.0.4)
Requirement already satisfied: MarkupSafe>=2.0.0rc2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Jinja2>=2.5->Flask-Babel<3.0.0->paddlenlp) (2.0.1)
Requirement already satisfied: mdurl~=0.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from markdown-it-py<3.0.0,>=2.2.0->rich->paddlenlp) (0.1.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from packaging->datasets>=2.0.0->paddlenlp) (3.0.9)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests>=2.19.0->datasets>=2.0.0->paddlenlp) (2019.9.11)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests>=2.19.0->datasets>=2.0.0->paddlenlp) (1.25.11)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests>=2.19.0->datasets>=2.0.0->paddlenlp) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests>=2.19.0->datasets>=2.0.0->paddlenlp) (2.8)
Requirement already satisfied: threadpoolctl>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (2.1.0)
Requirement already satisfied: joblib>=0.11 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (0.14.1)
Requirement already satisfied: scipy>=0.19.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp) (1.6.3)
Requirement already satisfied: anyio<5,>=3.4.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from starlette<0.27.0,>=0.26.0->fastapi->paddlenlp) (3.6.1)
Requirement already satisfied: pycryptodome>=3.8.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from bce-python-sdk->visualdl->paddlenlp) (3.9.9)
Requirement already satisfied: future>=0.6.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from bce-python-sdk->visualdl->paddlenlp) (0.18.0)
Requirement already satisfied: zipp>=0.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from importlib-metadata->datasets>=2.0.0->paddlenlp) (3.8.1)
Requirement already satisfied: python-dateutil>=2.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->visualdl->paddlenlp) (2.8.2)
Requirement already satisfied: cycler>=0.10 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->visualdl->paddlenlp) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from matplotlib->visualdl->paddlenlp) (1.1.0)
Requirement already satisfied: sniffio>=1.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from anyio<5,>=3.4.0->starlette<0.27.0,>=0.26.0->fastapi->paddlenlp) (1.3.0)
Requirement already satisfied: setuptools in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib->visualdl->paddlenlp) (56.2.0)
Installing collected packages: click, ppdiffusers
  Attempting uninstall: ppdiffusers
    Found existing installation: ppdiffusers 0.11.1
    Uninstalling ppdiffusers-0.11.1:
      Successfully uninstalled ppdiffusers-0.11.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
parl 1.4.1 requires pyzmq==18.1.1, but you have pyzmq 23.2.1 which is incompatible.
Successfully installed click-8.1.3 ppdiffusers-0.14.0

[notice] A new release of pip available: 22.1.2 -> 23.0.1
[notice] To update, run: pip install --upgrade pip
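After restarting the kernel, an optional quick check (assuming both packages expose __version__, as they normally do) confirms that the upgraded versions are the ones being imported:

```python
# Optional sanity check after restarting the kernel.
import paddlenlp
import ppdiffusers

print(paddlenlp.__version__)    # e.g. 2.5.2
print(ppdiffusers.__version__)  # should be 0.14.0 or newer after the upgrade above
```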

2. Prepare the data

Dataset: this project uses a dataset of Guochao-style illustrations.

# Prepare the training images
# Unzip the images into the target folder
# For better results, you can manually filter the dataset and remove low-quality images or images whose style differs too much
import os
if not os.path.exists("work/5_guos"):
    !unzip -oq /home/aistudio/data/data190562/国潮2.zip -d work/5_guos
    #!unzip -oq work/5_guos.zip -d work  # a copy of the dataset that I filtered by hand
import matplotlib.pyplot as plt
from PIL import Image
%matplotlib inline

# Display a grid of images
def show_images(imgs_paths, cols=4):
    num_samples = len(imgs_paths)
    plt.figure(figsize=(15, 15))
    for i, img_path in enumerate(imgs_paths):
        img = Image.open(img_path)
        plt.subplot(num_samples // cols + 1, cols, i + 1)
        plt.imshow(img)

imgs_paths = [
    "work/5_guos/1 (1).png", "work/5_guos/1 (10).png", "work/5_guos/1 (16).png", "work/5_guos/1 (32).png",
    "work/5_guos/1 (47).png", "work/5_guos/1 (53).png", "work/5_guos/1 (6).png", "work/5_guos/1 (69).png"
]
show_images(imgs_paths)

[Figure: sample images from the Guochao training set]

# Pad and resize the images: images with different shapes are converted to uniform 512x512 squares
from PIL import Image
import os

# Iterate over all files in the folder
for file in os.listdir("work/5_guos"):
    # Only process image files
    if file.endswith(".jpg") or file.endswith(".png") or file.endswith(".bmp"):
        # Open the original image
        img = Image.open("work/5_guos/" + file)
        # Get the original width and height
        width, height = img.size
        # The longer side determines the scaling factor
        size = max(width, height)
        # Create a blank white 512x512 square canvas
        new_img = Image.new("RGB", (512, 512), (255, 255, 255))
        # Compute the scaled width and height (the longer side becomes 512)
        new_width = int(width * 512 / size)
        new_height = int(height * 512 / size)
        # Resize the original image proportionally
        resized_img = img.resize((new_width, new_height))
        # Compute the top-left corner for a centered paste
        x = (512 - new_width) // 2
        y = (512 - new_height) // 2
        # Paste the resized image onto the center of the canvas
        new_img.paste(resized_img, (x, y))
        # Save the square image back to work/5_guos, keeping the original file name
        new_img.save("work/5_guos/" + file)
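An optional quick check that every processed image is now a 512x512 square:

```python
# Optional: verify that all processed images are 512x512 after padding and resizing.
from PIL import Image
import os

for file in os.listdir("work/5_guos"):
    if file.endswith((".jpg", ".png", ".bmp")):
        assert Image.open("work/5_guos/" + file).size == (512, 512), file
```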

3. Start training

Parameter explanations:

  • pretrained_model_name_or_path: name of the base model to fine-tune, e.g. "runwayml/stable-diffusion-v1-5"; see the paddlenlp documentation for more models.
  • instance_data_dir: path to the training images.
  • instance_prompt: the prompt text used for training.
  • resolution: image size used during training; 512 or 768 is recommended.
  • train_batch_size: batch size used during training; can be left unchanged.
  • gradient_accumulation_steps: number of gradient-accumulation steps; can be left unchanged.
  • checkpointing_steps: save the model every this many steps.
  • learning_rate: learning rate used for training.
  • report_to: export the images produced during training to the VisualDL tool.
  • lr_scheduler: learning-rate schedule; options include "linear", "constant", "cosine", "cosine_with_restarts", etc.
  • lr_warmup_steps: number of warmup steps needed to reach the maximum learning rate before the schedule starts decaying.
  • max_train_steps: maximum number of training steps.
  • validation_prompt: the model is evaluated periodically during training, so a prompt for evaluation needs to be set.
  • validation_epochs: evaluate the model every this many epochs; the progress bar shows the current epoch.
  • validation_guidance_scale: CFG guidance scale used during evaluation; defaults to 5.0.
  • seed: random seed; setting it makes the training results reproducible.
  • lora_rank: the LoRA rank; defaults to 128, consistent with the open-source version.
  • use_lion: whether to use the Lion optimizer; pass --use_lion True to enable it or --use_lion False to disable it.

DreamBooth LoRA training command:

!python train_dreambooth_lora.py \
  --pretrained_model_name_or_path="Linaqruf/anything-v3.0"  \
  --instance_data_dir="/home/aistudio/work/5_guos" \
  --output_dir="./dream_booth_lora_outputs" \
  --instance_prompt="A photo of chinese traditional <guochao> girl" \
  --resolution=768 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --checkpointing_steps=100 \
  --learning_rate=1e-4 \
  --report_to="visualdl" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=400 \
  --max_train_steps=6300 \
  --lora_rank=128 \
  --validation_prompt="A photo of chinese traditional girl" \
  --validation_epochs=150 \
  --validation_guidance_scale=5.0 \
  --use_lion False \
  --seed=0
100%|███████████████████████████████████████████| 825/825 [00:00<00:00, 663kB/s]
100%|██████████████████████████████████████| 1.01M/1.01M [00:00<00:00, 32.4MB/s]
100%|████████████████████████████████████████| 512k/512k [00:00<00:00, 23.2MB/s]
100%|████████████████████████████████████████| 2.00/2.00 [00:00<00:00, 1.66kB/s]
100%|███████████████████████████████████████████| 389/389 [00:00<00:00, 327kB/s]
100%|███████████████████████████████████████████| 267/267 [00:00<00:00, 258kB/s]
100%|███████████████████████████████████████████| 342/342 [00:00<00:00, 228kB/s]
W0407 13:38:37.303400   670 gpu_resources.cc:85] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W0407 13:38:37.306448   670 gpu_resources.cc:115] device: 0, cuDNN Version: 8.2.
100%|████████████████████████████████████████| 469M/469M [00:30<00:00, 16.3MB/s]
100%|████████████████████████████████████████| 319M/319M [00:20<00:00, 16.1MB/s]
100%|███████████████████████████████████████████| 549/549 [00:00<00:00, 380kB/s]
100%|██████████████████████████████████████| 3.20G/3.20G [02:39<00:00, 21.6MB/s]
100%|███████████████████████████████████████████| 745/745 [00:00<00:00, 624kB/s]
Train Steps:   1%|▊                                                                                                                    | 42/6300 [00:45<1:49:58,  1.05s/it, epoch=0000, step_loss=0.292]
100%|███████████████████████████████████████████| 581/581 [00:00<00:00, 206kB/s]

100%|███████████████████████████████████████████| 342/342 [00:00<00:00, 159kB/s]
Train Steps:   2%|█▊                                                                                                                  | 100/6300 [02:09<1:49:11,  1.06s/it, epoch=0002, step_loss=0.122]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-100
Train Steps:   3%|███▋                                                                                                                | 200/6300 [04:04<1:48:43,  1.07s/it, epoch=0004, step_loss=0.301]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-200
Train Steps:   5%|█████▌                                                                                                              | 300/6300 [06:00<1:45:59,  1.06s/it, epoch=0007, step_loss=0.078]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-300
Train Steps:   6%|███████▎                                                                                                           | 400/6300 [07:57<1:43:48,  1.06s/it, epoch=0009, step_loss=0.0247]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-400
Train Steps:   8%|█████████▏                                                                                                         | 500/6300 [09:54<1:44:39,  1.08s/it, epoch=0011, step_loss=0.0311]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-500
Train Steps:  10%|███████████                                                                                                         | 600/6300 [11:51<1:40:09,  1.05s/it, epoch=0014, step_loss=0.236]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-600
Train Steps:  11%|████████████▉                                                                                                       | 700/6300 [13:47<1:38:55,  1.06s/it, epoch=0016, step_loss=0.271]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-700
Train Steps:  13%|██████████████▌                                                                                                    | 800/6300 [15:45<1:37:38,  1.07s/it, epoch=0019, step_loss=0.0103]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-800
Train Steps:  14%|████████████████▍                                                                                                  | 900/6300 [17:36<1:35:07,  1.06s/it, epoch=0021, step_loss=0.0446]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-900
Train Steps:  16%|██████████████████                                                                                                | 1000/6300 [19:29<1:34:01,  1.06s/it, epoch=0023, step_loss=0.0852]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1000
Train Steps:  17%|████████████████████                                                                                               | 1100/6300 [21:27<1:32:22,  1.07s/it, epoch=0026, step_loss=0.109]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1100
Train Steps:  19%|█████████████████████▌                                                                                           | 1200/6300 [23:21<1:30:16,  1.06s/it, epoch=0028, step_loss=0.00968]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1200
Train Steps:  21%|███████████████████████▋                                                                                           | 1300/6300 [25:14<1:27:54,  1.05s/it, epoch=0030, step_loss=0.209]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1300
Train Steps:  22%|█████████████████████████                                                                                        | 1400/6300 [27:06<1:28:26,  1.08s/it, epoch=0033, step_loss=0.00429]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1400
Train Steps:  24%|███████████████████████████▍                                                                                       | 1500/6300 [28:59<1:24:46,  1.06s/it, epoch=0035, step_loss=0.096]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1500
Train Steps:  25%|█████████████████████████████▏                                                                                     | 1600/6300 [30:52<1:23:27,  1.07s/it, epoch=0038, step_loss=0.165]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1600
Train Steps:  27%|███████████████████████████████                                                                                    | 1700/6300 [32:43<1:20:54,  1.06s/it, epoch=0040, step_loss=0.263]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1700
Train Steps:  29%|████████████████████████████████▊                                                                                  | 1800/6300 [34:35<1:19:20,  1.06s/it, epoch=0042, step_loss=0.164]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1800
Train Steps:  30%|██████████████████████████████████▍                                                                               | 1900/6300 [36:26<1:17:22,  1.06s/it, epoch=0045, step_loss=0.0734]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-1900
Train Steps:  32%|████████████████████████████████████▏                                                                             | 2000/6300 [38:17<1:15:34,  1.05s/it, epoch=0047, step_loss=0.0539]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2000
Train Steps:  33%|██████████████████████████████████████▎                                                                            | 2100/6300 [40:08<1:14:26,  1.06s/it, epoch=0049, step_loss=0.101]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2100
Train Steps:  35%|███████████████████████████████████████▍                                                                         | 2200/6300 [41:59<1:12:02,  1.05s/it, epoch=0052, step_loss=0.00831]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2200
Train Steps:  37%|█████████████████████████████████████████▉                                                                         | 2300/6300 [43:51<1:10:27,  1.06s/it, epoch=0054, step_loss=0.427]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2300
Train Steps:  38%|███████████████████████████████████████████▊                                                                       | 2400/6300 [45:46<1:09:11,  1.06s/it, epoch=0057, step_loss=0.109]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2400
Train Steps:  40%|█████████████████████████████████████████████▋                                                                     | 2500/6300 [47:41<1:06:37,  1.05s/it, epoch=0059, step_loss=0.174]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2500
Train Steps:  41%|███████████████████████████████████████████████                                                                   | 2600/6300 [49:38<1:05:01,  1.05s/it, epoch=0061, step_loss=0.0362]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2600
Train Steps:  43%|█████████████████████████████████████████████████▎                                                                 | 2700/6300 [51:35<1:03:24,  1.06s/it, epoch=0064, step_loss=0.191]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2700
Train Steps:  44%|███████████████████████████████████████████████████                                                                | 2800/6300 [53:30<1:01:50,  1.06s/it, epoch=0066, step_loss=0.324]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2800
Train Steps:  46%|████████████████████████████████████████████████████▉                                                              | 2900/6300 [55:22<1:00:14,  1.06s/it, epoch=0069, step_loss=0.101]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-2900
Train Steps:  48%|███████████████████████████████████████████████████████▏                                                            | 3000/6300 [57:14<58:23,  1.06s/it, epoch=0071, step_loss=0.0544]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3000
Train Steps:  49%|█████████████████████████████████████████████████████████                                                           | 3100/6300 [59:05<56:22,  1.06s/it, epoch=0073, step_loss=0.0531]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3100
Train Steps:  51%|██████████████████████████████████████████████████████████▍                                                        | 3200/6300 [1:00:56<54:48,  1.06s/it, epoch=0076, step_loss=0.167]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3200
Train Steps:  52%|███████████████████████████████████████████████████████████▋                                                      | 3300/6300 [1:02:47<52:44,  1.05s/it, epoch=0078, step_loss=0.0789]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3300
Train Steps:  54%|██████████████████████████████████████████████████████████████                                                     | 3400/6300 [1:04:38<51:36,  1.07s/it, epoch=0080, step_loss=0.152]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3400
Train Steps:  56%|███████████████████████████████████████████████████████████████▉                                                   | 3500/6300 [1:06:34<49:34,  1.06s/it, epoch=0083, step_loss=0.236]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3500
Train Steps:  57%|█████████████████████████████████████████████████████████████████▏                                                | 3600/6300 [1:08:28<47:13,  1.05s/it, epoch=0085, step_loss=0.0228]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3600
Train Steps:  59%|██████████████████████████████████████████████████████████████████▉                                               | 3700/6300 [1:10:26<45:46,  1.06s/it, epoch=0088, step_loss=0.0802]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3700
Train Steps:  60%|████████████████████████████████████████████████████████████████████▊                                             | 3800/6300 [1:12:22<43:46,  1.05s/it, epoch=0090, step_loss=0.0585]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3800
Train Steps:  62%|██████████████████████████████████████████████████████████████████████▌                                           | 3900/6300 [1:14:16<42:14,  1.06s/it, epoch=0092, step_loss=0.0182]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-3900
Train Steps:  63%|█████████████████████████████████████████████████████████████████████████                                          | 4000/6300 [1:16:10<40:27,  1.06s/it, epoch=0095, step_loss=0.109]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4000
Train Steps:  65%|██████████████████████████████████████████████████████████████████████████▏                                       | 4100/6300 [1:18:04<38:51,  1.06s/it, epoch=0097, step_loss=0.0385]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4100
Train Steps:  67%|████████████████████████████████████████████████████████████████████████████                                      | 4200/6300 [1:20:00<36:44,  1.05s/it, epoch=0099, step_loss=0.0563]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4200
Train Steps:  68%|██████████████████████████████████████████████████████████████████████████████▍                                    | 4300/6300 [1:21:52<35:10,  1.06s/it, epoch=0102, step_loss=0.157]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4300
Train Steps:  70%|███████████████████████████████████████████████████████████████████████████████▌                                  | 4400/6300 [1:23:45<33:23,  1.05s/it, epoch=0104, step_loss=0.0452]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4400
Train Steps:  71%|██████████████████████████████████████████████████████████████████████████████████▏                                | 4500/6300 [1:25:39<31:41,  1.06s/it, epoch=0107, step_loss=0.029]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4500
Train Steps:  73%|███████████████████████████████████████████████████████████████████████████████████▉                               | 4600/6300 [1:27:33<29:45,  1.05s/it, epoch=0109, step_loss=0.157]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4600
Train Steps:  75%|██████████████████████████████████████████████████████████████████████████████████████▌                             | 4700/6300 [1:29:26<27:59,  1.05s/it, epoch=0111, step_loss=0.15]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4700
Train Steps:  76%|██████████████████████████████████████████████████████████████████████████████████████▊                           | 4800/6300 [1:31:23<26:19,  1.05s/it, epoch=0114, step_loss=0.0156]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4800
Train Steps:  78%|█████████████████████████████████████████████████████████████████████████████████████████▍                         | 4900/6300 [1:33:19<24:53,  1.07s/it, epoch=0116, step_loss=0.282]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-4900
Train Steps:  79%|██████████████████████████████████████████████████████████████████████████████████████████▍                       | 5000/6300 [1:35:15<23:13,  1.07s/it, epoch=0119, step_loss=0.0539]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5000
Train Steps:  81%|████████████████████████████████████████████████████████████████████████████████████████████▎                     | 5100/6300 [1:37:10<21:04,  1.05s/it, epoch=0121, step_loss=0.0495]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5100
Train Steps:  83%|██████████████████████████████████████████████████████████████████████████████████████████████                    | 5200/6300 [1:39:04<19:13,  1.05s/it, epoch=0123, step_loss=0.0451]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5200
Train Steps:  84%|███████████████████████████████████████████████████████████████████████████████████████████████                  | 5300/6300 [1:40:58<17:31,  1.05s/it, epoch=0126, step_loss=0.00924]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5300
Train Steps:  86%|████████████████████████████████████████████████████████████████████████████████████████████████▊                | 5400/6300 [1:42:52<15:58,  1.07s/it, epoch=0128, step_loss=0.00588]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5400
Train Steps:  87%|███████████████████████████████████████████████████████████████████████████████████████████████████▌              | 5500/6300 [1:44:45<13:57,  1.05s/it, epoch=0130, step_loss=0.0942]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5500
Train Steps:  89%|██████████████████████████████████████████████████████████████████████████████████████████████████████▏            | 5600/6300 [1:46:41<12:20,  1.06s/it, epoch=0133, step_loss=0.243]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5600
Train Steps:  90%|████████████████████████████████████████████████████████████████████████████████████████████████████████           | 5700/6300 [1:48:41<10:35,  1.06s/it, epoch=0135, step_loss=0.284]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5700
Train Steps:  92%|████████████████████████████████████████████████████████████████████████████████████████████████████████▉         | 5800/6300 [1:50:43<08:49,  1.06s/it, epoch=0138, step_loss=0.0669]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5800
Train Steps:  94%|███████████████████████████████████████████████████████████████████████████████████████████████████████████▋       | 5900/6300 [1:52:44<07:01,  1.05s/it, epoch=0140, step_loss=0.129]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-5900
Train Steps:  95%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████▌     | 6000/6300 [1:54:44<05:20,  1.07s/it, epoch=0142, step_loss=0.162]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-6000
Train Steps:  97%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████▎   | 6100/6300 [1:56:44<03:30,  1.05s/it, epoch=0145, step_loss=0.102]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-6100
Train Steps:  98%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 6200/6300 [1:58:42<01:45,  1.05s/it, epoch=0147, step_loss=0.0188]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-6200
Train Steps: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6300/6300 [2:00:42<00:00,  1.05s/it, epoch=0149, step_loss=0.0992]Saved lora weights to ./dream_booth_lora_outputs/checkpoint-6300
Saved final lora weights to ./dream_booth_lora_outputs
Train Steps: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6300/6300 [2:01:13<00:00,  1.15s/it, epoch=0149, step_loss=0.0992]

4. Launch VisualDL to view the images generated during training
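The training command above passes --report_to="visualdl", so intermediate validation images are written as VisualDL records. A rough way to start the VisualDL service from a notebook cell is sketched below; the --logdir value is an assumption, point it at wherever the training run actually wrote its VisualDL logs.

```python
# The logdir below is an assumption; adjust it to the directory that contains the VisualDL records.
!visualdl --logdir ./dream_booth_lora_outputs --port 8040
```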


5. Load the trained weights and run inference

import lora_helper
from allinone import StableDiffusionPipelineAllinOne
from ppdiffusers import DPMSolverMultistepScheduler
import paddle
# Base model; the weights must be a Paddle-format version (more base models will be added in the future)
pretrained_model_name_or_path = "Linaqruf/anything-v3.0"

# Load the safetensors version of the LoRA weights
lora_outputs_path = "dream_booth_lora_outputs/checkpoint-5900/text_encoder_unet_lora.safetensors"

# Load the base pipeline
pipe = StableDiffusionPipelineAllinOne.from_pretrained(pretrained_model_name_or_path, safety_checker=None)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# Apply the LoRA weights
from IPython.display import clear_output, display
clear_output()
pipe.apply_lora(lora_outputs_path)
|---------------当前的rank是 128!
|---------------当前的alpha是 128.0!
Loading lora_weights successfully!


```python
import lora_helper
from allinone import StableDiffusionPipelineAllinOne
from ppdiffusers import DPMSolverMultistepScheduler

number               = 4
prompt               = "A photo of chinese traditional <guochao> girl"
negative_prompt      = ""
guidance_scale       = 6
num_inference_steps  = 60
height               = 768
width                = 512

for i in range(number):
    img = pipe(prompt, negative_prompt=negative_prompt, guidance_scale=guidance_scale, height=height, width=width, num_inference_steps=num_inference_steps).images[0]
    #save_image_info(image, path = './jpgout/')
    display(img)
    display(img.argument)
```

[Generated image]

{'prompt': 'A photo of chinese traditional <guochao> girl',
 'negative_prompt': '',
 'height': 768,
 'width': 512,
 'num_inference_steps': 60,
 'guidance_scale': 6,
 'num_images_per_prompt': 1,
 'eta': 0.0,
 'seed': 485493554,
 'latents': None,
 'max_embeddings_multiples': 1,
 'no_boseos_middle': False,
 'skip_parsing': False,
 'skip_weighting': False,
 'epoch_time': 1680938338.6726773}



[Generated image]

{'prompt': 'A photo of chinese traditional <guochao> girl',
 'negative_prompt': '',
 'height': 768,
 'width': 512,
 'num_inference_steps': 60,
 'guidance_scale': 6,
 'num_images_per_prompt': 1,
 'eta': 0.0,
 'seed': 1163931350,
 'latents': None,
 'max_embeddings_multiples': 1,
 'no_boseos_middle': False,
 'skip_parsing': False,
 'skip_weighting': False,
 'epoch_time': 1680938361.4862285}



[Generated image]

{'prompt': 'A photo of chinese traditional <guochao> girl',
 'negative_prompt': '',
 'height': 768,
 'width': 512,
 'num_inference_steps': 60,
 'guidance_scale': 6,
 'num_images_per_prompt': 1,
 'eta': 0.0,
 'seed': 1897642367,
 'latents': None,
 'max_embeddings_multiples': 1,
 'no_boseos_middle': False,
 'skip_parsing': False,
 'skip_weighting': False,
 'epoch_time': 1680938384.2930317}



[Generated image]

{'prompt': 'A photo of chinese traditional <guochao> girl',
 'negative_prompt': '',
 'height': 768,
 'width': 512,
 'num_inference_steps': 60,
 'guidance_scale': 6,
 'num_images_per_prompt': 1,
 'eta': 0.0,
 'seed': 197296557,
 'latents': None,
 'max_embeddings_multiples': 1,
 'no_boseos_middle': False,
 'skip_parsing': False,
 'skip_weighting': False,
 'epoch_time': 1680938407.093823}
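The save_image_info call in the generation loop above is left commented out. As a rough sketch, assuming img.argument is JSON-serializable (as the dictionaries above suggest), each sample and the arguments that produced it (including the seed) could be written to disk like this; the ./jpgout/ path is only an example:

```python
# Minimal sketch: save each generated image and the arguments that produced it.
import os
import json

os.makedirs("./jpgout", exist_ok=True)
for i in range(number):
    img = pipe(prompt, negative_prompt=negative_prompt, guidance_scale=guidance_scale,
               height=height, width=width, num_inference_steps=num_inference_steps).images[0]
    img.save(f"./jpgout/sample_{i}.png")
    # img.argument is attached by the pipeline (see the dictionaries above); saving it keeps the seed.
    with open(f"./jpgout/sample_{i}.json", "w") as f:
        json.dump(img.argument, f, ensure_ascii=False, indent=2)
```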

This article is a repost of an original AI Studio project.
