<!--Copyright 2025 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# How to run Stable Diffusion with Core ML

[Core ML](https://developer.apple.com/documentation/coreml) is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift.

Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it's running on, Core ML can also mix and match compute engines, so some portions of the model may run on the CPU while others run on the GPU, for example.

<Tip>

You can also run the `diffusers` Python codebase on Apple Silicon Macs using the `mps` accelerator built into PyTorch. This approach is explained in depth in the [mps guide](mps), but it is not compatible with native apps.

</Tip>

## Stable Diffusion Core ML checkpoints

Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before using them.

Thankfully, Apple engineers developed [a conversion tool](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml) based on `diffusers` to convert PyTorch checkpoints to Core ML.

Before you convert a model, though, take a moment to explore the Hugging Face Hub: chances are the model you're interested in is already available in Core ML format:

- the [Apple](https://huggingface.co/apple) organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base
- the [coreml community](https://huggingface.co/coreml-community) includes custom fine-tuned models
- use this [filter](https://huggingface.co/models?pipeline_tag=text-to-image&library=coreml&p=2&sort=likes) to return all available Core ML checkpoints

If you can't find the model you're interested in, we recommend you follow Apple's [Converting Models to Core ML](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml) instructions.

## Selecting the Core ML variant to use

Stable Diffusion models can be converted to different Core ML variants intended for different purposes:

- The type of attention blocks used. The attention operation is used to "pay attention" to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants:
    * `split_einsum` ([introduced by Apple](https://machinelearning.apple.com/research/neural-engine-transformers)) is optimized for ANE devices, which are available in modern iPhones, iPads, and M-series computers.
    * The "original" attention (the base implementation used in `diffusers`) is only compatible with the CPU/GPU, not the ANE. It can be *faster* to run your model on CPU + GPU using `original` attention than on the ANE. See [this performance benchmark](https://huggingface.co/blog/fast-mac-diffusers#performance-benchmarks) as well as some [additional measures provided by the community](https://github.com/huggingface/swift-coreml-diffusers/issues/31) for more details.

- The supported inference framework.
    * `packages` are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don't need to support native apps. For example, an application with a web UI could perfectly well use a Python Core ML backend.
    * `compiled` models are required for Swift code. The `compiled` models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the [`--chunk-unet` conversion option](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). If you want to support native apps, you need to select the `compiled` variant.

The official Core ML Stable Diffusion [models](https://huggingface.co/apple/coreml-stable-diffusion-v1-4/tree/main) include these variants, but the community ones may vary:

```
coreml-stable-diffusion-v1-4
├── README.md
├── original
│   ├── compiled
│   └── packages
└── split_einsum
    ├── compiled
    └── packages
```

You can download and use the variant you need as shown below.

## Core ML inference in Python

Install the following libraries to run Core ML inference in Python:

```bash
pip install huggingface_hub
pip install git+https://github.com/apple/ml-stable-diffusion
```

### Download the model checkpoints

To run inference in Python, use one of the versions stored in the `packages` folders, because the `compiled` ones are only compatible with Swift. You may choose whether you want to use `original` or `split_einsum` attention.

This is how you'd download the `original` attention variant from the Hub to a directory called `models`:

```Python
from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/packages"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```

### Inference[[python-inference]]

Once you have downloaded a snapshot of the model, you can test it using Apple's Python script.

```shell
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i ./models/coreml-stable-diffusion-v1-4_original_packages/original/packages -o </path/to/output/image> --compute-unit CPU_AND_GPU --seed 93
```

Pass the path of the downloaded checkpoint to the script with the `-i` flag. `--compute-unit` indicates the hardware you want to allow for inference. It must be one of the following options: `ALL`, `CPU_AND_GPU`, `CPU_ONLY`, `CPU_AND_NE`. You may also provide an optional output path and a seed for reproducibility.

The inference script assumes you're using the original version of the Stable Diffusion model, `CompVis/stable-diffusion-v1-4`. If you use another model, you *have* to specify its Hub id in the inference command line using the `--model-version` option. This works for models already supported as well as custom models you trained or fine-tuned yourself.

For example, if you want to use [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5):

```shell
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version stable-diffusion-v1-5/stable-diffusion-v1-5
```

## Core ML inference in Swift

Running inference in Swift is slightly faster than in Python because the models are already compiled in the `mlmodelc` format. This is noticeable on app startup when the model is loaded, but shouldn't be noticeable if you run several generations afterwards.

### Download

To run inference in Swift on your Mac, you need one of the `compiled` checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the `compiled` variants:

```Python
from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/compiled"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```

### Inference[[swift-inference]]

To run inference, please clone Apple's repo:

```bash
git clone https://github.com/apple/ml-stable-diffusion
cd ml-stable-diffusion
```

And then use Apple's command line tool, [Swift Package Manager](https://www.swift.org/package-manager/#):

```bash
swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars"
```

You have to specify in `--resource-path` one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension `.mlmodelc`. The `--compute-units` option has to be one of these values: `all`, `cpuOnly`, `cpuAndGPU`, `cpuAndNeuralEngine`.

For more details, please refer to the [instructions in Apple's repo](https://github.com/apple/ml-stable-diffusion).

## Supported Diffusers features

The Core ML models and inference code don't support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind:

- Core ML models are only suitable for inference. They can't be used for training or fine-tuning.
- Only two schedulers have been ported to Swift: the default one used by Stable Diffusion and `DPMSolverMultistepScheduler`, which we ported to Swift from our `diffusers` implementation. We recommend you use `DPMSolverMultistepScheduler`, since it produces the same quality in about half the steps.
- Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet.

Apple's [conversion and inference repo](https://github.com/apple/ml-stable-diffusion) and our own [swift-coreml-diffusers](https://github.com/huggingface/swift-coreml-diffusers) repo are intended as technology demonstrators to enable other developers to build upon.

If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂.

## Native Diffusers Swift app

One easy way to run Stable Diffusion on your own Apple hardware is to use [our open-source Swift repo](https://github.com/huggingface/swift-coreml-diffusers), based on `diffusers` and Apple's conversion and inference repo. You can study the code, compile it with [Xcode](https://developer.apple.com/xcode/), and adapt it for your own needs. For your convenience, there's also a [standalone Mac app in the App Store](https://apps.apple.com/app/diffusers/id1666309574), so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can't wait to see what you'll build 🙂.
<!--Copyright 2025 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> [[open-in-colab]] # 快速上手 训练扩散模型,是为了对随机高斯噪声进行逐步去噪,以生成令人感兴趣的样本,比如图像或者语音。 扩散模型的发展引起了人们对生成式人工智能的极大兴趣,你可能已经在网上见过扩散生成的图像了。🧨 Diffusers库的目的是让大家更易上手扩散模型。 无论你是开发人员还是普通用户,本文将向你介绍🧨 Diffusers 并帮助你快速开始生成内容! 🧨 Diffusers 库的三个主要组件: 无论你是开发者还是普通用户,这个快速指南将向你介绍🧨 Diffusers,并帮助你快速使用和生成!该库三个主要部分如下: * [`DiffusionPipeline`]是一个高级的端到端类,旨在通过预训练的扩散模型快速生成样本进行推理。 * 作为创建扩散系统做组件的流行的预训练[模型](./api/models)框架和模块。 * 许多不同的[调度器](./api/schedulers/overview):控制如何在训练过程中添加噪声的算法,以及如何在推理过程中生成去噪图像的算法。 快速入门将告诉你如何使用[`DiffusionPipeline`]进行推理,然后指导你如何结合模型和调度器以复现[`DiffusionPipeline`]内部发生的事情。 <Tip> 快速入门是🧨[Diffusers入门](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)的简化版,可以帮助你快速上手。如果你想了解更多关于🧨 Diffusers的目标、设计理念以及关于它的核心API的更多细节,可以点击🧨[Diffusers入门](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)查看。 </Tip> 在开始之前,确认一下你已经安装好了所需要的库: ```bash pip install --upgrade diffusers accelerate transformers ``` - [🤗 Accelerate](https://huggingface.co/docs/accelerate/index) 在推理和训练过程中加速模型加载。 - [🤗 Transformers](https://huggingface.co/docs/transformers/index) 是运行最流行的扩散模型所必须的库,比如[Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview). 
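下面是一个可选的小检查（仅作示例，不是教程的必需步骤）：安装完成后，可以确认 `diffusers` 以及后文会用到的几个主要类都能正常导入：

```python
# 可选检查：确认安装成功，并且本教程用到的三个主要组件可以正常导入
import diffusers
from diffusers import DiffusionPipeline, UNet2DModel, DDPMScheduler

print(diffusers.__version__)
```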
## 扩散模型管道 [`DiffusionPipeline`]是用预训练的扩散系统进行推理的最简单方法。它是一个包含模型和调度器的端到端系统。你可以直接使用[`DiffusionPipeline`]完成许多任务。请查看下面的表格以了解一些支持的任务,要获取完整的支持任务列表,请查看[🧨 Diffusers 总结](./api/pipelines/overview#diffusers-summary) 。 | **任务** | **描述** | **管道** |------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------| | Unconditional Image Generation | 从高斯噪声中生成图片 | [unconditional_image_generation](./using-diffusers/unconditional_image_generation) | | Text-Guided Image Generation | 给定文本提示生成图像 | [conditional_image_generation](./using-diffusers/conditional_image_generation) | | Text-Guided Image-to-Image Translation | 在文本提示的指导下调整图像 | [img2img](./using-diffusers/img2img) | | Text-Guided Image-Inpainting | 给出图像、遮罩和文本提示,填充图像的遮罩部分 | [inpaint](./using-diffusers/inpaint) | | Text-Guided Depth-to-Image Translation | 在文本提示的指导下调整图像的部分内容,同时通过深度估计保留其结构 | [depth2img](./using-diffusers/depth2img) | 首先创建一个[`DiffusionPipeline`]的实例,并指定要下载的pipeline检查点。 你可以使用存储在Hugging Face Hub上的任何[`DiffusionPipeline`][检查点](https://huggingface.co/models?library=diffusers&sort=downloads)。 在教程中,你将加载[`stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5)检查点,用于文本到图像的生成。 首先创建一个[DiffusionPipeline]实例,并指定要下载的管道检查点。 您可以在Hugging Face Hub上使用[DiffusionPipeline]的任何检查点。 在本快速入门中,您将加载stable-diffusion-v1-5检查点,用于文本到图像生成。 <Tip warning={true}>。 对于[Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion)模型,在运行该模型之前,请先仔细阅读[许可证](https://huggingface.co/spaces/CompVis/stable-diffusion-license)。🧨 Diffusers实现了一个[`safety_checker`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py),以防止有攻击性的或有害的内容,但Stable Diffusion模型改进图像的生成能力仍有可能产生潜在的有害内容。 </Tip> 用[`~DiffusionPipeline.from_pretrained`]方法加载模型。 ```python >>> from diffusers import DiffusionPipeline >>> pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5") ``` [`DiffusionPipeline`]会下载并缓存所有的建模、标记化和调度组件。你可以看到Stable Diffusion的pipeline是由[`UNet2DConditionModel`]和[`PNDMScheduler`]等组件组成的: ```py >>> pipeline StableDiffusionPipeline { "_class_name": "StableDiffusionPipeline", "_diffusers_version": "0.13.1", ..., "scheduler": [ "diffusers", "PNDMScheduler" ], ..., "unet": [ "diffusers", "UNet2DConditionModel" ], "vae": [ "diffusers", "AutoencoderKL" ] } ``` 我们强烈建议你在GPU上运行这个pipeline,因为该模型由大约14亿个参数组成。 你可以像在Pytorch里那样把生成器对象移到GPU上: ```python >>> pipeline.to("cuda") ``` 现在你可以向`pipeline`传递一个文本提示来生成图像,然后获得去噪的图像。默认情况下,图像输出被放在一个[`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class)对象中。 ```python >>> image = pipeline("An image of a squirrel in Picasso style").images[0] >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/image_of_squirrel_painting.png"/> </div> 调用`save`保存图像: ```python >>> image.save("image_of_squirrel_painting.png") ``` ### 本地管道 你也可以在本地使用管道。唯一的区别是你需提前下载权重: ``` git lfs install git clone https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5 ``` 将下载好的权重加载到管道中: ```python >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5") ``` 现在你可以像上一节中那样运行管道了。 ### 更换调度器 不同的调度器对去噪速度和质量的权衡是不同的。要想知道哪种调度器最适合你,最好的办法就是试用一下。🧨 Diffusers的主要特点之一是允许你轻松切换不同的调度器。例如,要用[`EulerDiscreteScheduler`]替换默认的[`PNDMScheduler`],用[`~diffusers.ConfigMixin.from_config`]方法加载即可: ```py >>> from diffusers import EulerDiscreteScheduler >>> pipeline = 
StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5") >>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) ``` 试着用新的调度器生成一个图像,看看你能否发现不同之处。 在下一节中,你将仔细观察组成[`DiffusionPipeline`]的组件——模型和调度器,并学习如何使用这些组件来生成猫咪的图像。 ## 模型 大多数模型取一个噪声样本,在每个时间点预测*噪声残差*(其他模型则直接学习预测前一个样本或速度或[`v-prediction`](https://github.com/huggingface/diffusers/blob/5e5ce13e2f89ac45a0066cb3f369462a3cf1d9ef/src/diffusers/schedulers/scheduling_ddim.py#L110)),即噪声较小的图像与输入图像的差异。你可以混搭模型创建其他扩散系统。 模型是用[`~ModelMixin.from_pretrained`]方法启动的,该方法还在本地缓存了模型权重,所以下次加载模型时更快。对于快速入门,你默认加载的是[`UNet2DModel`],这是一个基础的无条件图像生成模型,该模型有一个在猫咪图像上训练的检查点: ```py >>> from diffusers import UNet2DModel >>> repo_id = "google/ddpm-cat-256" >>> model = UNet2DModel.from_pretrained(repo_id) ``` 想知道模型的参数,调用 `model.config`: ```py >>> model.config ``` 模型配置是一个🧊冻结的🧊字典,意思是这些参数在模型创建后就不变了。这是特意设置的,确保在开始时用于定义模型架构的参数保持不变,其他参数仍然可以在推理过程中进行调整。 一些最重要的参数: * `sample_size`:输入样本的高度和宽度尺寸。 * `in_channels`:输入样本的输入通道数。 * `down_block_types`和`up_block_types`:用于创建U-Net架构的下采样和上采样块的类型。 * `block_out_channels`:下采样块的输出通道数;也以相反的顺序用于上采样块的输入通道数。 * `layers_per_block`:每个U-Net块中存在的ResNet块的数量。 为了使用该模型进行推理,用随机高斯噪声生成图像形状。它应该有一个`batch`轴,因为模型可以接收多个随机噪声,一个`channel`轴,对应于输入通道的数量,以及一个`sample_size`轴,对应图像的高度和宽度。 ```py >>> import torch >>> torch.manual_seed(0) >>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) >>> noisy_sample.shape torch.Size([1, 3, 256, 256]) ``` 对于推理,将噪声图像和一个`timestep`传递给模型。`timestep` 表示输入图像的噪声程度,开始时噪声更多,结束时噪声更少。这有助于模型确定其在扩散过程中的位置,是更接近开始还是结束。使用 `sample` 获得模型输出: ```py >>> with torch.no_grad(): ... noisy_residual = model(sample=noisy_sample, timestep=2).sample ``` 想生成实际的样本,你需要一个调度器指导去噪过程。在下一节中,你将学习如何把模型与调度器结合起来。 ## 调度器 调度器管理一个噪声样本到一个噪声较小的样本的处理过程,给出模型输出 —— 在这种情况下,它是`noisy_residual`。 <Tip> 🧨 Diffusers是一个用于构建扩散系统的工具箱。预定义好的扩散系统[`DiffusionPipeline`]能方便你快速试用,你也可以单独选择自己的模型和调度器组件来建立一个自定义的扩散系统。 </Tip> 在快速入门教程中,你将用它的[`~diffusers.ConfigMixin.from_config`]方法实例化[`DDPMScheduler`]: ```py >>> from diffusers import DDPMScheduler >>> scheduler = DDPMScheduler.from_config(repo_id) >>> scheduler DDPMScheduler { "_class_name": "DDPMScheduler", "_diffusers_version": "0.13.1", "beta_end": 0.02, "beta_schedule": "linear", "beta_start": 0.0001, "clip_sample": true, "clip_sample_range": 1.0, "num_train_timesteps": 1000, "prediction_type": "epsilon", "trained_betas": null, "variance_type": "fixed_small" } ``` <Tip> 💡 注意调度器是如何从配置中实例化的。与模型不同,调度器没有可训练的权重,而且是无参数的。 </Tip> * `num_train_timesteps`:去噪过程的长度,或者换句话说,将随机高斯噪声处理成数据样本所需的时间步数。 * `beta_schedule`:用于推理和训练的噪声表。 * `beta_start`和`beta_end`:噪声表的开始和结束噪声值。 要预测一个噪音稍小的图像,请将 模型输出、`timestep`和当前`sample` 传递给调度器的[`~diffusers.DDPMScheduler.step`]方法: ```py >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample >>> less_noisy_sample.shape ``` 这个 `less_noisy_sample` 去噪样本 可以被传递到下一个`timestep` ,处理后会将变得噪声更小。现在让我们把所有步骤合起来,可视化整个去噪过程。 首先,创建一个函数,对去噪后的图像进行后处理并显示为`PIL.Image`: ```py >>> import PIL.Image >>> import numpy as np >>> def display_sample(sample, i): ... image_processed = sample.cpu().permute(0, 2, 3, 1) ... image_processed = (image_processed + 1.0) * 127.5 ... image_processed = image_processed.numpy().astype(np.uint8) ... image_pil = PIL.Image.fromarray(image_processed[0]) ... display(f"Image at step {i}") ... 
display(image_pil) ``` 将输入和模型移到GPU上加速去噪过程: ```py >>> model.to("cuda") >>> noisy_sample = noisy_sample.to("cuda") ``` 现在创建一个去噪循环,该循环预测噪声较少样本的残差,并使用调度程序计算噪声较少的样本: ```py >>> import tqdm >>> sample = noisy_sample >>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): ... # 1. predict noise residual ... with torch.no_grad(): ... residual = model(sample, t).sample ... # 2. compute less noisy image and set x_t -> x_t-1 ... sample = scheduler.step(residual, t, sample).prev_sample ... # 3. optionally look at image ... if (i + 1) % 50 == 0: ... display_sample(sample, i + 1) ``` 看!这样就从噪声中生成出一只猫了!😻 <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/diffusion-quicktour.png"/> </div> ## 下一步 希望你在这次快速入门教程中用🧨Diffuser 生成了一些很酷的图像! 下一步你可以: * 在[训练](./tutorials/basic_training)教程中训练或微调一个模型来生成你自己的图像。 * 查看官方和社区的[训练或微调脚本](https://github.com/huggingface/diffusers/tree/main/examples#-diffusers-examples)的例子,了解更多使用情况。 * 在[使用不同的调度器](./using-diffusers/schedulers)指南中了解更多关于加载、访问、更改和比较调度器的信息。 * 在[Stable Diffusion](./stable_diffusion)教程中探索提示工程、速度和内存优化,以及生成更高质量图像的技巧。 * 通过[在GPU上优化PyTorch](./optimization/fp16)指南,以及运行[Apple (M1/M2)上的Stable Diffusion](./optimization/mps)和[ONNX Runtime](./optimization/onnx)的教程,更深入地了解如何加速🧨Diffuser。
# Advanced diffusion training examples

## Train Dreambooth LoRA with Stable Diffusion XL
> [!TIP]
> 💡 This example follows the techniques and recommended practices covered in the blog post: [LoRA training scripts of the world, unite!](https://huggingface.co/blog/sdxl_lora_advanced_script). Make sure to check it out before starting 🤗

[DreamBooth](https://huggingface.co/papers/2208.12242) is a method to personalize text2image models like stable diffusion given just a few (3-5) images of a subject.

LoRA (Low-Rank Adaptation of Large Language Models) was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://huggingface.co/papers/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*.

In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights. This has a couple of advantages:

- Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114)
- Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
- LoRA attention layers allow control over the extent to which the model is adapted towards new training images via a `scale` parameter.

[cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository.

The `train_dreambooth_lora_sdxl_advanced.py` script shows how to implement dreambooth-LoRA, combining the training process shown in `train_dreambooth_lora_sdxl.py`, with advanced features and techniques, inspired and built upon contributions by [Nataniel Ruiz](https://twitter.com/natanielruizg): [Dreambooth](https://dreambooth.github.io), [Rinon Gal](https://twitter.com/RinonGal): [Textual Inversion](https://textual-inversion.github.io), [Ron Mokady](https://twitter.com/MokadyRon): [Pivotal Tuning](https://huggingface.co/papers/2106.05744), [Simo Ryu](https://twitter.com/cloneofsimo): [cog-sdxl](https://github.com/replicate/cog-sdxl), [Kohya](https://twitter.com/kohya_tech/): [sd-scripts](https://github.com/kohya-ss/sd-scripts), [The Last Ben](https://twitter.com/__TheBen): [fast-stable-diffusion](https://github.com/TheLastBen/fast-stable-diffusion) ❤️

> [!NOTE]
> 💡If this is your first time training a Dreambooth LoRA, congrats!🥳
> You might want to familiarize yourself more with the techniques: [Dreambooth blog](https://huggingface.co/blog/dreambooth), [Using LoRA for Efficient Stable Diffusion Fine-Tuning blog](https://huggingface.co/blog/lora)

📚 Read more about the advanced features and best practices in this community-derived blog post: [LoRA training scripts of the world, unite!](https://huggingface.co/blog/sdxl_lora_advanced_script)

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements.
To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/advanced_diffusion_training` folder and run

```bash
pip install -r requirements.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or for a default accelerate configuration without answering questions about your environment

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell (e.g., a notebook)

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

When running `accelerate config`, if we specify torch compile mode to True there can be dramatic speedups. Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.

Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:

```bash
hf auth login
```

This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.

> [!NOTE]
> In the examples below we use `wandb` to document the training runs. To do the same, make sure to install `wandb`:
> `pip install wandb`
> Alternatively, you can use other tools / train without reporting by modifying the flag `--report_to="wandb"`.

### Pivotal Tuning
**Training with text encoder(s)**

Alongside the UNet, LoRA fine-tuning of the text encoders is also supported. In addition to the text encoder optimization available with `train_dreambooth_lora_sdxl_advanced.py`, in the advanced script **pivotal tuning** is also supported.
[Pivotal tuning](https://huggingface.co/blog/sdxl_lora_advanced_script#pivotal-tuning) combines Textual Inversion with regular diffusion fine-tuning - we insert new tokens into the text encoders of the model, instead of reusing existing ones. We then optimize the newly-inserted token embeddings to represent the new concept.

To do so, just specify `--train_text_encoder_ti` while launching training (for regular text encoder optimizations, use `--train_text_encoder`).
Please keep the following points in mind:

* SDXL has two text encoders. So, we fine-tune both using LoRA.
* When not fine-tuning the text encoders, we ALWAYS precompute the text embeddings to save memory.

### 3D icon example

Now let's get our dataset. For this example we will use some cool images of 3d rendered icons: https://huggingface.co/datasets/linoyts/3d_icon.

Let's first download it locally:

```python
from huggingface_hub import snapshot_download

local_dir = "./3d_icon"
snapshot_download(
    "LinoyTsaban/3d_icon",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

Let's review some of the advanced features we're going to be using for this example:

- **custom captions**: To use custom captioning, first ensure that you have the datasets library installed, otherwise you can install it by

```bash
pip install datasets
```

Now we'll simply specify the name of the dataset and caption column (in this case it's "prompt")

```
--dataset_name=./3d_icon
--caption_column=prompt
```

You can also load a dataset straight from the Hub by specifying its name in `dataset_name`.
Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loading your own caption dataset.
- **optimizer**: for this example, we'll use [prodigy](https://huggingface.co/blog/sdxl_lora_advanced_script#adaptive-optimizers) - an adaptive optimizer - To use Prodigy, please make sure to install the prodigyopt library: `pip install prodigyopt` - **pivotal tuning** - **min SNR gamma** **Now, we can launch training:** ```bash export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" export DATASET_NAME="./3d_icon" export OUTPUT_DIR="3d-icon-SDXL-LoRA" export VAE_PATH="madebyollin/sdxl-vae-fp16-fix" accelerate launch train_dreambooth_lora_sdxl_advanced.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --pretrained_vae_model_name_or_path=$VAE_PATH \ --dataset_name=$DATASET_NAME \ --instance_prompt="3d icon in the style of TOK" \ --validation_prompt="a TOK icon of an astronaut riding a horse, in the style of TOK" \ --output_dir=$OUTPUT_DIR \ --caption_column="prompt" \ --mixed_precision="bf16" \ --resolution=1024 \ --train_batch_size=3 \ --repeats=1 \ --report_to="wandb"\ --gradient_accumulation_steps=1 \ --gradient_checkpointing \ --learning_rate=1.0 \ --text_encoder_lr=1.0 \ --optimizer="prodigy"\ --train_text_encoder_ti\ --train_text_encoder_ti_frac=0.5\ --snr_gamma=5.0 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --rank=8 \ --max_train_steps=1000 \ --checkpointing_steps=2000 \ --seed="0" \ --push_to_hub ``` To better track our training experiments, we're using the following flags in the command above: * `report_to="wandb` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`. * `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected. Our experiments were conducted on a single 40GB A100 GPU. ### Inference Once training is done, we can perform inference like so: 1. starting with loading the unet lora weights ```python import torch from huggingface_hub import hf_hub_download, upload_file from diffusers import DiffusionPipeline from diffusers.models import AutoencoderKL from safetensors.torch import load_file username = "linoyts" repo_id = f"{username}/3d-icon-SDXL-LoRA" pipe = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", ).to("cuda") pipe.load_lora_weights(repo_id, weight_name="pytorch_lora_weights.safetensors") ``` 2. now we load the pivotal tuning embeddings ```python text_encoders = [pipe.text_encoder, pipe.text_encoder_2] tokenizers = [pipe.tokenizer, pipe.tokenizer_2] embedding_path = hf_hub_download(repo_id=repo_id, filename="3d-icon-SDXL-LoRA_emb.safetensors", repo_type="model") state_dict = load_file(embedding_path) # load embeddings of text_encoder 1 (CLIP ViT-L/14) pipe.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) # load embeddings of text_encoder 2 (CLIP ViT-G/14) pipe.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) ``` 3. 
let's generate images

```python
instance_token = "<s0><s1>"
prompt = f"a {instance_token} icon of an orange llama eating ramen, in the style of {instance_token}"

image = pipe(prompt=prompt, num_inference_steps=25, cross_attention_kwargs={"scale": 1.0}).images[0]
image.save("llama.png")
```

### Comfy UI / AUTOMATIC1111 Inference

The new script fully supports textual inversion loading with Comfy UI and AUTOMATIC1111 formats!

**AUTOMATIC1111 / SD.Next** \
In AUTOMATIC1111/SD.Next we will load a LoRA and a textual embedding at the same time.
- *LoRA*: Besides the diffusers format, the script will also train a WebUI compatible LoRA. It is generated as `{your_lora_name}.safetensors`. You can then include it in your `models/Lora` directory.
- *Embedding*: the embedding is the same for diffusers and WebUI. You can download your `{lora_name}_emb.safetensors` file from a trained model, and include it in your `embeddings` directory.

You can then run inference by prompting `a y2k_emb webpage about the movie Mean Girls <lora:y2k:0.9>`. You can use the `y2k_emb` token normally, including increasing its weight by doing `(y2k_emb:1.2)`.

**ComfyUI** \
In ComfyUI we will load a LoRA and a textual embedding at the same time.
- *LoRA*: Besides the diffusers format, the script will also train a ComfyUI compatible LoRA. It is generated as `{your_lora_name}.safetensors`. You can then include it in your `models/Lora` directory. Then you will load the LoRALoader node and hook that up with your model and CLIP. [Official guide for loading LoRAs](https://comfyanonymous.github.io/ComfyUI_examples/lora/)
- *Embedding*: the embedding is the same for diffusers and WebUI. You can download your `{lora_name}_emb.safetensors` file from a trained model, and include it in your `models/embeddings` directory and use it in your prompts like `embedding:y2k_emb`. [Official guide for loading embeddings](https://comfyanonymous.github.io/ComfyUI_examples/textual_inversion_embeddings/).

### Specifying a better VAE

SDXL's VAE is known to suffer from numerical instability issues. This is why we also expose a CLI argument, `--pretrained_vae_model_name_or_path`, that lets you specify the location of a better VAE (such as [this one](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)).

### DoRA training

The advanced script supports DoRA training too!
> Proposed in [DoRA: Weight-Decomposed Low-Rank Adaptation](https://huggingface.co/papers/2402.09353),
**DoRA** is very similar to LoRA, except it decomposes the pre-trained weight into two components, **magnitude** and **direction**, and employs LoRA for _directional_ updates to efficiently minimize the number of trainable parameters.
The authors found that by using DoRA, both the learning capacity and training stability of LoRA are enhanced without any additional overhead during inference.

> [!NOTE]
> 💡DoRA training is still _experimental_
> and is likely to require different hyperparameter values to perform best compared to a LoRA.
> Specifically, we've noticed 2 differences to take into account in your training:
> 1. **LoRA seems to converge faster than DoRA** (so a set of parameters that may lead to overfitting when training a LoRA may work well for a DoRA)
> 2. **DoRA quality is superior to LoRA, especially at lower ranks**: the difference in quality between a DoRA of rank 8 and a LoRA of rank 8 appears to be more significant than when training ranks of 32 or 64, for example.
> This is also aligned with some of the quantitative analysis shown in the paper.

**Usage**
1.
To use DoRA you need to install `peft` from main: ```bash pip install git+https://github.com/huggingface/peft.git ``` 2. Enable DoRA training by adding this flag ```bash --use_dora ``` **Inference** The inference is the same as if you train a regular LoRA 🤗 ## Conducting EDM-style training It's now possible to perform EDM-style training as proposed in [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364). simply set: ```diff + --do_edm_style_training \ ``` Other SDXL-like models that use the EDM formulation, such as [playgroundai/playground-v2.5-1024px-aesthetic](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic), can also be DreamBooth'd with the script. Below is an example command: ```bash accelerate launch train_dreambooth_lora_sdxl_advanced.py \ --pretrained_model_name_or_path="playgroundai/playground-v2.5-1024px-aesthetic" \ --dataset_name="linoyts/3d_icon" \ --instance_prompt="3d icon in the style of TOK" \ --validation_prompt="a TOK icon of an astronaut riding a horse, in the style of TOK" \ --output_dir="3d-icon-SDXL-LoRA" \ --do_edm_style_training \ --caption_column="prompt" \ --mixed_precision="bf16" \ --resolution=1024 \ --train_batch_size=3 \ --repeats=1 \ --report_to="wandb"\ --gradient_accumulation_steps=1 \ --gradient_checkpointing \ --learning_rate=1.0 \ --text_encoder_lr=1.0 \ --optimizer="prodigy"\ --train_text_encoder_ti\ --train_text_encoder_ti_frac=0.5\ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --rank=8 \ --max_train_steps=1000 \ --checkpointing_steps=2000 \ --seed="0" \ --push_to_hub ``` > [!CAUTION] > Min-SNR gamma is not supported with the EDM-style training yet. When training with the PlaygroundAI model, it's recommended to not pass any "variant". ### B-LoRA training The advanced script now supports B-LoRA training too! > Proposed in [Implicit Style-Content Separation using B-LoRA](https://huggingface.co/papers/2403.14572), B-LoRA is a method that leverages LoRA to implicitly separate the style and content components of a **single** image. It was shown that learning the LoRA weights of two specific blocks (referred to as B-LoRAs) achieves style-content separation that cannot be achieved by training each B-LoRA independently. Once trained, the two B-LoRAs can be used as independent components to allow various image stylization tasks **Usage** Enable B-LoRA training by adding this flag ```bash --use_blora ``` You can train a B-LoRA with as little as 1 image, and 1000 steps. Try this default configuration as a start: ```bash !accelerate launch train_dreambooth_b-lora_sdxl.py \ --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \ --instance_data_dir="linoyts/B-LoRA_teddy_bear" \ --output_dir="B-LoRA_teddy_bear" \ --instance_prompt="a [v18]" \ --resolution=1024 \ --rank=64 \ --train_batch_size=1 \ --learning_rate=5e-5 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --max_train_steps=1000 \ --checkpointing_steps=2000 \ --seed="0" \ --gradient_checkpointing \ --mixed_precision="fp16" ``` **Inference** The inference is a bit different: 1. we need load *specific* unet layers (as opposed to a regular LoRA/DoRA) 2. the trained layers we load, changes based on our objective (e.g. 
style/content) ```python import torch from diffusers import StableDiffusionXLPipeline, AutoencoderKL # taken & modified from B-LoRA repo - https://github.com/yardenfren1996/B-LoRA/blob/main/blora_utils.py def is_belong_to_blocks(key, blocks): try: for g in blocks: if g in key: return True return False except Exception as e: raise type(e)(f'failed to is_belong_to_block, due to: {e}') def lora_lora_unet_blocks(lora_path, alpha, target_blocks): state_dict, _ = pipeline.lora_state_dict(lora_path) filtered_state_dict = {k: v * alpha for k, v in state_dict.items() if is_belong_to_blocks(k, target_blocks)} return filtered_state_dict vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) pipeline = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16, ).to("cuda") # pick a blora for content/style (you can also set one to None) content_B_lora_path = "lora-library/B-LoRA-teddybear" style_B_lora_path= "lora-library/B-LoRA-pen_sketch" content_B_LoRA = lora_lora_unet_blocks(content_B_lora_path,alpha=1,target_blocks=["unet.up_blocks.0.attentions.0"]) style_B_LoRA = lora_lora_unet_blocks(style_B_lora_path,alpha=1.1,target_blocks=["unet.up_blocks.0.attentions.1"]) combined_lora = {**content_B_LoRA, **style_B_LoRA} # Load both loras pipeline.load_lora_into_unet(combined_lora, None, pipeline.unet) #generate prompt = "a [v18] in [v30] style" pipeline(prompt, num_images_per_prompt=4).images ``` ### LoRA training of Targeted U-net Blocks The advanced script now supports custom choice of U-net blocks to train during Dreambooth LoRA tuning. > [!NOTE] > This feature is still experimental > Recently, works like B-LoRA showed the potential advantages of learning the LoRA weights of specific U-net blocks, not only in speed & memory, > but also in reducing the amount of needed data, improving style manipulation and overcoming overfitting issues. > In light of this, we're introducing a new feature to the advanced script to allow for configurable U-net learned blocks. **Usage** Configure LoRA learned U-net blocks adding a `lora_unet_blocks` flag, with a comma separated string specifying the targeted blocks. e.g: ```bash --lora_unet_blocks="unet.up_blocks.0.attentions.0,unet.up_blocks.0.attentions.1" ``` > [!NOTE] > if you specify both `--use_blora` and `--lora_unet_blocks`, values given in --lora_unet_blocks will be ignored. > When enabling --use_blora, targeted U-net blocks are automatically set to be "unet.up_blocks.0.attentions.0,unet.up_blocks.0.attentions.1" as discussed in the paper. > If you wish to experiment with different blocks, specify `--lora_unet_blocks` only. **Inference** Inference is the same as for B-LoRAs, except the input targeted blocks should be modified based on your training configuration. 
```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# taken & modified from B-LoRA repo - https://github.com/yardenfren1996/B-LoRA/blob/main/blora_utils.py
def is_belong_to_blocks(key, blocks):
    try:
        for g in blocks:
            if g in key:
                return True
        return False
    except Exception as e:
        raise type(e)(f'failed to is_belong_to_block, due to: {e}')

def lora_lora_unet_blocks(lora_path, alpha, target_blocks):
    state_dict, _ = pipeline.lora_state_dict(lora_path)
    filtered_state_dict = {k: v * alpha for k, v in state_dict.items() if is_belong_to_blocks(k, target_blocks)}
    return filtered_state_dict

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

lora_path = "lora-library/B-LoRA-pen_sketch"
state_dict = lora_lora_unet_blocks(lora_path, alpha=1, target_blocks=["unet.up_blocks.0.attentions.0"])

# Load trained lora layers into the unet
pipeline.load_lora_into_unet(state_dict, None, pipeline.unet)

# generate
prompt = "a dog in [v30] style"
pipeline(prompt, num_images_per_prompt=4).images
```

### Tips and Tricks
Check out [these recommended practices](https://huggingface.co/blog/sdxl_lora_advanced_script#additional-good-practices)

## Running on Colab Notebook
Check out [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/SDXL_Dreambooth_LoRA_advanced_example.ipynb) to train using the advanced features (including pivotal tuning), and [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/SDXL_DreamBooth_LoRA_.ipynb) to train on a free colab, using some of the advanced features (excluding pivotal tuning)
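One more note on the B-LoRA / targeted-blocks inference snippets above: since they work by filtering the LoRA state dict by block name, it can help to first inspect which U-net blocks a given LoRA file actually contains. Below is a minimal sketch (the repo id is just the example LoRA used earlier, and the key grouping is approximate):

```python
from diffusers import StableDiffusionXLPipeline

lora_path = "lora-library/B-LoRA-pen_sketch"  # any LoRA repo id or local .safetensors path

# Same lora_state_dict helper used in the snippets above; returns the raw LoRA state dict.
state_dict, _ = StableDiffusionXLPipeline.lora_state_dict(lora_path)

# Group keys by their "unet.<block>" prefix to see which blocks carry trained weights.
blocks = sorted({".".join(key.split(".")[:5]) for key in state_dict if key.startswith("unet.")})
print("\n".join(blocks))
```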
#!/usr/bin/env python # coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import copy import logging import math import os import random import shutil from contextlib import nullcontext from pathlib import Path import accelerate import numpy as np import torch import transformers from accelerate import Accelerator from accelerate.logging import get_logger from accelerate.utils import DistributedType, ProjectConfiguration, set_seed from datasets import load_dataset from huggingface_hub import create_repo, upload_folder from packaging import version from PIL import Image from torchvision import transforms from tqdm.auto import tqdm import diffusers from diffusers import ( AutoencoderKL, CogView4ControlPipeline, CogView4Transformer2DModel, FlowMatchEulerDiscreteScheduler, ) from diffusers.optimization import get_scheduler from diffusers.training_utils import ( compute_density_for_timestep_sampling, compute_loss_weighting_for_sd3, free_memory, ) from diffusers.utils import check_min_version, is_wandb_available, load_image, make_image_grid from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card from diffusers.utils.torch_utils import is_compiled_module if is_wandb_available(): import wandb # Will error if the minimal version of diffusers is not installed. Remove at your own risks. check_min_version("0.36.0.dev0") logger = get_logger(__name__) NORM_LAYER_PREFIXES = ["norm_q", "norm_k", "norm_added_q", "norm_added_k"] def encode_images(pixels: torch.Tensor, vae: torch.nn.Module, weight_dtype): pixel_latents = vae.encode(pixels.to(vae.dtype)).latent_dist.sample() pixel_latents = (pixel_latents - vae.config.shift_factor) * vae.config.scaling_factor return pixel_latents.to(weight_dtype) def log_validation(cogview4_transformer, args, accelerator, weight_dtype, step, is_final_validation=False): logger.info("Running validation... 
") if not is_final_validation: cogview4_transformer = accelerator.unwrap_model(cogview4_transformer) pipeline = CogView4ControlPipeline.from_pretrained( args.pretrained_model_name_or_path, transformer=cogview4_transformer, torch_dtype=weight_dtype, ) else: transformer = CogView4Transformer2DModel.from_pretrained(args.output_dir, torch_dtype=weight_dtype) pipeline = CogView4ControlPipeline.from_pretrained( args.pretrained_model_name_or_path, transformer=transformer, torch_dtype=weight_dtype, ) pipeline.to(accelerator.device) pipeline.set_progress_bar_config(disable=True) if args.seed is None: generator = None else: generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if len(args.validation_image) == len(args.validation_prompt): validation_images = args.validation_image validation_prompts = args.validation_prompt elif len(args.validation_image) == 1: validation_images = args.validation_image * len(args.validation_prompt) validation_prompts = args.validation_prompt elif len(args.validation_prompt) == 1: validation_images = args.validation_image validation_prompts = args.validation_prompt * len(args.validation_image) else: raise ValueError( "number of `args.validation_image` and `args.validation_prompt` should be checked in `parse_args`" ) image_logs = [] if is_final_validation or torch.backends.mps.is_available(): autocast_ctx = nullcontext() else: autocast_ctx = torch.autocast(accelerator.device.type, weight_dtype) for validation_prompt, validation_image in zip(validation_prompts, validation_images): validation_image = load_image(validation_image) # maybe need to inference on 1024 to get a good image validation_image = validation_image.resize((args.resolution, args.resolution)) images = [] for _ in range(args.num_validation_images): with autocast_ctx: image = pipeline( prompt=validation_prompt, control_image=validation_image, num_inference_steps=50, guidance_scale=args.guidance_scale, max_sequence_length=args.max_sequence_length, generator=generator, height=args.resolution, width=args.resolution, ).images[0] image = image.resize((args.resolution, args.resolution)) images.append(image) image_logs.append( {"validation_image": validation_image, "images": images, "validation_prompt": validation_prompt} ) tracker_key = "test" if is_final_validation else "validation" for tracker in accelerator.trackers: if tracker.name == "tensorboard": for log in image_logs: images = log["images"] validation_prompt = log["validation_prompt"] validation_image = log["validation_image"] formatted_images = [] formatted_images.append(np.asarray(validation_image)) for image in images: formatted_images.append(np.asarray(image)) formatted_images = np.stack(formatted_images) tracker.writer.add_images(validation_prompt, formatted_images, step, dataformats="NHWC") elif tracker.name == "wandb": formatted_images = [] for log in image_logs: images = log["images"] validation_prompt = log["validation_prompt"] validation_image = log["validation_image"] formatted_images.append(wandb.Image(validation_image, caption="Conditioning")) for image in images: image = wandb.Image(image, caption=validation_prompt) formatted_images.append(image) tracker.log({tracker_key: formatted_images}) else: logger.warning(f"image logging not implemented for {tracker.name}") del pipeline free_memory() return image_logs def save_model_card(repo_id: str, image_logs=None, base_model=str, repo_folder=None): img_str = "" if image_logs is not None: img_str = "You can find some example images below.\n\n" for i, log in 
enumerate(image_logs): images = log["images"] validation_prompt = log["validation_prompt"] validation_image = log["validation_image"] validation_image.save(os.path.join(repo_folder, "image_control.png")) img_str += f"prompt: {validation_prompt}\n" images = [validation_image] + images make_image_grid(images, 1, len(images)).save(os.path.join(repo_folder, f"images_{i}.png")) img_str += f"![images_{i})](./images_{i}.png)\n" model_description = f""" # cogview4-control-{repo_id} These are Control weights trained on {base_model} with new type of conditioning. {img_str} ## License Please adhere to the licensing terms as described [here](https://huggingface.co/THUDM/CogView4-6b/blob/main/LICENSE.md) """ model_card = load_or_create_model_card( repo_id_or_path=repo_id, from_training=True, license="other", base_model=base_model, model_description=model_description, inference=True, ) tags = [ "cogview4", "cogview4-diffusers", "text-to-image", "diffusers", "control", "diffusers-training", ] model_card = populate_model_card(model_card, tags=tags) model_card.save(os.path.join(repo_folder, "README.md")) def parse_args(input_args=None): parser = argparse.ArgumentParser(description="Simple example of a CogView4 Control training script.") parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--variant", type=str, default=None, help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--output_dir", type=str, default="cogview4-control", help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument( "--cache_dir", type=str, default=None, help="The directory where the downloaded models and datasets will be stored.", ) parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") parser.add_argument( "--resolution", type=int, default=1024, help=( "The resolution for input images, all the images in the train/validation dataset will be resized to this" " resolution" ), ) parser.add_argument( "--max_sequence_length", type=int, default=128, help="The maximum sequence length for the prompt." ) parser.add_argument( "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." ) parser.add_argument("--num_train_epochs", type=int, default=1) parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument( "--checkpointing_steps", type=int, default=500, help=( "Save a checkpoint of the training state every X updates. Checkpoints can be used for resuming training via `--resume_from_checkpoint`. " "In the case that the checkpoint is better than the final trained model, the checkpoint can also be used for inference." "Using a checkpoint for inference requires separate loading of the original pipeline and the individual checkpointed model components." "See https://huggingface.co/docs/diffusers/main/en/training/dreambooth#performing-inference-using-a-saved-checkpoint for step by step" "instructions." 
), ) parser.add_argument( "--checkpoints_total_limit", type=int, default=None, help=("Max number of checkpoints to store."), ) parser.add_argument( "--resume_from_checkpoint", type=str, default=None, help=( "Whether training should be resumed from a previous checkpoint. Use a path saved by" ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' ), ) parser.add_argument( "--proportion_empty_prompts", type=float, default=0, help="Proportion of image prompts to be replaced with empty strings. Defaults to 0 (no prompt replacement).", ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--gradient_checkpointing", action="store_true", help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", ) parser.add_argument( "--learning_rate", type=float, default=5e-6, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument( "--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument( "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument( "--lr_num_cycles", type=int, default=1, help="Number of hard resets of the lr in cosine_with_restarts scheduler.", ) parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") parser.add_argument( "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." ) parser.add_argument( "--dataloader_num_workers", type=int, default=0, help=( "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." ), ) parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") parser.add_argument( "--hub_model_id", type=str, default=None, help="The name of the repository to keep in sync with the local `output_dir`.", ) parser.add_argument( "--logging_dir", type=str, default="logs", help=( "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." ), ) parser.add_argument( "--allow_tf32", action="store_true", help=( "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. 
For more information, see" " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" ), ) parser.add_argument( "--report_to", type=str, default="tensorboard", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' ), ) parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." ), ) parser.add_argument( "--dataset_name", type=str, default=None, help=( "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," " or to a folder containing files that 🤗 Datasets can understand." ), ) parser.add_argument( "--dataset_config_name", type=str, default=None, help="The config of the Dataset, leave as None if there's only one config.", ) parser.add_argument( "--image_column", type=str, default="image", help="The column of the dataset containing the target image." ) parser.add_argument( "--conditioning_image_column", type=str, default="conditioning_image", help="The column of the dataset containing the control conditioning image.", ) parser.add_argument( "--caption_column", type=str, default="text", help="The column of the dataset containing a caption or a list of captions.", ) parser.add_argument("--log_dataset_samples", action="store_true", help="Whether to log somple dataset samples.") parser.add_argument( "--max_train_samples", type=int, default=None, help=( "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." ), ) parser.add_argument( "--validation_prompt", type=str, default=None, nargs="+", help=( "A set of prompts evaluated every `--validation_steps` and logged to `--report_to`." " Provide either a matching number of `--validation_image`s, a single `--validation_image`" " to be used with all prompts, or a single prompt that will be used with all `--validation_image`s." ), ) parser.add_argument( "--validation_image", type=str, default=None, nargs="+", help=( "A set of paths to the control conditioning image be evaluated every `--validation_steps`" " and logged to `--report_to`. Provide either a matching number of `--validation_prompt`s, a" " a single `--validation_prompt` to be used with all `--validation_image`s, or a single" " `--validation_image` that will be used with all `--validation_prompt`s." ), ) parser.add_argument( "--num_validation_images", type=int, default=1, help="Number of images to be generated for each `--validation_image`, `--validation_prompt` pair", ) parser.add_argument( "--validation_steps", type=int, default=100, help=( "Run validation every X steps. Validation consists of running the prompt" " `args.validation_prompt` multiple times: `args.num_validation_images`" " and logging the images." 
), ) parser.add_argument( "--tracker_project_name", type=str, default="cogview4_train_control", help=( "The `project_name` argument passed to Accelerator.init_trackers for" " more information see https://huggingface.co/docs/accelerate/v0.17.0/en/package_reference/accelerator#accelerate.Accelerator" ), ) parser.add_argument( "--jsonl_for_train", type=str, default=None, help="Path to the jsonl file containing the training data.", ) parser.add_argument( "--only_target_transformer_blocks", action="store_true", help="If we should only target the transformer blocks to train along with the input layer (`x_embedder`).", ) parser.add_argument( "--guidance_scale", type=float, default=3.5, help="the guidance scale used for transformer.", ) parser.add_argument( "--upcast_before_saving", action="store_true", help=( "Whether to upcast the trained transformer layers to float32 before saving (at the end of training). " "Defaults to precision dtype used for training to save memory" ), ) parser.add_argument( "--weighting_scheme", type=str, default="none", choices=["sigma_sqrt", "logit_normal", "mode", "cosmap", "none"], help=('We default to the "none" weighting scheme for uniform sampling and uniform loss'), ) parser.add_argument( "--logit_mean", type=float, default=0.0, help="mean to use when using the `'logit_normal'` weighting scheme." ) parser.add_argument( "--logit_std", type=float, default=1.0, help="std to use when using the `'logit_normal'` weighting scheme." ) parser.add_argument( "--mode_scale", type=float, default=1.29, help="Scale of mode weighting scheme. Only effective when using the `'mode'` as the `weighting_scheme`.", ) parser.add_argument( "--offload", action="store_true", help="Whether to offload the VAE and the text encoders to CPU when they are not used.", ) if input_args is not None: args = parser.parse_args(input_args) else: args = parser.parse_args() if args.dataset_name is None and args.jsonl_for_train is None: raise ValueError("Specify either `--dataset_name` or `--jsonl_for_train`") if args.dataset_name is not None and args.jsonl_for_train is not None: raise ValueError("Specify only one of `--dataset_name` or `--jsonl_for_train`") if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1: raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].") if args.validation_prompt is not None and args.validation_image is None: raise ValueError("`--validation_image` must be set if `--validation_prompt` is set") if args.validation_prompt is None and args.validation_image is not None: raise ValueError("`--validation_prompt` must be set if `--validation_image` is set") if ( args.validation_image is not None and args.validation_prompt is not None and len(args.validation_image) != 1 and len(args.validation_prompt) != 1 and len(args.validation_image) != len(args.validation_prompt) ): raise ValueError( "Must provide either 1 `--validation_image`, 1 `--validation_prompt`," " or the same number of `--validation_prompt`s and `--validation_image`s" ) if args.resolution % 8 != 0: raise ValueError( "`--resolution` must be divisible by 8 for consistently sized encoded images between the VAE and the cogview4 transformer." ) return args def get_train_dataset(args, accelerator): dataset = None if args.dataset_name is not None: # Downloading and loading a dataset from the hub. 
dataset = load_dataset( args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, ) if args.jsonl_for_train is not None: # load from json dataset = load_dataset("json", data_files=args.jsonl_for_train, cache_dir=args.cache_dir) dataset = dataset.flatten_indices() # Preprocessing the datasets. # We need to tokenize inputs and targets. column_names = dataset["train"].column_names # 6. Get the column names for input/target. if args.image_column is None: image_column = column_names[0] logger.info(f"image column defaulting to {image_column}") else: image_column = args.image_column if image_column not in column_names: raise ValueError( f"`--image_column` value '{args.image_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}" ) if args.caption_column is None: caption_column = column_names[1] logger.info(f"caption column defaulting to {caption_column}") else: caption_column = args.caption_column if caption_column not in column_names: raise ValueError( f"`--caption_column` value '{args.caption_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}" ) if args.conditioning_image_column is None: conditioning_image_column = column_names[2] logger.info(f"conditioning image column defaulting to {conditioning_image_column}") else: conditioning_image_column = args.conditioning_image_column if conditioning_image_column not in column_names: raise ValueError( f"`--conditioning_image_column` value '{args.conditioning_image_column}' not found in dataset columns. Dataset columns are: {', '.join(column_names)}" ) with accelerator.main_process_first(): train_dataset = dataset["train"].shuffle(seed=args.seed) if args.max_train_samples is not None: train_dataset = train_dataset.select(range(args.max_train_samples)) return train_dataset def prepare_train_dataset(dataset, accelerator): image_transforms = transforms.Compose( [ transforms.Resize((args.resolution, args.resolution), interpolation=transforms.InterpolationMode.BILINEAR), transforms.ToTensor(), transforms.Lambda(lambda x: x * 2 - 1), ] ) def preprocess_train(examples): images = [ (image.convert("RGB") if not isinstance(image, str) else Image.open(image).convert("RGB")) for image in examples[args.image_column] ] images = [image_transforms(image) for image in images] conditioning_images = [ (image.convert("RGB") if not isinstance(image, str) else Image.open(image).convert("RGB")) for image in examples[args.conditioning_image_column] ] conditioning_images = [image_transforms(image) for image in conditioning_images] examples["pixel_values"] = images examples["conditioning_pixel_values"] = conditioning_images is_caption_list = isinstance(examples[args.caption_column][0], list) if is_caption_list: examples["captions"] = [max(example, key=len) for example in examples[args.caption_column]] else: examples["captions"] = list(examples[args.caption_column]) return examples with accelerator.main_process_first(): dataset = dataset.with_transform(preprocess_train) return dataset def collate_fn(examples): pixel_values = torch.stack([example["pixel_values"] for example in examples]) pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() conditioning_pixel_values = torch.stack([example["conditioning_pixel_values"] for example in examples]) conditioning_pixel_values = conditioning_pixel_values.to(memory_format=torch.contiguous_format).float() captions = [example["captions"] for example in examples] return {"pixel_values": pixel_values, "conditioning_pixel_values": 
conditioning_pixel_values, "captions": captions} def main(args): if args.report_to == "wandb" and args.hub_token is not None: raise ValueError( "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token." " Please use `hf auth login` to authenticate with the Hub." ) logging_out_dir = Path(args.output_dir, args.logging_dir) if torch.backends.mps.is_available() and args.mixed_precision == "bf16": # due to pytorch#99272, MPS does not yet support bfloat16. raise ValueError( "Mixed precision training with bfloat16 is not supported on MPS. Please use fp16 (recommended) or fp32 instead." ) accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=str(logging_out_dir)) accelerator = Accelerator( gradient_accumulation_steps=args.gradient_accumulation_steps, mixed_precision=args.mixed_precision, log_with=args.report_to, project_config=accelerator_project_config, ) # Disable AMP for MPS. A technique for accelerating machine learning computations on iOS and macOS devices. if torch.backends.mps.is_available(): logger.info("MPS is enabled. Disabling AMP.") accelerator.native_amp = False # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", # DEBUG, INFO, WARNING, ERROR, CRITICAL level=logging.INFO, ) logger.info(accelerator.state, main_process_only=False) if accelerator.is_local_main_process: transformers.utils.logging.set_verbosity_warning() diffusers.utils.logging.set_verbosity_info() else: transformers.utils.logging.set_verbosity_error() diffusers.utils.logging.set_verbosity_error() # If passed along, set the training seed now. if args.seed is not None: set_seed(args.seed) # Handle the repository creation if accelerator.is_main_process: if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) if args.push_to_hub: repo_id = create_repo( repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token ).repo_id # Load models. We will load the text encoders later in a pipeline to compute # embeddings. vae = AutoencoderKL.from_pretrained( args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision, variant=args.variant, ) cogview4_transformer = CogView4Transformer2DModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="transformer", revision=args.revision, variant=args.variant, ) logger.info("All models loaded successfully") noise_scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained( args.pretrained_model_name_or_path, subfolder="scheduler", ) noise_scheduler_copy = copy.deepcopy(noise_scheduler) if not args.only_target_transformer_blocks: cogview4_transformer.requires_grad_(True) vae.requires_grad_(False) # cast down and move to the CPU weight_dtype = torch.float32 if accelerator.mixed_precision == "fp16": weight_dtype = torch.float16 elif accelerator.mixed_precision == "bf16": weight_dtype = torch.bfloat16 # let's not move the VAE to the GPU yet. vae.to(dtype=torch.float32) # keep the VAE in float32. 
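    # The block below widens the transformer's patch embedding so it can take the noisy image
    # latents concatenated with the control latents along the channel dimension. The new input
    # columns are zero-initialized, so at the start of training the model behaves exactly like
    # the pretrained checkpoint (the control branch initially contributes nothing).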
    # enable image inputs
    with torch.no_grad():
        patch_size = cogview4_transformer.config.patch_size
        initial_input_channels = cogview4_transformer.config.in_channels * patch_size**2
        new_linear = torch.nn.Linear(
            cogview4_transformer.patch_embed.proj.in_features * 2,
            cogview4_transformer.patch_embed.proj.out_features,
            bias=cogview4_transformer.patch_embed.proj.bias is not None,
            dtype=cogview4_transformer.dtype,
            device=cogview4_transformer.device,
        )
        new_linear.weight.zero_()
        new_linear.weight[:, :initial_input_channels].copy_(cogview4_transformer.patch_embed.proj.weight)
        if cogview4_transformer.patch_embed.proj.bias is not None:
            new_linear.bias.copy_(cogview4_transformer.patch_embed.proj.bias)
        cogview4_transformer.patch_embed.proj = new_linear

    assert torch.all(cogview4_transformer.patch_embed.proj.weight[:, initial_input_channels:].data == 0)
    cogview4_transformer.register_to_config(
        in_channels=cogview4_transformer.config.in_channels * 2, out_channels=cogview4_transformer.config.in_channels
    )

    if args.only_target_transformer_blocks:
        cogview4_transformer.patch_embed.proj.requires_grad_(True)
        for name, module in cogview4_transformer.named_modules():
            if "transformer_blocks" in name:
                module.requires_grad_(True)
            else:
                module.requires_grad_(False)

    def unwrap_model(model):
        model = accelerator.unwrap_model(model)
        model = model._orig_mod if is_compiled_module(model) else model
        return model

    # `accelerate` 0.16.0 will have better support for customized saving
    if version.parse(accelerate.__version__) >= version.parse("0.16.0"):

        def save_model_hook(models, weights, output_dir):
            if accelerator.is_main_process:
                for model in models:
                    if isinstance(unwrap_model(model), type(unwrap_model(cogview4_transformer))):
                        model = unwrap_model(model)
                        model.save_pretrained(os.path.join(output_dir, "transformer"))
                    else:
                        raise ValueError(f"unexpected save model: {model.__class__}")

                    # make sure to pop weight so that corresponding model is not saved again
                    if weights:
                        weights.pop()

        def load_model_hook(models, input_dir):
            transformer_ = None

            if not accelerator.distributed_type == DistributedType.DEEPSPEED:
                while len(models) > 0:
                    model = models.pop()

                    if isinstance(unwrap_model(model), type(unwrap_model(cogview4_transformer))):
                        transformer_ = model  # noqa: F841
                    else:
                        raise ValueError(f"unexpected save model: {unwrap_model(model).__class__}")
            else:
                transformer_ = CogView4Transformer2DModel.from_pretrained(input_dir, subfolder="transformer")  # noqa: F841

        accelerator.register_save_state_pre_hook(save_model_hook)
        accelerator.register_load_state_pre_hook(load_model_hook)

    if args.gradient_checkpointing:
        cogview4_transformer.enable_gradient_checkpointing()

    # Enable TF32 for faster training on Ampere GPUs,
    # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
    if args.allow_tf32:
        torch.backends.cuda.matmul.allow_tf32 = True

    if args.scale_lr:
        args.learning_rate = (
            args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
        )

    # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
    if args.use_8bit_adam:
        try:
            import bitsandbytes as bnb
        except ImportError:
            raise ImportError(
                "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
) optimizer_class = bnb.optim.AdamW8bit else: optimizer_class = torch.optim.AdamW # Optimization parameters optimizer = optimizer_class( cogview4_transformer.parameters(), lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) # Prepare dataset and dataloader. train_dataset = get_train_dataset(args, accelerator) train_dataset = prepare_train_dataset(train_dataset, accelerator) train_dataloader = torch.utils.data.DataLoader( train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=args.train_batch_size, num_workers=args.dataloader_num_workers, ) # Scheduler and math around the number of training steps. # Check the PR https://github.com/huggingface/diffusers/pull/8312 for detailed explanation. if args.max_train_steps is None: len_train_dataloader_after_sharding = math.ceil(len(train_dataloader) / accelerator.num_processes) num_update_steps_per_epoch = math.ceil(len_train_dataloader_after_sharding / args.gradient_accumulation_steps) num_training_steps_for_scheduler = ( args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes ) else: num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes lr_scheduler = get_scheduler( args.lr_scheduler, optimizer=optimizer, num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, num_training_steps=args.max_train_steps * accelerator.num_processes, num_cycles=args.lr_num_cycles, power=args.lr_power, ) # Prepare everything with our `accelerator`. cogview4_transformer, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( cogview4_transformer, optimizer, train_dataloader, lr_scheduler ) # We need to recalculate our total training steps as the size of the training dataloader may have changed. num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch if num_training_steps_for_scheduler != args.max_train_steps * accelerator.num_processes: logger.warning( f"The length of the 'train_dataloader' after 'accelerator.prepare' ({len(train_dataloader)}) does not match " f"the expected length ({len_train_dataloader_after_sharding}) when the learning rate scheduler was created. " f"This inconsistency may result in the learning rate scheduler not functioning properly." ) # Afterwards we recalculate our number of training epochs args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if accelerator.is_main_process: tracker_config = dict(vars(args)) # tensorboard cannot handle list types for config tracker_config.pop("validation_prompt") tracker_config.pop("validation_image") accelerator.init_trackers(args.tracker_project_name, config=tracker_config) # Train! total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num batches each epoch = {len(train_dataloader)}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") logger.info(f" Total optimization steps = {args.max_train_steps}") global_step = 0 first_epoch = 0 # Create a pipeline for text encoding. We will move this pipeline to GPU/CPU as needed. text_encoding_pipeline = CogView4ControlPipeline.from_pretrained( args.pretrained_model_name_or_path, transformer=None, vae=None, torch_dtype=weight_dtype ) tokenizer = text_encoding_pipeline.tokenizer # Potentially load in the weights and states from a previous save if args.resume_from_checkpoint: if args.resume_from_checkpoint != "latest": path = os.path.basename(args.resume_from_checkpoint) else: # Get the most recent checkpoint dirs = os.listdir(args.output_dir) dirs = [d for d in dirs if d.startswith("checkpoint")] dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) path = dirs[-1] if len(dirs) > 0 else None if path is None: logger.info(f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run.") args.resume_from_checkpoint = None initial_global_step = 0 else: logger.info(f"Resuming from checkpoint {path}") accelerator.load_state(os.path.join(args.output_dir, path)) global_step = int(path.split("-")[1]) initial_global_step = global_step first_epoch = global_step // num_update_steps_per_epoch else: initial_global_step = 0 if accelerator.is_main_process and args.report_to == "wandb" and args.log_dataset_samples: logger.info("Logging some dataset samples.") formatted_images = [] formatted_control_images = [] all_prompts = [] for i, batch in enumerate(train_dataloader): images = (batch["pixel_values"] + 1) / 2 control_images = (batch["conditioning_pixel_values"] + 1) / 2 prompts = batch["captions"] if len(formatted_images) > 10: break for img, control_img, prompt in zip(images, control_images, prompts): formatted_images.append(img) formatted_control_images.append(control_img) all_prompts.append(prompt) logged_artifacts = [] for img, control_img, prompt in zip(formatted_images, formatted_control_images, all_prompts): logged_artifacts.append(wandb.Image(control_img, caption="Conditioning")) logged_artifacts.append(wandb.Image(img, caption=prompt)) wandb_tracker = [tracker for tracker in accelerator.trackers if tracker.name == "wandb"] wandb_tracker[0].log({"dataset_samples": logged_artifacts}) progress_bar = tqdm( range(0, args.max_train_steps), initial=initial_global_step, desc="Steps", # Only show the progress bar once on each machine. 
disable=not accelerator.is_local_main_process, ) for epoch in range(first_epoch, args.num_train_epochs): cogview4_transformer.train() for step, batch in enumerate(train_dataloader): with accelerator.accumulate(cogview4_transformer): # Convert images to latent space # vae encode prompts = batch["captions"] attention_mask = tokenizer( prompts, padding="longest", # not use max length max_length=args.max_sequence_length, truncation=True, add_special_tokens=True, return_tensors="pt", ).attention_mask.float() pixel_latents = encode_images(batch["pixel_values"], vae.to(accelerator.device), weight_dtype) control_latents = encode_images( batch["conditioning_pixel_values"], vae.to(accelerator.device), weight_dtype ) if args.offload: vae.cpu() # Sample a random timestep for each image # for weighting schemes where we sample timesteps non-uniformly bsz = pixel_latents.shape[0] noise = torch.randn_like(pixel_latents, device=accelerator.device, dtype=weight_dtype) u = compute_density_for_timestep_sampling( weighting_scheme=args.weighting_scheme, batch_size=bsz, logit_mean=args.logit_mean, logit_std=args.logit_std, mode_scale=args.mode_scale, ) # Add noise according for cogview4 indices = (u * noise_scheduler_copy.config.num_train_timesteps).long() timesteps = noise_scheduler_copy.timesteps[indices].to(device=pixel_latents.device) sigmas = noise_scheduler_copy.sigmas[indices].to(device=pixel_latents.device) captions = batch["captions"] image_seq_lens = torch.tensor( pixel_latents.shape[2] * pixel_latents.shape[3] // patch_size**2, dtype=pixel_latents.dtype, device=pixel_latents.device, ) # H * W / VAE patch_size mu = torch.sqrt(image_seq_lens / 256) mu = mu * 0.75 + 0.25 scale_factors = mu / (mu + (1 / sigmas - 1) ** 1.0).to( dtype=pixel_latents.dtype, device=pixel_latents.device ) scale_factors = scale_factors.view(len(batch["captions"]), 1, 1, 1) noisy_model_input = (1.0 - scale_factors) * pixel_latents + scale_factors * noise concatenated_noisy_model_input = torch.cat([noisy_model_input, control_latents], dim=1) text_encoding_pipeline = text_encoding_pipeline.to("cuda") with torch.no_grad(): ( prompt_embeds, pooled_prompt_embeds, ) = text_encoding_pipeline.encode_prompt(captions, "") original_size = (args.resolution, args.resolution) original_size = torch.tensor([original_size], dtype=prompt_embeds.dtype, device=prompt_embeds.device) target_size = (args.resolution, args.resolution) target_size = torch.tensor([target_size], dtype=prompt_embeds.dtype, device=prompt_embeds.device) target_size = target_size.repeat(len(batch["captions"]), 1) original_size = original_size.repeat(len(batch["captions"]), 1) crops_coords_top_left = torch.tensor([(0, 0)], dtype=prompt_embeds.dtype, device=prompt_embeds.device) crops_coords_top_left = crops_coords_top_left.repeat(len(batch["captions"]), 1) # this could be optimized by not having to do any text encoding and just # doing zeros on specified shapes for `prompt_embeds` and `pooled_prompt_embeds` if args.proportion_empty_prompts and random.random() < args.proportion_empty_prompts: # Here, we directly pass 16 pad tokens from pooled_prompt_embeds to prompt_embeds. prompt_embeds = pooled_prompt_embeds if args.offload: text_encoding_pipeline = text_encoding_pipeline.to("cpu") # Predict. 
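                # The transformer predicts the flow-matching velocity for the concatenated
                # (noisy latents, control latents) input; the loss further below is a per-sample
                # weighted MSE against the target velocity (noise - clean latents).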
noise_pred_cond = cogview4_transformer( hidden_states=concatenated_noisy_model_input, encoder_hidden_states=prompt_embeds, timestep=timesteps, original_size=original_size, target_size=target_size, crop_coords=crops_coords_top_left, return_dict=False, attention_mask=attention_mask, )[0] # these weighting schemes use a uniform timestep sampling # and instead post-weight the loss weighting = compute_loss_weighting_for_sd3(weighting_scheme=args.weighting_scheme, sigmas=sigmas) # flow-matching loss target = noise - pixel_latents weighting = weighting.view(len(batch["captions"]), 1, 1, 1) loss = torch.mean( (weighting.float() * (noise_pred_cond.float() - target.float()) ** 2).reshape(target.shape[0], -1), 1, ) loss = loss.mean() accelerator.backward(loss) if accelerator.sync_gradients: params_to_clip = cogview4_transformer.parameters() accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) optimizer.step() lr_scheduler.step() optimizer.zero_grad() # Checks if the accelerator has performed an optimization step behind the scenes if accelerator.sync_gradients: progress_bar.update(1) global_step += 1 # DeepSpeed requires saving weights on every device; saving weights only on the main process would cause issues. if accelerator.distributed_type == DistributedType.DEEPSPEED or accelerator.is_main_process: if global_step % args.checkpointing_steps == 0: # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` if args.checkpoints_total_limit is not None: checkpoints = os.listdir(args.output_dir) checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints if len(checkpoints) >= args.checkpoints_total_limit: num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 removing_checkpoints = checkpoints[0:num_to_remove] logger.info( f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" ) logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") for removing_checkpoint in removing_checkpoints: removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) shutil.rmtree(removing_checkpoint) save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") accelerator.save_state(save_path) logger.info(f"Saved state to {save_path}") if args.validation_prompt is not None and global_step % args.validation_steps == 0: image_logs = log_validation( cogview4_transformer=cogview4_transformer, args=args, accelerator=accelerator, weight_dtype=weight_dtype, step=global_step, ) logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} progress_bar.set_postfix(**logs) accelerator.log(logs, step=global_step) if global_step >= args.max_train_steps: break # Create the pipeline using using the trained modules and save it. accelerator.wait_for_everyone() if accelerator.is_main_process: cogview4_transformer = unwrap_model(cogview4_transformer) if args.upcast_before_saving: cogview4_transformer.to(torch.float32) cogview4_transformer.save_pretrained(args.output_dir) del cogview4_transformer del text_encoding_pipeline del vae free_memory() # Run a final round of validation. 
image_logs = None if args.validation_prompt is not None: image_logs = log_validation( cogview4_transformer=None, args=args, accelerator=accelerator, weight_dtype=weight_dtype, step=global_step, is_final_validation=True, ) if args.push_to_hub: save_model_card( repo_id, image_logs=image_logs, base_model=args.pretrained_model_name_or_path, repo_folder=args.output_dir, ) upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training", ignore_patterns=["step_*", "epoch_*", "checkpoint-*"], ) accelerator.end_training() if __name__ == "__main__": args = parse_args() main(args)
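Once training completes, the fine-tuned transformer is written to `--output_dir` (and pushed to the Hub if `--push_to_hub` is set). The snippet below is a minimal inference sketch rather than part of the script: the base checkpoint id (`THUDM/CogView4-6B`), the output path, the conditioning-image path and the `control_image` argument name are assumptions to adapt to your setup.

```python
# Minimal inference sketch (assumptions: base checkpoint id, output path, `control_image` kwarg).
import torch

from diffusers import CogView4ControlPipeline, CogView4Transformer2DModel
from diffusers.utils import load_image

# Hypothetical path: the --output_dir used when launching train_control_cogview4.py
transformer = CogView4Transformer2DModel.from_pretrained(
    "path/to/output_dir", torch_dtype=torch.bfloat16
)
pipe = CogView4ControlPipeline.from_pretrained(
    "THUDM/CogView4-6B",  # assumed base checkpoint
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

control_image = load_image("path/to/conditioning.png")  # hypothetical conditioning image
image = pipe(
    prompt="a photo of an astronaut riding a horse",
    control_image=control_image,
    guidance_scale=3.5,
    num_inference_steps=50,
).images[0]
image.save("result.png")
```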
diffusers/examples/cogview4-control/train_control_cogview4.py/0
{ "file_path": "diffusers/examples/cogview4-control/train_control_cogview4.py", "repo_id": "diffusers", "token_count": 22924 }
130
import math import numbers from typing import Any, Callable, Dict, List, Optional, Union import torch import torch.nn.functional as F from torch import nn from diffusers.image_processor import PipelineImageInput from diffusers.models import AsymmetricAutoencoderKL, ImageProjection from diffusers.models.attention_processor import Attention, AttnProcessor from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint import ( StableDiffusionInpaintPipeline, retrieve_timesteps, ) from diffusers.utils import deprecate class RASGAttnProcessor: def __init__(self, mask, token_idx, scale_factor): self.attention_scores = None # Stores the last output of the similarity matrix here. Each layer will get its own RASGAttnProcessor assigned self.mask = mask self.token_idx = token_idx self.scale_factor = scale_factor self.mask_resoltuion = mask.shape[-1] * mask.shape[-2] # 64 x 64 if the image is 512x512 def __call__( self, attn: Attention, hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, temb: Optional[torch.Tensor] = None, scale: float = 1.0, ) -> torch.Tensor: # Same as the default AttnProcessor up until the part where similarity matrix gets saved downscale_factor = self.mask_resoltuion // hidden_states.shape[1] residual = hidden_states if attn.spatial_norm is not None: hidden_states = attn.spatial_norm(hidden_states, temb) input_ndim = hidden_states.ndim if input_ndim == 4: batch_size, channel, height, width = hidden_states.shape hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) batch_size, sequence_length, _ = ( hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape ) attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) if attn.group_norm is not None: hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) query = attn.to_q(hidden_states) if encoder_hidden_states is None: encoder_hidden_states = hidden_states elif attn.norm_cross: encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) key = attn.to_k(encoder_hidden_states) value = attn.to_v(encoder_hidden_states) query = attn.head_to_batch_dim(query) key = attn.head_to_batch_dim(key) value = attn.head_to_batch_dim(value) # Automatically recognize the resolution and save the attention similarity values # We need to use the values before the softmax function, hence the rewritten get_attention_scores function. 
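        # Only the attention layer whose spatial resolution matches `scale_factor` caches its
        # pre-softmax scores for RASG; every other resolution falls back to the standard
        # attention computation below.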
if downscale_factor == self.scale_factor**2: self.attention_scores = get_attention_scores(attn, query, key, attention_mask) attention_probs = self.attention_scores.softmax(dim=-1) attention_probs = attention_probs.to(query.dtype) else: attention_probs = attn.get_attention_scores(query, key, attention_mask) # Original code hidden_states = torch.bmm(attention_probs, value) hidden_states = attn.batch_to_head_dim(hidden_states) # linear proj hidden_states = attn.to_out[0](hidden_states) # dropout hidden_states = attn.to_out[1](hidden_states) if input_ndim == 4: hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) if attn.residual_connection: hidden_states = hidden_states + residual hidden_states = hidden_states / attn.rescale_output_factor return hidden_states class PAIntAAttnProcessor: def __init__(self, transformer_block, mask, token_idx, do_classifier_free_guidance, scale_factors): self.transformer_block = transformer_block # Stores the parent transformer block. self.mask = mask self.scale_factors = scale_factors self.do_classifier_free_guidance = do_classifier_free_guidance self.token_idx = token_idx self.shape = mask.shape[2:] self.mask_resoltuion = mask.shape[-1] * mask.shape[-2] # 64 x 64 self.default_processor = AttnProcessor() def __call__( self, attn: Attention, hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, temb: Optional[torch.Tensor] = None, scale: float = 1.0, ) -> torch.Tensor: # Automatically recognize the resolution of the current attention layer and resize the masks accordingly downscale_factor = self.mask_resoltuion // hidden_states.shape[1] mask = None for factor in self.scale_factors: if downscale_factor == factor**2: shape = (self.shape[0] // factor, self.shape[1] // factor) mask = F.interpolate(self.mask, shape, mode="bicubic") # B, 1, H, W break if mask is None: return self.default_processor(attn, hidden_states, encoder_hidden_states, attention_mask, temb, scale) # STARTS HERE residual = hidden_states # Save the input hidden_states for later use input_hidden_states = hidden_states # ================================================== # # =============== SELF ATTENTION 1 ================= # # ================================================== # if attn.spatial_norm is not None: hidden_states = attn.spatial_norm(hidden_states, temb) input_ndim = hidden_states.ndim if input_ndim == 4: batch_size, channel, height, width = hidden_states.shape hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2) batch_size, sequence_length, _ = ( hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape ) attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) if attn.group_norm is not None: hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2) query = attn.to_q(hidden_states) if encoder_hidden_states is None: encoder_hidden_states = hidden_states elif attn.norm_cross: encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states) key = attn.to_k(encoder_hidden_states) value = attn.to_v(encoder_hidden_states) query = attn.head_to_batch_dim(query) key = attn.head_to_batch_dim(key) value = attn.head_to_batch_dim(value) # self_attention_probs = attn.get_attention_scores(query, key, attention_mask) # We can't use post-softmax attention scores in this case self_attention_scores = get_attention_scores( attn, query, key, attention_mask ) # The 
custom function returns pre-softmax probabilities self_attention_probs = self_attention_scores.softmax( dim=-1 ) # Manually compute the probabilities here, the scores will be reused in the second part of PAIntA self_attention_probs = self_attention_probs.to(query.dtype) hidden_states = torch.bmm(self_attention_probs, value) hidden_states = attn.batch_to_head_dim(hidden_states) # linear proj hidden_states = attn.to_out[0](hidden_states) # dropout hidden_states = attn.to_out[1](hidden_states) # x = x + self.attn1(self.norm1(x)) if input_ndim == 4: hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) if attn.residual_connection: # So many residuals everywhere hidden_states = hidden_states + residual self_attention_output_hidden_states = hidden_states / attn.rescale_output_factor # ================================================== # # ============ BasicTransformerBlock =============== # # ================================================== # # We use a hack by running the code from the BasicTransformerBlock that is between Self and Cross attentions here # The other option would've been modifying the BasicTransformerBlock and adding this functionality here. # I assumed that changing the BasicTransformerBlock would have been a bigger deal and decided to use this hack instead. # The SelfAttention block receives the normalized latents from the BasicTransformerBlock, # But the residual of the output is the non-normalized version. # Therefore we unnormalize the input hidden state here unnormalized_input_hidden_states = ( input_hidden_states + self.transformer_block.norm1.bias ) * self.transformer_block.norm1.weight # TODO: return if necessary # if self.use_ada_layer_norm_zero: # attn_output = gate_msa.unsqueeze(1) * attn_output # elif self.use_ada_layer_norm_single: # attn_output = gate_msa * attn_output transformer_hidden_states = self_attention_output_hidden_states + unnormalized_input_hidden_states if transformer_hidden_states.ndim == 4: transformer_hidden_states = transformer_hidden_states.squeeze(1) # TODO: return if necessary # 2.5 GLIGEN Control # if gligen_kwargs is not None: # transformer_hidden_states = self.fuser(transformer_hidden_states, gligen_kwargs["objs"]) # NOTE: we experimented with using GLIGEN and HDPainter together, the results were not that great # 3. 
Cross-Attention if self.transformer_block.use_ada_layer_norm: # transformer_norm_hidden_states = self.transformer_block.norm2(transformer_hidden_states, timestep) raise NotImplementedError() elif self.transformer_block.use_ada_layer_norm_zero or self.transformer_block.use_layer_norm: transformer_norm_hidden_states = self.transformer_block.norm2(transformer_hidden_states) elif self.transformer_block.use_ada_layer_norm_single: # For PixArt norm2 isn't applied here: # https://github.com/PixArt-alpha/PixArt-alpha/blob/0f55e922376d8b797edd44d25d0e7464b260dcab/diffusion/model/nets/PixArtMS.py#L70C1-L76C103 transformer_norm_hidden_states = transformer_hidden_states elif self.transformer_block.use_ada_layer_norm_continuous: # transformer_norm_hidden_states = self.transformer_block.norm2(transformer_hidden_states, added_cond_kwargs["pooled_text_emb"]) raise NotImplementedError() else: raise ValueError("Incorrect norm") if self.transformer_block.pos_embed is not None and self.transformer_block.use_ada_layer_norm_single is False: transformer_norm_hidden_states = self.transformer_block.pos_embed(transformer_norm_hidden_states) # ================================================== # # ================= CROSS ATTENTION ================ # # ================================================== # # We do an initial pass of the CrossAttention up to obtaining the similarity matrix here. # The similarity matrix is used to obtain scaling coefficients for the attention matrix of the self attention # We reuse the previously computed self-attention matrix, and only repeat the steps after the softmax cross_attention_input_hidden_states = ( transformer_norm_hidden_states # Renaming the variable for the sake of readability ) # TODO: check if classifier_free_guidance is being used before splitting here if self.do_classifier_free_guidance: # Our scaling coefficients depend only on the conditional part, so we split the inputs ( _cross_attention_input_hidden_states_unconditional, cross_attention_input_hidden_states_conditional, ) = cross_attention_input_hidden_states.chunk(2) # Same split for the encoder_hidden_states i.e. the tokens # Since the SelfAttention processors don't get the encoder states as input, we inject them into the processor in the beginning. 
_encoder_hidden_states_unconditional, encoder_hidden_states_conditional = self.encoder_hidden_states.chunk( 2 ) else: cross_attention_input_hidden_states_conditional = cross_attention_input_hidden_states encoder_hidden_states_conditional = self.encoder_hidden_states.chunk(2) # Rename the variables for the sake of readability # The part below is the beginning of the __call__ function of the following CrossAttention layer cross_attention_hidden_states = cross_attention_input_hidden_states_conditional cross_attention_encoder_hidden_states = encoder_hidden_states_conditional attn2 = self.transformer_block.attn2 if attn2.spatial_norm is not None: cross_attention_hidden_states = attn2.spatial_norm(cross_attention_hidden_states, temb) input_ndim = cross_attention_hidden_states.ndim if input_ndim == 4: batch_size, channel, height, width = cross_attention_hidden_states.shape cross_attention_hidden_states = cross_attention_hidden_states.view( batch_size, channel, height * width ).transpose(1, 2) ( batch_size, sequence_length, _, ) = cross_attention_hidden_states.shape # It is definitely a cross attention, so no need for an if block # TODO: change the attention_mask here attention_mask = attn2.prepare_attention_mask( None, sequence_length, batch_size ) # I assume the attention mask is the same... if attn2.group_norm is not None: cross_attention_hidden_states = attn2.group_norm(cross_attention_hidden_states.transpose(1, 2)).transpose( 1, 2 ) query2 = attn2.to_q(cross_attention_hidden_states) if attn2.norm_cross: cross_attention_encoder_hidden_states = attn2.norm_encoder_hidden_states( cross_attention_encoder_hidden_states ) key2 = attn2.to_k(cross_attention_encoder_hidden_states) query2 = attn2.head_to_batch_dim(query2) key2 = attn2.head_to_batch_dim(key2) cross_attention_probs = attn2.get_attention_scores(query2, key2, attention_mask) # CrossAttention ends here, the remaining part is not used # ================================================== # # ================ SELF ATTENTION 2 ================ # # ================================================== # # DEJA VU! mask = (mask > 0.5).to(self_attention_output_hidden_states.dtype) m = mask.to(self_attention_output_hidden_states.device) # m = rearrange(m, 'b c h w -> b (h w) c').contiguous() m = m.permute(0, 2, 3, 1).reshape((m.shape[0], -1, m.shape[1])).contiguous() # B HW 1 m = torch.matmul(m, m.permute(0, 2, 1)) + (1 - m) # # Compute scaling coefficients for the similarity matrix # # Select the cross attention values for the correct tokens only! # cross_attention_probs = cross_attention_probs.mean(dim = 0) # cross_attention_probs = cross_attention_probs[:, self.token_idx].sum(dim=1) # cross_attention_probs = cross_attention_probs.reshape(shape) # gaussian_smoothing = GaussianSmoothing(channels=1, kernel_size=3, sigma=0.5, dim=2).to(self_attention_output_hidden_states.device) # cross_attention_probs = gaussian_smoothing(cross_attention_probs.unsqueeze(0))[0] # optional smoothing # cross_attention_probs = cross_attention_probs.reshape(-1) # cross_attention_probs = ((cross_attention_probs - torch.median(cross_attention_probs.ravel())) / torch.max(cross_attention_probs.ravel())).clip(0, 1) # c = (1 - m) * cross_attention_probs.reshape(1, 1, -1) + m # PAIntA scaling coefficients # Compute scaling coefficients for the similarity matrix # Select the cross attention values for the correct tokens only! 
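        # PAIntA: average the conditional cross-attention over heads, keep only the prompt-token
        # columns, smooth and median-normalize them, and use the result as per-position scaling
        # coefficients for the pre-softmax self-attention scores computed earlier.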
batch_size, dims, channels = cross_attention_probs.shape batch_size = batch_size // attn.heads cross_attention_probs = cross_attention_probs.reshape((batch_size, attn.heads, dims, channels)) # B, D, HW, T cross_attention_probs = cross_attention_probs.mean(dim=1) # B, HW, T cross_attention_probs = cross_attention_probs[..., self.token_idx].sum(dim=-1) # B, HW cross_attention_probs = cross_attention_probs.reshape((batch_size,) + shape) # , B, H, W gaussian_smoothing = GaussianSmoothing(channels=1, kernel_size=3, sigma=0.5, dim=2).to( self_attention_output_hidden_states.device ) cross_attention_probs = gaussian_smoothing(cross_attention_probs[:, None])[:, 0] # optional smoothing B, H, W # Median normalization cross_attention_probs = cross_attention_probs.reshape(batch_size, -1) # B, HW cross_attention_probs = ( cross_attention_probs - cross_attention_probs.median(dim=-1, keepdim=True).values ) / cross_attention_probs.max(dim=-1, keepdim=True).values cross_attention_probs = cross_attention_probs.clip(0, 1) c = (1 - m) * cross_attention_probs.reshape(batch_size, 1, -1) + m c = c.repeat_interleave(attn.heads, 0) # BD, HW if self.do_classifier_free_guidance: c = torch.cat([c, c]) # 2BD, HW # Rescaling the original self-attention matrix self_attention_scores_rescaled = self_attention_scores * c self_attention_probs_rescaled = self_attention_scores_rescaled.softmax(dim=-1) # Continuing the self attention normally using the new matrix hidden_states = torch.bmm(self_attention_probs_rescaled, value) hidden_states = attn.batch_to_head_dim(hidden_states) # linear proj hidden_states = attn.to_out[0](hidden_states) # dropout hidden_states = attn.to_out[1](hidden_states) if input_ndim == 4: hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width) if attn.residual_connection: hidden_states = hidden_states + input_hidden_states hidden_states = hidden_states / attn.rescale_output_factor return hidden_states class StableDiffusionHDPainterPipeline(StableDiffusionInpaintPipeline): def get_tokenized_prompt(self, prompt): out = self.tokenizer(prompt) return [self.tokenizer.decode(x) for x in out["input_ids"]] def init_attn_processors( self, mask, token_idx, use_painta=True, use_rasg=True, painta_scale_factors=[2, 4], # 64x64 -> [16x16, 32x32] rasg_scale_factor=4, # 64x64 -> 16x16 self_attention_layer_name="attn1", cross_attention_layer_name="attn2", list_of_painta_layer_names=None, list_of_rasg_layer_names=None, ): default_processor = AttnProcessor() width, height = mask.shape[-2:] width, height = width // self.vae_scale_factor, height // self.vae_scale_factor painta_scale_factors = [x * self.vae_scale_factor for x in painta_scale_factors] rasg_scale_factor = self.vae_scale_factor * rasg_scale_factor attn_processors = {} for x in self.unet.attn_processors: if (list_of_painta_layer_names is None and self_attention_layer_name in x) or ( list_of_painta_layer_names is not None and x in list_of_painta_layer_names ): if use_painta: transformer_block = self.unet.get_submodule(x.replace(".attn1.processor", "")) attn_processors[x] = PAIntAAttnProcessor( transformer_block, mask, token_idx, self.do_classifier_free_guidance, painta_scale_factors ) else: attn_processors[x] = default_processor elif (list_of_rasg_layer_names is None and cross_attention_layer_name in x) or ( list_of_rasg_layer_names is not None and x in list_of_rasg_layer_names ): if use_rasg: attn_processors[x] = RASGAttnProcessor(mask, token_idx, rasg_scale_factor) else: attn_processors[x] = default_processor 
self.unet.set_attn_processor(attn_processors) # import json # with open('/home/hayk.manukyan/repos/diffusers/debug.txt', 'a') as f: # json.dump({x:str(y) for x,y in self.unet.attn_processors.items()}, f, indent=4) @torch.no_grad() def __call__( self, prompt: Union[str, List[str]] = None, image: PipelineImageInput = None, mask_image: PipelineImageInput = None, masked_image_latents: torch.Tensor = None, height: Optional[int] = None, width: Optional[int] = None, padding_mask_crop: Optional[int] = None, strength: float = 1.0, num_inference_steps: int = 50, timesteps: List[int] = None, guidance_scale: float = 7.5, positive_prompt: Optional[str] = "", negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.01, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, ip_adapter_image: Optional[PipelineImageInput] = None, output_type: Optional[str] = "pil", return_dict: bool = True, cross_attention_kwargs: Optional[Dict[str, Any]] = None, clip_skip: int = None, callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], use_painta=True, use_rasg=True, self_attention_layer_name=".attn1", cross_attention_layer_name=".attn2", painta_scale_factors=[2, 4], # 16 x 16 and 32 x 32 rasg_scale_factor=4, # 16x16 by default list_of_painta_layer_names=None, list_of_rasg_layer_names=None, **kwargs, ): callback = kwargs.pop("callback", None) callback_steps = kwargs.pop("callback_steps", None) if callback is not None: deprecate( "callback", "1.0.0", "Passing `callback` as an input argument to `__call__` is deprecated, consider use `callback_on_step_end`", ) if callback_steps is not None: deprecate( "callback_steps", "1.0.0", "Passing `callback_steps` as an input argument to `__call__` is deprecated, consider use `callback_on_step_end`", ) # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor # prompt_no_positives = prompt if isinstance(prompt, list): prompt = [x + positive_prompt for x in prompt] else: prompt = prompt + positive_prompt # 1. Check inputs self.check_inputs( prompt, image, mask_image, height, width, strength, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds, callback_on_step_end_tensor_inputs, padding_mask_crop, ) self._guidance_scale = guidance_scale self._clip_skip = clip_skip self._cross_attention_kwargs = cross_attention_kwargs self._interrupt = False # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] # assert batch_size == 1, "Does not work with batch size > 1 currently" device = self._execution_device # 3. Encode input prompt text_encoder_lora_scale = ( cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None ) prompt_embeds, negative_prompt_embeds = self.encode_prompt( prompt, device, num_images_per_prompt, self.do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=text_encoder_lora_scale, clip_skip=self.clip_skip, ) # For classifier free guidance, we need to do two forward passes. 
# Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes if self.do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) if ip_adapter_image is not None: output_hidden_state = False if isinstance(self.unet.encoder_hid_proj, ImageProjection) else True image_embeds, negative_image_embeds = self.encode_image( ip_adapter_image, device, num_images_per_prompt, output_hidden_state ) if self.do_classifier_free_guidance: image_embeds = torch.cat([negative_image_embeds, image_embeds]) # 4. set timesteps timesteps, num_inference_steps = retrieve_timesteps(self.scheduler, num_inference_steps, device, timesteps) timesteps, num_inference_steps = self.get_timesteps( num_inference_steps=num_inference_steps, strength=strength, device=device ) # check that number of inference steps is not < 1 - as this doesn't make sense if num_inference_steps < 1: raise ValueError( f"After adjusting the num_inference_steps by strength parameter: {strength}, the number of pipeline" f"steps is {num_inference_steps} which is < 1 and not appropriate for this pipeline." ) # at which timestep to set the initial noise (n.b. 50% if strength is 0.5) latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) # create a boolean to check if the strength is set to 1. if so then initialise the latents with pure noise is_strength_max = strength == 1.0 # 5. Preprocess mask and image if padding_mask_crop is not None: crops_coords = self.mask_processor.get_crop_region(mask_image, width, height, pad=padding_mask_crop) resize_mode = "fill" else: crops_coords = None resize_mode = "default" original_image = image init_image = self.image_processor.preprocess( image, height=height, width=width, crops_coords=crops_coords, resize_mode=resize_mode ) init_image = init_image.to(dtype=torch.float32) # 6. Prepare latent variables num_channels_latents = self.vae.config.latent_channels num_channels_unet = self.unet.config.in_channels return_image_latents = num_channels_unet == 4 latents_outputs = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, prompt_embeds.dtype, device, generator, latents, image=init_image, timestep=latent_timestep, is_strength_max=is_strength_max, return_noise=True, return_image_latents=return_image_latents, ) if return_image_latents: latents, noise, image_latents = latents_outputs else: latents, noise = latents_outputs # 7. 
Prepare mask latent variables mask_condition = self.mask_processor.preprocess( mask_image, height=height, width=width, resize_mode=resize_mode, crops_coords=crops_coords ) if masked_image_latents is None: masked_image = init_image * (mask_condition < 0.5) else: masked_image = masked_image_latents mask, masked_image_latents = self.prepare_mask_latents( mask_condition, masked_image, batch_size * num_images_per_prompt, height, width, prompt_embeds.dtype, device, generator, self.do_classifier_free_guidance, ) # 7.5 Setting up HD-Painter # Get the indices of the tokens to be modified by both RASG and PAIntA token_idx = list(range(1, self.get_tokenized_prompt(prompt_no_positives).index("<|endoftext|>"))) + [ self.get_tokenized_prompt(prompt).index("<|endoftext|>") ] # Setting up the attention processors self.init_attn_processors( mask_condition, token_idx, use_painta, use_rasg, painta_scale_factors=painta_scale_factors, rasg_scale_factor=rasg_scale_factor, self_attention_layer_name=self_attention_layer_name, cross_attention_layer_name=cross_attention_layer_name, list_of_painta_layer_names=list_of_painta_layer_names, list_of_rasg_layer_names=list_of_rasg_layer_names, ) # 8. Check that sizes of mask, masked image and latents match if num_channels_unet == 9: # default case for runwayml/stable-diffusion-inpainting num_channels_mask = mask.shape[1] num_channels_masked_image = masked_image_latents.shape[1] if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels: raise ValueError( f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects" f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +" f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}" f" = {num_channels_latents + num_channels_masked_image + num_channels_mask}. Please verify the config of" " `pipeline.unet` or your `mask_image` or `image` input." ) elif num_channels_unet != 4: raise ValueError( f"The unet {self.unet.__class__} should have either 4 or 9 input channels, not {self.unet.config.in_channels}." ) # 9. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) if use_rasg: extra_step_kwargs["generator"] = None # 9.1 Add image embeds for IP-Adapter added_cond_kwargs = {"image_embeds": image_embeds} if ip_adapter_image is not None else None # 9.2 Optionally get Guidance Scale Embedding timestep_cond = None if self.unet.config.time_cond_proj_dim is not None: guidance_scale_tensor = torch.tensor(self.guidance_scale - 1).repeat(batch_size * num_images_per_prompt) timestep_cond = self.get_guidance_scale_embedding( guidance_scale_tensor, embedding_dim=self.unet.config.time_cond_proj_dim ).to(device=device, dtype=latents.dtype) # 10. 
Denoising loop num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order self._num_timesteps = len(timesteps) painta_active = True with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): if self.interrupt: continue if t < 500 and painta_active: self.init_attn_processors( mask_condition, token_idx, False, use_rasg, painta_scale_factors=painta_scale_factors, rasg_scale_factor=rasg_scale_factor, self_attention_layer_name=self_attention_layer_name, cross_attention_layer_name=cross_attention_layer_name, list_of_painta_layer_names=list_of_painta_layer_names, list_of_rasg_layer_names=list_of_rasg_layer_names, ) painta_active = False with torch.enable_grad(): self.unet.zero_grad() latents = latents.detach() latents.requires_grad = True # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents # concat latents, mask, masked_image_latents in the channel dimension latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) if num_channels_unet == 9: latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1) self.scheduler.latents = latents self.encoder_hidden_states = prompt_embeds for attn_processor in self.unet.attn_processors.values(): attn_processor.encoder_hidden_states = prompt_embeds # predict the noise residual noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, timestep_cond=timestep_cond, cross_attention_kwargs=self.cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] # perform guidance if self.do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond) if use_rasg: # Perform RASG _, _, height, width = mask_condition.shape # 512 x 512 scale_factor = self.vae_scale_factor * rasg_scale_factor # 8 * 4 = 32 # TODO: Fix for > 1 batch_size rasg_mask = F.interpolate( mask_condition, (height // scale_factor, width // scale_factor), mode="bicubic" )[0, 0] # mode is nearest by default, B, H, W # Aggregate the saved attention maps attn_map = [] for processor in self.unet.attn_processors.values(): if hasattr(processor, "attention_scores") and processor.attention_scores is not None: if self.do_classifier_free_guidance: attn_map.append(processor.attention_scores.chunk(2)[1]) # (B/2) x H, 256, 77 else: attn_map.append(processor.attention_scores) # B x H, 256, 77 ? 
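                        # RASG: the prompt-token cross-attention maps gathered above are scored
                        # against the inpainting mask with a BCE objective; the gradient of that
                        # score w.r.t. the latents is normalized and injected into the scheduler
                        # step as variance noise, steering generation towards the masked region.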
attn_map = ( torch.cat(attn_map) .mean(0) .permute(1, 0) .reshape((-1, height // scale_factor, width // scale_factor)) ) # 77, 16, 16 # Compute the attention score attn_score = -sum( [ F.binary_cross_entropy_with_logits(x - 1.0, rasg_mask.to(device)) for x in attn_map[token_idx] ] ) # Backward the score and compute the gradients attn_score.backward() # Normalzie the gradients and compute the noise component variance_noise = latents.grad.detach() # print("VARIANCE SHAPE", variance_noise.shape) variance_noise -= torch.mean(variance_noise, [1, 2, 3], keepdim=True) variance_noise /= torch.std(variance_noise, [1, 2, 3], keepdim=True) else: variance_noise = None # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step( noise_pred, t, latents, **extra_step_kwargs, return_dict=False, variance_noise=variance_noise )[0] if num_channels_unet == 4: init_latents_proper = image_latents if self.do_classifier_free_guidance: init_mask, _ = mask.chunk(2) else: init_mask = mask if i < len(timesteps) - 1: noise_timestep = timesteps[i + 1] init_latents_proper = self.scheduler.add_noise( init_latents_proper, noise, torch.tensor([noise_timestep]) ) latents = (1 - init_mask) * init_latents_proper + init_mask * latents if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) latents = callback_outputs.pop("latents", latents) prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds) mask = callback_outputs.pop("mask", mask) masked_image_latents = callback_outputs.pop("masked_image_latents", masked_image_latents) # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if not output_type == "latent": condition_kwargs = {} if isinstance(self.vae, AsymmetricAutoencoderKL): init_image = init_image.to(device=device, dtype=masked_image_latents.dtype) init_image_condition = init_image.clone() init_image = self._encode_vae_image(init_image, generator=generator) mask_condition = mask_condition.to(device=device, dtype=masked_image_latents.dtype) condition_kwargs = {"image": init_image_condition, "mask": mask_condition} image = self.vae.decode( latents / self.vae.config.scaling_factor, return_dict=False, generator=generator, **condition_kwargs )[0] image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) else: image = latents has_nsfw_concept = None if has_nsfw_concept is None: do_denormalize = [True] * image.shape[0] else: do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) if padding_mask_crop is not None: image = [self.image_processor.apply_overlay(mask_image, original_image, i, crops_coords) for i in image] # Offload all models self.maybe_free_model_hooks() if not return_dict: return (image, has_nsfw_concept) return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) # ============= Utility Functions ============== # class GaussianSmoothing(nn.Module): """ Apply gaussian smoothing on a 1d, 2d or 3d tensor. 
Filtering is performed separately for each channel in the input using a depthwise convolution. Args: channels (`int` or `sequence`): Number of channels of the input tensors. The output will have this number of channels as well. kernel_size (`int` or `sequence`): Size of the Gaussian kernel. sigma (`float` or `sequence`): Standard deviation of the Gaussian kernel. dim (`int`, *optional*, defaults to `2`): The number of dimensions of the data. Default is 2 (spatial dimensions). """ def __init__(self, channels, kernel_size, sigma, dim=2): super(GaussianSmoothing, self).__init__() if isinstance(kernel_size, numbers.Number): kernel_size = [kernel_size] * dim if isinstance(sigma, numbers.Number): sigma = [sigma] * dim # The gaussian kernel is the product of the # gaussian function of each dimension. kernel = 1 meshgrids = torch.meshgrid([torch.arange(size, dtype=torch.float32) for size in kernel_size]) for size, std, mgrid in zip(kernel_size, sigma, meshgrids): mean = (size - 1) / 2 kernel *= 1 / (std * math.sqrt(2 * math.pi)) * torch.exp(-(((mgrid - mean) / (2 * std)) ** 2)) # Make sure sum of values in gaussian kernel equals 1. kernel = kernel / torch.sum(kernel) # Reshape to depthwise convolutional weight kernel = kernel.view(1, 1, *kernel.size()) kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1)) self.register_buffer("weight", kernel) self.groups = channels if dim == 1: self.conv = F.conv1d elif dim == 2: self.conv = F.conv2d elif dim == 3: self.conv = F.conv3d else: raise RuntimeError("Only 1, 2 and 3 dimensions are supported. Received {}.".format(dim)) def forward(self, input): """ Apply gaussian filter to input. Args: input (`torch.Tensor` of shape `(N, C, H, W)`): Input to apply Gaussian filter on. Returns: `torch.Tensor`: The filtered output tensor with the same shape as the input. """ return self.conv(input, weight=self.weight.to(input.dtype), groups=self.groups, padding="same") def get_attention_scores( self, query: torch.Tensor, key: torch.Tensor, attention_mask: torch.Tensor = None ) -> torch.Tensor: r""" Compute the attention scores. Args: query (`torch.Tensor`): The query tensor. key (`torch.Tensor`): The key tensor. attention_mask (`torch.Tensor`, *optional*): The attention mask to use. If `None`, no mask is applied. Returns: `torch.Tensor`: The attention probabilities/scores. """ if self.upcast_attention: query = query.float() key = key.float() if attention_mask is None: baddbmm_input = torch.empty( query.shape[0], query.shape[1], key.shape[1], dtype=query.dtype, device=query.device ) beta = 0 else: baddbmm_input = attention_mask beta = 1 attention_scores = torch.baddbmm( baddbmm_input, query, key.transpose(-1, -2), beta=beta, alpha=self.scale, ) del baddbmm_input if self.upcast_softmax: attention_scores = attention_scores.float() return attention_scores
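The processors and pipeline above ship as a community pipeline. Below is a minimal usage sketch, assuming the file is exposed under the `hd_painter` custom pipeline name and that an inpainting checkpoint such as `Lykon/dreamshaper-8-inpainting` is used; the image paths and seed are placeholders.

```python
# Minimal usage sketch (assumptions: custom pipeline name, checkpoint id, image paths).
import torch

from diffusers import DDIMScheduler, DiffusionPipeline
from diffusers.utils import load_image

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8-inpainting",  # assumed inpainting checkpoint
    custom_pipeline="hd_painter",      # assumed community pipeline name
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

init_image = load_image("path/to/image.png")  # hypothetical input image
mask_image = load_image("path/to/mask.png")   # hypothetical inpainting mask

image = pipe(
    prompt="wooden boat",
    image=init_image,
    mask_image=mask_image,
    use_rasg=True,
    use_painta=True,
    generator=torch.manual_seed(12345),
).images[0]
image.save("inpainted.png")
```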
diffusers/examples/community/hd_painter.py
# Copyright 2025 Bingxin Ke, ETH Zurich and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # -------------------------------------------------------------------------- # If you find this code useful, we kindly ask you to cite our paper in your work. # Please find bibtex at: https://github.com/prs-eth/Marigold#-citation # More information about the method can be found at https://marigoldmonodepth.github.io # -------------------------------------------------------------------------- import logging import math from typing import Dict, Union import matplotlib import numpy as np import torch from PIL import Image from PIL.Image import Resampling from scipy.optimize import minimize from torch.utils.data import DataLoader, TensorDataset from tqdm.auto import tqdm from transformers import CLIPTextModel, CLIPTokenizer from diffusers import ( AutoencoderKL, DDIMScheduler, DiffusionPipeline, LCMScheduler, UNet2DConditionModel, ) from diffusers.utils import BaseOutput, check_min_version # Will error if the minimal version of diffusers is not installed. Remove at your own risks. check_min_version("0.36.0.dev0") class MarigoldDepthOutput(BaseOutput): """ Output class for Marigold monocular depth prediction pipeline. Args: depth_np (`np.ndarray`): Predicted depth map, with depth values in the range of [0, 1]. depth_colored (`None` or `PIL.Image.Image`): Colorized depth map, with the shape of [3, H, W] and values in [0, 1]. uncertainty (`None` or `np.ndarray`): Uncalibrated uncertainty(MAD, median absolute deviation) coming from ensembling. """ depth_np: np.ndarray depth_colored: Union[None, Image.Image] uncertainty: Union[None, np.ndarray] def get_pil_resample_method(method_str: str) -> Resampling: resample_method_dic = { "bilinear": Resampling.BILINEAR, "bicubic": Resampling.BICUBIC, "nearest": Resampling.NEAREST, } resample_method = resample_method_dic.get(method_str, None) if resample_method is None: raise ValueError(f"Unknown resampling method: {resample_method}") else: return resample_method class MarigoldPipeline(DiffusionPipeline): """ Pipeline for monocular depth estimation using Marigold: https://marigoldmonodepth.github.io. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: unet (`UNet2DConditionModel`): Conditional U-Net to denoise the depth latent, conditioned on image latent. vae (`AutoencoderKL`): Variational Auto-Encoder (VAE) Model to encode and decode images and depth maps to and from latent representations. scheduler (`DDIMScheduler`): A scheduler to be used in combination with `unet` to denoise the encoded image latents. text_encoder (`CLIPTextModel`): Text-encoder, for empty text embedding. tokenizer (`CLIPTokenizer`): CLIP tokenizer. 
""" rgb_latent_scale_factor = 0.18215 depth_latent_scale_factor = 0.18215 def __init__( self, unet: UNet2DConditionModel, vae: AutoencoderKL, scheduler: DDIMScheduler, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, ): super().__init__() self.register_modules( unet=unet, vae=vae, scheduler=scheduler, text_encoder=text_encoder, tokenizer=tokenizer, ) self.empty_text_embed = None @torch.no_grad() def __call__( self, input_image: Image, denoising_steps: int = 10, ensemble_size: int = 10, processing_res: int = 768, match_input_res: bool = True, resample_method: str = "bilinear", batch_size: int = 0, seed: Union[int, None] = None, color_map: str = "Spectral", show_progress_bar: bool = True, ensemble_kwargs: Dict = None, ) -> MarigoldDepthOutput: """ Function invoked when calling the pipeline. Args: input_image (`Image`): Input RGB (or gray-scale) image. processing_res (`int`, *optional*, defaults to `768`): Maximum resolution of processing. If set to 0: will not resize at all. match_input_res (`bool`, *optional*, defaults to `True`): Resize depth prediction to match input resolution. Only valid if `processing_res` > 0. resample_method: (`str`, *optional*, defaults to `bilinear`): Resampling method used to resize images and depth predictions. This can be one of `bilinear`, `bicubic` or `nearest`, defaults to: `bilinear`. denoising_steps (`int`, *optional*, defaults to `10`): Number of diffusion denoising steps (DDIM) during inference. ensemble_size (`int`, *optional*, defaults to `10`): Number of predictions to be ensembled. batch_size (`int`, *optional*, defaults to `0`): Inference batch size, no bigger than `num_ensemble`. If set to 0, the script will automatically decide the proper batch size. seed (`int`, *optional*, defaults to `None`) Reproducibility seed. show_progress_bar (`bool`, *optional*, defaults to `True`): Display a progress bar of diffusion denoising. color_map (`str`, *optional*, defaults to `"Spectral"`, pass `None` to skip colorized depth map generation): Colormap used to colorize the depth map. ensemble_kwargs (`dict`, *optional*, defaults to `None`): Arguments for detailed ensembling settings. Returns: `MarigoldDepthOutput`: Output class for Marigold monocular depth prediction pipeline, including: - **depth_np** (`np.ndarray`) Predicted depth map, with depth values in the range of [0, 1] - **depth_colored** (`PIL.Image.Image`) Colorized depth map, with the shape of [3, H, W] and values in [0, 1], None if `color_map` is `None` - **uncertainty** (`None` or `np.ndarray`) Uncalibrated uncertainty(MAD, median absolute deviation) coming from ensembling. 
None if `ensemble_size = 1` """ device = self.device input_size = input_image.size if not match_input_res: assert processing_res is not None, "Value error: `resize_output_back` is only valid with " assert processing_res >= 0 assert ensemble_size >= 1 # Check if denoising step is reasonable self._check_inference_step(denoising_steps) resample_method: Resampling = get_pil_resample_method(resample_method) # ----------------- Image Preprocess ----------------- # Resize image if processing_res > 0: input_image = self.resize_max_res( input_image, max_edge_resolution=processing_res, resample_method=resample_method, ) # Convert the image to RGB, to 1.remove the alpha channel 2.convert B&W to 3-channel input_image = input_image.convert("RGB") image = np.asarray(input_image) # Normalize rgb values rgb = np.transpose(image, (2, 0, 1)) # [H, W, rgb] -> [rgb, H, W] rgb_norm = rgb / 255.0 * 2.0 - 1.0 # [0, 255] -> [-1, 1] rgb_norm = torch.from_numpy(rgb_norm).to(self.dtype) rgb_norm = rgb_norm.to(device) assert rgb_norm.min() >= -1.0 and rgb_norm.max() <= 1.0 # ----------------- Predicting depth ----------------- # Batch repeated input image duplicated_rgb = torch.stack([rgb_norm] * ensemble_size) single_rgb_dataset = TensorDataset(duplicated_rgb) if batch_size > 0: _bs = batch_size else: _bs = self._find_batch_size( ensemble_size=ensemble_size, input_res=max(rgb_norm.shape[1:]), dtype=self.dtype, ) single_rgb_loader = DataLoader(single_rgb_dataset, batch_size=_bs, shuffle=False) # Predict depth maps (batched) depth_pred_ls = [] if show_progress_bar: iterable = tqdm(single_rgb_loader, desc=" " * 2 + "Inference batches", leave=False) else: iterable = single_rgb_loader for batch in iterable: (batched_img,) = batch depth_pred_raw = self.single_infer( rgb_in=batched_img, num_inference_steps=denoising_steps, show_pbar=show_progress_bar, seed=seed, ) depth_pred_ls.append(depth_pred_raw.detach()) depth_preds = torch.concat(depth_pred_ls, dim=0).squeeze() torch.cuda.empty_cache() # clear vram cache for ensembling # ----------------- Test-time ensembling ----------------- if ensemble_size > 1: depth_pred, pred_uncert = self.ensemble_depths(depth_preds, **(ensemble_kwargs or {})) else: depth_pred = depth_preds pred_uncert = None # ----------------- Post processing ----------------- # Scale prediction to [0, 1] min_d = torch.min(depth_pred) max_d = torch.max(depth_pred) depth_pred = (depth_pred - min_d) / (max_d - min_d) # Convert to numpy depth_pred = depth_pred.cpu().numpy().astype(np.float32) # Resize back to original resolution if match_input_res: pred_img = Image.fromarray(depth_pred) pred_img = pred_img.resize(input_size, resample=resample_method) depth_pred = np.asarray(pred_img) # Clip output range depth_pred = depth_pred.clip(0, 1) # Colorize if color_map is not None: depth_colored = self.colorize_depth_maps( depth_pred, 0, 1, cmap=color_map ).squeeze() # [3, H, W], value in (0, 1) depth_colored = (depth_colored * 255).astype(np.uint8) depth_colored_hwc = self.chw2hwc(depth_colored) depth_colored_img = Image.fromarray(depth_colored_hwc) else: depth_colored_img = None return MarigoldDepthOutput( depth_np=depth_pred, depth_colored=depth_colored_img, uncertainty=pred_uncert, ) def _check_inference_step(self, n_step: int): """ Check if denoising step is reasonable Args: n_step (`int`): denoising steps """ assert n_step >= 1 if isinstance(self.scheduler, DDIMScheduler): if n_step < 10: logging.warning( f"Too few denoising steps: {n_step}. Recommended to use the LCM checkpoint for few-step inference." 
) elif isinstance(self.scheduler, LCMScheduler): if not 1 <= n_step <= 4: logging.warning(f"Non-optimal setting of denoising steps: {n_step}. Recommended setting is 1-4 steps.") else: raise RuntimeError(f"Unsupported scheduler type: {type(self.scheduler)}") def _encode_empty_text(self): """ Encode text embedding for empty prompt. """ prompt = "" text_inputs = self.tokenizer( prompt, padding="do_not_pad", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids.to(self.text_encoder.device) self.empty_text_embed = self.text_encoder(text_input_ids)[0].to(self.dtype) @torch.no_grad() def single_infer( self, rgb_in: torch.Tensor, num_inference_steps: int, seed: Union[int, None], show_pbar: bool, ) -> torch.Tensor: """ Perform an individual depth prediction without ensembling. Args: rgb_in (`torch.Tensor`): Input RGB image. num_inference_steps (`int`): Number of diffusion denoisign steps (DDIM) during inference. show_pbar (`bool`): Display a progress bar of diffusion denoising. Returns: `torch.Tensor`: Predicted depth map. """ device = rgb_in.device # Set timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # [T] # Encode image rgb_latent = self.encode_rgb(rgb_in) # Initial depth map (noise) if seed is None: rand_num_generator = None else: rand_num_generator = torch.Generator(device=device) rand_num_generator.manual_seed(seed) depth_latent = torch.randn( rgb_latent.shape, device=device, dtype=self.dtype, generator=rand_num_generator, ) # [B, 4, h, w] # Batched empty text embedding if self.empty_text_embed is None: self._encode_empty_text() batch_empty_text_embed = self.empty_text_embed.repeat((rgb_latent.shape[0], 1, 1)) # [B, 2, 1024] # Denoising loop if show_pbar: iterable = tqdm( enumerate(timesteps), total=len(timesteps), leave=False, desc=" " * 4 + "Diffusion denoising", ) else: iterable = enumerate(timesteps) for i, t in iterable: unet_input = torch.cat([rgb_latent, depth_latent], dim=1) # this order is important # predict the noise residual noise_pred = self.unet(unet_input, t, encoder_hidden_states=batch_empty_text_embed).sample # [B, 4, h, w] # compute the previous noisy sample x_t -> x_t-1 depth_latent = self.scheduler.step(noise_pred, t, depth_latent, generator=rand_num_generator).prev_sample depth = self.decode_depth(depth_latent) # clip prediction depth = torch.clip(depth, -1.0, 1.0) # shift to [0, 1] depth = (depth + 1.0) / 2.0 return depth def encode_rgb(self, rgb_in: torch.Tensor) -> torch.Tensor: """ Encode RGB image into latent. Args: rgb_in (`torch.Tensor`): Input RGB image to be encoded. Returns: `torch.Tensor`: Image latent. """ # encode h = self.vae.encoder(rgb_in) moments = self.vae.quant_conv(h) mean, logvar = torch.chunk(moments, 2, dim=1) # scale latent rgb_latent = mean * self.rgb_latent_scale_factor return rgb_latent def decode_depth(self, depth_latent: torch.Tensor) -> torch.Tensor: """ Decode depth latent into depth map. Args: depth_latent (`torch.Tensor`): Depth latent to be decoded. Returns: `torch.Tensor`: Decoded depth map. 
""" # scale latent depth_latent = depth_latent / self.depth_latent_scale_factor # decode z = self.vae.post_quant_conv(depth_latent) stacked = self.vae.decoder(z) # mean of output channels depth_mean = stacked.mean(dim=1, keepdim=True) return depth_mean @staticmethod def resize_max_res(img: Image.Image, max_edge_resolution: int, resample_method=Resampling.BILINEAR) -> Image.Image: """ Resize image to limit maximum edge length while keeping aspect ratio. Args: img (`Image.Image`): Image to be resized. max_edge_resolution (`int`): Maximum edge length (pixel). resample_method (`PIL.Image.Resampling`): Resampling method used to resize images. Returns: `Image.Image`: Resized image. """ original_width, original_height = img.size downscale_factor = min(max_edge_resolution / original_width, max_edge_resolution / original_height) new_width = int(original_width * downscale_factor) new_height = int(original_height * downscale_factor) resized_img = img.resize((new_width, new_height), resample=resample_method) return resized_img @staticmethod def colorize_depth_maps(depth_map, min_depth, max_depth, cmap="Spectral", valid_mask=None): """ Colorize depth maps. """ assert len(depth_map.shape) >= 2, "Invalid dimension" if isinstance(depth_map, torch.Tensor): depth = depth_map.detach().clone().squeeze().numpy() elif isinstance(depth_map, np.ndarray): depth = depth_map.copy().squeeze() # reshape to [ (B,) H, W ] if depth.ndim < 3: depth = depth[np.newaxis, :, :] # colorize cm = matplotlib.colormaps[cmap] depth = ((depth - min_depth) / (max_depth - min_depth)).clip(0, 1) img_colored_np = cm(depth, bytes=False)[:, :, :, 0:3] # value from 0 to 1 img_colored_np = np.rollaxis(img_colored_np, 3, 1) if valid_mask is not None: if isinstance(depth_map, torch.Tensor): valid_mask = valid_mask.detach().numpy() valid_mask = valid_mask.squeeze() # [H, W] or [B, H, W] if valid_mask.ndim < 3: valid_mask = valid_mask[np.newaxis, np.newaxis, :, :] else: valid_mask = valid_mask[:, np.newaxis, :, :] valid_mask = np.repeat(valid_mask, 3, axis=1) img_colored_np[~valid_mask] = 0 if isinstance(depth_map, torch.Tensor): img_colored = torch.from_numpy(img_colored_np).float() elif isinstance(depth_map, np.ndarray): img_colored = img_colored_np return img_colored @staticmethod def chw2hwc(chw): assert 3 == len(chw.shape) if isinstance(chw, torch.Tensor): hwc = torch.permute(chw, (1, 2, 0)) elif isinstance(chw, np.ndarray): hwc = np.moveaxis(chw, 0, -1) return hwc @staticmethod def _find_batch_size(ensemble_size: int, input_res: int, dtype: torch.dtype) -> int: """ Automatically search for suitable operating batch size. Args: ensemble_size (`int`): Number of predictions to be ensembled. input_res (`int`): Operating resolution of the input image. Returns: `int`: Operating batch size. """ # Search table for suggested max. 
inference batch size bs_search_table = [ # tested on A100-PCIE-80GB {"res": 768, "total_vram": 79, "bs": 35, "dtype": torch.float32}, {"res": 1024, "total_vram": 79, "bs": 20, "dtype": torch.float32}, # tested on A100-PCIE-40GB {"res": 768, "total_vram": 39, "bs": 15, "dtype": torch.float32}, {"res": 1024, "total_vram": 39, "bs": 8, "dtype": torch.float32}, {"res": 768, "total_vram": 39, "bs": 30, "dtype": torch.float16}, {"res": 1024, "total_vram": 39, "bs": 15, "dtype": torch.float16}, # tested on RTX3090, RTX4090 {"res": 512, "total_vram": 23, "bs": 20, "dtype": torch.float32}, {"res": 768, "total_vram": 23, "bs": 7, "dtype": torch.float32}, {"res": 1024, "total_vram": 23, "bs": 3, "dtype": torch.float32}, {"res": 512, "total_vram": 23, "bs": 40, "dtype": torch.float16}, {"res": 768, "total_vram": 23, "bs": 18, "dtype": torch.float16}, {"res": 1024, "total_vram": 23, "bs": 10, "dtype": torch.float16}, # tested on GTX1080Ti {"res": 512, "total_vram": 10, "bs": 5, "dtype": torch.float32}, {"res": 768, "total_vram": 10, "bs": 2, "dtype": torch.float32}, {"res": 512, "total_vram": 10, "bs": 10, "dtype": torch.float16}, {"res": 768, "total_vram": 10, "bs": 5, "dtype": torch.float16}, {"res": 1024, "total_vram": 10, "bs": 3, "dtype": torch.float16}, ] if not torch.cuda.is_available(): return 1 total_vram = torch.cuda.mem_get_info()[1] / 1024.0**3 filtered_bs_search_table = [s for s in bs_search_table if s["dtype"] == dtype] for settings in sorted( filtered_bs_search_table, key=lambda k: (k["res"], -k["total_vram"]), ): if input_res <= settings["res"] and total_vram >= settings["total_vram"]: bs = settings["bs"] if bs > ensemble_size: bs = ensemble_size elif bs > math.ceil(ensemble_size / 2) and bs < ensemble_size: bs = math.ceil(ensemble_size / 2) return bs return 1 @staticmethod def ensemble_depths( input_images: torch.Tensor, regularizer_strength: float = 0.02, max_iter: int = 2, tol: float = 1e-3, reduction: str = "median", max_res: int = None, ): """ To ensemble multiple affine-invariant depth images (up to scale and shift), by aligning estimating the scale and shift """ def inter_distances(tensors: torch.Tensor): """ To calculate the distance between each two depth maps. 
""" distances = [] for i, j in torch.combinations(torch.arange(tensors.shape[0])): arr1 = tensors[i : i + 1] arr2 = tensors[j : j + 1] distances.append(arr1 - arr2) dist = torch.concatenate(distances, dim=0) return dist device = input_images.device dtype = input_images.dtype np_dtype = np.float32 original_input = input_images.clone() n_img = input_images.shape[0] ori_shape = input_images.shape if max_res is not None: scale_factor = torch.min(max_res / torch.tensor(ori_shape[-2:])) if scale_factor < 1: downscaler = torch.nn.Upsample(scale_factor=scale_factor, mode="nearest") input_images = downscaler(torch.from_numpy(input_images)).numpy() # init guess _min = np.min(input_images.reshape((n_img, -1)).cpu().numpy(), axis=1) _max = np.max(input_images.reshape((n_img, -1)).cpu().numpy(), axis=1) s_init = 1.0 / (_max - _min).reshape((-1, 1, 1)) t_init = (-1 * s_init.flatten() * _min.flatten()).reshape((-1, 1, 1)) x = np.concatenate([s_init, t_init]).reshape(-1).astype(np_dtype) input_images = input_images.to(device) # objective function def closure(x): l = len(x) s = x[: int(l / 2)] t = x[int(l / 2) :] s = torch.from_numpy(s).to(dtype=dtype).to(device) t = torch.from_numpy(t).to(dtype=dtype).to(device) transformed_arrays = input_images * s.view((-1, 1, 1)) + t.view((-1, 1, 1)) dists = inter_distances(transformed_arrays) sqrt_dist = torch.sqrt(torch.mean(dists**2)) if "mean" == reduction: pred = torch.mean(transformed_arrays, dim=0) elif "median" == reduction: pred = torch.median(transformed_arrays, dim=0).values else: raise ValueError near_err = torch.sqrt((0 - torch.min(pred)) ** 2) far_err = torch.sqrt((1 - torch.max(pred)) ** 2) err = sqrt_dist + (near_err + far_err) * regularizer_strength err = err.detach().cpu().numpy().astype(np_dtype) return err res = minimize( closure, x, method="BFGS", tol=tol, options={"maxiter": max_iter, "disp": False}, ) x = res.x l = len(x) s = x[: int(l / 2)] t = x[int(l / 2) :] # Prediction s = torch.from_numpy(s).to(dtype=dtype).to(device) t = torch.from_numpy(t).to(dtype=dtype).to(device) transformed_arrays = original_input * s.view(-1, 1, 1) + t.view(-1, 1, 1) if "mean" == reduction: aligned_images = torch.mean(transformed_arrays, dim=0) std = torch.std(transformed_arrays, dim=0) uncertainty = std elif "median" == reduction: aligned_images = torch.median(transformed_arrays, dim=0).values # MAD (median absolute deviation) as uncertainty indicator abs_dev = torch.abs(transformed_arrays - aligned_images) mad = torch.median(abs_dev, dim=0).values uncertainty = mad else: raise ValueError(f"Unknown reduction method: {reduction}") # Scale and shift to [0, 1] _min = torch.min(aligned_images) _max = torch.max(aligned_images) aligned_images = (aligned_images - _min) / (_max - _min) uncertainty /= _max - _min return aligned_images, uncertainty
diffusers/examples/community/marigold_depth_estimation.py
import inspect import os import random import warnings from typing import Any, Callable, Dict, List, Optional, Tuple, Union import matplotlib.pyplot as plt import torch import torch.nn.functional as F from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer from diffusers.image_processor import VaeImageProcessor from diffusers.loaders import ( FromSingleFileMixin, StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin, ) from diffusers.models import AutoencoderKL, UNet2DConditionModel from diffusers.models.attention_processor import AttnProcessor2_0, XFormersAttnProcessor from diffusers.models.lora import adjust_lora_scale_text_encoder from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin from diffusers.schedulers import KarrasDiffusionSchedulers from diffusers.utils import ( is_accelerate_available, is_accelerate_version, is_invisible_watermark_available, logging, replace_example_docstring, ) from diffusers.utils.torch_utils import randn_tensor if is_invisible_watermark_available(): from diffusers.pipelines.stable_diffusion_xl.watermark import ( StableDiffusionXLWatermarker, ) logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers import StableDiffusionXLPipeline >>> pipe = StableDiffusionXLPipeline.from_pretrained( ... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 ... ) >>> pipe = pipe.to("cuda") >>> prompt = "a photo of an astronaut riding a horse on mars" >>> image = pipe(prompt).images[0] ``` """ def gaussian_kernel(kernel_size=3, sigma=1.0, channels=3): x_coord = torch.arange(kernel_size) gaussian_1d = torch.exp(-((x_coord - (kernel_size - 1) / 2) ** 2) / (2 * sigma**2)) gaussian_1d = gaussian_1d / gaussian_1d.sum() gaussian_2d = gaussian_1d[:, None] * gaussian_1d[None, :] kernel = gaussian_2d[None, None, :, :].repeat(channels, 1, 1, 1) return kernel def gaussian_filter(latents, kernel_size=3, sigma=1.0): channels = latents.shape[1] kernel = gaussian_kernel(kernel_size, sigma, channels).to(latents.device, latents.dtype) blurred_latents = F.conv2d(latents, kernel, padding=kernel_size // 2, groups=channels) return blurred_latents # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0): """ Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891). See Section 3.4 """ std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True) std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True) # rescale the results from guidance (fixes overexposure) noise_pred_rescaled = noise_cfg * (std_text / std_cfg) # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg return noise_cfg class DemoFusionSDXLPipeline( DiffusionPipeline, StableDiffusionMixin, FromSingleFileMixin, StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin, ): r""" Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
In addition the pipeline inherits the following loading methods: - *LoRA*: [`StableDiffusionXLPipeline.load_lora_weights`] - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`] as well as the following saving methods: - *LoRA*: [`loaders.StableDiffusionXLPipeline.save_lora_weights`] Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion XL uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. text_encoder_2 ([` CLIPTextModelWithProjection`]): Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection), specifically the [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer_2 (`CLIPTokenizer`): Second Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. force_zeros_for_empty_prompt (`bool`, *optional*, defaults to `"True"`): Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of `stabilityai/stable-diffusion-xl-base-1-0`. add_watermarker (`bool`, *optional*): Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to watermark output images. If not defined, it will default to True if the package is installed, otherwise no watermarker will be used. 
""" model_cpu_offload_seq = "text_encoder->text_encoder_2->unet->vae" def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, text_encoder_2: CLIPTextModelWithProjection, tokenizer: CLIPTokenizer, tokenizer_2: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, force_zeros_for_empty_prompt: bool = True, add_watermarker: Optional[bool] = None, ): super().__init__() self.register_modules( vae=vae, text_encoder=text_encoder, text_encoder_2=text_encoder_2, tokenizer=tokenizer, tokenizer_2=tokenizer_2, unet=unet, scheduler=scheduler, ) self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) self.default_sample_size = ( self.unet.config.sample_size if hasattr(self, "unet") and self.unet is not None and hasattr(self.unet.config, "sample_size") else 128 ) add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available() if add_watermarker: self.watermark = StableDiffusionXLWatermarker() else: self.watermark = None def encode_prompt( self, prompt: str, prompt_2: Optional[str] = None, device: Optional[torch.device] = None, num_images_per_prompt: int = 1, do_classifier_free_guidance: bool = True, negative_prompt: Optional[str] = None, negative_prompt_2: Optional[str] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, pooled_prompt_embeds: Optional[torch.Tensor] = None, negative_pooled_prompt_embeds: Optional[torch.Tensor] = None, lora_scale: Optional[float] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in both text-encoders device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). negative_prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. pooled_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled text embeddings will be generated from `prompt` input argument. 
negative_pooled_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. """ device = device or self._execution_device # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, StableDiffusionLoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale) if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] # Define tokenizers and text encoders tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2] text_encoders = ( [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2] ) if prompt_embeds is None: prompt_2 = prompt_2 or prompt # textual inversion: process multi-vector tokens if necessary prompt_embeds_list = [] prompts = [prompt, prompt_2] for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders): if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, tokenizer) text_inputs = tokenizer( prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1]) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {tokenizer.model_max_length} tokens: {removed_text}" ) prompt_embeds = text_encoder( text_input_ids.to(device), output_hidden_states=True, ) # We are only ALWAYS interested in the pooled output of the final text encoder if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2: pooled_prompt_embeds = prompt_embeds[0] prompt_embeds = prompt_embeds.hidden_states[-2] prompt_embeds_list.append(prompt_embeds) prompt_embeds = torch.concat(prompt_embeds_list, dim=-1) # get unconditional embeddings for classifier free guidance zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt: negative_prompt_embeds = torch.zeros_like(prompt_embeds) negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds) elif do_classifier_free_guidance and negative_prompt_embeds is None: negative_prompt = negative_prompt or "" negative_prompt_2 = negative_prompt_2 or negative_prompt uncond_tokens: List[str] if prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." 
) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt, negative_prompt_2] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = [negative_prompt, negative_prompt_2] negative_prompt_embeds_list = [] for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders): if isinstance(self, TextualInversionLoaderMixin): negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer) max_length = prompt_embeds.shape[1] uncond_input = tokenizer( negative_prompt, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) negative_prompt_embeds = text_encoder( uncond_input.input_ids.to(device), output_hidden_states=True, ) # We are only ALWAYS interested in the pooled output of the final text encoder if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2: negative_pooled_prompt_embeds = negative_prompt_embeds[0] negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2] negative_prompt_embeds_list.append(negative_prompt_embeds) negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1) prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( bs_embed * num_images_per_prompt, -1 ) if do_classifier_free_guidance: negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view( bs_embed * num_images_per_prompt, -1 ) return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
# eta corresponds to η in DDIM paper: https://huggingface.co/papers/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, prompt, prompt_2, height, width, callback_steps, negative_prompt=None, negative_prompt_2=None, prompt_embeds=None, negative_prompt_embeds=None, pooled_prompt_embeds=None, negative_pooled_prompt_embeds=None, num_images_per_prompt=None, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if (callback_steps is None) or ( callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) ): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt_2 is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)): raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) elif negative_prompt_2 is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) if prompt_embeds is not None and pooled_prompt_embeds is None: raise ValueError( "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`." ) if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None: raise ValueError( "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. 
Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`." ) # DemoFusion specific checks if max(height, width) % 1024 != 0: raise ValueError( f"the larger one of `height` and `width` has to be divisible by 1024 but are {height} and {width}." ) if num_images_per_prompt != 1: warnings.warn("num_images_per_prompt != 1 is not supported by DemoFusion and will be ignored.") num_images_per_prompt = 1 # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." ) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents def _get_add_time_ids(self, original_size, crops_coords_top_left, target_size, dtype): add_time_ids = list(original_size + crops_coords_top_left + target_size) passed_add_embed_dim = ( self.unet.config.addition_time_embed_dim * len(add_time_ids) + self.text_encoder_2.config.projection_dim ) expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features if expected_add_embed_dim != passed_add_embed_dim: raise ValueError( f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`." 
) add_time_ids = torch.tensor([add_time_ids], dtype=dtype) return add_time_ids def get_views(self, height, width, window_size=128, stride=64, random_jitter=False): height //= self.vae_scale_factor width //= self.vae_scale_factor num_blocks_height = int((height - window_size) / stride - 1e-6) + 2 if height > window_size else 1 num_blocks_width = int((width - window_size) / stride - 1e-6) + 2 if width > window_size else 1 total_num_blocks = int(num_blocks_height * num_blocks_width) views = [] for i in range(total_num_blocks): h_start = int((i // num_blocks_width) * stride) h_end = h_start + window_size w_start = int((i % num_blocks_width) * stride) w_end = w_start + window_size if h_end > height: h_start = int(h_start + height - h_end) h_end = int(height) if w_end > width: w_start = int(w_start + width - w_end) w_end = int(width) if h_start < 0: h_end = int(h_end - h_start) h_start = 0 if w_start < 0: w_end = int(w_end - w_start) w_start = 0 if random_jitter: jitter_range = (window_size - stride) // 4 w_jitter = 0 h_jitter = 0 if (w_start != 0) and (w_end != width): w_jitter = random.randint(-jitter_range, jitter_range) elif (w_start == 0) and (w_end != width): w_jitter = random.randint(-jitter_range, 0) elif (w_start != 0) and (w_end == width): w_jitter = random.randint(0, jitter_range) if (h_start != 0) and (h_end != height): h_jitter = random.randint(-jitter_range, jitter_range) elif (h_start == 0) and (h_end != height): h_jitter = random.randint(-jitter_range, 0) elif (h_start != 0) and (h_end == height): h_jitter = random.randint(0, jitter_range) h_start += h_jitter + jitter_range h_end += h_jitter + jitter_range w_start += w_jitter + jitter_range w_end += w_jitter + jitter_range views.append((h_start, h_end, w_start, w_end)) return views def tiled_decode(self, latents, current_height, current_width): core_size = self.unet.config.sample_size // 4 core_stride = core_size pad_size = self.unet.config.sample_size // 4 * 3 decoder_view_batch_size = 1 views = self.get_views(current_height, current_width, stride=core_stride, window_size=core_size) views_batch = [views[i : i + decoder_view_batch_size] for i in range(0, len(views), decoder_view_batch_size)] latents_ = F.pad(latents, (pad_size, pad_size, pad_size, pad_size), "constant", 0) image = torch.zeros(latents.size(0), 3, current_height, current_width).to(latents.device) count = torch.zeros_like(image).to(latents.device) # get the latents corresponding to the current view coordinates with self.progress_bar(total=len(views_batch)) as progress_bar: for j, batch_view in enumerate(views_batch): len(batch_view) latents_for_view = torch.cat( [ latents_[:, :, h_start : h_end + pad_size * 2, w_start : w_end + pad_size * 2] for h_start, h_end, w_start, w_end in batch_view ] ) image_patch = self.vae.decode(latents_for_view / self.vae.config.scaling_factor, return_dict=False)[0] h_start, h_end, w_start, w_end = views[j] h_start, h_end, w_start, w_end = ( h_start * self.vae_scale_factor, h_end * self.vae_scale_factor, w_start * self.vae_scale_factor, w_end * self.vae_scale_factor, ) p_h_start, p_h_end, p_w_start, p_w_end = ( pad_size * self.vae_scale_factor, image_patch.size(2) - pad_size * self.vae_scale_factor, pad_size * self.vae_scale_factor, image_patch.size(3) - pad_size * self.vae_scale_factor, ) image[:, :, h_start:h_end, w_start:w_end] += image_patch[:, :, p_h_start:p_h_end, p_w_start:p_w_end] count[:, :, h_start:h_end, w_start:w_end] += 1 progress_bar.update() image = image / count return image # Copied from 
diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae def upcast_vae(self): dtype = self.vae.dtype self.vae.to(dtype=torch.float32) use_torch_2_0_or_xformers = isinstance( self.vae.decoder.mid_block.attentions[0].processor, (AttnProcessor2_0, XFormersAttnProcessor), ) # if xformers or torch_2_0 is used attention block does not need # to be in float32 which can save lots of memory if use_torch_2_0_or_xformers: self.vae.post_quant_conv.to(dtype) self.vae.decoder.conv_in.to(dtype) self.vae.decoder.mid_block.to(dtype) @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]] = None, prompt_2: Optional[Union[str, List[str]]] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, denoising_end: Optional[float] = None, guidance_scale: float = 5.0, negative_prompt: Optional[Union[str, List[str]]] = None, negative_prompt_2: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, pooled_prompt_embeds: Optional[torch.Tensor] = None, negative_pooled_prompt_embeds: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = False, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, guidance_rescale: float = 0.0, original_size: Optional[Tuple[int, int]] = None, crops_coords_top_left: Tuple[int, int] = (0, 0), target_size: Optional[Tuple[int, int]] = None, negative_original_size: Optional[Tuple[int, int]] = None, negative_crops_coords_top_left: Tuple[int, int] = (0, 0), negative_target_size: Optional[Tuple[int, int]] = None, ################### DemoFusion specific parameters #################### view_batch_size: int = 16, multi_decoder: bool = True, stride: Optional[int] = 64, cosine_scale_1: Optional[float] = 3.0, cosine_scale_2: Optional[float] = 1.0, cosine_scale_3: Optional[float] = 1.0, sigma: Optional[float] = 0.8, show_image: bool = False, ): r""" Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in both text-encoders height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The height in pixels of the generated image. This is set to 1024 by default for the best results. Anything below 512 pixels won't work well for [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and checkpoints that are not specifically fine-tuned on low resolutions. width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The width in pixels of the generated image. This is set to 1024 by default for the best results. Anything below 512 pixels won't work well for [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and checkpoints that are not specifically fine-tuned on low resolutions. 
num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. denoising_end (`float`, *optional*): When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be completed before it is intentionally prematurely terminated. As a result, the returned sample will still retain a substantial amount of noise as determined by the discrete timesteps selected by the scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output) guidance_scale (`float`, *optional*, defaults to 5.0): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). negative_prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only applies to [`schedulers.DDIMScheduler`], will be ignored for others. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. pooled_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled text embeddings will be generated from `prompt` input argument. negative_pooled_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. 
If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` input argument. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function will be called. If not specified, the callback will be called at every step. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). guidance_rescale (`float`, *optional*, defaults to 0.7): Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) `guidance_scale` is defined as `φ` in equation 16. of [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)): If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled. `original_size` defaults to `(width, height)` if not specified. Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)): `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)): For most cases, `target_size` should be set to the desired height and width of the generated image. If not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). negative_original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)): To negatively condition the generation process based on a specific image resolution. Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)): To negatively condition the generation process based on a specific crop coordinates. 
Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)): To negatively condition the generation process based on a target image resolution. It should be as same as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. ################### DemoFusion specific parameters #################### view_batch_size (`int`, defaults to 16): The batch size for multiple denoising paths. Typically, a larger batch size can result in higher efficiency but comes with increased GPU memory requirements. multi_decoder (`bool`, defaults to True): Determine whether to use a tiled decoder. Generally, when the resolution exceeds 3072x3072, a tiled decoder becomes necessary. stride (`int`, defaults to 64): The stride of moving local patches. A smaller stride is better for alleviating seam issues, but it also introduces additional computational overhead and inference time. cosine_scale_1 (`float`, defaults to 3): Control the strength of skip-residual. For specific impacts, please refer to Appendix C in the DemoFusion paper. cosine_scale_2 (`float`, defaults to 1): Control the strength of dilated sampling. For specific impacts, please refer to Appendix C in the DemoFusion paper. cosine_scale_3 (`float`, defaults to 1): Control the strength of the gaussian filter. For specific impacts, please refer to Appendix C in the DemoFusion paper. sigma (`float`, defaults to 1): The standard value of the gaussian filter. show_image (`bool`, defaults to False): Determine whether to show intermediate results during generation. Examples: Returns: a `list` with the generated images at each phase. """ # 0. Default height and width to unet height = height or self.default_sample_size * self.vae_scale_factor width = width or self.default_sample_size * self.vae_scale_factor x1_size = self.default_sample_size * self.vae_scale_factor height_scale = height / x1_size width_scale = width / x1_size scale_num = int(max(height_scale, width_scale)) aspect_ratio = min(height_scale, width_scale) / max(height_scale, width_scale) original_size = original_size or (height, width) target_size = target_size or (height, width) # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, prompt_2, height, width, callback_steps, negative_prompt, negative_prompt_2, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, num_images_per_prompt, ) # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://huggingface.co/papers/2205.11487 . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. 
Encode input prompt text_encoder_lora_scale = ( cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None ) ( prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, ) = self.encode_prompt( prompt=prompt, prompt_2=prompt_2, device=device, num_images_per_prompt=num_images_per_prompt, do_classifier_free_guidance=do_classifier_free_guidance, negative_prompt=negative_prompt, negative_prompt_2=negative_prompt_2, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, pooled_prompt_embeds=pooled_prompt_embeds, negative_pooled_prompt_embeds=negative_pooled_prompt_embeds, lora_scale=text_encoder_lora_scale, ) # 4. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # 5. Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height // scale_num, width // scale_num, prompt_embeds.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 7. Prepare added time ids & embeddings add_text_embeds = pooled_prompt_embeds add_time_ids = self._get_add_time_ids( original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype ) if negative_original_size is not None and negative_target_size is not None: negative_add_time_ids = self._get_add_time_ids( negative_original_size, negative_crops_coords_top_left, negative_target_size, dtype=prompt_embeds.dtype, ) else: negative_add_time_ids = add_time_ids if do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0) add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0) add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0) prompt_embeds = prompt_embeds.to(device) add_text_embeds = add_text_embeds.to(device) add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1) # 8. 
Denoising loop num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0) # 7.1 Apply denoising_end if denoising_end is not None and isinstance(denoising_end, float) and denoising_end > 0 and denoising_end < 1: discrete_timestep_cutoff = int( round( self.scheduler.config.num_train_timesteps - (denoising_end * self.scheduler.config.num_train_timesteps) ) ) num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps))) timesteps = timesteps[:num_inference_steps] output_images = [] ############################################################### Phase 1 ################################################################# print("### Phase 1 Denoising ###") with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): latents_for_view = latents # expand the latents if we are doing classifier free guidance latent_model_input = latents.repeat_interleave(2, dim=0) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids} noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, cross_attention_kwargs=cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2] noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) if do_classifier_free_guidance and guidance_rescale > 0.0: # Based on 3.4. in https://huggingface.co/papers/2305.08891 noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0] # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) anchor_mean = latents.mean() anchor_std = latents.std() if not output_type == "latent": # make sure the VAE is in float32 mode, as it overflows in float16 needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast if needs_upcasting: self.upcast_vae() latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) print("### Phase 1 Decoding ###") image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] # cast back to fp16 if needed if needs_upcasting: self.vae.to(dtype=torch.float16) image = self.image_processor.postprocess(image, output_type=output_type) if show_image: plt.figure(figsize=(10, 10)) plt.imshow(image[0]) plt.axis("off") # Turn off axis numbers and ticks plt.show() output_images.append(image[0]) ####################################################### Phase 2+ ##################################################### for current_scale_num in range(2, scale_num + 1): print("### Phase {} Denoising ###".format(current_scale_num)) current_height = self.unet.config.sample_size * self.vae_scale_factor * current_scale_num current_width = self.unet.config.sample_size * self.vae_scale_factor * current_scale_num if height > width: current_width = int(current_width * aspect_ratio) else: current_height = int(current_height 
* aspect_ratio) latents = F.interpolate( latents, size=(int(current_height / self.vae_scale_factor), int(current_width / self.vae_scale_factor)), mode="bicubic", ) noise_latents = [] noise = torch.randn_like(latents) for timestep in timesteps: noise_latent = self.scheduler.add_noise(latents, noise, timestep.unsqueeze(0)) noise_latents.append(noise_latent) latents = noise_latents[0] with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): count = torch.zeros_like(latents) value = torch.zeros_like(latents) cosine_factor = ( 0.5 * ( 1 + torch.cos( torch.pi * (self.scheduler.config.num_train_timesteps - t) / self.scheduler.config.num_train_timesteps ) ).cpu() ) c1 = cosine_factor**cosine_scale_1 latents = latents * (1 - c1) + noise_latents[i] * c1 ############################################# MultiDiffusion ############################################# views = self.get_views( current_height, current_width, stride=stride, window_size=self.unet.config.sample_size, random_jitter=True, ) views_batch = [views[i : i + view_batch_size] for i in range(0, len(views), view_batch_size)] jitter_range = (self.unet.config.sample_size - stride) // 4 latents_ = F.pad(latents, (jitter_range, jitter_range, jitter_range, jitter_range), "constant", 0) count_local = torch.zeros_like(latents_) value_local = torch.zeros_like(latents_) for j, batch_view in enumerate(views_batch): vb_size = len(batch_view) # get the latents corresponding to the current view coordinates latents_for_view = torch.cat( [ latents_[:, :, h_start:h_end, w_start:w_end] for h_start, h_end, w_start, w_end in batch_view ] ) # expand the latents if we are doing classifier free guidance latent_model_input = latents_for_view latent_model_input = ( latent_model_input.repeat_interleave(2, dim=0) if do_classifier_free_guidance else latent_model_input ) latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) prompt_embeds_input = torch.cat([prompt_embeds] * vb_size) add_text_embeds_input = torch.cat([add_text_embeds] * vb_size) add_time_ids_input = [] for h_start, h_end, w_start, w_end in batch_view: add_time_ids_ = add_time_ids.clone() add_time_ids_[:, 2] = h_start * self.vae_scale_factor add_time_ids_[:, 3] = w_start * self.vae_scale_factor add_time_ids_input.append(add_time_ids_) add_time_ids_input = torch.cat(add_time_ids_input) # predict the noise residual added_cond_kwargs = {"text_embeds": add_text_embeds_input, "time_ids": add_time_ids_input} noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds_input, cross_attention_kwargs=cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2] noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) if do_classifier_free_guidance and guidance_rescale > 0.0: # Based on 3.4. 
in https://huggingface.co/papers/2305.08891 noise_pred = rescale_noise_cfg( noise_pred, noise_pred_text, guidance_rescale=guidance_rescale ) # compute the previous noisy sample x_t -> x_t-1 self.scheduler._init_step_index(t) latents_denoised_batch = self.scheduler.step( noise_pred, t, latents_for_view, **extra_step_kwargs, return_dict=False )[0] # extract value from batch for latents_view_denoised, (h_start, h_end, w_start, w_end) in zip( latents_denoised_batch.chunk(vb_size), batch_view ): value_local[:, :, h_start:h_end, w_start:w_end] += latents_view_denoised count_local[:, :, h_start:h_end, w_start:w_end] += 1 value_local = value_local[ :, :, jitter_range : jitter_range + current_height // self.vae_scale_factor, jitter_range : jitter_range + current_width // self.vae_scale_factor, ] count_local = count_local[ :, :, jitter_range : jitter_range + current_height // self.vae_scale_factor, jitter_range : jitter_range + current_width // self.vae_scale_factor, ] c2 = cosine_factor**cosine_scale_2 value += value_local / count_local * (1 - c2) count += torch.ones_like(value_local) * (1 - c2) ############################################# Dilated Sampling ############################################# views = [[h, w] for h in range(current_scale_num) for w in range(current_scale_num)] views_batch = [views[i : i + view_batch_size] for i in range(0, len(views), view_batch_size)] h_pad = (current_scale_num - (latents.size(2) % current_scale_num)) % current_scale_num w_pad = (current_scale_num - (latents.size(3) % current_scale_num)) % current_scale_num latents_ = F.pad(latents, (w_pad, 0, h_pad, 0), "constant", 0) count_global = torch.zeros_like(latents_) value_global = torch.zeros_like(latents_) c3 = 0.99 * cosine_factor**cosine_scale_3 + 1e-2 std_, mean_ = latents_.std(), latents_.mean() latents_gaussian = gaussian_filter( latents_, kernel_size=(2 * current_scale_num - 1), sigma=sigma * c3 ) latents_gaussian = ( latents_gaussian - latents_gaussian.mean() ) / latents_gaussian.std() * std_ + mean_ for j, batch_view in enumerate(views_batch): latents_for_view = torch.cat( [latents_[:, :, h::current_scale_num, w::current_scale_num] for h, w in batch_view] ) latents_for_view_gaussian = torch.cat( [latents_gaussian[:, :, h::current_scale_num, w::current_scale_num] for h, w in batch_view] ) vb_size = latents_for_view.size(0) # expand the latents if we are doing classifier free guidance latent_model_input = latents_for_view_gaussian latent_model_input = ( latent_model_input.repeat_interleave(2, dim=0) if do_classifier_free_guidance else latent_model_input ) latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) prompt_embeds_input = torch.cat([prompt_embeds] * vb_size) add_text_embeds_input = torch.cat([add_text_embeds] * vb_size) add_time_ids_input = torch.cat([add_time_ids] * vb_size) # predict the noise residual added_cond_kwargs = {"text_embeds": add_text_embeds_input, "time_ids": add_time_ids_input} noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds_input, cross_attention_kwargs=cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2] noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) if do_classifier_free_guidance and guidance_rescale > 0.0: # Based on 3.4. 
in https://huggingface.co/papers/2305.08891 noise_pred = rescale_noise_cfg( noise_pred, noise_pred_text, guidance_rescale=guidance_rescale ) # compute the previous noisy sample x_t -> x_t-1 self.scheduler._init_step_index(t) latents_denoised_batch = self.scheduler.step( noise_pred, t, latents_for_view, **extra_step_kwargs, return_dict=False )[0] # extract value from batch for latents_view_denoised, (h, w) in zip(latents_denoised_batch.chunk(vb_size), batch_view): value_global[:, :, h::current_scale_num, w::current_scale_num] += latents_view_denoised count_global[:, :, h::current_scale_num, w::current_scale_num] += 1 c2 = cosine_factor**cosine_scale_2 value_global = value_global[:, :, h_pad:, w_pad:] value += value_global * c2 count += torch.ones_like(value_global) * c2 ########################################################### latents = torch.where(count > 0, value / count, value) # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) ######################################################################################################################################### latents = (latents - latents.mean()) / latents.std() * anchor_std + anchor_mean if not output_type == "latent": # make sure the VAE is in float32 mode, as it overflows in float16 needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast if needs_upcasting: self.upcast_vae() latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype) print("### Phase {} Decoding ###".format(current_scale_num)) if multi_decoder: image = self.tiled_decode(latents, current_height, current_width) else: image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] # cast back to fp16 if needed if needs_upcasting: self.vae.to(dtype=torch.float16) else: image = latents if not output_type == "latent": image = self.image_processor.postprocess(image, output_type=output_type) if show_image: plt.figure(figsize=(10, 10)) plt.imshow(image[0]) plt.axis("off") # Turn off axis numbers and ticks plt.show() output_images.append(image[0]) # Offload all models self.maybe_free_model_hooks() return output_images # Override to properly handle the loading and unloading of the additional text encoder. def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs): # We could have accessed the unet config from `lora_state_dict()` too. We pass # it here explicitly to be able to tell that it's coming from an SDXL # pipeline. # Remove any existing hooks. if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): from accelerate.hooks import AlignDevicesHook, CpuOffload, remove_hook_from_module else: raise ImportError("Offloading requires `accelerate v0.17.0` or higher.") is_model_cpu_offload = False is_sequential_cpu_offload = False recursive = False for _, component in self.components.items(): if isinstance(component, torch.nn.Module): if hasattr(component, "_hf_hook"): is_model_cpu_offload = isinstance(getattr(component, "_hf_hook"), CpuOffload) is_sequential_cpu_offload = ( isinstance(getattr(component, "_hf_hook"), AlignDevicesHook) or hasattr(component._hf_hook, "hooks") and isinstance(component._hf_hook.hooks[0], AlignDevicesHook) ) logger.info( "Accelerate hooks detected. 
Since you have called `load_lora_weights()`, the previous hooks will be first removed. Then the LoRA parameters will be loaded and the hooks will be applied again." ) recursive = is_sequential_cpu_offload remove_hook_from_module(component, recurse=recursive) state_dict, network_alphas = self.lora_state_dict( pretrained_model_name_or_path_or_dict, unet_config=self.unet.config, **kwargs, ) self.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=self.unet) text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k} if len(text_encoder_state_dict) > 0: self.load_lora_into_text_encoder( text_encoder_state_dict, network_alphas=network_alphas, text_encoder=self.text_encoder, prefix="text_encoder", lora_scale=self.lora_scale, ) text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." in k} if len(text_encoder_2_state_dict) > 0: self.load_lora_into_text_encoder( text_encoder_2_state_dict, network_alphas=network_alphas, text_encoder=self.text_encoder_2, prefix="text_encoder_2", lora_scale=self.lora_scale, ) # Offload back. if is_model_cpu_offload: self.enable_model_cpu_offload() elif is_sequential_cpu_offload: self.enable_sequential_cpu_offload() @classmethod def save_lora_weights( cls, save_directory: Union[str, os.PathLike], unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True, ): state_dict = {} def pack_weights(layers, prefix): layers_weights = layers.state_dict() if isinstance(layers, torch.nn.Module) else layers layers_state_dict = {f"{prefix}.{module_name}": param for module_name, param in layers_weights.items()} return layers_state_dict if not (unet_lora_layers or text_encoder_lora_layers or text_encoder_2_lora_layers): raise ValueError( "You must pass at least one of `unet_lora_layers`, `text_encoder_lora_layers` or `text_encoder_2_lora_layers`." ) if unet_lora_layers: state_dict.update(pack_weights(unet_lora_layers, "unet")) if text_encoder_lora_layers and text_encoder_2_lora_layers: state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder")) state_dict.update(pack_weights(text_encoder_2_lora_layers, "text_encoder_2")) cls.write_lora_layers( state_dict=state_dict, save_directory=save_directory, is_main_process=is_main_process, weight_name=weight_name, save_function=save_function, safe_serialization=safe_serialization, ) def _remove_text_encoder_monkey_patch(self): self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder) self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder_2)
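# --------------------------------------------------------------------------
# Usage sketch (not part of the original pipeline file): a minimal example of
# how the DemoFusion SDXL pipeline above could be driven as a diffusers
# community pipeline. The checkpoint id, prompt, and output filename are
# illustrative assumptions; the call returns a list with one image per
# denoising phase, as described in the __call__ docstring.
# --------------------------------------------------------------------------
if __name__ == "__main__":
    import torch

    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # assumed SDXL base checkpoint
        custom_pipeline="pipeline_demofusion_sdxl",  # loads this community pipeline file
        torch_dtype=torch.float16,
    ).to("cuda")

    images = pipe(
        prompt="a photo of a lighthouse on a cliff at sunset, highly detailed",
        height=2048,
        width=2048,
        num_inference_steps=50,
        guidance_scale=7.5,
        view_batch_size=4,   # denoising paths per batch in the MultiDiffusion step
        multi_decoder=True,  # tiled VAE decoding; needed for very large resolutions
        stride=64,           # stride of the moving local patches
        show_image=False,
    )
    # The last entry corresponds to the final, highest-resolution phase.
    images[-1].save("demofusion_sdxl_2048.png")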
# Copyright 2025 Jingyang Zhang and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc import inspect import math import numbers from typing import Any, Callable, Dict, List, Optional, Union import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from packaging import version from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection from diffusers.configuration_utils import FrozenDict from diffusers.image_processor import PipelineImageInput, VaeImageProcessor from diffusers.loaders import ( FromSingleFileMixin, IPAdapterMixin, StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin, ) from diffusers.models import AutoencoderKL, ImageProjection, UNet2DConditionModel from diffusers.models.attention_processor import Attention, FusedAttnProcessor2_0 from diffusers.models.lora import adjust_lora_scale_text_encoder from diffusers.pipelines.pipeline_utils import DiffusionPipeline from diffusers.pipelines.stable_diffusion.pipeline_output import StableDiffusionPipelineOutput from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker from diffusers.schedulers import KarrasDiffusionSchedulers from diffusers.utils import ( USE_PEFT_BACKEND, deprecate, logging, replace_example_docstring, scale_lora_layers, unscale_lora_layers, ) from diffusers.utils.torch_utils import randn_tensor logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers import StableDiffusionPipeline >>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) >>> pipe = pipe.to("cuda") >>> prompt = "a photo of an astronaut riding a horse on mars" >>> image = pipe(prompt).images[0] ``` """ class GaussianSmoothing(nn.Module): """ Copied from official repo: https://github.com/showlab/BoxDiff/blob/master/utils/gaussian_smoothing.py Apply gaussian smoothing on a 1d, 2d or 3d tensor. Filtering is performed separately for each channel in the input using a depthwise convolution. Arguments: channels (int, sequence): Number of channels of the input tensors. Output will have this number of channels as well. kernel_size (int, sequence): Size of the gaussian kernel. sigma (float, sequence): Standard deviation of the gaussian kernel. dim (int, optional): The number of dimensions of the data. Default value is 2 (spatial). """ def __init__(self, channels, kernel_size, sigma, dim=2): super(GaussianSmoothing, self).__init__() if isinstance(kernel_size, numbers.Number): kernel_size = [kernel_size] * dim if isinstance(sigma, numbers.Number): sigma = [sigma] * dim # The gaussian kernel is the product of the # gaussian function of each dimension. 
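        # Built below: evaluate a 1-D Gaussian on each spatial grid, multiply the
        # results together, normalize the kernel to sum to 1, and register it as a
        # depthwise convolution weight (groups == channels).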
kernel = 1 meshgrids = torch.meshgrid([torch.arange(size, dtype=torch.float32) for size in kernel_size]) for size, std, mgrid in zip(kernel_size, sigma, meshgrids): mean = (size - 1) / 2 kernel *= 1 / (std * math.sqrt(2 * math.pi)) * torch.exp(-(((mgrid - mean) / (2 * std)) ** 2)) # Make sure sum of values in gaussian kernel equals 1. kernel = kernel / torch.sum(kernel) # Reshape to depthwise convolutional weight kernel = kernel.view(1, 1, *kernel.size()) kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1)) self.register_buffer("weight", kernel) self.groups = channels if dim == 1: self.conv = F.conv1d elif dim == 2: self.conv = F.conv2d elif dim == 3: self.conv = F.conv3d else: raise RuntimeError("Only 1, 2 and 3 dimensions are supported. Received {}.".format(dim)) def forward(self, input): """ Apply gaussian filter to input. Arguments: input (torch.Tensor): Input to apply gaussian filter on. Returns: filtered (torch.Tensor): Filtered output. """ return self.conv(input, weight=self.weight.to(input.dtype), groups=self.groups) class AttendExciteCrossAttnProcessor: def __init__(self, attnstore, place_in_unet): super().__init__() self.attnstore = attnstore self.place_in_unet = place_in_unet def __call__( self, attn: Attention, hidden_states: torch.FloatTensor, encoder_hidden_states: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, ) -> torch.Tensor: batch_size, sequence_length, _ = hidden_states.shape attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size=1) query = attn.to_q(hidden_states) is_cross = encoder_hidden_states is not None encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states key = attn.to_k(encoder_hidden_states) value = attn.to_v(encoder_hidden_states) query = attn.head_to_batch_dim(query) key = attn.head_to_batch_dim(key) value = attn.head_to_batch_dim(value) attention_probs = attn.get_attention_scores(query, key, attention_mask) self.attnstore(attention_probs, is_cross, self.place_in_unet) hidden_states = torch.bmm(attention_probs, value) hidden_states = attn.batch_to_head_dim(hidden_states) # linear proj hidden_states = attn.to_out[0](hidden_states) # dropout hidden_states = attn.to_out[1](hidden_states) return hidden_states class AttentionControl(abc.ABC): def step_callback(self, x_t): return x_t def between_steps(self): return # @property # def num_uncond_att_layers(self): # return 0 @abc.abstractmethod def forward(self, attn, is_cross: bool, place_in_unet: str): raise NotImplementedError def __call__(self, attn, is_cross: bool, place_in_unet: str): if self.cur_att_layer >= self.num_uncond_att_layers: self.forward(attn, is_cross, place_in_unet) self.cur_att_layer += 1 if self.cur_att_layer == self.num_att_layers + self.num_uncond_att_layers: self.cur_att_layer = 0 self.cur_step += 1 self.between_steps() def reset(self): self.cur_step = 0 self.cur_att_layer = 0 def __init__(self): self.cur_step = 0 self.num_att_layers = -1 self.cur_att_layer = 0 class AttentionStore(AttentionControl): @staticmethod def get_empty_store(): return {"down_cross": [], "mid_cross": [], "up_cross": [], "down_self": [], "mid_self": [], "up_self": []} def forward(self, attn, is_cross: bool, place_in_unet: str): key = f"{place_in_unet}_{'cross' if is_cross else 'self'}" if attn.shape[1] <= 32**2: # avoid memory overhead self.step_store[key].append(attn) return attn def between_steps(self): self.attention_store = self.step_store if self.save_global_store: with torch.no_grad(): 
if len(self.global_store) == 0: self.global_store = self.step_store else: for key in self.global_store: for i in range(len(self.global_store[key])): self.global_store[key][i] += self.step_store[key][i].detach() self.step_store = self.get_empty_store() self.step_store = self.get_empty_store() def get_average_attention(self): average_attention = self.attention_store return average_attention def get_average_global_attention(self): average_attention = { key: [item / self.cur_step for item in self.global_store[key]] for key in self.attention_store } return average_attention def reset(self): super(AttentionStore, self).reset() self.step_store = self.get_empty_store() self.attention_store = {} self.global_store = {} def __init__(self, save_global_store=False): """ Initialize an empty AttentionStore :param step_index: used to visualize only a specific step in the diffusion process """ super(AttentionStore, self).__init__() self.save_global_store = save_global_store self.step_store = self.get_empty_store() self.attention_store = {} self.global_store = {} self.curr_step_index = 0 self.num_uncond_att_layers = 0 def aggregate_attention( attention_store: AttentionStore, res: int, from_where: List[str], is_cross: bool, select: int ) -> torch.Tensor: """Aggregates the attention across the different layers and heads at the specified resolution.""" out = [] attention_maps = attention_store.get_average_attention() # for k, v in attention_maps.items(): # for vv in v: # print(vv.shape) # exit() num_pixels = res**2 for location in from_where: for item in attention_maps[f"{location}_{'cross' if is_cross else 'self'}"]: if item.shape[1] == num_pixels: cross_maps = item.reshape(1, -1, res, res, item.shape[-1])[select] out.append(cross_maps) out = torch.cat(out, dim=0) out = out.sum(0) / out.shape[0] return out def register_attention_control(model, controller): attn_procs = {} cross_att_count = 0 for name in model.unet.attn_processors.keys(): # cross_attention_dim = None if name.endswith("attn1.processor") else model.unet.config.cross_attention_dim if name.startswith("mid_block"): # hidden_size = model.unet.config.block_out_channels[-1] place_in_unet = "mid" elif name.startswith("up_blocks"): # block_id = int(name[len("up_blocks.")]) # hidden_size = list(reversed(model.unet.config.block_out_channels))[block_id] place_in_unet = "up" elif name.startswith("down_blocks"): # block_id = int(name[len("down_blocks.")]) # hidden_size = model.unet.config.block_out_channels[block_id] place_in_unet = "down" else: continue cross_att_count += 1 attn_procs[name] = AttendExciteCrossAttnProcessor(attnstore=controller, place_in_unet=place_in_unet) model.unet.set_attn_processor(attn_procs) controller.num_att_layers = cross_att_count def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0): """ Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891). 
See Section 3.4 """ std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True) std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True) # rescale the results from guidance (fixes overexposure) noise_pred_rescaled = noise_cfg * (std_text / std_cfg) # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg return noise_cfg def retrieve_timesteps( scheduler, num_inference_steps: Optional[int] = None, device: Optional[Union[str, torch.device]] = None, timesteps: Optional[List[int]] = None, **kwargs, ): """ Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`. Args: scheduler (`SchedulerMixin`): The scheduler to get timesteps from. num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps` must be `None`. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. timesteps (`List[int]`, *optional*): Custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default timestep spacing strategy of the scheduler is used. If `timesteps` is passed, `num_inference_steps` must be `None`. Returns: `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the second element is the number of inference steps. """ if timesteps is not None: accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accepts_timesteps: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" timestep schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) else: scheduler.set_timesteps(num_inference_steps, device=device, **kwargs) timesteps = scheduler.timesteps return timesteps, num_inference_steps class StableDiffusionBoxDiffPipeline( DiffusionPipeline, TextualInversionLoaderMixin, StableDiffusionLoraLoaderMixin, IPAdapterMixin, FromSingleFileMixin ): r""" Pipeline for text-to-image generation using Stable Diffusion with BoxDiff. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - [`~loaders.StableDiffusionLoraLoaderMixin.load_lora_weights`] for loading LoRA weights - [`~loaders.StableDiffusionLoraLoaderMixin.save_lora_weights`] for saving LoRA weights - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files - [`~loaders.IPAdapterMixin.load_ip_adapter`] for loading IP Adapters Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). 
tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. unet ([`UNet2DConditionModel`]): A `UNet2DConditionModel` to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details about a model's potential harms. feature_extractor ([`~transformers.CLIPImageProcessor`]): A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. """ model_cpu_offload_seq = "text_encoder->image_encoder->unet->vae" _optional_components = ["safety_checker", "feature_extractor", "image_encoder"] _exclude_from_cpu_offload = ["safety_checker"] _callback_tensor_inputs = ["latents", "prompt_embeds", "negative_prompt_embeds"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, image_encoder: CLIPVisionModelWithProjection = None, requires_safety_checker: bool = True, ): super().__init__() if scheduler is not None and getattr(scheduler.config, "steps_offset", 1) != 1: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " "to update the config accordingly as leaving `steps_offset` might led to incorrect results" " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" " file" ) deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["steps_offset"] = 1 scheduler._internal_dict = FrozenDict(new_config) if scheduler is not None and getattr(scheduler.config, "clip_sample", False) is True: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." " `clip_sample` should be set to False in the configuration file. Please make sure to update the" " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" ) deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["clip_sample"] = False scheduler._internal_dict = FrozenDict(new_config) if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. 
Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." ) is_unet_version_less_0_9_0 = ( unet is not None and hasattr(unet.config, "_diffusers_version") and version.parse(version.parse(unet.config._diffusers_version).base_version) < version.parse("0.9.0.dev0") ) is_unet_sample_size_less_64 = ( unet is not None and hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 ) if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: deprecation_message = ( "The configuration file of the unet has set the default `sample_size` to smaller than" " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the" " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" " in the config might lead to incorrect results in future versions. If you have downloaded this" " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" " the `unet/config.json` file" ) deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(unet.config) new_config["sample_size"] = 64 unet._internal_dict = FrozenDict(new_config) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, image_encoder=image_encoder, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) self.register_to_config(requires_safety_checker=requires_safety_checker) def enable_vae_slicing(self): r""" Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. """ self.vae.enable_slicing() def disable_vae_slicing(self): r""" Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to computing decoding in one step. """ self.vae.disable_slicing() def enable_vae_tiling(self): r""" Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images. """ self.vae.enable_tiling() def disable_vae_tiling(self): r""" Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to computing decoding in one step. 
""" self.vae.disable_tiling() def _encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, lora_scale: Optional[float] = None, **kwargs, ): deprecation_message = "`_encode_prompt()` is deprecated and it will be removed in a future version. Use `encode_prompt()` instead. Also, be aware that the output format changed from a concatenated tensor to a tuple." deprecate("_encode_prompt()", "1.0.0", deprecation_message, standard_warn=False) prompt_embeds_tuple = self.encode_prompt( prompt=prompt, device=device, num_images_per_prompt=num_images_per_prompt, do_classifier_free_guidance=do_classifier_free_guidance, negative_prompt=negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=lora_scale, **kwargs, ) # concatenate for backwards comp prompt_embeds = torch.cat([prompt_embeds_tuple[1], prompt_embeds_tuple[0]]) return prompt_embeds def encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, lora_scale: Optional[float] = None, clip_skip: Optional[int] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. lora_scale (`float`, *optional*): A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. 
""" # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(self, StableDiffusionLoraLoaderMixin): self._lora_scale = lora_scale # dynamically adjust the LoRA scale if not USE_PEFT_BACKEND: adjust_lora_scale_text_encoder(self.text_encoder, lora_scale) else: scale_lora_layers(self.text_encoder, lora_scale) if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: # textual inversion: procecss multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, self.tokenizer) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = self.tokenizer.batch_decode( untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] ) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None if clip_skip is None: prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask) prompt_embeds = prompt_embeds[0] else: prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, output_hidden_states=True ) # Access the `hidden_states` first, that contains a tuple of # all the hidden states from the encoder layers. Then index into # the tuple to access the hidden states from the desired layer. prompt_embeds = prompt_embeds[-1][-(clip_skip + 1)] # We also need to apply the final LayerNorm here to not mess with the # representations. The `last_hidden_states` that we typically use for # obtaining the final prompt representations passes through the LayerNorm # layer. prompt_embeds = self.text_encoder.text_model.final_layer_norm(prompt_embeds) if self.text_encoder is not None: prompt_embeds_dtype = self.text_encoder.dtype elif self.unet is not None: prompt_embeds_dtype = self.unet.dtype else: prompt_embeds_dtype = prompt_embeds.dtype prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance and negative_prompt_embeds is None: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." 
) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt # textual inversion: procecss multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) max_length = prompt_embeds.shape[1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) if isinstance(self, StableDiffusionLoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(self.text_encoder, lora_scale) return text_inputs, prompt_embeds, negative_prompt_embeds def encode_image(self, image, device, num_images_per_prompt, output_hidden_states=None): dtype = next(self.image_encoder.parameters()).dtype if not isinstance(image, torch.Tensor): image = self.feature_extractor(image, return_tensors="pt").pixel_values image = image.to(device=device, dtype=dtype) if output_hidden_states: image_enc_hidden_states = self.image_encoder(image, output_hidden_states=True).hidden_states[-2] image_enc_hidden_states = image_enc_hidden_states.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_enc_hidden_states = self.image_encoder( torch.zeros_like(image), output_hidden_states=True ).hidden_states[-2] uncond_image_enc_hidden_states = uncond_image_enc_hidden_states.repeat_interleave( num_images_per_prompt, dim=0 ) return image_enc_hidden_states, uncond_image_enc_hidden_states else: image_embeds = self.image_encoder(image).image_embeds image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) uncond_image_embeds = torch.zeros_like(image_embeds) return image_embeds, uncond_image_embeds def run_safety_checker(self, image, device, dtype): if self.safety_checker is None: has_nsfw_concept = None else: if torch.is_tensor(image): feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") else: feature_extractor_input = self.image_processor.numpy_to_pil(image) safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(dtype) ) return image, has_nsfw_concept def decode_latents(self, latents): deprecation_message = "The decode_latents method is deprecated and will be removed in 1.0.0. 
Please use VaeImageProcessor.postprocess(...) instead" deprecate("decode_latents", "1.0.0", deprecation_message, standard_warn=False) latents = 1 / self.vae.config.scaling_factor * latents image = self.vae.decode(latents, return_dict=False)[0] image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() return image def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://huggingface.co/papers/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, prompt, height, width, boxdiff_phrases, boxdiff_boxes, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None, callback_on_step_end_tensor_inputs=None, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." 
) if boxdiff_phrases is not None or boxdiff_boxes is not None: if not (boxdiff_phrases is not None and boxdiff_boxes is not None): raise ValueError("Either both `boxdiff_phrases` and `boxdiff_boxes` must be passed or none of them.") if not isinstance(boxdiff_phrases, list) or not isinstance(boxdiff_boxes, list): raise ValueError("`boxdiff_phrases` and `boxdiff_boxes` must be lists.") if len(boxdiff_phrases) != len(boxdiff_boxes): raise ValueError( "`boxdiff_phrases` and `boxdiff_boxes` must have the same length," f" got: `boxdiff_phrases` {len(boxdiff_phrases)} != `boxdiff_boxes`" f" {len(boxdiff_boxes)}." ) def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." ) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents def enable_freeu(self, s1: float, s2: float, b1: float, b2: float): r"""Enables the FreeU mechanism as in https://huggingface.co/papers/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of the values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. Args: s1 (`float`): Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate "oversmoothing effect" in the enhanced denoising process. s2 (`float`): Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate "oversmoothing effect" in the enhanced denoising process. b1 (`float`): Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (`float`): Scaling factor for stage 2 to amplify the contributions of backbone features. """ if not hasattr(self, "unet"): raise ValueError("The pipeline must have `unet` for using FreeU.") self.unet.enable_freeu(s1=s1, s2=s2, b1=b1, b2=b2) def disable_freeu(self): """Disables the FreeU mechanism if enabled.""" self.unet.disable_freeu() # Copied from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl.StableDiffusionXLPipeline.fuse_qkv_projections def fuse_qkv_projections(self, unet: bool = True, vae: bool = True): """ Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused. <Tip warning={true}> This API is 🧪 experimental. </Tip> Args: unet (`bool`, defaults to `True`): To apply fusion on the UNet. vae (`bool`, defaults to `True`): To apply fusion on the VAE. 
""" self.fusing_unet = False self.fusing_vae = False if unet: self.fusing_unet = True self.unet.fuse_qkv_projections() self.unet.set_attn_processor(FusedAttnProcessor2_0()) if vae: if not isinstance(self.vae, AutoencoderKL): raise ValueError("`fuse_qkv_projections()` is only supported for the VAE of type `AutoencoderKL`.") self.fusing_vae = True self.vae.fuse_qkv_projections() self.vae.set_attn_processor(FusedAttnProcessor2_0()) # Copied from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl.StableDiffusionXLPipeline.unfuse_qkv_projections def unfuse_qkv_projections(self, unet: bool = True, vae: bool = True): """Disable QKV projection fusion if enabled. <Tip warning={true}> This API is 🧪 experimental. </Tip> Args: unet (`bool`, defaults to `True`): To apply fusion on the UNet. vae (`bool`, defaults to `True`): To apply fusion on the VAE. """ if unet: if not self.fusing_unet: logger.warning("The UNet was not initially fused for QKV projections. Doing nothing.") else: self.unet.unfuse_qkv_projections() self.fusing_unet = False if vae: if not self.fusing_vae: logger.warning("The VAE was not initially fused for QKV projections. Doing nothing.") else: self.vae.unfuse_qkv_projections() self.fusing_vae = False # Copied from diffusers.pipelines.latent_consistency_models.pipeline_latent_consistency_text2img.LatentConsistencyModelPipeline.get_guidance_scale_embedding def get_guidance_scale_embedding(self, w, embedding_dim=512, dtype=torch.float32): """ See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 Args: timesteps (`torch.Tensor`): generate embedding vectors at these timesteps embedding_dim (`int`, *optional*, defaults to 512): dimension of the embeddings to generate dtype: data type of the generated embeddings Returns: `torch.FloatTensor`: Embedding vectors with shape `(len(timesteps), embedding_dim)` """ assert len(w.shape) == 1 w = w * 1000.0 half_dim = embedding_dim // 2 emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1) emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb) emb = w.to(dtype)[:, None] * emb[None, :] emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) if embedding_dim % 2 == 1: # zero pad emb = torch.nn.functional.pad(emb, (0, 1)) assert emb.shape == (w.shape[0], embedding_dim) return emb @property def guidance_scale(self): return self._guidance_scale @property def guidance_rescale(self): return self._guidance_rescale @property def clip_skip(self): return self._clip_skip # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://huggingface.co/papers/2205.11487 . `guidance_scale = 1` # corresponds to doing no classifier free guidance. 
@property def do_classifier_free_guidance(self): return self._guidance_scale > 1 and self.unet.config.time_cond_proj_dim is None @property def cross_attention_kwargs(self): return self._cross_attention_kwargs @property def num_timesteps(self): return self._num_timesteps @property def interrupt(self): return self._interrupt def _compute_max_attention_per_index( self, attention_maps: torch.Tensor, indices_to_alter: List[int], smooth_attentions: bool = False, sigma: float = 0.5, kernel_size: int = 3, normalize_eot: bool = False, bboxes: List[int] = None, L: int = 1, P: float = 0.2, ) -> List[torch.Tensor]: """Computes the maximum attention value for each of the tokens we wish to alter.""" last_idx = -1 if normalize_eot: prompt = self.prompt if isinstance(self.prompt, list): prompt = self.prompt[0] last_idx = len(self.tokenizer(prompt)["input_ids"]) - 1 attention_for_text = attention_maps[:, :, 1:last_idx] attention_for_text *= 100 attention_for_text = torch.nn.functional.softmax(attention_for_text, dim=-1) # Shift indices since we removed the first token "1:last_idx" indices_to_alter = [index - 1 for index in indices_to_alter] # Extract the maximum values max_indices_list_fg = [] max_indices_list_bg = [] dist_x = [] dist_y = [] cnt = 0 for i in indices_to_alter: image = attention_for_text[:, :, i] # TODO # box = [max(round(b / (512 / image.shape[0])), 0) for b in bboxes[cnt]] # x1, y1, x2, y2 = box H, W = image.shape x1 = min(max(round(bboxes[cnt][0] * W), 0), W) y1 = min(max(round(bboxes[cnt][1] * H), 0), H) x2 = min(max(round(bboxes[cnt][2] * W), 0), W) y2 = min(max(round(bboxes[cnt][3] * H), 0), H) box = [x1, y1, x2, y2] cnt += 1 # coordinates to masks obj_mask = torch.zeros_like(image) ones_mask = torch.ones([y2 - y1, x2 - x1], dtype=obj_mask.dtype).to(obj_mask.device) obj_mask[y1:y2, x1:x2] = ones_mask bg_mask = 1 - obj_mask if smooth_attentions: smoothing = GaussianSmoothing(channels=1, kernel_size=kernel_size, sigma=sigma, dim=2).to(image.device) input = F.pad(image.unsqueeze(0).unsqueeze(0), (1, 1, 1, 1), mode="reflect") image = smoothing(input).squeeze(0).squeeze(0) # Inner-Box constraint k = (obj_mask.sum() * P).long() max_indices_list_fg.append((image * obj_mask).reshape(-1).topk(k)[0].mean()) # Outer-Box constraint k = (bg_mask.sum() * P).long() max_indices_list_bg.append((image * bg_mask).reshape(-1).topk(k)[0].mean()) # Corner Constraint gt_proj_x = torch.max(obj_mask, dim=0)[0] gt_proj_y = torch.max(obj_mask, dim=1)[0] corner_mask_x = torch.zeros_like(gt_proj_x) corner_mask_y = torch.zeros_like(gt_proj_y) # create gt according to the number config.L N = gt_proj_x.shape[0] corner_mask_x[max(box[0] - L, 0) : min(box[0] + L + 1, N)] = 1.0 corner_mask_x[max(box[2] - L, 0) : min(box[2] + L + 1, N)] = 1.0 corner_mask_y[max(box[1] - L, 0) : min(box[1] + L + 1, N)] = 1.0 corner_mask_y[max(box[3] - L, 0) : min(box[3] + L + 1, N)] = 1.0 dist_x.append((F.l1_loss(image.max(dim=0)[0], gt_proj_x, reduction="none") * corner_mask_x).mean()) dist_y.append((F.l1_loss(image.max(dim=1)[0], gt_proj_y, reduction="none") * corner_mask_y).mean()) return max_indices_list_fg, max_indices_list_bg, dist_x, dist_y def _aggregate_and_get_max_attention_per_token( self, attention_store: AttentionStore, indices_to_alter: List[int], attention_res: int = 16, smooth_attentions: bool = False, sigma: float = 0.5, kernel_size: int = 3, normalize_eot: bool = False, bboxes: List[int] = None, L: int = 1, P: float = 0.2, ): """Aggregates the attention for each token and computes the max activation value for each token 
to alter.""" attention_maps = aggregate_attention( attention_store=attention_store, res=attention_res, from_where=("up", "down", "mid"), is_cross=True, select=0, ) max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y = self._compute_max_attention_per_index( attention_maps=attention_maps, indices_to_alter=indices_to_alter, smooth_attentions=smooth_attentions, sigma=sigma, kernel_size=kernel_size, normalize_eot=normalize_eot, bboxes=bboxes, L=L, P=P, ) return max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y @staticmethod def _compute_loss( max_attention_per_index_fg: List[torch.Tensor], max_attention_per_index_bg: List[torch.Tensor], dist_x: List[torch.Tensor], dist_y: List[torch.Tensor], return_losses: bool = False, ) -> torch.Tensor: """Computes the attend-and-excite loss using the maximum attention value for each token.""" losses_fg = [max(0, 1.0 - curr_max) for curr_max in max_attention_per_index_fg] losses_bg = [max(0, curr_max) for curr_max in max_attention_per_index_bg] loss = sum(losses_fg) + sum(losses_bg) + sum(dist_x) + sum(dist_y) if return_losses: return max(losses_fg), losses_fg else: return max(losses_fg), loss @staticmethod def _update_latent(latents: torch.Tensor, loss: torch.Tensor, step_size: float) -> torch.Tensor: """Update the latent according to the computed loss.""" grad_cond = torch.autograd.grad(loss.requires_grad_(True), [latents], retain_graph=True)[0] latents = latents - step_size * grad_cond return latents def _perform_iterative_refinement_step( self, latents: torch.Tensor, indices_to_alter: List[int], loss_fg: torch.Tensor, threshold: float, text_embeddings: torch.Tensor, text_input, attention_store: AttentionStore, step_size: float, t: int, attention_res: int = 16, smooth_attentions: bool = True, sigma: float = 0.5, kernel_size: int = 3, max_refinement_steps: int = 20, normalize_eot: bool = False, bboxes: List[int] = None, L: int = 1, P: float = 0.2, ): """ Performs the iterative latent refinement introduced in the paper. Here, we continuously update the latent code according to our loss objective until the given threshold is reached for all tokens. 
""" iteration = 0 target_loss = max(0, 1.0 - threshold) while loss_fg > target_loss: iteration += 1 latents = latents.clone().detach().requires_grad_(True) # noise_pred_text = self.unet(latents, t, encoder_hidden_states=text_embeddings[1].unsqueeze(0)).sample self.unet.zero_grad() # Get max activation value for each subject token ( max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y, ) = self._aggregate_and_get_max_attention_per_token( attention_store=attention_store, indices_to_alter=indices_to_alter, attention_res=attention_res, smooth_attentions=smooth_attentions, sigma=sigma, kernel_size=kernel_size, normalize_eot=normalize_eot, bboxes=bboxes, L=L, P=P, ) loss_fg, losses_fg = self._compute_loss( max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y, return_losses=True ) if loss_fg != 0: latents = self._update_latent(latents, loss_fg, step_size) # with torch.no_grad(): # noise_pred_uncond = self.unet(latents, t, encoder_hidden_states=text_embeddings[0].unsqueeze(0)).sample # noise_pred_text = self.unet(latents, t, encoder_hidden_states=text_embeddings[1].unsqueeze(0)).sample # try: # low_token = np.argmax([l.item() if not isinstance(l, int) else l for l in losses_fg]) # except Exception as e: # print(e) # catch edge case :) # low_token = np.argmax(losses_fg) # low_word = self.tokenizer.decode(text_input.input_ids[0][indices_to_alter[low_token]]) # print(f'\t Try {iteration}. {low_word} has a max attention of {max_attention_per_index_fg[low_token]}') if iteration >= max_refinement_steps: # print(f'\t Exceeded max number of iterations ({max_refinement_steps})! ' # f'Finished with a max attention of {max_attention_per_index_fg[low_token]}') break # Run one more time but don't compute gradients and update the latents. 
# We just need to compute the new loss - the grad update will occur below latents = latents.clone().detach().requires_grad_(True) # noise_pred_text = self.unet(latents, t, encoder_hidden_states=text_embeddings[1].unsqueeze(0)).sample self.unet.zero_grad() # Get max activation value for each subject token ( max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y, ) = self._aggregate_and_get_max_attention_per_token( attention_store=attention_store, indices_to_alter=indices_to_alter, attention_res=attention_res, smooth_attentions=smooth_attentions, sigma=sigma, kernel_size=kernel_size, normalize_eot=normalize_eot, bboxes=bboxes, L=L, P=P, ) loss_fg, losses_fg = self._compute_loss( max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y, return_losses=True ) # print(f"\t Finished with loss of: {loss_fg}") return loss_fg, latents, max_attention_per_index_fg @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]] = None, boxdiff_phrases: List[str] = None, boxdiff_boxes: List[List[float]] = None, # TODO boxdiff_kwargs: Optional[Dict[str, Any]] = { "attention_res": 16, "P": 0.2, "L": 1, "max_iter_to_alter": 25, "loss_thresholds": {0: 0.05, 10: 0.5, 20: 0.8}, "scale_factor": 20, "scale_range": (1.0, 0.5), "smooth_attentions": True, "sigma": 0.5, "kernel_size": 3, "refine": False, "normalize_eot": True, }, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, timesteps: List[int] = None, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.FloatTensor] = None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, ip_adapter_image: Optional[PipelineImageInput] = None, output_type: Optional[str] = "pil", return_dict: bool = True, cross_attention_kwargs: Optional[Dict[str, Any]] = None, guidance_rescale: float = 0.0, clip_skip: Optional[int] = None, callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], **kwargs, ): r""" The call function to the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. boxdiff_attention_res (`int`, *optional*, defaults to 16): The resolution of the attention maps used for computing the BoxDiff loss. boxdiff_P (`float`, *optional*, defaults to 0.2): boxdiff_L (`int`, *optional*, defaults to 1): The number of pixels around the corner to be selected in BoxDiff loss. height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The height in pixels of the generated image. width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. timesteps (`List[int]`, *optional*): Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed will be used. Must be in descending order. 
guidance_scale (`float`, *optional*, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) from the [DDIM](https://huggingface.co/papers/2010.02502) paper. Only applies to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.FloatTensor`, *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. negative_prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. ip_adapter_image: (`PipelineImageInput`, *optional*): Optional image input to work with IP Adapters. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). guidance_rescale (`float`, *optional*, defaults to 0.0): Guidance rescale factor from [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891). Guidance rescale factor should fix overexposure when using zero terminal SNR. clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. 
The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. Examples: Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images and the second element is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content. """ callback = kwargs.pop("callback", None) callback_steps = kwargs.pop("callback_steps", None) if callback is not None: deprecate( "callback", "1.0.0", "Passing `callback` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`", ) if callback_steps is not None: deprecate( "callback_steps", "1.0.0", "Passing `callback_steps` as an input argument to `__call__` is deprecated, consider using `callback_on_step_end`", ) # -1. Register attention control (for BoxDiff) attention_store = AttentionStore() register_attention_control(self, attention_store) # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor # to deal with lora scaling and other possible forward hooks # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, height, width, boxdiff_phrases, boxdiff_boxes, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds, callback_on_step_end_tensor_inputs, ) self.prompt = prompt self._guidance_scale = guidance_scale self._guidance_rescale = guidance_rescale self._clip_skip = clip_skip self._cross_attention_kwargs = cross_attention_kwargs self._interrupt = False # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] device = self._execution_device # 3. Encode input prompt lora_scale = ( self.cross_attention_kwargs.get("scale", None) if self.cross_attention_kwargs is not None else None ) text_inputs, prompt_embeds, negative_prompt_embeds = self.encode_prompt( prompt, device, num_images_per_prompt, self.do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, lora_scale=lora_scale, clip_skip=self.clip_skip, ) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes if self.do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) if ip_adapter_image is not None: output_hidden_state = False if isinstance(self.unet.encoder_hid_proj, ImageProjection) else True image_embeds, negative_image_embeds = self.encode_image( ip_adapter_image, device, num_images_per_prompt, output_hidden_state ) if self.do_classifier_free_guidance: image_embeds = torch.cat([negative_image_embeds, image_embeds]) # 4. Prepare timesteps timesteps, num_inference_steps = retrieve_timesteps(self.scheduler, num_inference_steps, device, timesteps) # 5. 
Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, prompt_embeds.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 6.1 Add image embeds for IP-Adapter added_cond_kwargs = {"image_embeds": image_embeds} if ip_adapter_image is not None else None # 6.2 Optionally get Guidance Scale Embedding timestep_cond = None if self.unet.config.time_cond_proj_dim is not None: guidance_scale_tensor = torch.tensor(self.guidance_scale - 1).repeat(batch_size * num_images_per_prompt) timestep_cond = self.get_guidance_scale_embedding( guidance_scale_tensor, embedding_dim=self.unet.config.time_cond_proj_dim ).to(device=device, dtype=latents.dtype) # 6.3 Prepare BoxDiff inputs # a) Indices to alter input_ids = self.tokenizer(prompt)["input_ids"] decoded = [self.tokenizer.decode([t]) for t in input_ids] indices_to_alter = [] bboxes = [] for phrase, box in zip(boxdiff_phrases, boxdiff_boxes): # it could happen that phrase does not correspond a single token? if phrase not in decoded: continue indices_to_alter.append(decoded.index(phrase)) bboxes.append(box) # b) A bunch of hyperparameters attention_res = boxdiff_kwargs.get("attention_res", 16) smooth_attentions = boxdiff_kwargs.get("smooth_attentions", True) sigma = boxdiff_kwargs.get("sigma", 0.5) kernel_size = boxdiff_kwargs.get("kernel_size", 3) L = boxdiff_kwargs.get("L", 1) P = boxdiff_kwargs.get("P", 0.2) thresholds = boxdiff_kwargs.get("loss_thresholds", {0: 0.05, 10: 0.5, 20: 0.8}) max_iter_to_alter = boxdiff_kwargs.get("max_iter_to_alter", len(self.scheduler.timesteps) + 1) scale_factor = boxdiff_kwargs.get("scale_factor", 20) refine = boxdiff_kwargs.get("refine", False) normalize_eot = boxdiff_kwargs.get("normalize_eot", True) scale_range = boxdiff_kwargs.get("scale_range", (1.0, 0.5)) scale_range = np.linspace(scale_range[0], scale_range[1], len(self.scheduler.timesteps)) # 7. 
Denoising loop num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order self._num_timesteps = len(timesteps) with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): if self.interrupt: continue # BoxDiff optimization with torch.enable_grad(): latents = latents.clone().detach().requires_grad_(True) # Forward pass of denoising with text conditioning noise_pred_text = self.unet( latents, t, encoder_hidden_states=prompt_embeds[1].unsqueeze(0), cross_attention_kwargs=cross_attention_kwargs, ).sample self.unet.zero_grad() # Get max activation value for each subject token ( max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y, ) = self._aggregate_and_get_max_attention_per_token( attention_store=attention_store, indices_to_alter=indices_to_alter, attention_res=attention_res, smooth_attentions=smooth_attentions, sigma=sigma, kernel_size=kernel_size, normalize_eot=normalize_eot, bboxes=bboxes, L=L, P=P, ) loss_fg, loss = self._compute_loss( max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y ) # Refinement from attend-and-excite (not necessary) if refine and i in thresholds.keys() and loss_fg > 1.0 - thresholds[i]: del noise_pred_text torch.cuda.empty_cache() loss_fg, latents, max_attention_per_index_fg = self._perform_iterative_refinement_step( latents=latents, indices_to_alter=indices_to_alter, loss_fg=loss_fg, threshold=thresholds[i], text_embeddings=prompt_embeds, text_input=text_inputs, attention_store=attention_store, step_size=scale_factor * np.sqrt(scale_range[i]), t=t, attention_res=attention_res, smooth_attentions=smooth_attentions, sigma=sigma, kernel_size=kernel_size, normalize_eot=normalize_eot, bboxes=bboxes, L=L, P=P, ) # Perform gradient update if i < max_iter_to_alter: _, loss = self._compute_loss( max_attention_per_index_fg, max_attention_per_index_bg, dist_x, dist_y ) if loss != 0: latents = self._update_latent( latents=latents, loss=loss, step_size=scale_factor * np.sqrt(scale_range[i]) ) # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet( latent_model_input, t, encoder_hidden_states=prompt_embeds, timestep_cond=timestep_cond, cross_attention_kwargs=self.cross_attention_kwargs, added_cond_kwargs=added_cond_kwargs, return_dict=False, )[0] # perform guidance if self.do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond) if self.do_classifier_free_guidance and self.guidance_rescale > 0.0: # Based on 3.4. 
in https://huggingface.co/papers/2305.08891 noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=self.guidance_rescale) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0] if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) latents = callback_outputs.pop("latents", latents) prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds) # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if not output_type == "latent": image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False, generator=generator)[ 0 ] image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) else: image = latents has_nsfw_concept = None if has_nsfw_concept is None: do_denormalize = [True] * image.shape[0] else: do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) # Offload all models self.maybe_free_model_hooks() if not return_dict: return (image, has_nsfw_concept) return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
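# ---------------------------------------------------------------------------
# Minimal usage sketch (not part of the original file). It assumes the module
# is loadable as the community pipeline "pipeline_stable_diffusion_boxdiff",
# that a Stable Diffusion v1.5 checkpoint is available, and that a CUDA GPU is
# present; the prompt, phrases and boxes are illustrative. Boxes are given as
# normalized [x_min, y_min, x_max, y_max] coordinates, matching how
# `_compute_max_attention_per_index` rescales them to the attention map size.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import torch

    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed checkpoint id
        custom_pipeline="pipeline_stable_diffusion_boxdiff",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe(
        prompt="as the sun sets, a deer stands beside a lake",
        boxdiff_phrases=["deer", "lake"],
        boxdiff_boxes=[[0.1, 0.4, 0.5, 0.9], [0.3, 0.6, 1.0, 0.95]],  # illustrative boxes
        num_inference_steps=50,
        guidance_scale=7.5,
    ).images[0]
    image.save("boxdiff_example.png")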
# --- End of diffusers/examples/community/pipeline_stable_diffusion_boxdiff.py ---
# Copyright 2025 The Wan Team and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import html import types from typing import Any, Callable, Dict, List, Optional, Union import ftfy import regex as re import torch from transformers import AutoTokenizer, UMT5EncoderModel from diffusers.callbacks import MultiPipelineCallbacks, PipelineCallback from diffusers.loaders import WanLoraLoaderMixin from diffusers.models import AutoencoderKLWan, WanTransformer3DModel from diffusers.pipelines.pipeline_utils import DiffusionPipeline from diffusers.pipelines.wan.pipeline_output import WanPipelineOutput from diffusers.schedulers import FlowMatchEulerDiscreteScheduler from diffusers.utils import is_torch_xla_available, logging, replace_example_docstring from diffusers.utils.torch_utils import randn_tensor from diffusers.video_processor import VideoProcessor if is_torch_xla_available(): import torch_xla.core.xla_model as xm XLA_AVAILABLE = True else: XLA_AVAILABLE = False logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```python >>> import torch >>> from diffusers.utils import export_to_video >>> from diffusers import AutoencoderKLWan >>> from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler >>> from examples.community.pipeline_stg_wan import WanSTGPipeline >>> # Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers >>> model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers" >>> vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32) >>> pipe = WanSTGPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16) >>> flow_shift = 5.0 # 5.0 for 720P, 3.0 for 480P >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift) >>> pipe.to("cuda") >>> prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window." >>> negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards" >>> # Configure STG mode options >>> stg_applied_layers_idx = [8] # Layer indices from 0 to 39 for 14b or 0 to 29 for 1.3b >>> stg_scale = 1.0 # Set 0.0 for CFG >>> output = pipe( ... prompt=prompt, ... negative_prompt=negative_prompt, ... height=720, ... width=1280, ... num_frames=81, ... guidance_scale=5.0, ... stg_applied_layers_idx=stg_applied_layers_idx, ... stg_scale=stg_scale, ... 
).frames[0] >>> export_to_video(output, "output.mp4", fps=16) ``` """ def basic_clean(text): text = ftfy.fix_text(text) text = html.unescape(html.unescape(text)) return text.strip() def whitespace_clean(text): text = re.sub(r"\s+", " ", text) text = text.strip() return text def prompt_clean(text): text = whitespace_clean(basic_clean(text)) return text def forward_with_stg( self, hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor, temb: torch.Tensor, rotary_emb: torch.Tensor, ) -> torch.Tensor: return hidden_states def forward_without_stg( self, hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor, temb: torch.Tensor, rotary_emb: torch.Tensor, ) -> torch.Tensor: shift_msa, scale_msa, gate_msa, c_shift_msa, c_scale_msa, c_gate_msa = ( self.scale_shift_table + temb.float() ).chunk(6, dim=1) # 1. Self-attention norm_hidden_states = (self.norm1(hidden_states.float()) * (1 + scale_msa) + shift_msa).type_as(hidden_states) attn_output = self.attn1(hidden_states=norm_hidden_states, rotary_emb=rotary_emb) hidden_states = (hidden_states.float() + attn_output * gate_msa).type_as(hidden_states) # 2. Cross-attention norm_hidden_states = self.norm2(hidden_states.float()).type_as(hidden_states) attn_output = self.attn2(hidden_states=norm_hidden_states, encoder_hidden_states=encoder_hidden_states) hidden_states = hidden_states + attn_output # 3. Feed-forward norm_hidden_states = (self.norm3(hidden_states.float()) * (1 + c_scale_msa) + c_shift_msa).type_as(hidden_states) ff_output = self.ffn(norm_hidden_states) hidden_states = (hidden_states.float() + ff_output.float() * c_gate_msa).type_as(hidden_states) return hidden_states class WanSTGPipeline(DiffusionPipeline, WanLoraLoaderMixin): r""" Pipeline for text-to-video generation using Wan. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). Args: tokenizer ([`T5Tokenizer`]): Tokenizer from [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5Tokenizer), specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant. text_encoder ([`T5EncoderModel`]): [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically the [google/umt5-xxl](https://huggingface.co/google/umt5-xxl) variant. transformer ([`WanTransformer3DModel`]): Conditional Transformer to denoise the input latents. scheduler ([`UniPCMultistepScheduler`]): A scheduler to be used in combination with `transformer` to denoise the encoded image latents. vae ([`AutoencoderKLWan`]): Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. 
""" model_cpu_offload_seq = "text_encoder->transformer->vae" _callback_tensor_inputs = ["latents", "prompt_embeds", "negative_prompt_embeds"] def __init__( self, tokenizer: AutoTokenizer, text_encoder: UMT5EncoderModel, transformer: WanTransformer3DModel, vae: AutoencoderKLWan, scheduler: FlowMatchEulerDiscreteScheduler, ): super().__init__() self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, transformer=transformer, scheduler=scheduler, ) self.vae_scale_factor_temporal = 2 ** sum(self.vae.temperal_downsample) if getattr(self, "vae", None) else 4 self.vae_scale_factor_spatial = 2 ** len(self.vae.temperal_downsample) if getattr(self, "vae", None) else 8 self.video_processor = VideoProcessor(vae_scale_factor=self.vae_scale_factor_spatial) def _get_t5_prompt_embeds( self, prompt: Union[str, List[str]] = None, num_videos_per_prompt: int = 1, max_sequence_length: int = 226, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, ): device = device or self._execution_device dtype = dtype or self.text_encoder.dtype prompt = [prompt] if isinstance(prompt, str) else prompt prompt = [prompt_clean(u) for u in prompt] batch_size = len(prompt) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=max_sequence_length, truncation=True, add_special_tokens=True, return_attention_mask=True, return_tensors="pt", ) text_input_ids, mask = text_inputs.input_ids, text_inputs.attention_mask seq_lens = mask.gt(0).sum(dim=1).long() prompt_embeds = self.text_encoder(text_input_ids.to(device), mask.to(device)).last_hidden_state prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) prompt_embeds = [u[:v] for u, v in zip(prompt_embeds, seq_lens)] prompt_embeds = torch.stack( [torch.cat([u, u.new_zeros(max_sequence_length - u.size(0), u.size(1))]) for u in prompt_embeds], dim=0 ) # duplicate text embeddings for each generation per prompt, using mps friendly method _, seq_len, _ = prompt_embeds.shape prompt_embeds = prompt_embeds.repeat(1, num_videos_per_prompt, 1) prompt_embeds = prompt_embeds.view(batch_size * num_videos_per_prompt, seq_len, -1) return prompt_embeds def encode_prompt( self, prompt: Union[str, List[str]], negative_prompt: Optional[Union[str, List[str]]] = None, do_classifier_free_guidance: bool = True, num_videos_per_prompt: int = 1, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, max_sequence_length: int = 226, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): Whether to use classifier free guidance or not. num_videos_per_prompt (`int`, *optional*, defaults to 1): Number of videos that should be generated per prompt. torch device to place the resulting embeddings on prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. device: (`torch.device`, *optional*): torch device dtype: (`torch.dtype`, *optional*): torch dtype """ device = device or self._execution_device prompt = [prompt] if isinstance(prompt, str) else prompt if prompt is not None: batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: prompt_embeds = self._get_t5_prompt_embeds( prompt=prompt, num_videos_per_prompt=num_videos_per_prompt, max_sequence_length=max_sequence_length, device=device, dtype=dtype, ) if do_classifier_free_guidance and negative_prompt_embeds is None: negative_prompt = negative_prompt or "" negative_prompt = batch_size * [negative_prompt] if isinstance(negative_prompt, str) else negative_prompt if prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." ) elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) negative_prompt_embeds = self._get_t5_prompt_embeds( prompt=negative_prompt, num_videos_per_prompt=num_videos_per_prompt, max_sequence_length=max_sequence_length, device=device, dtype=dtype, ) return prompt_embeds, negative_prompt_embeds def check_inputs( self, prompt, negative_prompt, height, width, prompt_embeds=None, negative_prompt_embeds=None, callback_on_step_end_tensor_inputs=None, ): if height % 16 != 0 or width % 16 != 0: raise ValueError(f"`height` and `width` have to be divisible by 16 but are {height} and {width}.") if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`: {negative_prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." 
) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") elif negative_prompt is not None and ( not isinstance(negative_prompt, str) and not isinstance(negative_prompt, list) ): raise ValueError(f"`negative_prompt` has to be of type `str` or `list` but is {type(negative_prompt)}") def prepare_latents( self, batch_size: int, num_channels_latents: int = 16, height: int = 480, width: int = 832, num_frames: int = 81, dtype: Optional[torch.dtype] = None, device: Optional[torch.device] = None, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, ) -> torch.Tensor: if latents is not None: return latents.to(device=device, dtype=dtype) num_latent_frames = (num_frames - 1) // self.vae_scale_factor_temporal + 1 shape = ( batch_size, num_channels_latents, num_latent_frames, int(height) // self.vae_scale_factor_spatial, int(width) // self.vae_scale_factor_spatial, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." ) latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) return latents @property def guidance_scale(self): return self._guidance_scale @property def do_classifier_free_guidance(self): return self._guidance_scale > 1.0 @property def do_spatio_temporal_guidance(self): return self._stg_scale > 0.0 @property def num_timesteps(self): return self._num_timesteps @property def current_timestep(self): return self._current_timestep @property def interrupt(self): return self._interrupt @property def attention_kwargs(self): return self._attention_kwargs @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]] = None, negative_prompt: Union[str, List[str]] = None, height: int = 480, width: int = 832, num_frames: int = 81, num_inference_steps: int = 50, guidance_scale: float = 5.0, num_videos_per_prompt: Optional[int] = 1, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, output_type: Optional[str] = "np", return_dict: bool = True, attention_kwargs: Optional[Dict[str, Any]] = None, callback_on_step_end: Optional[ Union[Callable[[int, int, Dict], None], PipelineCallback, MultiPipelineCallbacks] ] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], max_sequence_length: int = 512, stg_applied_layers_idx: Optional[List[int]] = [3, 8, 16], stg_scale: Optional[float] = 0.0, ): r""" The call function to the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. height (`int`, defaults to `480`): The height in pixels of the generated image. width (`int`, defaults to `832`): The width in pixels of the generated image. num_frames (`int`, defaults to `81`): The number of frames in the generated video. num_inference_steps (`int`, defaults to `50`): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. 
guidance_scale (`float`, defaults to `5.0`): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. num_videos_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`WanPipelineOutput`] instead of a plain tuple. attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). callback_on_step_end (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*): A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of each denoising step during the inference. with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. autocast_dtype (`torch.dtype`, *optional*, defaults to `torch.bfloat16`): The dtype to use for the torch.amp.autocast. Examples: Returns: [`~WanPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`WanPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images and the second element is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content. """ if isinstance(callback_on_step_end, (PipelineCallback, MultiPipelineCallbacks)): callback_on_step_end_tensor_inputs = callback_on_step_end.tensor_inputs # 1. Check inputs. 
Raise error if not correct self.check_inputs( prompt, negative_prompt, height, width, prompt_embeds, negative_prompt_embeds, callback_on_step_end_tensor_inputs, ) self._guidance_scale = guidance_scale self._stg_scale = stg_scale self._attention_kwargs = attention_kwargs self._current_timestep = None self._interrupt = False device = self._execution_device # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] # 3. Encode input prompt prompt_embeds, negative_prompt_embeds = self.encode_prompt( prompt=prompt, negative_prompt=negative_prompt, do_classifier_free_guidance=self.do_classifier_free_guidance, num_videos_per_prompt=num_videos_per_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, max_sequence_length=max_sequence_length, device=device, ) transformer_dtype = self.transformer.dtype prompt_embeds = prompt_embeds.to(transformer_dtype) if negative_prompt_embeds is not None: negative_prompt_embeds = negative_prompt_embeds.to(transformer_dtype) # 4. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # 5. Prepare latent variables num_channels_latents = self.transformer.config.in_channels latents = self.prepare_latents( batch_size * num_videos_per_prompt, num_channels_latents, height, width, num_frames, torch.float32, device, generator, latents, ) # 6. Denoising loop num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order self._num_timesteps = len(timesteps) with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): if self.interrupt: continue self._current_timestep = t latent_model_input = latents.to(transformer_dtype) timestep = t.expand(latents.shape[0]) if self.do_spatio_temporal_guidance: for idx, block in enumerate(self.transformer.blocks): block.forward = types.MethodType(forward_without_stg, block) noise_pred = self.transformer( hidden_states=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds, attention_kwargs=attention_kwargs, return_dict=False, )[0] if self.do_classifier_free_guidance: noise_uncond = self.transformer( hidden_states=latent_model_input, timestep=timestep, encoder_hidden_states=negative_prompt_embeds, attention_kwargs=attention_kwargs, return_dict=False, )[0] if self.do_spatio_temporal_guidance: for idx, block in enumerate(self.transformer.blocks): if idx in stg_applied_layers_idx: block.forward = types.MethodType(forward_with_stg, block) noise_perturb = self.transformer( hidden_states=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds, attention_kwargs=attention_kwargs, return_dict=False, )[0] noise_pred = ( noise_uncond + guidance_scale * (noise_pred - noise_uncond) + self._stg_scale * (noise_pred - noise_perturb) ) else: noise_pred = noise_uncond + guidance_scale * (noise_pred - noise_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0] if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) latents = callback_outputs.pop("latents", latents) prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) negative_prompt_embeds = 
callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds) # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if XLA_AVAILABLE: xm.mark_step() self._current_timestep = None if not output_type == "latent": latents = latents.to(self.vae.dtype) latents_mean = ( torch.tensor(self.vae.config.latents_mean) .view(1, self.vae.config.z_dim, 1, 1, 1) .to(latents.device, latents.dtype) ) latents_std = 1.0 / torch.tensor(self.vae.config.latents_std).view(1, self.vae.config.z_dim, 1, 1, 1).to( latents.device, latents.dtype ) latents = latents / latents_std + latents_mean video = self.vae.decode(latents, return_dict=False)[0] video = self.video_processor.postprocess_video(video, output_type=output_type) else: video = latents # Offload all models self.maybe_free_model_hooks() if not return_dict: return (video,) return WanPipelineOutput(frames=video)
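# ---------------------------------------------------------------------------
# Illustrative sketch (an assumption, not part of the original file): the
# guidance applied inside the denoising loop above combines an unconditional
# pass, a text-conditioned pass, and a "perturbed" pass in which the blocks
# listed in `stg_applied_layers_idx` are skipped (their `forward` is swapped
# to `forward_with_stg`, which returns the hidden states unchanged). With
# `stg_scale = 0.0` the update reduces to plain classifier-free guidance.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import torch

    noise_uncond = torch.zeros(2, 3)          # stand-in for the unconditional prediction
    noise_text = torch.ones(2, 3)             # stand-in for the text-conditioned prediction
    noise_perturb = torch.full((2, 3), 0.25)  # stand-in for the STG-perturbed prediction
    guidance_scale, stg_scale = 5.0, 1.0

    guided = (
        noise_uncond
        + guidance_scale * (noise_text - noise_uncond)
        + stg_scale * (noise_text - noise_perturb)
    )
    print(guided)  # CFG term plus the spatio-temporal guidance correction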
# --- End of diffusers/examples/community/pipeline_stg_wan.py ---
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect from typing import Any, Callable, Dict, List, Optional, Union import intel_extension_for_pytorch as ipex import torch from packaging import version from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer from diffusers.configuration_utils import FrozenDict from diffusers.loaders import StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin from diffusers.models import AutoencoderKL, UNet2DConditionModel from diffusers.pipelines.pipeline_utils import DiffusionPipeline, StableDiffusionMixin from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker from diffusers.schedulers import KarrasDiffusionSchedulers from diffusers.utils import ( deprecate, logging, replace_example_docstring, ) from diffusers.utils.torch_utils import randn_tensor logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers import StableDiffusionPipeline >>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_ipex") >>> # For Float32 >>> pipe.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512) #value of image height/width should be consistent with the pipeline inference >>> # For BFloat16 >>> pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512) #value of image height/width should be consistent with the pipeline inference >>> prompt = "a photo of an astronaut riding a horse on mars" >>> # For Float32 >>> image = pipe(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()' >>> # For BFloat16 >>> with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16): >>> image = pipe(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()' ``` """ class StableDiffusionIPEXPipeline( DiffusionPipeline, StableDiffusionMixin, TextualInversionLoaderMixin, StableDiffusionLoraLoaderMixin ): r""" Pipeline for text-to-image generation using Stable Diffusion on IPEX. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: vae ([`AutoencoderKL`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`CLIPTextModel`]): Frozen text-encoder. Stable Diffusion uses the text portion of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. 
tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. safety_checker ([`StableDiffusionSafetyChecker`]): Classification module that estimates whether generated images could be considered offensive or harmful. Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. feature_extractor ([`CLIPImageProcessor`]): Model that extracts features from generated images to be used as inputs for the `safety_checker`. """ _optional_components = ["safety_checker", "feature_extractor"] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, requires_safety_checker: bool = True, ): super().__init__() if scheduler is not None and getattr(scheduler.config, "steps_offset", 1) != 1: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " "to update the config accordingly as leaving `steps_offset` might led to incorrect results" " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" " file" ) deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["steps_offset"] = 1 scheduler._internal_dict = FrozenDict(new_config) if scheduler is not None and getattr(scheduler.config, "clip_sample", False) is True: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." " `clip_sample` should be set to False in the configuration file. Please make sure to update the" " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" ) deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["clip_sample"] = False scheduler._internal_dict = FrozenDict(new_config) if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." 
) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." ) is_unet_version_less_0_9_0 = ( unet is not None and hasattr(unet.config, "_diffusers_version") and version.parse(version.parse(unet.config._diffusers_version).base_version) < version.parse("0.9.0.dev0") ) is_unet_sample_size_less_64 = ( unet is not None and hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 ) if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: deprecation_message = ( "The configuration file of the unet has set the default `sample_size` to smaller than" " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the" " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" " in the config might lead to incorrect results in future versions. If you have downloaded this" " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" " the `unet/config.json` file" ) deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(unet.config) new_config["sample_size"] = 64 unet._internal_dict = FrozenDict(new_config) self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, scheduler=scheduler, safety_checker=safety_checker, feature_extractor=feature_extractor, ) self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) if getattr(self, "vae", None) else 8 self.register_to_config(requires_safety_checker=requires_safety_checker) def get_input_example(self, prompt, height=None, width=None, guidance_scale=7.5, num_images_per_prompt=1): prompt_embeds = None negative_prompt_embeds = None negative_prompt = None callback_steps = 1 generator = None latents = None # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds ) # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) device = "cpu" # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://huggingface.co/papers/2205.11487 . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. Encode input prompt prompt_embeds = self._encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, ) # 5. 
Prepare latent variables latents = self.prepare_latents( batch_size * num_images_per_prompt, self.unet.config.in_channels, height, width, prompt_embeds.dtype, device, generator, latents, ) dummy = torch.ones(1, dtype=torch.int32) latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, dummy) unet_input_example = (latent_model_input, dummy, prompt_embeds) vae_decoder_input_example = latents return unet_input_example, vae_decoder_input_example def prepare_for_ipex(self, promt, dtype=torch.float32, height=None, width=None, guidance_scale=7.5): self.unet = self.unet.to(memory_format=torch.channels_last) self.vae.decoder = self.vae.decoder.to(memory_format=torch.channels_last) self.text_encoder = self.text_encoder.to(memory_format=torch.channels_last) if self.safety_checker is not None: self.safety_checker = self.safety_checker.to(memory_format=torch.channels_last) unet_input_example, vae_decoder_input_example = self.get_input_example(promt, height, width, guidance_scale) # optimize with ipex if dtype == torch.bfloat16: self.unet = ipex.optimize(self.unet.eval(), dtype=torch.bfloat16, inplace=True) self.vae.decoder = ipex.optimize(self.vae.decoder.eval(), dtype=torch.bfloat16, inplace=True) self.text_encoder = ipex.optimize(self.text_encoder.eval(), dtype=torch.bfloat16, inplace=True) if self.safety_checker is not None: self.safety_checker = ipex.optimize(self.safety_checker.eval(), dtype=torch.bfloat16, inplace=True) elif dtype == torch.float32: self.unet = ipex.optimize( self.unet.eval(), dtype=torch.float32, inplace=True, weights_prepack=True, auto_kernel_selection=False, ) self.vae.decoder = ipex.optimize( self.vae.decoder.eval(), dtype=torch.float32, inplace=True, weights_prepack=True, auto_kernel_selection=False, ) self.text_encoder = ipex.optimize( self.text_encoder.eval(), dtype=torch.float32, inplace=True, weights_prepack=True, auto_kernel_selection=False, ) if self.safety_checker is not None: self.safety_checker = ipex.optimize( self.safety_checker.eval(), dtype=torch.float32, inplace=True, weights_prepack=True, auto_kernel_selection=False, ) else: raise ValueError(" The value of 'dtype' should be 'torch.bfloat16' or 'torch.float32' !") # trace unet model to get better performance on IPEX with torch.cpu.amp.autocast(enabled=dtype == torch.bfloat16), torch.no_grad(): unet_trace_model = torch.jit.trace(self.unet, unet_input_example, check_trace=False, strict=False) unet_trace_model = torch.jit.freeze(unet_trace_model) self.unet.forward = unet_trace_model.forward # trace vae.decoder model to get better performance on IPEX with torch.cpu.amp.autocast(enabled=dtype == torch.bfloat16), torch.no_grad(): ave_decoder_trace_model = torch.jit.trace( self.vae.decoder, vae_decoder_input_example, check_trace=False, strict=False ) ave_decoder_trace_model = torch.jit.freeze(ave_decoder_trace_model) self.vae.decoder.forward = ave_decoder_trace_model.forward def _encode_prompt( self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, ): r""" Encodes the prompt into text encoder hidden states. 
Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt do_classifier_free_guidance (`bool`): whether to use classifier free guidance or not negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. """ if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): prompt = self.maybe_convert_prompt(prompt, self.tokenizer) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( text_input_ids, untruncated_ids ): removed_text = self.tokenizer.batch_decode( untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] ) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = text_inputs.attention_mask.to(device) else: attention_mask = None prompt_embeds = self.text_encoder( text_input_ids.to(device), attention_mask=attention_mask, ) prompt_embeds = prompt_embeds[0] prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) bs_embed, seq_len, _ = prompt_embeds.shape # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) # get unconditional embeddings for classifier free guidance if do_classifier_free_guidance and negative_prompt_embeds is None: uncond_tokens: List[str] if negative_prompt is None: uncond_tokens = [""] * batch_size elif type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." ) elif isinstance(negative_prompt, str): uncond_tokens = [negative_prompt] elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. 
Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) else: uncond_tokens = negative_prompt # textual inversion: process multi-vector tokens if necessary if isinstance(self, TextualInversionLoaderMixin): uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) max_length = prompt_embeds.shape[1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: attention_mask = uncond_input.attention_mask.to(device) else: attention_mask = None negative_prompt_embeds = self.text_encoder( uncond_input.input_ids.to(device), attention_mask=attention_mask, ) negative_prompt_embeds = negative_prompt_embeds[0] if do_classifier_free_guidance: # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) # For classifier free guidance, we need to do two forward passes. # Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) return prompt_embeds def run_safety_checker(self, image, device, dtype): if self.safety_checker is not None: safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) image, has_nsfw_concept = self.safety_checker( images=image, clip_input=safety_checker_input.pixel_values.to(dtype) ) else: has_nsfw_concept = None return image, has_nsfw_concept def decode_latents(self, latents): latents = 1 / self.vae.config.scaling_factor * latents image = self.vae.decode(latents).sample image = (image / 2 + 0.5).clamp(0, 1) # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 image = image.cpu().permute(0, 2, 3, 1).float().numpy() return image def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://huggingface.co/papers/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, prompt, height, width, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None, ): if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if (callback_steps is None) or ( callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) ): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." 
) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): shape = ( batch_size, num_channels_latents, int(height) // self.vae_scale_factor, int(width) // self.vae_scale_factor, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." ) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, ): r""" Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The height in pixels of the generated image. width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): The width in pixels of the generated image. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. 
guidance_scale (`float`, *optional*, defaults to 7.5): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. eta (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only applies to [`schedulers.DDIMScheduler`], will be ignored for others. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function will be called. If not specified, the callback will be called at every step. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttnProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). Examples: Returns: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the `safety_checker`. """ # 0. Default height and width to unet height = height or self.unet.config.sample_size * self.vae_scale_factor width = width or self.unet.config.sample_size * self.vae_scale_factor # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds ) # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] device = self._execution_device # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) # of the Imagen paper: https://huggingface.co/papers/2205.11487 . `guidance_scale = 1` # corresponds to doing no classifier free guidance. do_classifier_free_guidance = guidance_scale > 1.0 # 3. Encode input prompt prompt_embeds = self._encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, ) # 4. Prepare timesteps self.scheduler.set_timesteps(num_inference_steps, device=device) timesteps = self.scheduler.timesteps # 5. Prepare latent variables num_channels_latents = self.unet.config.in_channels latents = self.prepare_latents( batch_size * num_images_per_prompt, num_channels_latents, height, width, prompt_embeds.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 7. Denoising loop num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): # expand the latents if we are doing classifier free guidance latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) # predict the noise residual noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=prompt_embeds)["sample"] # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if callback is not None and i % callback_steps == 0: step_idx = i // getattr(self.scheduler, "order", 1) callback(step_idx, t, latents) if output_type == "latent": image = latents has_nsfw_concept = None elif output_type == "pil": # 8. Post-processing image = self.decode_latents(latents) # 9. Run safety checker image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) # 10. Convert to PIL image = self.numpy_to_pil(image) else: # 8. Post-processing image = self.decode_latents(latents) # 9. 
Run safety checker image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) # Offload last model to CPU if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: self.final_offload_hook.offload() if not return_dict: return (image, has_nsfw_concept) return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
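The class above is a community pipeline, so it is normally pulled in through `DiffusionPipeline`'s `custom_pipeline` mechanism and warmed up once with `prepare_for_ipex` before generation. The snippet below is a minimal sketch of that flow, assuming `intel_extension_for_pytorch` is installed and that the `stable_diffusion_ipex` identifier resolves to this file; treat the exact arguments as illustrative rather than definitive.

```python
# Minimal usage sketch for the IPEX community pipeline defined above (assumes a CPU machine
# with intel_extension_for_pytorch installed).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    custom_pipeline="stable_diffusion_ipex",  # loads this community pipeline file
)

prompt = "a photo of an astronaut riding a horse on mars"

# Trace and optimize the UNet / VAE decoder / text encoder once for the target dtype and resolution.
pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512)

# Subsequent calls reuse the IPEX-optimized, jit-traced submodules.
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
    image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0]
image.save("astronaut.png")
```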
diffusers/examples/community/stable_diffusion_ipex.py/0
{ "file_path": "diffusers/examples/community/stable_diffusion_ipex.py", "repo_id": "diffusers", "token_count": 16861 }
136
# Latent Consistency Distillation Example:

[Latent Consistency Models (LCMs)](https://huggingface.co/papers/2310.04378) is a method to distill a latent diffusion model to enable swift inference with minimal steps. This example demonstrates how to use latent consistency distillation to distill stable-diffusion-v1.5 for inference with few timesteps.

## Full model distillation

### Running locally with PyTorch

#### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the example folder and run

```bash
pip install -r requirements.txt
```

And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or for a default accelerate configuration without answering questions about your environment

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell, e.g. a notebook

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

When running `accelerate config`, specifying torch compile mode as True can give dramatic speedups.

#### Example

The following uses the [Conceptual Captions 12M (CC12M) dataset](https://github.com/google-research-datasets/conceptual-12m) as an example, and for illustrative purposes only. For best results you may consider large and high-quality text-image datasets such as [LAION](https://laion.ai/blog/laion-400-open-dataset/). You may also need to search the hyperparameter space according to the dataset you use.

```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path/to/saved/model"
accelerate launch train_lcm_distill_sd_wds.py \
    --pretrained_teacher_model=$MODEL_NAME \
    --output_dir=$OUTPUT_DIR \
    --mixed_precision=fp16 \
    --resolution=512 \
    --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \
    --max_train_steps=1000 \
    --max_train_samples=4000000 \
    --dataloader_num_workers=8 \
    --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \
    --validation_steps=200 \
    --checkpointing_steps=200 --checkpoints_total_limit=10 \
    --train_batch_size=12 \
    --gradient_checkpointing --enable_xformers_memory_efficient_attention \
    --gradient_accumulation_steps=1 \
    --use_8bit_adam \
    --resume_from_checkpoint=latest \
    --report_to=wandb \
    --seed=453645634 \
    --push_to_hub
```

## LCM-LoRA

Instead of fine-tuning the full model, we can also just train a LoRA that can be injected into any fine-tuned Stable Diffusion model.

### Example

The following uses the [Conceptual Captions 12M (CC12M) dataset](https://github.com/google-research-datasets/conceptual-12m) as an example. For best results you may consider large and high-quality text-image datasets such as [LAION](https://laion.ai/blog/laion-400-open-dataset/).
```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export OUTPUT_DIR="path/to/saved/model"
accelerate launch train_lcm_distill_lora_sd_wds.py \
    --pretrained_teacher_model=$MODEL_NAME \
    --output_dir=$OUTPUT_DIR \
    --mixed_precision=fp16 \
    --resolution=512 \
    --lora_rank=64 \
    --learning_rate=1e-4 --loss_type="huber" --adam_weight_decay=0.0 \
    --max_train_steps=1000 \
    --max_train_samples=4000000 \
    --dataloader_num_workers=8 \
    --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \
    --validation_steps=200 \
    --checkpointing_steps=200 --checkpoints_total_limit=10 \
    --train_batch_size=12 \
    --gradient_checkpointing --enable_xformers_memory_efficient_attention \
    --gradient_accumulation_steps=1 \
    --use_8bit_adam \
    --resume_from_checkpoint=latest \
    --report_to=wandb \
    --seed=453645634 \
    --push_to_hub
```
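Once distillation finishes, the resulting LCM-LoRA can be loaded into a standard Stable Diffusion pipeline together with `LCMScheduler` for few-step inference. The snippet below is a minimal sketch: the output directory path and the 4-step / guidance-free settings are illustrative assumptions, not values prescribed by the training script.

```python
# Hedged sketch: load the distilled LCM-LoRA for few-step inference.
# Assumes the LoRA was saved to the OUTPUT_DIR used in the command above.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# LCM-distilled models are sampled with the LCM scheduler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Inject the trained LoRA weights ("path/to/saved/model" is the OUTPUT_DIR above).
pipe.load_lora_weights("path/to/saved/model")

# Very few steps and little or no classifier-free guidance are typical for LCMs.
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_lora_sample.png")
```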
diffusers/examples/consistency_distillation/README.md/0
{ "file_path": "diffusers/examples/consistency_distillation/README.md", "repo_id": "diffusers", "token_count": 1524 }
137
# DreamBooth training example for Lumina2

[DreamBooth](https://huggingface.co/papers/2208.12242) is a method to personalize text2image models like stable diffusion given just a few (3~5) images of a subject.

The `train_dreambooth_lora_lumina2.py` script shows how to implement the training procedure with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) and adapt it for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2). This will also allow us to push the trained model parameters to the Hugging Face Hub platform.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/dreambooth` folder and run

```bash
pip install -r requirements_sana.txt
```

And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or for a default accelerate configuration without answering questions about your environment

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell (e.g., a notebook)

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

When running `accelerate config`, specifying torch compile mode as True can give dramatic speedups. Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.14.0` installed in your environment.

### Dog toy example

Now let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

Let's first download it locally:

```python
from huggingface_hub import snapshot_download

local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

This will also allow us to push the trained LoRA parameters to the Hugging Face Hub platform.

Now, we can launch training using:

```bash
export MODEL_NAME="Alpha-VLLM/Lumina-Image-2.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-lumina2-lora"

accelerate launch train_dreambooth_lora_lumina2.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --learning_rate=1e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub
```

For using `push_to_hub`, make sure you're logged into your Hugging Face account:

```bash
hf auth login
```

To better track our training experiments, we're using the following flags in the command above:

* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
* `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.

## Notes

Additionally, we welcome you to explore the following CLI arguments:

* `--lora_layers`: The transformer modules to apply LoRA training on. Please specify the layers as a comma-separated string. E.g. - "to_k,to_q,to_v" will result in LoRA training of attention layers only.
* `--system_prompt`: A custom system prompt to provide additional personality to the model.
* `--max_sequence_length`: Maximum sequence length to use for text embeddings.

We provide several options for memory optimization:

* `--offload`: When enabled, we will offload the text encoder and VAE to CPU, when they are not used.
* `--cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done.
* `--use_8bit_adam`: When enabled, we will use the 8bit version of AdamW provided by the `bitsandbytes` library.

Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2) of the `LuminaPipeline` to know more about the model.
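As a quick sanity check after training, the LoRA can be loaded back into the base model for inference. The snippet below is only a sketch under the assumption that the adapter was saved to `trained-lumina2-lora` as in the command above; consult the Lumina2 pipeline docs linked above for the canonical loading API.

```python
# Hedged inference sketch for the trained DreamBooth LoRA (assumes the OUTPUT_DIR above).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the DreamBooth LoRA trained with train_dreambooth_lora_lumina2.py.
pipe.load_lora_weights("trained-lumina2-lora")

image = pipe(
    "A photo of sks dog in a bucket",
    num_inference_steps=30,
).images[0]
image.save("sks_dog.png")
```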
diffusers/examples/dreambooth/README_lumina2.md/0
{ "file_path": "diffusers/examples/dreambooth/README_lumina2.md", "repo_id": "diffusers", "token_count": 1493 }
138
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import os import sys import tempfile import safetensors sys.path.append("..") from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402 logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger() stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) class DreamBoothLoRASDXLWithEDM(ExamplesTestsAccelerate): def test_dreambooth_lora_sdxl_with_edm(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/dreambooth/train_dreambooth_lora_sdxl.py --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-xl-pipe --do_edm_style_training --instance_data_dir docs/source/en/imgs --instance_prompt photo --resolution 64 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 2 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} """.split() run_command(self._launch_args + test_args) # save_pretrained smoke test self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))) # make sure the state_dict has the correct naming in the parameters. lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")) is_lora = all("lora" in k for k in lora_state_dict.keys()) self.assertTrue(is_lora) # when not training the text encoder, all the parameters in the state dict should start # with `"unet"` in their names. starts_with_unet = all(key.startswith("unet") for key in lora_state_dict.keys()) self.assertTrue(starts_with_unet) def test_dreambooth_lora_playground(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/dreambooth/train_dreambooth_lora_sdxl.py --pretrained_model_name_or_path hf-internal-testing/tiny-playground-v2-5-pipe --instance_data_dir docs/source/en/imgs --instance_prompt photo --resolution 64 --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 2 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --output_dir {tmpdir} """.split() run_command(self._launch_args + test_args) # save_pretrained smoke test self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))) # make sure the state_dict has the correct naming in the parameters. lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")) is_lora = all("lora" in k for k in lora_state_dict.keys()) self.assertTrue(is_lora) # when not training the text encoder, all the parameters in the state dict should start # with `"unet"` in their names. starts_with_unet = all(key.startswith("unet") for key in lora_state_dict.keys()) self.assertTrue(starts_with_unet)
diffusers/examples/dreambooth/test_dreambooth_lora_edm.py/0
{ "file_path": "diffusers/examples/dreambooth/test_dreambooth_lora_edm.py", "repo_id": "diffusers", "token_count": 1864 }
139
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import os import sys import tempfile sys.path.append("..") from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402 logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger() stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) class InstructPix2Pix(ExamplesTestsAccelerate): def test_instruct_pix2pix_checkpointing_checkpoints_total_limit(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/instruct_pix2pix/train_instruct_pix2pix.py --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe --dataset_name=hf-internal-testing/instructpix2pix-10-samples --resolution=64 --random_flip --train_batch_size=1 --max_train_steps=6 --checkpointing_steps=2 --checkpoints_total_limit=2 --output_dir {tmpdir} --seed=0 """.split() run_command(self._launch_args + test_args) self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"}, ) def test_instruct_pix2pix_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self): with tempfile.TemporaryDirectory() as tmpdir: test_args = f""" examples/instruct_pix2pix/train_instruct_pix2pix.py --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe --dataset_name=hf-internal-testing/instructpix2pix-10-samples --resolution=64 --random_flip --train_batch_size=1 --max_train_steps=4 --checkpointing_steps=2 --output_dir {tmpdir} --seed=0 """.split() run_command(self._launch_args + test_args) # check checkpoint directories exist self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4"}, ) resume_run_args = f""" examples/instruct_pix2pix/train_instruct_pix2pix.py --pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe --dataset_name=hf-internal-testing/instructpix2pix-10-samples --resolution=64 --random_flip --train_batch_size=1 --max_train_steps=8 --checkpointing_steps=2 --output_dir {tmpdir} --seed=0 --resume_from_checkpoint=checkpoint-4 --checkpoints_total_limit=2 """.split() run_command(self._launch_args + resume_run_args) # check checkpoint directories exist self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-6", "checkpoint-8"}, )
diffusers/examples/instruct_pix2pix/test_instruct_pix2pix.py/0
{ "file_path": "diffusers/examples/instruct_pix2pix/test_instruct_pix2pix.py", "repo_id": "diffusers", "token_count": 1823 }
140
# AnyTextPipeline

Project page: https://aigcdesigngroup.github.io/homepage_anytext

"AnyText comprises a diffusion pipeline with two primary elements: an auxiliary latent module and a text embedding module. The former uses inputs like text glyph, position, and masked image to generate latent features for text generation or editing. The latter employs an OCR model for encoding stroke data as embeddings, which blend with image caption embeddings from the tokenizer to generate texts that seamlessly integrate with the background. We employed text-control diffusion loss and text perceptual loss for training to further enhance writing accuracy."

> **Note:** Each text line that needs to be generated should be enclosed in double quotes.

For any usage questions, please refer to the [paper](https://huggingface.co/papers/2311.03054).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/tolgacangoz/b87ec9d2f265b448dd947c9d4a0da389/anytext.ipynb)

```py
# This example requires the `anytext_controlnet.py` file:
# !git clone --depth 1 https://github.com/huggingface/diffusers.git
# %cd diffusers/examples/research_projects/anytext

# Let's choose a font file shared by an HF staff:
# !wget https://huggingface.co/spaces/ysharma/TranslateQuotesInImageForwards/resolve/main/arial-unicode-ms.ttf

import torch
from diffusers import DiffusionPipeline
from anytext_controlnet import AnyTextControlNetModel
from diffusers.utils import load_image

anytext_controlnet = AnyTextControlNetModel.from_pretrained(
    "tolgacangoz/anytext-controlnet", torch_dtype=torch.float16, variant="fp16",
)
pipe = DiffusionPipeline.from_pretrained(
    "tolgacangoz/anytext", font_path="arial-unicode-ms.ttf",
    controlnet=anytext_controlnet, torch_dtype=torch.float16,
    trust_remote_code=False,  # One needs to give permission to run this pipeline's code
).to("cuda")

# generate image
prompt = 'photo of caramel macchiato coffee on the table, top-down perspective, with "Any" "Text" written on it using cream'
draw_pos = load_image("https://raw.githubusercontent.com/tyxsspa/AnyText/refs/heads/main/example_images/gen9.png")
# There are two modes: "generate" and "edit". "edit" mode requires `ori_image` parameter for the image to be edited.
image = pipe(prompt, num_inference_steps=20, mode="generate", draw_pos=draw_pos,
             ).images[0]
image
```
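For the `"edit"` mode mentioned in the comment above, the call additionally takes the image whose text should be rewritten. The following is a hedged sketch that reuses `pipe` and `load_image` from the example; the file paths are placeholders, and the exact behavior of `ori_image` should be checked against the pipeline code.

```python
# Hedged sketch of "edit" mode: rewrite the text inside an existing image.
# Assumes `pipe` and `load_image` from the example above; the paths are placeholders.
ori_image = load_image("path/to/original_image.png")  # image whose text will be edited
draw_pos = load_image("path/to/position_mask.png")    # mask marking where the new text goes

prompt = 'a sign that says "Hello"'
image = pipe(
    prompt,
    num_inference_steps=20,
    mode="edit",            # "edit" mode requires `ori_image`
    draw_pos=draw_pos,
    ori_image=ori_image,
).images[0]
image
```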
diffusers/examples/research_projects/anytext/README.md/0
{ "file_path": "diffusers/examples/research_projects/anytext/README.md", "repo_id": "diffusers", "token_count": 913 }
141
import argparse import math import os from pathlib import Path import colossalai import torch import torch.nn.functional as F import torch.utils.checkpoint from colossalai.context.parallel_mode import ParallelMode from colossalai.core import global_context as gpc from colossalai.logging import disable_existing_loggers, get_dist_logger from colossalai.nn.optimizer.gemini_optimizer import GeminiAdamOptimizer from colossalai.nn.parallel.utils import get_static_torch_model from colossalai.utils import get_current_device from colossalai.utils.model.colo_init_context import ColoInitContext from huggingface_hub import create_repo, upload_folder from huggingface_hub.utils import insecure_hashlib from PIL import Image from torch.utils.data import Dataset from torchvision import transforms from tqdm.auto import tqdm from transformers import AutoTokenizer, PretrainedConfig from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, UNet2DConditionModel from diffusers.optimization import get_scheduler disable_existing_loggers() logger = get_dist_logger() def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str): text_encoder_config = PretrainedConfig.from_pretrained( pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, ) model_class = text_encoder_config.architectures[0] if model_class == "CLIPTextModel": from transformers import CLIPTextModel return CLIPTextModel elif model_class == "RobertaSeriesModelWithTransformation": from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation return RobertaSeriesModelWithTransformation else: raise ValueError(f"{model_class} is not supported.") def parse_args(input_args=None): parser = argparse.ArgumentParser(description="Simple example of a training script.") parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--tokenizer_name", type=str, default=None, help="Pretrained tokenizer name or path if not the same as model_name", ) parser.add_argument( "--instance_data_dir", type=str, default=None, required=True, help="A folder containing the training data of instance images.", ) parser.add_argument( "--class_data_dir", type=str, default=None, required=False, help="A folder containing the training data of class images.", ) parser.add_argument( "--instance_prompt", type=str, default="a photo of sks dog", required=False, help="The prompt with identifier specifying the instance", ) parser.add_argument( "--class_prompt", type=str, default=None, help="The prompt to specify images in the same class as provided instance images.", ) parser.add_argument( "--with_prior_preservation", default=False, action="store_true", help="Flag to add prior preservation loss.", ) parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") parser.add_argument( "--num_class_images", type=int, default=100, help=( "Minimal class images for prior preservation loss. If there are not enough images already present in" " class_data_dir, additional images will be sampled with class_prompt." 
), ) parser.add_argument( "--output_dir", type=str, default="text-inversion-model", help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") parser.add_argument( "--resolution", type=int, default=512, help=( "The resolution for input images, all the images in the train/validation dataset will be resized to this" " resolution" ), ) parser.add_argument( "--placement", type=str, default="cpu", help="Placement Policy for Gemini. Valid when using colossalai as dist plan.", ) parser.add_argument( "--center_crop", default=False, action="store_true", help=( "Whether to center crop the input images to the resolution. If not set, the images will be randomly" " cropped. The images will be resized to the resolution first before cropping." ), ) parser.add_argument( "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." ) parser.add_argument( "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." ) parser.add_argument("--num_train_epochs", type=int, default=1) parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument("--save_steps", type=int, default=500, help="Save checkpoint every X updates steps.") parser.add_argument( "--gradient_checkpointing", action="store_true", help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", ) parser.add_argument( "--learning_rate", type=float, default=5e-6, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument( "--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument( "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument( "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." ) parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") parser.add_argument( "--hub_model_id", type=str, default=None, help="The name of the repository to keep in sync with the local `output_dir`.", ) parser.add_argument( "--logging_dir", type=str, default="logs", help=( "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." ), ) parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." 
), ) parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") if input_args is not None: args = parser.parse_args(input_args) else: args = parser.parse_args() env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) if env_local_rank != -1 and env_local_rank != args.local_rank: args.local_rank = env_local_rank if args.with_prior_preservation: if args.class_data_dir is None: raise ValueError("You must specify a data directory for class images.") if args.class_prompt is None: raise ValueError("You must specify prompt for class images.") else: if args.class_data_dir is not None: logger.warning("You need not use --class_data_dir without --with_prior_preservation.") if args.class_prompt is not None: logger.warning("You need not use --class_prompt without --with_prior_preservation.") return args class DreamBoothDataset(Dataset): """ A dataset to prepare the instance and class images with the prompts for fine-tuning the model. It pre-processes the images and the tokenizes prompts. """ def __init__( self, instance_data_root, instance_prompt, tokenizer, class_data_root=None, class_prompt=None, size=512, center_crop=False, ): self.size = size self.center_crop = center_crop self.tokenizer = tokenizer self.instance_data_root = Path(instance_data_root) if not self.instance_data_root.exists(): raise ValueError("Instance images root doesn't exists.") self.instance_images_path = list(Path(instance_data_root).iterdir()) self.num_instance_images = len(self.instance_images_path) self.instance_prompt = instance_prompt self._length = self.num_instance_images if class_data_root is not None: self.class_data_root = Path(class_data_root) self.class_data_root.mkdir(parents=True, exist_ok=True) self.class_images_path = list(self.class_data_root.iterdir()) self.num_class_images = len(self.class_images_path) self._length = max(self.num_class_images, self.num_instance_images) self.class_prompt = class_prompt else: self.class_data_root = None self.image_transforms = transforms.Compose( [ transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]), ] ) def __len__(self): return self._length def __getitem__(self, index): example = {} instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) if not instance_image.mode == "RGB": instance_image = instance_image.convert("RGB") example["instance_images"] = self.image_transforms(instance_image) example["instance_prompt_ids"] = self.tokenizer( self.instance_prompt, padding="do_not_pad", truncation=True, max_length=self.tokenizer.model_max_length, ).input_ids if self.class_data_root: class_image = Image.open(self.class_images_path[index % self.num_class_images]) if not class_image.mode == "RGB": class_image = class_image.convert("RGB") example["class_images"] = self.image_transforms(class_image) example["class_prompt_ids"] = self.tokenizer( self.class_prompt, padding="do_not_pad", truncation=True, max_length=self.tokenizer.model_max_length, ).input_ids return example class PromptDataset(Dataset): """A simple dataset to prepare the prompts to generate class images on multiple GPUs.""" def __init__(self, prompt, num_samples): self.prompt = prompt self.num_samples = num_samples def __len__(self): return self.num_samples def __getitem__(self, index): example = {} example["prompt"] = self.prompt example["index"] = index return example # Gemini + ZeRO DDP def 
gemini_zero_dpp(model: torch.nn.Module, placememt_policy: str = "auto"): from colossalai.nn.parallel import GeminiDDP model = GeminiDDP( model, device=get_current_device(), placement_policy=placememt_policy, pin_memory=True, search_range_mb=64 ) return model def main(args): if args.seed is None: colossalai.launch_from_torch(config={}) else: colossalai.launch_from_torch(config={}, seed=args.seed) local_rank = gpc.get_local_rank(ParallelMode.DATA) world_size = gpc.get_world_size(ParallelMode.DATA) if args.with_prior_preservation: class_images_dir = Path(args.class_data_dir) if not class_images_dir.exists(): class_images_dir.mkdir(parents=True) cur_class_images = len(list(class_images_dir.iterdir())) if cur_class_images < args.num_class_images: torch_dtype = torch.float16 if get_current_device() == "cuda" else torch.float32 pipeline = DiffusionPipeline.from_pretrained( args.pretrained_model_name_or_path, torch_dtype=torch_dtype, safety_checker=None, revision=args.revision, ) pipeline.set_progress_bar_config(disable=True) num_new_images = args.num_class_images - cur_class_images logger.info(f"Number of class images to sample: {num_new_images}.") sample_dataset = PromptDataset(args.class_prompt, num_new_images) sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) pipeline.to(get_current_device()) for example in tqdm( sample_dataloader, desc="Generating class images", disable=not local_rank == 0, ): images = pipeline(example["prompt"]).images for i, image in enumerate(images): hash_image = insecure_hashlib.sha1(image.tobytes()).hexdigest() image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" image.save(image_filename) del pipeline # Handle the repository creation if local_rank == 0: if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) if args.push_to_hub: repo_id = create_repo( repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token ).repo_id # Load the tokenizer if args.tokenizer_name: logger.info(f"Loading tokenizer from {args.tokenizer_name}", ranks=[0]) tokenizer = AutoTokenizer.from_pretrained( args.tokenizer_name, revision=args.revision, use_fast=False, ) elif args.pretrained_model_name_or_path: logger.info("Loading tokenizer from pretrained model", ranks=[0]) tokenizer = AutoTokenizer.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False, ) # import correct text encoder class text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path) # Load models and create wrapper for stable diffusion logger.info(f"Loading text_encoder from {args.pretrained_model_name_or_path}", ranks=[0]) text_encoder = text_encoder_cls.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, ) logger.info(f"Loading AutoencoderKL from {args.pretrained_model_name_or_path}", ranks=[0]) vae = AutoencoderKL.from_pretrained( args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision, ) logger.info(f"Loading UNet2DConditionModel from {args.pretrained_model_name_or_path}", ranks=[0]) with ColoInitContext(device=get_current_device()): unet = UNet2DConditionModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, low_cpu_mem_usage=False ) vae.requires_grad_(False) text_encoder.requires_grad_(False) if args.gradient_checkpointing: unet.enable_gradient_checkpointing() if 
args.scale_lr: args.learning_rate = args.learning_rate * args.train_batch_size * world_size unet = gemini_zero_dpp(unet, args.placement) # config optimizer for colossalai zero optimizer = GeminiAdamOptimizer(unet, lr=args.learning_rate, initial_scale=2**5, clipping_norm=args.max_grad_norm) # load noise_scheduler noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") # prepare dataset logger.info(f"Prepare dataset from {args.instance_data_dir}", ranks=[0]) train_dataset = DreamBoothDataset( instance_data_root=args.instance_data_dir, instance_prompt=args.instance_prompt, class_data_root=args.class_data_dir if args.with_prior_preservation else None, class_prompt=args.class_prompt, tokenizer=tokenizer, size=args.resolution, center_crop=args.center_crop, ) def collate_fn(examples): input_ids = [example["instance_prompt_ids"] for example in examples] pixel_values = [example["instance_images"] for example in examples] # Concat class and instance examples for prior preservation. # We do this to avoid doing two forward passes. if args.with_prior_preservation: input_ids += [example["class_prompt_ids"] for example in examples] pixel_values += [example["class_images"] for example in examples] pixel_values = torch.stack(pixel_values) pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() input_ids = tokenizer.pad( {"input_ids": input_ids}, padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt", ).input_ids batch = { "input_ids": input_ids, "pixel_values": pixel_values, } return batch train_dataloader = torch.utils.data.DataLoader( train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn, num_workers=1 ) # Scheduler and math around the number of training steps. overrode_max_train_steps = False num_update_steps_per_epoch = math.ceil(len(train_dataloader)) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch overrode_max_train_steps = True lr_scheduler = get_scheduler( args.lr_scheduler, optimizer=optimizer, num_warmup_steps=args.lr_warmup_steps, num_training_steps=args.max_train_steps, ) weight_dtype = torch.float32 if args.mixed_precision == "fp16": weight_dtype = torch.float16 elif args.mixed_precision == "bf16": weight_dtype = torch.bfloat16 # Move text_encode and vae to gpu. # For mixed precision training we cast the text_encoder and vae weights to half-precision # as these models are only used for inference, keeping weights in full precision is not required. vae.to(get_current_device(), dtype=weight_dtype) text_encoder.to(get_current_device(), dtype=weight_dtype) # We need to recalculate our total training steps as the size of the training dataloader may have changed. num_update_steps_per_epoch = math.ceil(len(train_dataloader)) if overrode_max_train_steps: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch # Afterwards we recalculate our number of training epochs args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) # Train! total_batch_size = args.train_batch_size * world_size logger.info("***** Running training *****", ranks=[0]) logger.info(f" Num examples = {len(train_dataset)}", ranks=[0]) logger.info(f" Num batches each epoch = {len(train_dataloader)}", ranks=[0]) logger.info(f" Num Epochs = {args.num_train_epochs}", ranks=[0]) logger.info(f" Instantaneous batch size per device = {args.train_batch_size}", ranks=[0]) logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}", ranks=[0]) logger.info(f" Total optimization steps = {args.max_train_steps}", ranks=[0]) # Only show the progress bar once on each machine. progress_bar = tqdm(range(args.max_train_steps), disable=not local_rank == 0) progress_bar.set_description("Steps") global_step = 0 torch.cuda.synchronize() for epoch in range(args.num_train_epochs): unet.train() for step, batch in enumerate(train_dataloader): torch.cuda.reset_peak_memory_stats() # Move batch to gpu for key, value in batch.items(): batch[key] = value.to(get_current_device(), non_blocking=True) # Convert images to latent space optimizer.zero_grad() latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() latents = latents * 0.18215 # Sample noise that we'll add to the latents noise = torch.randn_like(latents) bsz = latents.shape[0] # Sample a random timestep for each image timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) timesteps = timesteps.long() # Add noise to the latents according to the noise magnitude at each timestep # (this is the forward diffusion process) noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) # Get the text embedding for conditioning encoder_hidden_states = text_encoder(batch["input_ids"])[0] # Predict the noise residual model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample # Get the target for loss depending on the prediction type if noise_scheduler.config.prediction_type == "epsilon": target = noise elif noise_scheduler.config.prediction_type == "v_prediction": target = noise_scheduler.get_velocity(latents, noise, timesteps) else: raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") if args.with_prior_preservation: # Chunk the noise and model_pred into two parts and compute the loss on each part separately. model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) target, target_prior = torch.chunk(target, 2, dim=0) # Compute instance loss loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() # Compute prior loss prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") # Add the prior loss to the instance loss. 
loss = loss + args.prior_loss_weight * prior_loss else: loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") optimizer.backward(loss) optimizer.step() lr_scheduler.step() logger.info(f"max GPU_mem cost is {torch.cuda.max_memory_allocated() / 2**20} MB", ranks=[0]) # Checks if the accelerator has performed an optimization step behind the scenes progress_bar.update(1) global_step += 1 logs = { "loss": loss.detach().item(), "lr": optimizer.param_groups[0]["lr"], } # lr_scheduler.get_last_lr()[0]} progress_bar.set_postfix(**logs) if global_step % args.save_steps == 0: torch.cuda.synchronize() torch_unet = get_static_torch_model(unet) if local_rank == 0: pipeline = DiffusionPipeline.from_pretrained( args.pretrained_model_name_or_path, unet=torch_unet, revision=args.revision, ) save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") pipeline.save_pretrained(save_path) logger.info(f"Saving model checkpoint to {save_path}", ranks=[0]) if global_step >= args.max_train_steps: break torch.cuda.synchronize() unet = get_static_torch_model(unet) if local_rank == 0: pipeline = DiffusionPipeline.from_pretrained( args.pretrained_model_name_or_path, unet=unet, revision=args.revision, ) pipeline.save_pretrained(args.output_dir) logger.info(f"Saving model checkpoint to {args.output_dir}", ranks=[0]) if args.push_to_hub: upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training", ignore_patterns=["step_*", "epoch_*"], ) if __name__ == "__main__": args = parse_args() main(args)
diffusers/examples/research_projects/colossalai/train_dreambooth_colossalai.py/0
{ "file_path": "diffusers/examples/research_projects/colossalai/train_dreambooth_colossalai.py", "repo_id": "diffusers", "token_count": 11177 }
142
#!/usr/bin/env python # coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Script to fine-tune Stable Diffusion for LORA InstructPix2Pix. Base code referred from: https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py """ import argparse import logging import math import os import shutil from contextlib import nullcontext from pathlib import Path import accelerate import datasets import numpy as np import PIL import requests import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint import transformers from accelerate import Accelerator from accelerate.logging import get_logger from accelerate.utils import ProjectConfiguration, set_seed from datasets import load_dataset from huggingface_hub import create_repo, upload_folder from packaging import version from peft import LoraConfig from peft.utils import get_peft_model_state_dict from torchvision import transforms from tqdm.auto import tqdm from transformers import CLIPTextModel, CLIPTokenizer import diffusers from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionInstructPix2PixPipeline, UNet2DConditionModel from diffusers.optimization import get_scheduler from diffusers.training_utils import EMAModel, cast_training_params from diffusers.utils import check_min_version, convert_state_dict_to_diffusers, deprecate, is_wandb_available from diffusers.utils.constants import DIFFUSERS_REQUEST_TIMEOUT from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card from diffusers.utils.import_utils import is_xformers_available from diffusers.utils.torch_utils import is_compiled_module if is_wandb_available(): import wandb # Will error if the minimal version of diffusers is not installed. Remove at your own risks. check_min_version("0.32.0.dev0") logger = get_logger(__name__, log_level="INFO") DATASET_NAME_MAPPING = { "fusing/instructpix2pix-1000-samples": ("input_image", "edit_prompt", "edited_image"), } WANDB_TABLE_COL_NAMES = ["original_image", "edited_image", "edit_prompt"] def save_model_card( repo_id: str, images: list = None, base_model: str = None, dataset_name: str = None, repo_folder: str = None, ): img_str = "" if images is not None: for i, image in enumerate(images): image.save(os.path.join(repo_folder, f"image_{i}.png")) img_str += f"![img_{i}](./image_{i}.png)\n" model_description = f""" # LoRA text2image fine-tuning - {repo_id} These are LoRA adaption weights for {base_model}. The weights were fine-tuned on the {dataset_name} dataset. You can find some example images in the following. 
\n {img_str} """ model_card = load_or_create_model_card( repo_id_or_path=repo_id, from_training=True, license="creativeml-openrail-m", base_model=base_model, model_description=model_description, inference=True, ) tags = [ "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "instruct-pix2pix", "diffusers", "diffusers-training", "lora", ] model_card = populate_model_card(model_card, tags=tags) model_card.save(os.path.join(repo_folder, "README.md")) def log_validation( pipeline, args, accelerator, generator, ): logger.info( f"Running validation... \n Generating {args.num_validation_images} images with prompt:" f" {args.validation_prompt}." ) pipeline = pipeline.to(accelerator.device) pipeline.set_progress_bar_config(disable=True) # run inference original_image = download_image(args.val_image_url) edited_images = [] if torch.backends.mps.is_available(): autocast_ctx = nullcontext() else: autocast_ctx = torch.autocast(accelerator.device.type) with autocast_ctx: for _ in range(args.num_validation_images): edited_images.append( pipeline( args.validation_prompt, image=original_image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7, generator=generator, ).images[0] ) for tracker in accelerator.trackers: if tracker.name == "wandb": wandb_table = wandb.Table(columns=WANDB_TABLE_COL_NAMES) for edited_image in edited_images: wandb_table.add_data(wandb.Image(original_image), wandb.Image(edited_image), args.validation_prompt) tracker.log({"validation": wandb_table}) return edited_images def parse_args(): parser = argparse.ArgumentParser(description="Simple example of a training script for InstructPix2Pix.") parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--variant", type=str, default=None, help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16", ) parser.add_argument( "--dataset_name", type=str, default=None, help=( "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," " or to a folder containing files that 🤗 Datasets can understand." ), ) parser.add_argument( "--dataset_config_name", type=str, default=None, help="The config of the Dataset, leave as None if there's only one config.", ) parser.add_argument( "--train_data_dir", type=str, default=None, help=( "A folder containing the training data. Folder contents must follow the structure described in" " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." 
), ) parser.add_argument( "--original_image_column", type=str, default="input_image", help="The column of the dataset containing the original image on which edits where made.", ) parser.add_argument( "--edited_image_column", type=str, default="edited_image", help="The column of the dataset containing the edited image.", ) parser.add_argument( "--edit_prompt_column", type=str, default="edit_prompt", help="The column of the dataset containing the edit instruction.", ) parser.add_argument( "--val_image_url", type=str, default=None, help="URL to the original image that you would like to edit (used during inference for debugging purposes).", ) parser.add_argument( "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference." ) parser.add_argument( "--num_validation_images", type=int, default=4, help="Number of images that should be generated during validation with `validation_prompt`.", ) parser.add_argument( "--validation_epochs", type=int, default=1, help=( "Run fine-tuning validation every X epochs. The validation process consists of running the prompt" " `args.validation_prompt` multiple times: `args.num_validation_images`." ), ) parser.add_argument( "--max_train_samples", type=int, default=None, help=( "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." ), ) parser.add_argument( "--output_dir", type=str, default="instruct-pix2pix-model", help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument( "--cache_dir", type=str, default=None, help="The directory where the downloaded models and datasets will be stored.", ) parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") parser.add_argument( "--resolution", type=int, default=256, help=( "The resolution for input images, all the images in the train/validation dataset will be resized to this" " resolution" ), ) parser.add_argument( "--center_crop", default=False, action="store_true", help=( "Whether to center crop the input images to the resolution. If not set, the images will be randomly" " cropped. The images will be resized to the resolution first before cropping." ), ) parser.add_argument( "--random_flip", action="store_true", help="whether to randomly flip images horizontally", ) parser.add_argument( "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." ) parser.add_argument("--num_train_epochs", type=int, default=100) parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--gradient_checkpointing", action="store_true", help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", ) parser.add_argument( "--learning_rate", type=float, default=1e-4, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument( "--scale_lr", action="store_true", default=False, help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. 
Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument( "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument( "--conditioning_dropout_prob", type=float, default=None, help="Conditioning dropout probability. Drops out the conditionings (image and edit prompt) used in training InstructPix2Pix. See section 3.2.1 in the paper: https://huggingface.co/papers/2211.09800.", ) parser.add_argument( "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." ) parser.add_argument( "--allow_tf32", action="store_true", help=( "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" ), ) parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.") parser.add_argument( "--non_ema_revision", type=str, default=None, required=False, help=( "Revision of pretrained non-ema model identifier. Must be a branch, tag or git identifier of the local or" " remote repository specified with --pretrained_model_name_or_path." ), ) parser.add_argument( "--dataloader_num_workers", type=int, default=0, help=( "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." ), ) parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") parser.add_argument( "--hub_model_id", type=str, default=None, help="The name of the repository to keep in sync with the local `output_dir`.", ) parser.add_argument( "--logging_dir", type=str, default="logs", help=( "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." ), ) parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." ), ) parser.add_argument( "--report_to", type=str, default="tensorboard", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' ), ) parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") parser.add_argument( "--checkpointing_steps", type=int, default=500, help=( "Save a checkpoint of the training state every X updates. 
These checkpoints are only suitable for resuming" " training using `--resume_from_checkpoint`." ), ) parser.add_argument( "--checkpoints_total_limit", type=int, default=None, help=("Max number of checkpoints to store."), ) parser.add_argument( "--resume_from_checkpoint", type=str, default=None, help=( "Whether training should be resumed from a previous checkpoint. Use a path saved by" ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' ), ) parser.add_argument( "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." ) parser.add_argument( "--rank", type=int, default=4, help=("The dimension of the LoRA update matrices."), ) args = parser.parse_args() env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) if env_local_rank != -1 and env_local_rank != args.local_rank: args.local_rank = env_local_rank # Sanity checks if args.dataset_name is None and args.train_data_dir is None: raise ValueError("Need either a dataset name or a training folder.") # default to using the same revision for the non-ema model if not specified if args.non_ema_revision is None: args.non_ema_revision = args.revision return args def convert_to_np(image, resolution): image = image.convert("RGB").resize((resolution, resolution)) return np.array(image).transpose(2, 0, 1) def download_image(url): image = PIL.Image.open(requests.get(url, stream=True, timeout=DIFFUSERS_REQUEST_TIMEOUT).raw) image = PIL.ImageOps.exif_transpose(image) image = image.convert("RGB") return image def main(): args = parse_args() if args.report_to == "wandb" and args.hub_token is not None: raise ValueError( "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token." " Please use `hf auth login` to authenticate with the Hub." ) if args.non_ema_revision is not None: deprecate( "non_ema_revision!=None", "0.15.0", message=( "Downloading 'non_ema' weights from revision branches of the Hub is deprecated. Please make sure to" " use `--variant=non_ema` instead." ), ) logging_dir = os.path.join(args.output_dir, args.logging_dir) accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) accelerator = Accelerator( gradient_accumulation_steps=args.gradient_accumulation_steps, mixed_precision=args.mixed_precision, log_with=args.report_to, project_config=accelerator_project_config, ) # Disable AMP for MPS. if torch.backends.mps.is_available(): accelerator.native_amp = False generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger.info(accelerator.state, main_process_only=False) if accelerator.is_local_main_process: datasets.utils.logging.set_verbosity_warning() transformers.utils.logging.set_verbosity_warning() diffusers.utils.logging.set_verbosity_info() else: datasets.utils.logging.set_verbosity_error() transformers.utils.logging.set_verbosity_error() diffusers.utils.logging.set_verbosity_error() # If passed along, set the training seed now. 
if args.seed is not None: set_seed(args.seed) # Handle the repository creation if accelerator.is_main_process: if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) if args.push_to_hub: repo_id = create_repo( repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token ).repo_id # Load scheduler, tokenizer and models. noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") tokenizer = CLIPTokenizer.from_pretrained( args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision ) text_encoder = CLIPTextModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, variant=args.variant ) vae = AutoencoderKL.from_pretrained( args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision, variant=args.variant ) unet = UNet2DConditionModel.from_pretrained( args.pretrained_model_name_or_path, subfolder="unet", revision=args.non_ema_revision ) # InstructPix2Pix uses an additional image for conditioning. To accommodate that, # it uses 8 channels (instead of 4) in the first (conv) layer of the UNet. This UNet is # then fine-tuned on the custom InstructPix2Pix dataset. This modified UNet is initialized # from the pre-trained checkpoints. For the extra channels added to the first layer, they are # initialized to zero. logger.info("Initializing the InstructPix2Pix UNet from the pretrained UNet.") in_channels = 8 out_channels = unet.conv_in.out_channels unet.register_to_config(in_channels=in_channels) with torch.no_grad(): new_conv_in = nn.Conv2d( in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding ) new_conv_in.weight.zero_() new_conv_in.weight[:, :in_channels, :, :].copy_(unet.conv_in.weight) unet.conv_in = new_conv_in # Freeze vae, text_encoder and unet vae.requires_grad_(False) text_encoder.requires_grad_(False) unet.requires_grad_(False) # referred to https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py # For mixed precision training we cast all non-trainable weights (vae, non-lora text_encoder and non-lora unet) to half-precision # as these weights are only used for inference, keeping weights in full precision is not required. weight_dtype = torch.float32 if accelerator.mixed_precision == "fp16": weight_dtype = torch.float16 elif accelerator.mixed_precision == "bf16": weight_dtype = torch.bfloat16 # Freeze the unet parameters before adding adapters unet.requires_grad_(False) unet_lora_config = LoraConfig( r=args.rank, lora_alpha=args.rank, init_lora_weights="gaussian", target_modules=["to_k", "to_q", "to_v", "to_out.0"], ) # Move unet, vae and text_encoder to device and cast to weight_dtype unet.to(accelerator.device, dtype=weight_dtype) vae.to(accelerator.device, dtype=weight_dtype) text_encoder.to(accelerator.device, dtype=weight_dtype) # Add adapter and make sure the trainable params are in float32. unet.add_adapter(unet_lora_config) if args.mixed_precision == "fp16": # only upcast trainable parameters (LoRA) into fp32 cast_training_params(unet, dtype=torch.float32) # Create EMA for the unet. 
if args.use_ema: ema_unet = EMAModel(unet.parameters(), model_cls=UNet2DConditionModel, model_config=unet.config) if args.enable_xformers_memory_efficient_attention: if is_xformers_available(): import xformers xformers_version = version.parse(xformers.__version__) if xformers_version == version.parse("0.0.16"): logger.warning( "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." ) unet.enable_xformers_memory_efficient_attention() else: raise ValueError("xformers is not available. Make sure it is installed correctly") trainable_params = filter(lambda p: p.requires_grad, unet.parameters()) def unwrap_model(model): model = accelerator.unwrap_model(model) model = model._orig_mod if is_compiled_module(model) else model return model # `accelerate` 0.16.0 will have better support for customized saving if version.parse(accelerate.__version__) >= version.parse("0.16.0"): # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format def save_model_hook(models, weights, output_dir): if accelerator.is_main_process: if args.use_ema: ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema")) for i, model in enumerate(models): model.save_pretrained(os.path.join(output_dir, "unet")) # make sure to pop weight so that corresponding model is not saved again if weights: weights.pop() def load_model_hook(models, input_dir): if args.use_ema: load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DConditionModel) ema_unet.load_state_dict(load_model.state_dict()) ema_unet.to(accelerator.device) del load_model for i in range(len(models)): # pop models so that they are not loaded again model = models.pop() # load diffusers style into model load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") model.register_to_config(**load_model.config) model.load_state_dict(load_model.state_dict()) del load_model accelerator.register_save_state_pre_hook(save_model_hook) accelerator.register_load_state_pre_hook(load_model_hook) if args.gradient_checkpointing: unet.enable_gradient_checkpointing() # Enable TF32 for faster training on Ampere GPUs, # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices if args.allow_tf32: torch.backends.cuda.matmul.allow_tf32 = True if args.scale_lr: args.learning_rate = ( args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes ) # Initialize the optimizer if args.use_8bit_adam: try: import bitsandbytes as bnb except ImportError: raise ImportError( "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`" ) optimizer_cls = bnb.optim.AdamW8bit else: optimizer_cls = torch.optim.AdamW # train on only lora_layers optimizer = optimizer_cls( trainable_params, lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) # Get the datasets: you can either provide your own training and evaluation files (see below) # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). # In distributed training, the load_dataset function guarantees that only one local process can concurrently # download the dataset. if args.dataset_name is not None: # Downloading and loading a dataset from the hub. 
dataset = load_dataset( args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, ) else: data_files = {} if args.train_data_dir is not None: data_files["train"] = os.path.join(args.train_data_dir, "**") dataset = load_dataset( "imagefolder", data_files=data_files, cache_dir=args.cache_dir, ) # See more about loading custom images at # https://huggingface.co/docs/datasets/main/en/image_load#imagefolder # Preprocessing the datasets. # We need to tokenize inputs and targets. column_names = dataset["train"].column_names # 6. Get the column names for input/target. dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None) if args.original_image_column is None: original_image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] else: original_image_column = args.original_image_column if original_image_column not in column_names: raise ValueError( f"--original_image_column' value '{args.original_image_column}' needs to be one of: {', '.join(column_names)}" ) if args.edit_prompt_column is None: edit_prompt_column = dataset_columns[1] if dataset_columns is not None else column_names[1] else: edit_prompt_column = args.edit_prompt_column if edit_prompt_column not in column_names: raise ValueError( f"--edit_prompt_column' value '{args.edit_prompt_column}' needs to be one of: {', '.join(column_names)}" ) if args.edited_image_column is None: edited_image_column = dataset_columns[2] if dataset_columns is not None else column_names[2] else: edited_image_column = args.edited_image_column if edited_image_column not in column_names: raise ValueError( f"--edited_image_column' value '{args.edited_image_column}' needs to be one of: {', '.join(column_names)}" ) # Preprocessing the datasets. # We need to tokenize input captions and transform the images. def tokenize_captions(captions): inputs = tokenizer( captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" ) return inputs.input_ids # Preprocessing the datasets. train_transforms = transforms.Compose( [ transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), ] ) def preprocess_images(examples): original_images = np.concatenate( [convert_to_np(image, args.resolution) for image in examples[original_image_column]] ) edited_images = np.concatenate( [convert_to_np(image, args.resolution) for image in examples[edited_image_column]] ) # We need to ensure that the original and the edited images undergo the same # augmentation transforms. images = np.concatenate([original_images, edited_images]) images = torch.tensor(images) images = 2 * (images / 255) - 1 return train_transforms(images) def preprocess_train(examples): # Preprocess images. preprocessed_images = preprocess_images(examples) # Since the original and edited images were concatenated before # applying the transformations, we need to separate them and reshape # them accordingly. original_images, edited_images = preprocessed_images.chunk(2) original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) # Collate the preprocessed images into the `examples`. examples["original_pixel_values"] = original_images examples["edited_pixel_values"] = edited_images # Preprocess the captions. 
captions = list(examples[edit_prompt_column]) examples["input_ids"] = tokenize_captions(captions) return examples with accelerator.main_process_first(): if args.max_train_samples is not None: dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) # Set the training transforms train_dataset = dataset["train"].with_transform(preprocess_train) def collate_fn(examples): original_pixel_values = torch.stack([example["original_pixel_values"] for example in examples]) original_pixel_values = original_pixel_values.to(memory_format=torch.contiguous_format).float() edited_pixel_values = torch.stack([example["edited_pixel_values"] for example in examples]) edited_pixel_values = edited_pixel_values.to(memory_format=torch.contiguous_format).float() input_ids = torch.stack([example["input_ids"] for example in examples]) return { "original_pixel_values": original_pixel_values, "edited_pixel_values": edited_pixel_values, "input_ids": input_ids, } # DataLoaders creation: train_dataloader = torch.utils.data.DataLoader( train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=args.train_batch_size, num_workers=args.dataloader_num_workers, ) # Scheduler and math around the number of training steps. # Check the PR https://github.com/huggingface/diffusers/pull/8312 for detailed explanation. num_warmup_steps_for_scheduler = args.lr_warmup_steps * accelerator.num_processes if args.max_train_steps is None: len_train_dataloader_after_sharding = math.ceil(len(train_dataloader) / accelerator.num_processes) num_update_steps_per_epoch = math.ceil(len_train_dataloader_after_sharding / args.gradient_accumulation_steps) num_training_steps_for_scheduler = ( args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes ) else: num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes lr_scheduler = get_scheduler( args.lr_scheduler, optimizer=optimizer, num_warmup_steps=num_warmup_steps_for_scheduler, num_training_steps=num_training_steps_for_scheduler, ) # Prepare everything with our `accelerator`. unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( unet, optimizer, train_dataloader, lr_scheduler ) if args.use_ema: ema_unet.to(accelerator.device) # For mixed precision training we cast the text_encoder and vae weights to half-precision # as these models are only used for inference, keeping weights in full precision is not required. weight_dtype = torch.float32 if accelerator.mixed_precision == "fp16": weight_dtype = torch.float16 elif accelerator.mixed_precision == "bf16": weight_dtype = torch.bfloat16 # Move text_encode and vae to gpu and cast to weight_dtype text_encoder.to(accelerator.device, dtype=weight_dtype) vae.to(accelerator.device, dtype=weight_dtype) # We need to recalculate our total training steps as the size of the training dataloader may have changed. num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch if num_training_steps_for_scheduler != args.max_train_steps * accelerator.num_processes: logger.warning( f"The length of the 'train_dataloader' after 'accelerator.prepare' ({len(train_dataloader)}) does not match " f"the expected length ({len_train_dataloader_after_sharding}) when the learning rate scheduler was created. " f"This inconsistency may result in the learning rate scheduler not functioning properly." 
) # Afterwards we recalculate our number of training epochs args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) # We need to initialize the trackers we use, and also store our configuration. # The trackers initializes automatically on the main process. if accelerator.is_main_process: accelerator.init_trackers("instruct-pix2pix", config=vars(args)) # Train! total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") logger.info(f" Total optimization steps = {args.max_train_steps}") global_step = 0 first_epoch = 0 # Potentially load in the weights and states from a previous save if args.resume_from_checkpoint: if args.resume_from_checkpoint != "latest": path = os.path.basename(args.resume_from_checkpoint) else: # Get the most recent checkpoint dirs = os.listdir(args.output_dir) dirs = [d for d in dirs if d.startswith("checkpoint")] dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) path = dirs[-1] if len(dirs) > 0 else None if path is None: accelerator.print( f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." ) args.resume_from_checkpoint = None else: accelerator.print(f"Resuming from checkpoint {path}") accelerator.load_state(os.path.join(args.output_dir, path)) global_step = int(path.split("-")[1]) resume_global_step = global_step * args.gradient_accumulation_steps first_epoch = global_step // num_update_steps_per_epoch resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) # Only show the progress bar once on each machine. progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) progress_bar.set_description("Steps") for epoch in range(first_epoch, args.num_train_epochs): unet.train() train_loss = 0.0 for step, batch in enumerate(train_dataloader): # Skip steps until we reach the resumed step if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: if step % args.gradient_accumulation_steps == 0: progress_bar.update(1) continue with accelerator.accumulate(unet): # We want to learn the denoising process w.r.t the edited images which # are conditioned on the original image (which was edited) and the edit instruction. # So, first, convert images to latent space. latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() latents = latents * vae.config.scaling_factor # Sample noise that we'll add to the latents noise = torch.randn_like(latents) bsz = latents.shape[0] # Sample a random timestep for each image timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) timesteps = timesteps.long() # Add noise to the latents according to the noise magnitude at each timestep # (this is the forward diffusion process) noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) # Get the text embedding for conditioning. encoder_hidden_states = text_encoder(batch["input_ids"])[0] # Get the additional image embedding for conditioning. 
# Instead of getting a diagonal Gaussian here, we simply take the mode. original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() # Conditioning dropout to support classifier-free guidance during inference. For more details # check out the section 3.2.1 of the original paper https://huggingface.co/papers/2211.09800. if args.conditioning_dropout_prob is not None: random_p = torch.rand(bsz, device=latents.device, generator=generator) # Sample masks for the edit prompts. prompt_mask = random_p < 2 * args.conditioning_dropout_prob prompt_mask = prompt_mask.reshape(bsz, 1, 1) # Final text conditioning. null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) # Sample masks for the original images. image_mask_dtype = original_image_embeds.dtype image_mask = 1 - ( (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) ) image_mask = image_mask.reshape(bsz, 1, 1, 1) # Final image conditioning. original_image_embeds = image_mask * original_image_embeds # Concatenate the `original_image_embeds` with the `noisy_latents`. concatenated_noisy_latents = torch.cat([noisy_latents, original_image_embeds], dim=1) # Get the target for loss depending on the prediction type if noise_scheduler.config.prediction_type == "epsilon": target = noise elif noise_scheduler.config.prediction_type == "v_prediction": target = noise_scheduler.get_velocity(latents, noise, timesteps) else: raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") # Predict the noise residual and compute loss model_pred = unet(concatenated_noisy_latents, timesteps, encoder_hidden_states, return_dict=False)[0] loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") # Gather the losses across all processes for logging (if we use distributed training). 
avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() train_loss += avg_loss.item() / args.gradient_accumulation_steps # Backpropagate accelerator.backward(loss) if accelerator.sync_gradients: accelerator.clip_grad_norm_(trainable_params, args.max_grad_norm) optimizer.step() lr_scheduler.step() optimizer.zero_grad() # Checks if the accelerator has performed an optimization step behind the scenes if accelerator.sync_gradients: if args.use_ema: ema_unet.step(trainable_params) progress_bar.update(1) global_step += 1 accelerator.log({"train_loss": train_loss}, step=global_step) train_loss = 0.0 if global_step % args.checkpointing_steps == 0: if accelerator.is_main_process: # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` if args.checkpoints_total_limit is not None: checkpoints = os.listdir(args.output_dir) checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints if len(checkpoints) >= args.checkpoints_total_limit: num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1 removing_checkpoints = checkpoints[0:num_to_remove] logger.info( f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" ) logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") for removing_checkpoint in removing_checkpoints: removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) shutil.rmtree(removing_checkpoint) save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") accelerator.save_state(save_path) unwrapped_unet = unwrap_model(unet) unet_lora_state_dict = convert_state_dict_to_diffusers( get_peft_model_state_dict(unwrapped_unet) ) StableDiffusionInstructPix2PixPipeline.save_lora_weights( save_directory=save_path, unet_lora_layers=unet_lora_state_dict, safe_serialization=True, ) logger.info(f"Saved state to {save_path}") logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} progress_bar.set_postfix(**logs) if global_step >= args.max_train_steps: break if accelerator.is_main_process: if ( (args.val_image_url is not None) and (args.validation_prompt is not None) and (epoch % args.validation_epochs == 0) ): logger.info( f"Running validation... \n Generating {args.num_validation_images} images with prompt:" f" {args.validation_prompt}." ) # create pipeline if args.use_ema: # Store the UNet parameters temporarily and load the EMA parameters to perform inference. ema_unet.store(unet.parameters()) ema_unet.copy_to(unet.parameters()) # The models need unwrapping because for compatibility in distributed training mode. pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( args.pretrained_model_name_or_path, unet=unwrap_model(unet), text_encoder=unwrap_model(text_encoder), vae=unwrap_model(vae), revision=args.revision, variant=args.variant, torch_dtype=weight_dtype, ) # run inference log_validation( pipeline, args, accelerator, generator, ) if args.use_ema: # Switch back to the original UNet parameters. ema_unet.restore(unet.parameters()) del pipeline torch.cuda.empty_cache() # Create the pipeline using the trained modules and save it. 
accelerator.wait_for_everyone() if accelerator.is_main_process: if args.use_ema: ema_unet.copy_to(unet.parameters()) # store only LORA layers unet = unet.to(torch.float32) unwrapped_unet = unwrap_model(unet) unet_lora_state_dict = convert_state_dict_to_diffusers(get_peft_model_state_dict(unwrapped_unet)) StableDiffusionInstructPix2PixPipeline.save_lora_weights( save_directory=args.output_dir, unet_lora_layers=unet_lora_state_dict, safe_serialization=True, ) pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( args.pretrained_model_name_or_path, text_encoder=unwrap_model(text_encoder), vae=unwrap_model(vae), unet=unwrap_model(unet), revision=args.revision, variant=args.variant, ) pipeline.load_lora_weights(args.output_dir) images = None if (args.val_image_url is not None) and (args.validation_prompt is not None): images = log_validation( pipeline, args, accelerator, generator, ) if args.push_to_hub: save_model_card( repo_id, images=images, base_model=args.pretrained_model_name_or_path, dataset_name=args.dataset_name, repo_folder=args.output_dir, ) upload_folder( repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training", ignore_patterns=["step_*", "epoch_*"], ) accelerator.end_training() if __name__ == "__main__": main()
diffusers/examples/research_projects/instructpix2pix_lora/train_instruct_pix2pix_lora.py/0
{ "file_path": "diffusers/examples/research_projects/instructpix2pix_lora/train_instruct_pix2pix_lora.py", "repo_id": "diffusers", "token_count": 21355 }
143
# Stable Diffusion text-to-image fine-tuning

This extended LoRA training script was authored by [haofanwang](https://github.com/haofanwang). This is an experimental LoRA extension of [this example](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py). It additionally supports adding LoRA layers to the text encoder.

## Training with LoRA

Low-Rank Adaptation of Large Language Models was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://huggingface.co/papers/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*.

In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights. This has a couple of advantages:

- Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114).
- Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
- LoRA attention layers let you control the extent to which the model is adapted toward new training images via a `scale` parameter.

[cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository. With LoRA, it's possible to fine-tune Stable Diffusion on a custom image-caption pair dataset on consumer GPUs like the Tesla T4 or Tesla V100.

### Training

First, you need to set up your development environment as explained in the [installation section](#installing-the-dependencies). Make sure to set the `MODEL_NAME` and `DATASET_NAME` environment variables. Here, we will use [Stable Diffusion v1-4](https://hf.co/CompVis/stable-diffusion-v1-4) and the [Narutos dataset](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions).

**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**

**___Note: It is quite useful to monitor training progress by regularly generating sample images during training. [Weights and Biases](https://docs.wandb.ai/quickstart) is a nice solution to easily see the generated images during training. All you need to do is run `pip install wandb` before training to automatically log images.___**

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATASET_NAME="lambdalabs/naruto-blip-captions"
```

For this example we want to directly store the trained LoRA embeddings on the Hub, so we need to be logged in and add the `--push_to_hub` flag.

```bash
hf auth login
```

Now we can start training!

```bash
accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME --caption_column="text" \
  --resolution=512 --random_flip \
  --train_batch_size=1 \
  --num_train_epochs=100 --checkpointing_steps=5000 \
  --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
  --seed=42 \
  --output_dir="sd-naruto-model-lora" \
  --validation_prompt="cute dragon creature" --report_to="wandb" --use_peft \
  --lora_r=4 --lora_alpha=32 \
  --lora_text_encoder_r=4 --lora_text_encoder_alpha=32
```

The above command will also run inference as fine-tuning progresses and log the results to Weights and Biases.
**___Note: When using LoRA we can use a much higher learning rate compared to non-LoRA fine-tuning. Here we use *1e-4* instead of the usual *1e-5*. Also, by using LoRA, it's possible to run `train_text_to_image_lora.py` on consumer GPUs like the T4 or V100.___**

The final LoRA embedding weights have been uploaded to [sayakpaul/sd-model-finetuned-lora-t4](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4). **___Note: [The final weights](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4/blob/main/pytorch_lora_weights.bin) are only 3 MB in size, which is orders of magnitude smaller than the original model.___**

You can check some inference samples that were logged during the course of the fine-tuning process [here](https://wandb.ai/sayakpaul/text2image-fine-tune/runs/q4lc0xsw).

### Inference

Once you have trained a model using the above command, inference can be done simply using the `StableDiffusionPipeline` after loading the trained LoRA weights. You need to pass the `output_dir` for loading the LoRA weights, which in this case is `sd-naruto-model-lora`.

```python
from diffusers import StableDiffusionPipeline
import torch

model_path = "sayakpaul/sd-model-finetuned-lora-t4"
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")

prompt = "A naruto with green eyes and red legs."
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("naruto.png")
```
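Because this extended script can also attach LoRA layers to the text encoder (via the `--lora_text_encoder_r` and `--lora_text_encoder_alpha` flags above), you may prefer the generic `load_lora_weights` helper, which loads both the UNet and text encoder adapters when they are present. The sketch below is a minimal example and assumes the training output was saved in a diffusers-compatible LoRA format; the local path `sd-naruto-model-lora` is simply the `--output_dir` used above.

```python
from diffusers import StableDiffusionPipeline
import torch

# Path to the training output (the --output_dir above) or a Hub repo id.
# Assumed to contain diffusers-format LoRA weights for the UNet and,
# optionally, the text encoder.
model_path = "sd-naruto-model-lora"

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe.load_lora_weights(model_path)  # loads UNet (and text encoder) LoRA layers if present
pipe.to("cuda")

prompt = "A naruto with green eyes and red legs."
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("naruto_lora.png")
```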
diffusers/examples/research_projects/lora/README.md/0
{ "file_path": "diffusers/examples/research_projects/lora/README.md", "repo_id": "diffusers", "token_count": 1622 }
144
# Stable Diffusion text-to-image fine-tuning

The `train_text_to_image.py` script shows how to fine-tune the Stable Diffusion model on your own dataset.

___Note___: ___This script is experimental. The script fine-tunes the whole model, and often the model overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best result on your dataset.___

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then cd into the example folder and run

```bash
pip install -r requirements.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

### Naruto example

You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-4`, so you'll need to visit [its card](https://huggingface.co/CompVis/stable-diffusion-v1-4), read the license and tick the checkbox if you agree.

You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).

Run the following command to authenticate your token

```bash
hf auth login
```

If you have already cloned the repo, then you won't need to go through these steps.

<br>

## Use ONNXRuntime to accelerate training

In order to leverage onnxruntime to accelerate training, please use train_text_to_image.py

The command to train a DDPM UNetCondition model on the Naruto dataset with onnxruntime:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export dataset_name="lambdalabs/naruto-blip-captions"
accelerate launch --mixed_precision="fp16" train_text_to_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --use_ema \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir="sd-naruto-model"
```

Please contact Prathik Rao (prathikr), Sunghoon Choi (hanbitmyths), Ashwini Khade (askhade), or Peng Wang (pengwa) on GitHub with any questions.
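Once training completes, you can typically load the fine-tuned weights from the output directory with the regular `StableDiffusionPipeline`. The snippet below is a minimal sketch and assumes the run above saved a full pipeline to `sd-naruto-model` (the `--output_dir` used in the command).

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline from the --output_dir used during training.
pipe = StableDiffusionPipeline.from_pretrained("sd-naruto-model", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe("a naruto character surfing a wave", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("naruto_sample.png")
```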
diffusers/examples/research_projects/onnxruntime/text_to_image/README.md/0
{ "file_path": "diffusers/examples/research_projects/onnxruntime/text_to_image/README.md", "repo_id": "diffusers", "token_count": 844 }
145
# PromptDiffusion Pipeline

From the project [page](https://zhendong-wang.github.io/prompt-diffusion.github.io/):

"With a prompt consisting of a task-specific example pair of images and text guidance, and a new query image, Prompt Diffusion can comprehend the desired task and generate the corresponding output image on both seen (trained) and unseen (new) task types."

For any usage questions, please refer to the [paper](https://huggingface.co/papers/2305.01115).

Prepare models by converting them from the [checkpoint](https://huggingface.co/zhendongw/prompt-diffusion).

To convert the controlnet, use cldm_v15.yaml from the [repository](https://github.com/Zhendong-Wang/Prompt-Diffusion/tree/main/models/):

```bash
python convert_original_promptdiffusion_to_diffusers.py --checkpoint_path path-to-network-step04999.ckpt --original_config_file path-to-cldm_v15.yaml --dump_path path-to-output-directory
```

To learn about how to convert the fine-tuned Stable Diffusion model, see the [Load different Stable Diffusion formats guide](https://huggingface.co/docs/diffusers/main/en/using-diffusers/other-formats).

```py
import torch
from diffusers import UniPCMultistepScheduler
from diffusers.utils import load_image
from promptdiffusioncontrolnet import PromptDiffusionControlNetModel
from pipeline_prompt_diffusion import PromptDiffusionPipeline

from PIL import ImageOps

image_a = ImageOps.invert(load_image("https://github.com/Zhendong-Wang/Prompt-Diffusion/blob/main/images_to_try/house_line.png?raw=true"))
image_b = load_image("https://github.com/Zhendong-Wang/Prompt-Diffusion/blob/main/images_to_try/house.png?raw=true")
query = ImageOps.invert(load_image("https://github.com/Zhendong-Wang/Prompt-Diffusion/blob/main/images_to_try/new_01.png?raw=true"))

# load prompt diffusion controlnet and prompt diffusion
controlnet = PromptDiffusionControlNetModel.from_pretrained("iczaw/prompt-diffusion-diffusers", subfolder="controlnet", torch_dtype=torch.float16)
model_id = "path-to-model"
pipe = PromptDiffusionPipeline.from_pretrained("iczaw/prompt-diffusion-diffusers", subfolder="base", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16")

# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# remove following line if xformers is not installed
pipe.enable_xformers_memory_efficient_attention()

pipe.enable_model_cpu_offload()

# generate image
generator = torch.manual_seed(0)
image = pipe("a tortoise", num_inference_steps=20, generator=generator, image_pair=[image_a, image_b], image=query).images[0]
```
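The pipeline returns standard PIL images, so saving or further inspecting the result only takes one extra step, for example:

```py
# Write the generated image to disk (the file name is arbitrary).
image.save("prompt_diffusion_tortoise.png")
```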
diffusers/examples/research_projects/promptdiffusion/README.md/0
{ "file_path": "diffusers/examples/research_projects/promptdiffusion/README.md", "repo_id": "diffusers", "token_count": 829 }
146
# Training SANA Sprint Diffuser

This README explains how to use the provided bash script commands to download a pre-trained teacher diffuser model and train it on a specific dataset, following the [SANA Sprint methodology](https://huggingface.co/papers/2503.09641).

## Setup

### 1. Define the local paths

Set a variable for your desired output directory. This directory will store the downloaded model and the training checkpoints/results.

```bash
your_local_path='output' # Or any other path you prefer
mkdir -p $your_local_path # Create the directory if it doesn't exist
```

### 2. Download the pre-trained model

Download the SANA Sprint teacher model from Hugging Face Hub. The script uses the 1.6B parameter model.

```bash
hf download Efficient-Large-Model/SANA_Sprint_1.6B_1024px_teacher_diffusers --local-dir $your_local_path/SANA_Sprint_1.6B_1024px_teacher_diffusers
```

*(Optional: You can also download the 0.6B model by replacing the model name: `Efficient-Large-Model/Sana_Sprint_0.6B_1024px_teacher_diffusers`)*

### 3. Acquire the dataset shards

The training script in this example uses specific `.parquet` shards from a randomly selected `brivangl/midjourney-v6-llava` dataset instead of downloading the entire dataset automatically via `dataset_name`. The script specifically uses these three files:

* `data/train_000.parquet`
* `data/train_001.parquet`
* `data/train_002.parquet`

You can either:

* let the script download the dataset automatically during the first run, or
* download the shards manually (see the inspection sketch after the parameter list below).

**Note:** The full `brivangl/midjourney-v6-llava` dataset is much larger and contains many more shards. This script example explicitly trains *only* on the three specified shards.

## Usage

Once the model is downloaded, you can run the training script.

```bash
your_local_path='output' # Ensure this variable is set

python train_sana_sprint_diffusers.py \
  --pretrained_model_name_or_path=$your_local_path/SANA_Sprint_1.6B_1024px_teacher_diffusers \
  --output_dir=$your_local_path \
  --mixed_precision=bf16 \
  --resolution=1024 \
  --learning_rate=1e-6 \
  --max_train_steps=30000 \
  --dataloader_num_workers=8 \
  --dataset_name='brivangl/midjourney-v6-llava' \
  --file_path data/train_000.parquet data/train_001.parquet data/train_002.parquet \
  --checkpointing_steps=500 --checkpoints_total_limit=10 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --seed=453645634 \
  --train_largest_timestep \
  --misaligned_pairs_D \
  --gradient_checkpointing \
  --resume_from_checkpoint="latest"
```

### Explanation of parameters

* `--pretrained_model_name_or_path`: Path to the downloaded pre-trained model directory.
* `--output_dir`: Directory where training logs, checkpoints, and the final model will be saved.
* `--mixed_precision`: Use BF16 mixed precision for training, which can save memory and speed up training on compatible hardware.
* `--resolution`: The image resolution used for training (1024x1024).
* `--learning_rate`: The learning rate for the optimizer.
* `--max_train_steps`: The total number of training steps to perform.
* `--dataloader_num_workers`: Number of worker processes for loading data. Increase for faster data loading if your CPU and disk can handle it.
* `--dataset_name`: The name of the dataset on Hugging Face Hub (`brivangl/midjourney-v6-llava`).
* `--file_path`: **Specifies the local paths to the dataset shards to be used for training.** In this case, `data/train_000.parquet`, `data/train_001.parquet`, and `data/train_002.parquet`.
* `--checkpointing_steps`: Save a training checkpoint every X steps.
* `--checkpoints_total_limit`: Maximum number of checkpoints to keep. Older checkpoints will be deleted. * `--train_batch_size`: The batch size per GPU. * `--gradient_accumulation_steps`: Number of steps to accumulate gradients before performing an optimizer step. * `--seed`: Random seed for reproducibility. * `--train_largest_timestep`: A specific training strategy focusing on larger timesteps. * `--misaligned_pairs_D`: Another specific training strategy to add misaligned image-text pairs as fake data for GAN. * `--gradient_checkpointing`: Enable gradient checkpointing to save GPU memory. * `--resume_from_checkpoint`: Allows resuming training from the latest saved checkpoint in the `--output_dir`.
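If you prefer to fetch the three dataset shards manually before the first run (step 3 above), here is a minimal sketch using `huggingface_hub`. The local directory is an assumption; point `--file_path` at wherever the downloaded files end up:

```python
from huggingface_hub import hf_hub_download

# Download only the three shards this example trains on, matching the relative
# paths passed to --file_path in the command above.
for shard in ("data/train_000.parquet", "data/train_001.parquet", "data/train_002.parquet"):
    hf_hub_download(
        repo_id="brivangl/midjourney-v6-llava",
        filename=shard,
        repo_type="dataset",
        local_dir="output",  # assumed location; adjust to your setup
    )
```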
diffusers/examples/research_projects/sana/README.md/0
{ "file_path": "diffusers/examples/research_projects/sana/README.md", "repo_id": "diffusers", "token_count": 1374 }
147
# Show best practices for SDXL JAX import time import jax import jax.numpy as jnp import numpy as np from flax.jax_utils import replicate # Let's cache the model compilation, so that it doesn't take as long the next time around. from jax.experimental.compilation_cache import compilation_cache as cc from diffusers import FlaxStableDiffusionXLPipeline cc.initialize_cache("/tmp/sdxl_cache") NUM_DEVICES = jax.device_count() # 1. Let's start by downloading the model and loading it into our pipeline class # Adhering to JAX's functional approach, the model's parameters are returned seperatetely and # will have to be passed to the pipeline during inference pipeline, params = FlaxStableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", revision="refs/pr/95", split_head_dim=True ) # 2. We cast all parameters to bfloat16 EXCEPT the scheduler which we leave in # float32 to keep maximal precision scheduler_state = params.pop("scheduler") params = jax.tree_util.tree_map(lambda x: x.astype(jnp.bfloat16), params) params["scheduler"] = scheduler_state # 3. Next, we define the different inputs to the pipeline default_prompt = "a colorful photo of a castle in the middle of a forest with trees and bushes, by Ismail Inceoglu, shadows, high contrast, dynamic shading, hdr, detailed vegetation, digital painting, digital drawing, detailed painting, a detailed digital painting, gothic art, featured on deviantart" default_neg_prompt = "fog, grainy, purple" default_seed = 33 default_guidance_scale = 5.0 default_num_steps = 25 # 4. In order to be able to compile the pipeline # all inputs have to be tensors or strings # Let's tokenize the prompt and negative prompt def tokenize_prompt(prompt, neg_prompt): prompt_ids = pipeline.prepare_inputs(prompt) neg_prompt_ids = pipeline.prepare_inputs(neg_prompt) return prompt_ids, neg_prompt_ids # 5. To make full use of JAX's parallelization capabilities # the parameters and input tensors are duplicated across devices # To make sure every device generates a different image, we create # different seeds for each image. The model parameters won't change # during inference so we do not wrap them into a function p_params = replicate(params) def replicate_all(prompt_ids, neg_prompt_ids, seed): p_prompt_ids = replicate(prompt_ids) p_neg_prompt_ids = replicate(neg_prompt_ids) rng = jax.random.PRNGKey(seed) rng = jax.random.split(rng, NUM_DEVICES) return p_prompt_ids, p_neg_prompt_ids, rng # 6. Let's now put it all together in a generate function def generate( prompt, negative_prompt, seed=default_seed, guidance_scale=default_guidance_scale, num_inference_steps=default_num_steps, ): prompt_ids, neg_prompt_ids = tokenize_prompt(prompt, negative_prompt) prompt_ids, neg_prompt_ids, rng = replicate_all(prompt_ids, neg_prompt_ids, seed) images = pipeline( prompt_ids, p_params, rng, num_inference_steps=num_inference_steps, neg_prompt_ids=neg_prompt_ids, guidance_scale=guidance_scale, jit=True, ).images # convert the images to PIL images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) return pipeline.numpy_to_pil(np.array(images)) # 7. Remember that the first call will compile the function and hence be very slow. Let's run generate once # so that the pipeline call is compiled start = time.time() print("Compiling ...") generate(default_prompt, default_neg_prompt) print(f"Compiled in {time.time() - start}") # 8. 
Now the model forward pass will run very quickly, let's try it again start = time.time() prompt = "photo of a rhino dressed suit and tie sitting at a table in a bar with a bar stools, award winning photography, Elke vogelsang" neg_prompt = "cartoon, illustration, animation. face. male, female" images = generate(prompt, neg_prompt) print(f"Inference in {time.time() - start}") for i, image in enumerate(images): image.save(f"castle_{i}.png")
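As a hedged follow-up to step 8 (the prompt and file names are illustrative; `generate` and `default_neg_prompt` are the objects defined above), a second call can double as a quick throughput measurement, since compilation already happened:

```python
import time

start = time.time()
images = generate("a watercolor painting of a lighthouse at dawn", default_neg_prompt, seed=0, guidance_scale=7.0)
elapsed = time.time() - start
# One image per device was generated, so this is an effective images-per-second figure.
print(f"{len(images)} images in {elapsed:.2f}s ({len(images) / elapsed:.2f} images/s)")
for i, image in enumerate(images):
    image.save(f"lighthouse_{i}.png")
```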
diffusers/examples/research_projects/sdxl_flax/sdxl_single.py/0
{ "file_path": "diffusers/examples/research_projects/sdxl_flax/sdxl_single.py", "repo_id": "diffusers", "token_count": 1341 }
148
## Textual Inversion fine-tuning example [Textual inversion](https://huggingface.co/papers/2208.01618) is a method to personalize text2image models like stable diffusion on your own images using just 3-5 examples. The `textual_inversion.py` script shows how to implement the training procedure and adapt it for stable diffusion. ## Running on Colab Colab for training [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb) Colab for inference [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) ## Running locally with PyTorch ### Installing the dependencies Before running the scripts, make sure to install the library's training dependencies: **Important** To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then cd in the example folder and run: ```bash pip install -r requirements.txt ``` And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with: ```bash accelerate config ``` ### Cat toy example First, let's login so that we can upload the checkpoint to the Hub during training: ```bash hf auth login ``` Now let's get our dataset. For this example we will use some cat images: https://huggingface.co/datasets/diffusers/cat_toy_example . Let's first download it locally: ```py from huggingface_hub import snapshot_download local_dir = "./cat" snapshot_download("diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes") ``` This will be our training data. Now we can launch the training using: **___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___** **___Note: Please follow the [README_sdxl.md](./README_sdxl.md) if you are using the [stable-diffusion-xl](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).___** ```bash export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5" export DATA_DIR="./cat" accelerate launch textual_inversion.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --train_data_dir=$DATA_DIR \ --learnable_property="object" \ --placeholder_token="<cat-toy>" \ --initializer_token="toy" \ --resolution=512 \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --max_train_steps=3000 \ --learning_rate=5.0e-04 \ --scale_lr \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --push_to_hub \ --output_dir="textual_inversion_cat" ``` A full training run takes ~1 hour on one V100 GPU. **Note**: As described in [the official paper](https://huggingface.co/papers/2208.01618) only one embedding vector is used for the placeholder token, *e.g.* `"<cat-toy>"`. However, one can also add multiple embedding vectors for the placeholder token to increase the number of fine-tuneable parameters. This can help the model to learn more complex details. 
To use multiple embedding vectors, you should define `--num_vectors` to a number larger than one, *e.g.*: ```bash --num_vectors 5 ``` The saved textual inversion vectors will then be larger in size compared to the default case. ### Inference Once you have trained a model using above command, the inference can be done simply using the `StableDiffusionPipeline`. Make sure to include the `placeholder_token` in your prompt. ```python from diffusers import StableDiffusionPipeline import torch model_id = "path-to-your-trained-model" pipe = StableDiffusionPipeline.from_pretrained(model_id,torch_dtype=torch.float16).to("cuda") repo_id_embeds = "path-to-your-learned-embeds" pipe.load_textual_inversion(repo_id_embeds) prompt = "A <cat-toy> backpack" image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] image.save("cat-backpack.png") ``` ## Training with Flax/JAX For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script. Before running the scripts, make sure to install the library's training dependencies: ```bash pip install -U -r requirements_flax.txt ``` ```bash export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" export DATA_DIR="path-to-dir-containing-images" python textual_inversion_flax.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --train_data_dir=$DATA_DIR \ --learnable_property="object" \ --placeholder_token="<cat-toy>" \ --initializer_token="toy" \ --resolution=512 \ --train_batch_size=1 \ --max_train_steps=3000 \ --learning_rate=5.0e-04 \ --scale_lr \ --output_dir="textual_inversion_cat" ``` It should be at least 70% faster than the PyTorch script with the same configuration. ### Training with xformers: You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and padding the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
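To sanity-check what was learned, and to see the effect of `--num_vectors` on the saved embedding, you can inspect the output file directly. This is a small sketch; the file name `learned_embeds.safetensors` inside the output directory is assumed to be the script's default:

```python
from safetensors.torch import load_file

embeds = load_file("textual_inversion_cat/learned_embeds.safetensors")
for token, vectors in embeds.items():
    # With --num_vectors 5 you would expect a shape of (5, hidden_dim) here.
    print(token, tuple(vectors.shape))
```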
diffusers/examples/textual_inversion/README.md/0
{ "file_path": "diffusers/examples/textual_inversion/README.md", "repo_id": "diffusers", "token_count": 1783 }
149
#!/usr/bin/env python # coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import logging import os import shutil import sys import tempfile import torch from diffusers import VQModel from diffusers.utils.testing_utils import require_timm sys.path.append("..") from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402 logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger() stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) @require_timm class TextToImage(ExamplesTestsAccelerate): @property def test_vqmodel_config(self): return { "_class_name": "VQModel", "_diffusers_version": "0.17.0.dev0", "act_fn": "silu", "block_out_channels": [ 32, ], "down_block_types": [ "DownEncoderBlock2D", ], "in_channels": 3, "latent_channels": 4, "layers_per_block": 2, "norm_num_groups": 32, "norm_type": "spatial", "num_vq_embeddings": 32, "out_channels": 3, "sample_size": 32, "scaling_factor": 0.18215, "up_block_types": [ "UpDecoderBlock2D", ], "vq_embed_dim": 4, } @property def test_discriminator_config(self): return { "_class_name": "Discriminator", "_diffusers_version": "0.27.0.dev0", "in_channels": 3, "cond_channels": 0, "hidden_channels": 8, "depth": 4, } def get_vq_and_discriminator_configs(self, tmpdir): vqmodel_config_path = os.path.join(tmpdir, "vqmodel.json") discriminator_config_path = os.path.join(tmpdir, "discriminator.json") with open(vqmodel_config_path, "w") as fp: json.dump(self.test_vqmodel_config, fp) with open(discriminator_config_path, "w") as fp: json.dump(self.test_discriminator_config, fp) return vqmodel_config_path, discriminator_config_path def test_vqmodel(self): with tempfile.TemporaryDirectory() as tmpdir: vqmodel_config_path, discriminator_config_path = self.get_vq_and_discriminator_configs(tmpdir) test_args = f""" examples/vqgan/train_vqgan.py --dataset_name hf-internal-testing/dummy_image_text_data --resolution 32 --image_column image --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 2 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --model_config_name_or_path {vqmodel_config_path} --discriminator_config_name_or_path {discriminator_config_path} --output_dir {tmpdir} """.split() run_command(self._launch_args + test_args) # save_pretrained smoke test self.assertTrue( os.path.isfile(os.path.join(tmpdir, "discriminator", "diffusion_pytorch_model.safetensors")) ) self.assertTrue(os.path.isfile(os.path.join(tmpdir, "vqmodel", "diffusion_pytorch_model.safetensors"))) def test_vqmodel_checkpointing(self): with tempfile.TemporaryDirectory() as tmpdir: vqmodel_config_path, discriminator_config_path = self.get_vq_and_discriminator_configs(tmpdir) # Run training script with checkpointing # max_train_steps == 4, checkpointing_steps == 2 # Should create checkpoints at steps 2, 4 initial_run_args = f""" examples/vqgan/train_vqgan.py --dataset_name hf-internal-testing/dummy_image_text_data --resolution 32 
--image_column image --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 4 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --model_config_name_or_path {vqmodel_config_path} --discriminator_config_name_or_path {discriminator_config_path} --checkpointing_steps=2 --output_dir {tmpdir} --seed=0 """.split() run_command(self._launch_args + initial_run_args) # check checkpoint directories exist self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4"}, ) # check can run an intermediate checkpoint model = VQModel.from_pretrained(tmpdir, subfolder="checkpoint-2/vqmodel") image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) _ = model(image) # Remove checkpoint 2 so that we can check only later checkpoints exist after resuming shutil.rmtree(os.path.join(tmpdir, "checkpoint-2")) self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4"}, ) # Run training script for 2 total steps resuming from checkpoint 4 resume_run_args = f""" examples/vqgan/train_vqgan.py --dataset_name hf-internal-testing/dummy_image_text_data --resolution 32 --image_column image --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 6 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --model_config_name_or_path {vqmodel_config_path} --discriminator_config_name_or_path {discriminator_config_path} --checkpointing_steps=1 --resume_from_checkpoint={os.path.join(tmpdir, "checkpoint-4")} --output_dir {tmpdir} --seed=0 """.split() run_command(self._launch_args + resume_run_args) # check can run new fully trained pipeline model = VQModel.from_pretrained(tmpdir, subfolder="vqmodel") image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) _ = model(image) # no checkpoint-2 -> check old checkpoints do not exist # check new checkpoints exist # In the current script, checkpointing_steps 1 is equivalent to checkpointing_steps 2 as after the generator gets trained for one step, # the discriminator gets trained and loss and saving happens after that. 
Thus we do not expect to get a checkpoint-5 self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"}, ) def test_vqmodel_checkpointing_use_ema(self): with tempfile.TemporaryDirectory() as tmpdir: vqmodel_config_path, discriminator_config_path = self.get_vq_and_discriminator_configs(tmpdir) # Run training script with checkpointing # max_train_steps == 4, checkpointing_steps == 2 # Should create checkpoints at steps 2, 4 initial_run_args = f""" examples/vqgan/train_vqgan.py --dataset_name hf-internal-testing/dummy_image_text_data --resolution 32 --image_column image --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 4 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --model_config_name_or_path {vqmodel_config_path} --discriminator_config_name_or_path {discriminator_config_path} --checkpointing_steps=2 --output_dir {tmpdir} --use_ema --seed=0 """.split() run_command(self._launch_args + initial_run_args) model = VQModel.from_pretrained(tmpdir, subfolder="vqmodel") image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) _ = model(image) # check checkpoint directories exist self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4"}, ) # check can run an intermediate checkpoint model = VQModel.from_pretrained(tmpdir, subfolder="checkpoint-2/vqmodel") image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) _ = model(image) # Remove checkpoint 2 so that we can check only later checkpoints exist after resuming shutil.rmtree(os.path.join(tmpdir, "checkpoint-2")) # Run training script for 2 total steps resuming from checkpoint 4 resume_run_args = f""" examples/vqgan/train_vqgan.py --dataset_name hf-internal-testing/dummy_image_text_data --resolution 32 --image_column image --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 6 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --model_config_name_or_path {vqmodel_config_path} --discriminator_config_name_or_path {discriminator_config_path} --checkpointing_steps=1 --resume_from_checkpoint={os.path.join(tmpdir, "checkpoint-4")} --output_dir {tmpdir} --use_ema --seed=0 """.split() run_command(self._launch_args + resume_run_args) # check can run new fully trained pipeline model = VQModel.from_pretrained(tmpdir, subfolder="vqmodel") image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) _ = model(image) # no checkpoint-2 -> check old checkpoints do not exist # check new checkpoints exist self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"}, ) def test_vqmodel_checkpointing_checkpoints_total_limit(self): with tempfile.TemporaryDirectory() as tmpdir: vqmodel_config_path, discriminator_config_path = self.get_vq_and_discriminator_configs(tmpdir) # Run training script with checkpointing # max_train_steps == 6, checkpointing_steps == 2, checkpoints_total_limit == 2 # Should create checkpoints at steps 2, 4, 6 # with checkpoint at step 2 deleted initial_run_args = f""" examples/vqgan/train_vqgan.py --dataset_name hf-internal-testing/dummy_image_text_data --resolution 32 --image_column image --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 6 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --model_config_name_or_path {vqmodel_config_path} 
--discriminator_config_name_or_path {discriminator_config_path} --output_dir {tmpdir} --checkpointing_steps=2 --checkpoints_total_limit=2 --seed=0 """.split() run_command(self._launch_args + initial_run_args) model = VQModel.from_pretrained(tmpdir, subfolder="vqmodel") image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) _ = model(image) # check checkpoint directories exist # checkpoint-2 should have been deleted self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"}) def test_vqmodel_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self): with tempfile.TemporaryDirectory() as tmpdir: vqmodel_config_path, discriminator_config_path = self.get_vq_and_discriminator_configs(tmpdir) # Run training script with checkpointing # max_train_steps == 4, checkpointing_steps == 2 # Should create checkpoints at steps 2, 4 initial_run_args = f""" examples/vqgan/train_vqgan.py --dataset_name hf-internal-testing/dummy_image_text_data --resolution 32 --image_column image --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 4 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --model_config_name_or_path {vqmodel_config_path} --discriminator_config_name_or_path {discriminator_config_path} --checkpointing_steps=2 --output_dir {tmpdir} --seed=0 """.split() run_command(self._launch_args + initial_run_args) model = VQModel.from_pretrained(tmpdir, subfolder="vqmodel") image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) _ = model(image) # check checkpoint directories exist self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4"}, ) # resume and we should try to checkpoint at 6, where we'll have to remove # checkpoint-2 and checkpoint-4 instead of just a single previous checkpoint resume_run_args = f""" examples/vqgan/train_vqgan.py --dataset_name hf-internal-testing/dummy_image_text_data --resolution 32 --image_column image --train_batch_size 1 --gradient_accumulation_steps 1 --max_train_steps 8 --learning_rate 5.0e-04 --scale_lr --lr_scheduler constant --lr_warmup_steps 0 --model_config_name_or_path {vqmodel_config_path} --discriminator_config_name_or_path {discriminator_config_path} --output_dir {tmpdir} --checkpointing_steps=2 --resume_from_checkpoint={os.path.join(tmpdir, "checkpoint-4")} --checkpoints_total_limit=2 --seed=0 """.split() run_command(self._launch_args + resume_run_args) model = VQModel.from_pretrained(tmpdir, subfolder="vqmodel") image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) _ = model(image) # check checkpoint directories exist self.assertEqual( {x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-6", "checkpoint-8"}, )
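A hedged note on running these example tests locally (assumes `pytest`, `timm`, and an existing `accelerate config`; invoke from the repository root):

```python
import pytest

# Run only the checkpointing-related cases from this file.
pytest.main(["examples/vqgan/test_vqgan.py", "-k", "checkpointing", "-v"])
```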
diffusers/examples/vqgan/test_vqgan.py/0
{ "file_path": "diffusers/examples/vqgan/test_vqgan.py", "repo_id": "diffusers", "token_count": 8162 }
150
""" Convert a CogView4 checkpoint from Megatron to the Diffusers format. Example usage: python scripts/convert_cogview4_to_diffusers.py \ --transformer_checkpoint_path 'your path/cogview4_6b/mp_rank_00/model_optim_rng.pt' \ --vae_checkpoint_path 'your path/cogview4_6b/imagekl_ch16.pt' \ --output_path "THUDM/CogView4-6B" \ --dtype "bf16" Arguments: --transformer_checkpoint_path: Path to Transformer state dict. --vae_checkpoint_path: Path to VAE state dict. --output_path: The path to save the converted model. --push_to_hub: Whether to push the converted checkpoint to the HF Hub or not. Defaults to `False`. --text_encoder_cache_dir: Cache directory where text encoder is located. Defaults to None, which means HF_HOME will be used. --dtype: The dtype to save the model in (default: "bf16", options: "fp16", "bf16", "fp32"). If None, the dtype of the state dict is considered. Default is "bf16" because CogView4 uses bfloat16 for training. Note: You must provide either --transformer_checkpoint_path or --vae_checkpoint_path. """ import argparse import torch from tqdm import tqdm from transformers import GlmModel, PreTrainedTokenizerFast from diffusers import ( AutoencoderKL, CogView4ControlPipeline, CogView4Pipeline, CogView4Transformer2DModel, FlowMatchEulerDiscreteScheduler, ) from diffusers.loaders.single_file_utils import convert_ldm_vae_checkpoint parser = argparse.ArgumentParser() parser.add_argument( "--transformer_checkpoint_path", default=None, type=str, help="Path to Megatron (not SAT) Transformer checkpoint, e.g., 'model_optim_rng.pt'.", ) parser.add_argument( "--vae_checkpoint_path", default=None, type=str, help="(Optional) Path to VAE checkpoint, e.g., 'imagekl_ch16.pt'.", ) parser.add_argument( "--output_path", required=True, type=str, help="Directory to save the final Diffusers format pipeline.", ) parser.add_argument( "--push_to_hub", action="store_true", default=False, help="Whether to push the converted model to the HuggingFace Hub.", ) parser.add_argument( "--text_encoder_cache_dir", type=str, default=None, help="Specify the cache directory for the text encoder.", ) parser.add_argument( "--dtype", type=str, default="bf16", choices=["fp16", "bf16", "fp32"], help="Data type to save the model in.", ) parser.add_argument( "--num_layers", type=int, default=28, help="Number of Transformer layers (e.g., 28, 48...).", ) parser.add_argument( "--num_heads", type=int, default=32, help="Number of attention heads.", ) parser.add_argument( "--hidden_size", type=int, default=4096, help="Transformer hidden dimension size.", ) parser.add_argument( "--attention_head_dim", type=int, default=128, help="Dimension of each attention head.", ) parser.add_argument( "--time_embed_dim", type=int, default=512, help="Dimension of time embeddings.", ) parser.add_argument( "--condition_dim", type=int, default=256, help="Dimension of condition embeddings.", ) parser.add_argument( "--pos_embed_max_size", type=int, default=128, help="Maximum size for positional embeddings.", ) parser.add_argument( "--control", action="store_true", default=False, help="Whether to use control model.", ) args = parser.parse_args() def swap_scale_shift(weight, dim): """ Swap the scale and shift components in the weight tensor. Args: weight (torch.Tensor): The original weight tensor. dim (int): The dimension along which to split. Returns: torch.Tensor: The modified weight tensor with scale and shift swapped. 
""" shift, scale = weight.chunk(2, dim=dim) new_weight = torch.cat([scale, shift], dim=dim) return new_weight def convert_megatron_transformer_checkpoint_to_diffusers( ckpt_path: str, num_layers: int, num_heads: int, hidden_size: int, ): """ Convert a Megatron Transformer checkpoint to Diffusers format. Args: ckpt_path (str): Path to the Megatron Transformer checkpoint. num_layers (int): Number of Transformer layers. num_heads (int): Number of attention heads. hidden_size (int): Hidden size of the Transformer. Returns: dict: The converted state dictionary compatible with Diffusers. """ ckpt = torch.load(ckpt_path, map_location="cpu", weights_only=False) mega = ckpt["model"] new_state_dict = {} # Patch Embedding new_state_dict["patch_embed.proj.weight"] = mega["encoder_expand_linear.weight"].reshape( hidden_size, 128 if args.control else 64 ) new_state_dict["patch_embed.proj.bias"] = mega["encoder_expand_linear.bias"] new_state_dict["patch_embed.text_proj.weight"] = mega["text_projector.weight"] new_state_dict["patch_embed.text_proj.bias"] = mega["text_projector.bias"] # Time Condition Embedding new_state_dict["time_condition_embed.timestep_embedder.linear_1.weight"] = mega[ "time_embedding.time_embed.0.weight" ] new_state_dict["time_condition_embed.timestep_embedder.linear_1.bias"] = mega["time_embedding.time_embed.0.bias"] new_state_dict["time_condition_embed.timestep_embedder.linear_2.weight"] = mega[ "time_embedding.time_embed.2.weight" ] new_state_dict["time_condition_embed.timestep_embedder.linear_2.bias"] = mega["time_embedding.time_embed.2.bias"] new_state_dict["time_condition_embed.condition_embedder.linear_1.weight"] = mega[ "label_embedding.label_embed.0.weight" ] new_state_dict["time_condition_embed.condition_embedder.linear_1.bias"] = mega[ "label_embedding.label_embed.0.bias" ] new_state_dict["time_condition_embed.condition_embedder.linear_2.weight"] = mega[ "label_embedding.label_embed.2.weight" ] new_state_dict["time_condition_embed.condition_embedder.linear_2.bias"] = mega[ "label_embedding.label_embed.2.bias" ] # Convert each Transformer layer for i in tqdm(range(num_layers), desc="Converting layers (Megatron->Diffusers)"): block_prefix = f"transformer_blocks.{i}." 
# AdaLayerNorm new_state_dict[block_prefix + "norm1.linear.weight"] = mega[f"decoder.layers.{i}.adaln.weight"] new_state_dict[block_prefix + "norm1.linear.bias"] = mega[f"decoder.layers.{i}.adaln.bias"] qkv_weight = mega[f"decoder.layers.{i}.self_attention.linear_qkv.weight"] qkv_bias = mega[f"decoder.layers.{i}.self_attention.linear_qkv.bias"] # Reshape to match SAT logic qkv_weight = qkv_weight.view(num_heads, 3, hidden_size // num_heads, hidden_size) qkv_weight = qkv_weight.permute(1, 0, 2, 3).reshape(3 * hidden_size, hidden_size) qkv_bias = qkv_bias.view(num_heads, 3, hidden_size // num_heads) qkv_bias = qkv_bias.permute(1, 0, 2).reshape(3 * hidden_size) # Assign to Diffusers keys q, k, v = torch.chunk(qkv_weight, 3, dim=0) qb, kb, vb = torch.chunk(qkv_bias, 3, dim=0) new_state_dict[block_prefix + "attn1.to_q.weight"] = q new_state_dict[block_prefix + "attn1.to_q.bias"] = qb new_state_dict[block_prefix + "attn1.to_k.weight"] = k new_state_dict[block_prefix + "attn1.to_k.bias"] = kb new_state_dict[block_prefix + "attn1.to_v.weight"] = v new_state_dict[block_prefix + "attn1.to_v.bias"] = vb # Attention Output new_state_dict[block_prefix + "attn1.to_out.0.weight"] = mega[ f"decoder.layers.{i}.self_attention.linear_proj.weight" ] new_state_dict[block_prefix + "attn1.to_out.0.bias"] = mega[ f"decoder.layers.{i}.self_attention.linear_proj.bias" ] # MLP new_state_dict[block_prefix + "ff.net.0.proj.weight"] = mega[f"decoder.layers.{i}.mlp.linear_fc1.weight"] new_state_dict[block_prefix + "ff.net.0.proj.bias"] = mega[f"decoder.layers.{i}.mlp.linear_fc1.bias"] new_state_dict[block_prefix + "ff.net.2.weight"] = mega[f"decoder.layers.{i}.mlp.linear_fc2.weight"] new_state_dict[block_prefix + "ff.net.2.bias"] = mega[f"decoder.layers.{i}.mlp.linear_fc2.bias"] # Final Layers new_state_dict["norm_out.linear.weight"] = swap_scale_shift(mega["adaln_final.weight"], dim=0) new_state_dict["norm_out.linear.bias"] = swap_scale_shift(mega["adaln_final.bias"], dim=0) new_state_dict["proj_out.weight"] = mega["output_projector.weight"] new_state_dict["proj_out.bias"] = mega["output_projector.bias"] return new_state_dict def convert_cogview4_vae_checkpoint_to_diffusers(ckpt_path, vae_config): """ Convert a CogView4 VAE checkpoint to Diffusers format. Args: ckpt_path (str): Path to the VAE checkpoint. vae_config (dict): Configuration dictionary for the VAE. Returns: dict: The converted VAE state dictionary compatible with Diffusers. """ original_state_dict = torch.load(ckpt_path, map_location="cpu", weights_only=False)["state_dict"] return convert_ldm_vae_checkpoint(original_state_dict, vae_config) def main(args): """ Main function to convert CogView4 checkpoints to Diffusers format. Args: args (argparse.Namespace): Parsed command-line arguments. 
""" # Determine the desired data type if args.dtype == "fp16": dtype = torch.float16 elif args.dtype == "bf16": dtype = torch.bfloat16 elif args.dtype == "fp32": dtype = torch.float32 else: raise ValueError(f"Unsupported dtype: {args.dtype}") transformer = None vae = None # Convert Transformer checkpoint if provided if args.transformer_checkpoint_path is not None: converted_transformer_state_dict = convert_megatron_transformer_checkpoint_to_diffusers( ckpt_path=args.transformer_checkpoint_path, num_layers=args.num_layers, num_heads=args.num_heads, hidden_size=args.hidden_size, ) transformer = CogView4Transformer2DModel( patch_size=2, in_channels=32 if args.control else 16, num_layers=args.num_layers, attention_head_dim=args.attention_head_dim, num_attention_heads=args.num_heads, out_channels=16, text_embed_dim=args.hidden_size, time_embed_dim=args.time_embed_dim, condition_dim=args.condition_dim, pos_embed_max_size=args.pos_embed_max_size, ) transformer.load_state_dict(converted_transformer_state_dict, strict=True) # Convert to the specified dtype if dtype is not None: transformer = transformer.to(dtype=dtype) # Convert VAE checkpoint if provided if args.vae_checkpoint_path is not None: vae_config = { "in_channels": 3, "out_channels": 3, "down_block_types": ("DownEncoderBlock2D",) * 4, "up_block_types": ("UpDecoderBlock2D",) * 4, "block_out_channels": (128, 512, 1024, 1024), "layers_per_block": 3, "act_fn": "silu", "latent_channels": 16, "norm_num_groups": 32, "sample_size": 1024, "scaling_factor": 1.0, "shift_factor": 0.0, "force_upcast": True, "use_quant_conv": False, "use_post_quant_conv": False, "mid_block_add_attention": False, } converted_vae_state_dict = convert_cogview4_vae_checkpoint_to_diffusers(args.vae_checkpoint_path, vae_config) vae = AutoencoderKL(**vae_config) vae.load_state_dict(converted_vae_state_dict, strict=True) if dtype is not None: vae = vae.to(dtype=dtype) # Load the text encoder and tokenizer text_encoder_id = "THUDM/glm-4-9b-hf" tokenizer = PreTrainedTokenizerFast.from_pretrained(text_encoder_id) text_encoder = GlmModel.from_pretrained( text_encoder_id, cache_dir=args.text_encoder_cache_dir, torch_dtype=torch.bfloat16 if args.dtype == "bf16" else torch.float32, ) for param in text_encoder.parameters(): param.data = param.data.contiguous() # Initialize the scheduler scheduler = FlowMatchEulerDiscreteScheduler( base_shift=0.25, max_shift=0.75, base_image_seq_len=256, use_dynamic_shifting=True, time_shift_type="linear" ) # Create the pipeline if args.control: pipe = CogView4ControlPipeline( tokenizer=tokenizer, text_encoder=text_encoder, vae=vae, transformer=transformer, scheduler=scheduler, ) else: pipe = CogView4Pipeline( tokenizer=tokenizer, text_encoder=text_encoder, vae=vae, transformer=transformer, scheduler=scheduler, ) # Save the converted pipeline pipe.save_pretrained( args.output_path, safe_serialization=True, max_shard_size="5GB", push_to_hub=args.push_to_hub, ) if __name__ == "__main__": main(args)
diffusers/scripts/convert_cogview4_to_diffusers_megatron.py/0
{ "file_path": "diffusers/scripts/convert_cogview4_to_diffusers_megatron.py", "repo_id": "diffusers", "token_count": 5759 }
151
import argparse import os import torch from huggingface_hub import snapshot_download from safetensors.torch import load_file from transformers import AutoTokenizer from diffusers import AutoencoderKL, FlowMatchEulerDiscreteScheduler, OmniGenPipeline, OmniGenTransformer2DModel def main(args): # checkpoint from https://huggingface.co/Shitao/OmniGen-v1 if not os.path.exists(args.origin_ckpt_path): print("Model not found, downloading...") cache_folder = os.getenv("HF_HUB_CACHE") args.origin_ckpt_path = snapshot_download( repo_id=args.origin_ckpt_path, cache_dir=cache_folder, ignore_patterns=["flax_model.msgpack", "rust_model.ot", "tf_model.h5", "model.pt"], ) print(f"Downloaded model to {args.origin_ckpt_path}") ckpt = os.path.join(args.origin_ckpt_path, "model.safetensors") ckpt = load_file(ckpt, device="cpu") mapping_dict = { "pos_embed": "patch_embedding.pos_embed", "x_embedder.proj.weight": "patch_embedding.output_image_proj.weight", "x_embedder.proj.bias": "patch_embedding.output_image_proj.bias", "input_x_embedder.proj.weight": "patch_embedding.input_image_proj.weight", "input_x_embedder.proj.bias": "patch_embedding.input_image_proj.bias", "final_layer.adaLN_modulation.1.weight": "norm_out.linear.weight", "final_layer.adaLN_modulation.1.bias": "norm_out.linear.bias", "final_layer.linear.weight": "proj_out.weight", "final_layer.linear.bias": "proj_out.bias", "time_token.mlp.0.weight": "time_token.linear_1.weight", "time_token.mlp.0.bias": "time_token.linear_1.bias", "time_token.mlp.2.weight": "time_token.linear_2.weight", "time_token.mlp.2.bias": "time_token.linear_2.bias", "t_embedder.mlp.0.weight": "t_embedder.linear_1.weight", "t_embedder.mlp.0.bias": "t_embedder.linear_1.bias", "t_embedder.mlp.2.weight": "t_embedder.linear_2.weight", "t_embedder.mlp.2.bias": "t_embedder.linear_2.bias", "llm.embed_tokens.weight": "embed_tokens.weight", } converted_state_dict = {} for k, v in ckpt.items(): if k in mapping_dict: converted_state_dict[mapping_dict[k]] = v elif "qkv" in k: to_q, to_k, to_v = v.chunk(3) converted_state_dict[f"layers.{k.split('.')[2]}.self_attn.to_q.weight"] = to_q converted_state_dict[f"layers.{k.split('.')[2]}.self_attn.to_k.weight"] = to_k converted_state_dict[f"layers.{k.split('.')[2]}.self_attn.to_v.weight"] = to_v elif "o_proj" in k: converted_state_dict[f"layers.{k.split('.')[2]}.self_attn.to_out.0.weight"] = v else: converted_state_dict[k[4:]] = v transformer = OmniGenTransformer2DModel( rope_scaling={ "long_factor": [ 1.0299999713897705, 1.0499999523162842, 1.0499999523162842, 1.0799999237060547, 1.2299998998641968, 1.2299998998641968, 1.2999999523162842, 1.4499999284744263, 1.5999999046325684, 1.6499998569488525, 1.8999998569488525, 2.859999895095825, 3.68999981880188, 5.419999599456787, 5.489999771118164, 5.489999771118164, 9.09000015258789, 11.579999923706055, 15.65999984741211, 15.769999504089355, 15.789999961853027, 18.360000610351562, 21.989999771118164, 23.079999923706055, 30.009998321533203, 32.35000228881836, 32.590003967285156, 35.56000518798828, 39.95000457763672, 53.840003967285156, 56.20000457763672, 57.95000457763672, 59.29000473022461, 59.77000427246094, 59.920005798339844, 61.190006256103516, 61.96000671386719, 62.50000762939453, 63.3700065612793, 63.48000717163086, 63.48000717163086, 63.66000747680664, 63.850006103515625, 64.08000946044922, 64.760009765625, 64.80001068115234, 64.81001281738281, 64.81001281738281, ], "short_factor": [ 1.05, 1.05, 1.05, 1.1, 1.1, 1.1, 1.2500000000000002, 1.2500000000000002, 1.4000000000000004, 1.4500000000000004, 
1.5500000000000005, 1.8500000000000008, 1.9000000000000008, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.000000000000001, 2.1000000000000005, 2.1000000000000005, 2.2, 2.3499999999999996, 2.3499999999999996, 2.3499999999999996, 2.3499999999999996, 2.3999999999999995, 2.3999999999999995, 2.6499999999999986, 2.6999999999999984, 2.8999999999999977, 2.9499999999999975, 3.049999999999997, 3.049999999999997, 3.049999999999997, ], "type": "su", }, patch_size=2, in_channels=4, pos_embed_max_size=192, ) transformer.load_state_dict(converted_state_dict, strict=True) transformer.to(torch.bfloat16) num_model_params = sum(p.numel() for p in transformer.parameters()) print(f"Total number of transformer parameters: {num_model_params}") scheduler = FlowMatchEulerDiscreteScheduler(invert_sigmas=True, num_train_timesteps=1) vae = AutoencoderKL.from_pretrained(os.path.join(args.origin_ckpt_path, "vae"), torch_dtype=torch.float32) tokenizer = AutoTokenizer.from_pretrained(args.origin_ckpt_path) pipeline = OmniGenPipeline(tokenizer=tokenizer, transformer=transformer, vae=vae, scheduler=scheduler) pipeline.save_pretrained(args.dump_path) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--origin_ckpt_path", default="Shitao/OmniGen-v1", type=str, required=False, help="Path to the checkpoint to convert.", ) parser.add_argument( "--dump_path", default="OmniGen-v1-diffusers", type=str, required=False, help="Path to the output pipeline." ) args = parser.parse_args() main(args)
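A hedged check of the converted pipeline: the path matches the default `--dump_path` above, while the prompt, resolution, and device are assumptions:

```python
import torch

from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = pipe(prompt="a cat wearing sunglasses, studio photo", height=512, width=512).images[0]
image.save("omnigen_smoke_test.png")
```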
diffusers/scripts/convert_omnigen_to_diffusers.py/0
{ "file_path": "diffusers/scripts/convert_omnigen_to_diffusers.py", "repo_id": "diffusers", "token_count": 4235 }
152
# Run this script to convert the Stable Cascade model weights to a diffusers pipeline. import argparse from contextlib import nullcontext import torch from safetensors.torch import load_file from transformers import ( AutoTokenizer, CLIPConfig, CLIPImageProcessor, CLIPTextModelWithProjection, CLIPVisionModelWithProjection, ) from diffusers import ( DDPMWuerstchenScheduler, StableCascadeCombinedPipeline, StableCascadeDecoderPipeline, StableCascadePriorPipeline, ) from diffusers.loaders.single_file_utils import convert_stable_cascade_unet_single_file_to_diffusers from diffusers.models import StableCascadeUNet from diffusers.models.modeling_utils import load_model_dict_into_meta from diffusers.pipelines.wuerstchen import PaellaVQModel from diffusers.utils import is_accelerate_available if is_accelerate_available(): from accelerate import init_empty_weights parser = argparse.ArgumentParser(description="Convert Stable Cascade model weights to a diffusers pipeline") parser.add_argument("--model_path", type=str, help="Location of Stable Cascade weights") parser.add_argument("--stage_c_name", type=str, default="stage_c.safetensors", help="Name of stage c checkpoint file") parser.add_argument("--stage_b_name", type=str, default="stage_b.safetensors", help="Name of stage b checkpoint file") parser.add_argument("--skip_stage_c", action="store_true", help="Skip converting stage c") parser.add_argument("--skip_stage_b", action="store_true", help="Skip converting stage b") parser.add_argument("--use_safetensors", action="store_true", help="Use SafeTensors for conversion") parser.add_argument( "--prior_output_path", default="stable-cascade-prior", type=str, help="Hub organization to save the pipelines to" ) parser.add_argument( "--decoder_output_path", type=str, default="stable-cascade-decoder", help="Hub organization to save the pipelines to", ) parser.add_argument( "--combined_output_path", type=str, default="stable-cascade-combined", help="Hub organization to save the pipelines to", ) parser.add_argument("--save_combined", action="store_true") parser.add_argument("--push_to_hub", action="store_true", help="Push to hub") parser.add_argument("--variant", type=str, help="Set to bf16 to save bfloat16 weights") args = parser.parse_args() if args.skip_stage_b and args.skip_stage_c: raise ValueError("At least one stage should be converted") if (args.skip_stage_b or args.skip_stage_c) and args.save_combined: raise ValueError("Cannot skip stages when creating a combined pipeline") model_path = args.model_path device = "cpu" if args.variant == "bf16": dtype = torch.bfloat16 else: dtype = torch.float32 # set paths to model weights prior_checkpoint_path = f"{model_path}/{args.stage_c_name}" decoder_checkpoint_path = f"{model_path}/{args.stage_b_name}" # Clip Text encoder and tokenizer config = CLIPConfig.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k") config.text_config.projection_dim = config.projection_dim text_encoder = CLIPTextModelWithProjection.from_pretrained( "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k", config=config.text_config ) tokenizer = AutoTokenizer.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k") # image processor feature_extractor = CLIPImageProcessor() image_encoder = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-large-patch14") # scheduler for prior and decoder scheduler = DDPMWuerstchenScheduler() ctx = init_empty_weights if is_accelerate_available() else nullcontext if not args.skip_stage_c: # Prior if args.use_safetensors: prior_orig_state_dict = 
load_file(prior_checkpoint_path, device=device) else: prior_orig_state_dict = torch.load(prior_checkpoint_path, map_location=device) prior_state_dict = convert_stable_cascade_unet_single_file_to_diffusers(prior_orig_state_dict) with ctx(): prior_model = StableCascadeUNet( in_channels=16, out_channels=16, timestep_ratio_embedding_dim=64, patch_size=1, conditioning_dim=2048, block_out_channels=[2048, 2048], num_attention_heads=[32, 32], down_num_layers_per_block=[8, 24], up_num_layers_per_block=[24, 8], down_blocks_repeat_mappers=[1, 1], up_blocks_repeat_mappers=[1, 1], block_types_per_layer=[ ["SDCascadeResBlock", "SDCascadeTimestepBlock", "SDCascadeAttnBlock"], ["SDCascadeResBlock", "SDCascadeTimestepBlock", "SDCascadeAttnBlock"], ], clip_text_in_channels=1280, clip_text_pooled_in_channels=1280, clip_image_in_channels=768, clip_seq=4, kernel_size=3, dropout=[0.1, 0.1], self_attn=True, timestep_conditioning_type=["sca", "crp"], switch_level=[False], ) if is_accelerate_available(): load_model_dict_into_meta(prior_model, prior_state_dict) else: prior_model.load_state_dict(prior_state_dict) # Prior pipeline prior_pipeline = StableCascadePriorPipeline( prior=prior_model, tokenizer=tokenizer, text_encoder=text_encoder, image_encoder=image_encoder, scheduler=scheduler, feature_extractor=feature_extractor, ) prior_pipeline.to(dtype).save_pretrained( args.prior_output_path, push_to_hub=args.push_to_hub, variant=args.variant ) if not args.skip_stage_b: # Decoder if args.use_safetensors: decoder_orig_state_dict = load_file(decoder_checkpoint_path, device=device) else: decoder_orig_state_dict = torch.load(decoder_checkpoint_path, map_location=device) decoder_state_dict = convert_stable_cascade_unet_single_file_to_diffusers(decoder_orig_state_dict) with ctx(): decoder = StableCascadeUNet( in_channels=4, out_channels=4, timestep_ratio_embedding_dim=64, patch_size=2, conditioning_dim=1280, block_out_channels=[320, 640, 1280, 1280], down_num_layers_per_block=[2, 6, 28, 6], up_num_layers_per_block=[6, 28, 6, 2], down_blocks_repeat_mappers=[1, 1, 1, 1], up_blocks_repeat_mappers=[3, 3, 2, 2], num_attention_heads=[0, 0, 20, 20], block_types_per_layer=[ ["SDCascadeResBlock", "SDCascadeTimestepBlock"], ["SDCascadeResBlock", "SDCascadeTimestepBlock"], ["SDCascadeResBlock", "SDCascadeTimestepBlock", "SDCascadeAttnBlock"], ["SDCascadeResBlock", "SDCascadeTimestepBlock", "SDCascadeAttnBlock"], ], clip_text_pooled_in_channels=1280, clip_seq=4, effnet_in_channels=16, pixel_mapper_in_channels=3, kernel_size=3, dropout=[0, 0, 0.1, 0.1], self_attn=True, timestep_conditioning_type=["sca"], ) if is_accelerate_available(): load_model_dict_into_meta(decoder, decoder_state_dict) else: decoder.load_state_dict(decoder_state_dict) # VQGAN from Wuerstchen-V2 vqmodel = PaellaVQModel.from_pretrained("warp-ai/wuerstchen", subfolder="vqgan") # Decoder pipeline decoder_pipeline = StableCascadeDecoderPipeline( decoder=decoder, text_encoder=text_encoder, tokenizer=tokenizer, vqgan=vqmodel, scheduler=scheduler ) decoder_pipeline.to(dtype).save_pretrained( args.decoder_output_path, push_to_hub=args.push_to_hub, variant=args.variant ) if args.save_combined: # Stable Cascade combined pipeline stable_cascade_pipeline = StableCascadeCombinedPipeline( # Decoder text_encoder=text_encoder, tokenizer=tokenizer, decoder=decoder, scheduler=scheduler, vqgan=vqmodel, # Prior prior_text_encoder=text_encoder, prior_tokenizer=tokenizer, prior_prior=prior_model, prior_scheduler=scheduler, prior_image_encoder=image_encoder, 
prior_feature_extractor=feature_extractor, ) stable_cascade_pipeline.to(dtype).save_pretrained( args.combined_output_path, push_to_hub=args.push_to_hub, variant=args.variant )
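A hedged two-stage generation sketch with the converted pipelines (the paths follow the argparse defaults above; prompt, step counts, and device are assumptions). The prior produces image embeddings that the decoder then turns into pixels:

```python
import torch

from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline

prior = StableCascadePriorPipeline.from_pretrained("stable-cascade-prior", torch_dtype=torch.bfloat16).to("cuda")
decoder = StableCascadeDecoderPipeline.from_pretrained("stable-cascade-decoder", torch_dtype=torch.bfloat16).to("cuda")

prompt = "an astronaut riding a horse, cinematic lighting"
prior_output = prior(prompt=prompt, num_inference_steps=20, guidance_scale=4.0)
image = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    num_inference_steps=10,
    guidance_scale=0.0,
).images[0]
image.save("stable_cascade_smoke_test.png")
```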
diffusers/scripts/convert_stable_cascade.py/0
{ "file_path": "diffusers/scripts/convert_stable_cascade.py", "repo_id": "diffusers", "token_count": 3605 }
153
""" This script demonstrates how to extract a LoRA checkpoint from a fully finetuned model with the CogVideoX model. To make it work for other models: * Change the model class. Here we use `CogVideoXTransformer3DModel`. For Flux, it would be `FluxTransformer2DModel`, for example. (TODO: more reason to add `AutoModel`). * Spply path to the base checkpoint via `base_ckpt_path`. * Supply path to the fully fine-tuned checkpoint via `--finetune_ckpt_path`. * Change the `--rank` as needed. Example usage: ```bash python extract_lora_from_model.py \ --base_ckpt_path=THUDM/CogVideoX-5b \ --finetune_ckpt_path=finetrainers/cakeify-v0 \ --lora_out_path=cakeify_lora.safetensors ``` Script is adapted from https://github.com/Stability-AI/stability-ComfyUI-nodes/blob/001154622564b17223ce0191803c5fff7b87146c/control_lora_create.py """ import argparse import torch from safetensors.torch import save_file from tqdm.auto import tqdm from diffusers import CogVideoXTransformer3DModel RANK = 64 CLAMP_QUANTILE = 0.99 # Comes from # https://github.com/Stability-AI/stability-ComfyUI-nodes/blob/001154622564b17223ce0191803c5fff7b87146c/control_lora_create.py#L9 def extract_lora(diff, rank): # Important to use CUDA otherwise, very slow! if torch.cuda.is_available(): diff = diff.to("cuda") is_conv2d = len(diff.shape) == 4 kernel_size = None if not is_conv2d else diff.size()[2:4] is_conv2d_3x3 = is_conv2d and kernel_size != (1, 1) out_dim, in_dim = diff.size()[0:2] rank = min(rank, in_dim, out_dim) if is_conv2d: if is_conv2d_3x3: diff = diff.flatten(start_dim=1) else: diff = diff.squeeze() U, S, Vh = torch.linalg.svd(diff.float()) U = U[:, :rank] S = S[:rank] U = U @ torch.diag(S) Vh = Vh[:rank, :] dist = torch.cat([U.flatten(), Vh.flatten()]) hi_val = torch.quantile(dist, CLAMP_QUANTILE) low_val = -hi_val U = U.clamp(low_val, hi_val) Vh = Vh.clamp(low_val, hi_val) if is_conv2d: U = U.reshape(out_dim, rank, 1, 1) Vh = Vh.reshape(rank, in_dim, kernel_size[0], kernel_size[1]) return (U.cpu(), Vh.cpu()) def parse_args(): parser = argparse.ArgumentParser() parser.add_argument( "--base_ckpt_path", default=None, type=str, required=True, help="Base checkpoint path from which the model was finetuned. Can be a model ID on the Hub.", ) parser.add_argument( "--base_subfolder", default="transformer", type=str, help="subfolder to load the base checkpoint from if any.", ) parser.add_argument( "--finetune_ckpt_path", default=None, type=str, required=True, help="Fully fine-tuned checkpoint path. Can be a model ID on the Hub.", ) parser.add_argument( "--finetune_subfolder", default=None, type=str, help="subfolder to load the fulle finetuned checkpoint from if any.", ) parser.add_argument("--rank", default=64, type=int) parser.add_argument("--lora_out_path", default=None, type=str, required=True) args = parser.parse_args() if not args.lora_out_path.endswith(".safetensors"): raise ValueError("`lora_out_path` must end with `.safetensors`.") return args @torch.no_grad() def main(args): model_finetuned = CogVideoXTransformer3DModel.from_pretrained( args.finetune_ckpt_path, subfolder=args.finetune_subfolder, torch_dtype=torch.bfloat16 ) state_dict_ft = model_finetuned.state_dict() # Change the `subfolder` as needed. 
base_model = CogVideoXTransformer3DModel.from_pretrained( args.base_ckpt_path, subfolder=args.base_subfolder, torch_dtype=torch.bfloat16 ) state_dict = base_model.state_dict() output_dict = {} for k in tqdm(state_dict, desc="Extracting LoRA..."): original_param = state_dict[k] finetuned_param = state_dict_ft[k] if len(original_param.shape) >= 2: diff = finetuned_param.float() - original_param.float() out = extract_lora(diff, RANK) name = k if name.endswith(".weight"): name = name[: -len(".weight")] down_key = "{}.lora_A.weight".format(name) up_key = "{}.lora_B.weight".format(name) output_dict[up_key] = out[0].contiguous().to(finetuned_param.dtype) output_dict[down_key] = out[1].contiguous().to(finetuned_param.dtype) prefix = "transformer" if "transformer" in base_model.__class__.__name__.lower() else "unet" output_dict = {f"{prefix}.{k}": v for k, v in output_dict.items()} save_file(output_dict, args.lora_out_path) print(f"LoRA saved and it contains {len(output_dict)} keys.") if __name__ == "__main__": args = parse_args() main(args)
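Once extracted, the file can be loaded like any other Diffusers LoRA checkpoint. A hedged sketch using the example `--lora_out_path` from the docstring (base model and dtype follow the same example):

```python
import torch

from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("cakeify_lora.safetensors")
```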
diffusers/scripts/extract_lora_from_model.py/0
{ "file_path": "diffusers/scripts/extract_lora_from_model.py", "repo_id": "diffusers", "token_count": 2140 }
154
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import numpy as np import torch import tqdm from ...models.unets.unet_1d import UNet1DModel from ...pipelines import DiffusionPipeline from ...utils.dummy_pt_objects import DDPMScheduler from ...utils.torch_utils import randn_tensor class ValueGuidedRLPipeline(DiffusionPipeline): r""" Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). Parameters: value_function ([`UNet1DModel`]): A specialized UNet for fine-tuning trajectories base on reward. unet ([`UNet1DModel`]): UNet architecture to denoise the encoded trajectories. scheduler ([`SchedulerMixin`]): A scheduler to be used in combination with `unet` to denoise the encoded trajectories. Default for this application is [`DDPMScheduler`]. env (): An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. """ def __init__( self, value_function: UNet1DModel, unet: UNet1DModel, scheduler: DDPMScheduler, env, ): super().__init__() self.register_modules(value_function=value_function, unet=unet, scheduler=scheduler, env=env) self.data = env.get_dataset() self.means = {} for key in self.data.keys(): try: self.means[key] = self.data[key].mean() except: # noqa: E722 pass self.stds = {} for key in self.data.keys(): try: self.stds[key] = self.data[key].std() except: # noqa: E722 pass self.state_dim = env.observation_space.shape[0] self.action_dim = env.action_space.shape[0] def normalize(self, x_in, key): return (x_in - self.means[key]) / self.stds[key] def de_normalize(self, x_in, key): return x_in * self.stds[key] + self.means[key] def to_torch(self, x_in): if isinstance(x_in, dict): return {k: self.to_torch(v) for k, v in x_in.items()} elif torch.is_tensor(x_in): return x_in.to(self.unet.device) return torch.tensor(x_in, device=self.unet.device) def reset_x0(self, x_in, cond, act_dim): for key, val in cond.items(): x_in[:, key, act_dim:] = val.clone() return x_in def run_diffusion(self, x, conditions, n_guide_steps, scale): batch_size = x.shape[0] y = None for i in tqdm.tqdm(self.scheduler.timesteps): # create batch of timesteps to pass into model timesteps = torch.full((batch_size,), i, device=self.unet.device, dtype=torch.long) for _ in range(n_guide_steps): with torch.enable_grad(): x.requires_grad_() # permute to match dimension for pre-trained models y = self.value_function(x.permute(0, 2, 1), timesteps).sample grad = torch.autograd.grad([y.sum()], [x])[0] posterior_variance = self.scheduler._get_variance(i) model_std = torch.exp(0.5 * posterior_variance) grad = model_std * grad grad[timesteps < 2] = 0 x = x.detach() x = x + scale * grad x = self.reset_x0(x, conditions, self.action_dim) prev_x = self.unet(x.permute(0, 2, 1), timesteps).sample.permute(0, 2, 1) # TODO: verify 
deprecation of this kwarg x = self.scheduler.step(prev_x, i, x)["prev_sample"] # apply conditions to the trajectory (set the initial state) x = self.reset_x0(x, conditions, self.action_dim) x = self.to_torch(x) return x, y def __call__(self, obs, batch_size=64, planning_horizon=32, n_guide_steps=2, scale=0.1): # normalize the observations and create batch dimension obs = self.normalize(obs, "observations") obs = obs[None].repeat(batch_size, axis=0) conditions = {0: self.to_torch(obs)} shape = (batch_size, planning_horizon, self.state_dim + self.action_dim) # generate initial noise and apply our conditions (to make the trajectories start at current state) x1 = randn_tensor(shape, device=self.unet.device) x = self.reset_x0(x1, conditions, self.action_dim) x = self.to_torch(x) # run the diffusion process x, y = self.run_diffusion(x, conditions, n_guide_steps, scale) # sort output trajectories by value sorted_idx = y.argsort(0, descending=True).squeeze() sorted_values = x[sorted_idx] actions = sorted_values[:, :, : self.action_dim] actions = actions.detach().cpu().numpy() denorm_actions = self.de_normalize(actions, key="actions") # select the action with the highest value if y is not None: selected_index = 0 else: # if we didn't run value guiding, select a random action selected_index = np.random.randint(0, batch_size) denorm_actions = denorm_actions[selected_index, 0] return denorm_actions
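A hedged sketch of how this pipeline is typically driven (the D4RL/gym environment setup and the Hub checkpoint id are assumptions taken from the reinforcement-learning example; only Hopper has pretrained weights):

```python
import d4rl  # noqa: F401  (registers the offline-RL environments with gym)
import gym

from diffusers.experimental import ValueGuidedRLPipeline

env = gym.make("hopper-medium-v2")
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32",  # assumed checkpoint id
    env=env,
)

obs = env.reset()
for _ in range(10):
    # Plan a trajectory and execute the first action of the highest-value plan.
    action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)
    obs, reward, done, info = env.step(action)
    if done:
        break
```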
# Source: diffusers/src/diffusers/experimental/rl/value_guided_sampling.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Tuple, Union import torch from ..utils import get_logger from ..utils.torch_utils import unwrap_module from ._common import _ALL_TRANSFORMER_BLOCK_IDENTIFIERS from ._helpers import TransformerBlockRegistry from .hooks import BaseState, HookRegistry, ModelHook, StateManager logger = get_logger(__name__) # pylint: disable=invalid-name _FBC_LEADER_BLOCK_HOOK = "fbc_leader_block_hook" _FBC_BLOCK_HOOK = "fbc_block_hook" @dataclass class FirstBlockCacheConfig: r""" Configuration for [First Block Cache](https://github.com/chengzeyi/ParaAttention/blob/7a266123671b55e7e5a2fe9af3121f07a36afc78/README.md#first-block-cache-our-dynamic-caching). Args: threshold (`float`, defaults to `0.05`): The threshold to determine whether or not a forward pass through all layers of the model is required. A higher threshold usually results in a forward pass through a lower number of layers and faster inference, but might lead to poorer generation quality. A lower threshold may not result in significant generation speedup. The threshold is compared against the absmean difference of the residuals between the current and cached outputs from the first transformer block. If the difference is below the threshold, the forward pass is skipped. 
""" threshold: float = 0.05 class FBCSharedBlockState(BaseState): def __init__(self) -> None: super().__init__() self.head_block_output: Union[torch.Tensor, Tuple[torch.Tensor, ...]] = None self.head_block_residual: torch.Tensor = None self.tail_block_residuals: Union[torch.Tensor, Tuple[torch.Tensor, ...]] = None self.should_compute: bool = True def reset(self): self.tail_block_residuals = None self.should_compute = True class FBCHeadBlockHook(ModelHook): _is_stateful = True def __init__(self, state_manager: StateManager, threshold: float): self.state_manager = state_manager self.threshold = threshold self._metadata = None def initialize_hook(self, module): unwrapped_module = unwrap_module(module) self._metadata = TransformerBlockRegistry.get(unwrapped_module.__class__) return module def new_forward(self, module: torch.nn.Module, *args, **kwargs): original_hidden_states = self._metadata._get_parameter_from_args_kwargs("hidden_states", args, kwargs) output = self.fn_ref.original_forward(*args, **kwargs) is_output_tuple = isinstance(output, tuple) if is_output_tuple: hidden_states_residual = output[self._metadata.return_hidden_states_index] - original_hidden_states else: hidden_states_residual = output - original_hidden_states shared_state: FBCSharedBlockState = self.state_manager.get_state() hidden_states = encoder_hidden_states = None should_compute = self._should_compute_remaining_blocks(hidden_states_residual) shared_state.should_compute = should_compute if not should_compute: # Apply caching if is_output_tuple: hidden_states = ( shared_state.tail_block_residuals[0] + output[self._metadata.return_hidden_states_index] ) else: hidden_states = shared_state.tail_block_residuals[0] + output if self._metadata.return_encoder_hidden_states_index is not None: assert is_output_tuple encoder_hidden_states = ( shared_state.tail_block_residuals[1] + output[self._metadata.return_encoder_hidden_states_index] ) if is_output_tuple: return_output = [None] * len(output) return_output[self._metadata.return_hidden_states_index] = hidden_states return_output[self._metadata.return_encoder_hidden_states_index] = encoder_hidden_states return_output = tuple(return_output) else: return_output = hidden_states output = return_output else: if is_output_tuple: head_block_output = [None] * len(output) head_block_output[0] = output[self._metadata.return_hidden_states_index] head_block_output[1] = output[self._metadata.return_encoder_hidden_states_index] else: head_block_output = output shared_state.head_block_output = head_block_output shared_state.head_block_residual = hidden_states_residual return output def reset_state(self, module): self.state_manager.reset() return module @torch.compiler.disable def _should_compute_remaining_blocks(self, hidden_states_residual: torch.Tensor) -> bool: shared_state = self.state_manager.get_state() if shared_state.head_block_residual is None: return True prev_hidden_states_residual = shared_state.head_block_residual absmean = (hidden_states_residual - prev_hidden_states_residual).abs().mean() prev_hidden_states_absmean = prev_hidden_states_residual.abs().mean() diff = (absmean / prev_hidden_states_absmean).item() return diff > self.threshold class FBCBlockHook(ModelHook): def __init__(self, state_manager: StateManager, is_tail: bool = False): super().__init__() self.state_manager = state_manager self.is_tail = is_tail self._metadata = None def initialize_hook(self, module): unwrapped_module = unwrap_module(module) self._metadata = 
TransformerBlockRegistry.get(unwrapped_module.__class__) return module def new_forward(self, module: torch.nn.Module, *args, **kwargs): original_hidden_states = self._metadata._get_parameter_from_args_kwargs("hidden_states", args, kwargs) original_encoder_hidden_states = None if self._metadata.return_encoder_hidden_states_index is not None: original_encoder_hidden_states = self._metadata._get_parameter_from_args_kwargs( "encoder_hidden_states", args, kwargs ) shared_state = self.state_manager.get_state() if shared_state.should_compute: output = self.fn_ref.original_forward(*args, **kwargs) if self.is_tail: hidden_states_residual = encoder_hidden_states_residual = None if isinstance(output, tuple): hidden_states_residual = ( output[self._metadata.return_hidden_states_index] - shared_state.head_block_output[0] ) encoder_hidden_states_residual = ( output[self._metadata.return_encoder_hidden_states_index] - shared_state.head_block_output[1] ) else: hidden_states_residual = output - shared_state.head_block_output shared_state.tail_block_residuals = (hidden_states_residual, encoder_hidden_states_residual) return output if original_encoder_hidden_states is None: return_output = original_hidden_states else: return_output = [None, None] return_output[self._metadata.return_hidden_states_index] = original_hidden_states return_output[self._metadata.return_encoder_hidden_states_index] = original_encoder_hidden_states return_output = tuple(return_output) return return_output def apply_first_block_cache(module: torch.nn.Module, config: FirstBlockCacheConfig) -> None: """ Applies [First Block Cache](https://github.com/chengzeyi/ParaAttention/blob/4de137c5b96416489f06e43e19f2c14a772e28fd/README.md#first-block-cache-our-dynamic-caching) to a given module. First Block Cache builds on the ideas of [TeaCache](https://huggingface.co/papers/2411.19108). It is much simpler to implement generically for a wide range of models and has been integrated first for experimental purposes. Args: module (`torch.nn.Module`): The pytorch module to apply FBCache to. Typically, this should be a transformer architecture supported in Diffusers, such as `CogVideoXTransformer3DModel`, but external implementations may also work. config (`FirstBlockCacheConfig`): The configuration to use for applying the FBCache method. 
Example: ```python >>> import torch >>> from diffusers import CogView4Pipeline >>> from diffusers.hooks import apply_first_block_cache, FirstBlockCacheConfig >>> pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16) >>> pipe.to("cuda") >>> apply_first_block_cache(pipe.transformer, FirstBlockCacheConfig(threshold=0.2)) >>> prompt = "A photo of an astronaut riding a horse on mars" >>> image = pipe(prompt, generator=torch.Generator().manual_seed(42)).images[0] >>> image.save("output.png") ``` """ state_manager = StateManager(FBCSharedBlockState, (), {}) remaining_blocks = [] for name, submodule in module.named_children(): if name not in _ALL_TRANSFORMER_BLOCK_IDENTIFIERS or not isinstance(submodule, torch.nn.ModuleList): continue for index, block in enumerate(submodule): remaining_blocks.append((f"{name}.{index}", block)) head_block_name, head_block = remaining_blocks.pop(0) tail_block_name, tail_block = remaining_blocks.pop(-1) logger.debug(f"Applying FBCHeadBlockHook to '{head_block_name}'") _apply_fbc_head_block_hook(head_block, state_manager, config.threshold) for name, block in remaining_blocks: logger.debug(f"Applying FBCBlockHook to '{name}'") _apply_fbc_block_hook(block, state_manager) logger.debug(f"Applying FBCBlockHook to tail block '{tail_block_name}'") _apply_fbc_block_hook(tail_block, state_manager, is_tail=True) def _apply_fbc_head_block_hook(block: torch.nn.Module, state_manager: StateManager, threshold: float) -> None: registry = HookRegistry.check_if_exists_or_initialize(block) hook = FBCHeadBlockHook(state_manager, threshold) registry.register_hook(hook, _FBC_LEADER_BLOCK_HOOK) def _apply_fbc_block_hook(block: torch.nn.Module, state_manager: StateManager, is_tail: bool = False) -> None: registry = HookRegistry.check_if_exists_or_initialize(block) hook = FBCBlockHook(state_manager, is_tail) registry.register_hook(hook, _FBC_BLOCK_HOOK)
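# ---------------------------------------------------------------------------
# Minimal, self-contained sketch of the skip decision implemented by
# FBCHeadBlockHook._should_compute_remaining_blocks above: the first block's
# residual at the current denoising step is compared, via a relative
# absolute-mean difference, against the residual cached from the previous
# step. If the relative change falls below `threshold`, the remaining blocks
# are skipped and the cached tail residuals are reused. The tensors below are
# random stand-ins for real block outputs, purely for illustration.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import torch

    threshold = 0.05
    prev_residual = torch.randn(2, 256, 64)  # residual cached at step t-1
    curr_residual = prev_residual + 0.01 * torch.randn_like(prev_residual)  # residual at step t

    rel_change = (curr_residual - prev_residual).abs().mean() / prev_residual.abs().mean()
    should_compute = rel_change.item() > threshold
    print(f"relative change = {rel_change.item():.4f} -> run remaining blocks: {should_compute}")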
# Source: diffusers/src/diffusers/hooks/first_block_cache.py
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import importlib import inspect import re from contextlib import nullcontext from typing import Optional import torch from huggingface_hub.utils import validate_hf_hub_args from typing_extensions import Self from .. import __version__ from ..quantizers import DiffusersAutoQuantizer from ..utils import deprecate, is_accelerate_available, is_torch_version, logging from ..utils.torch_utils import empty_device_cache from .single_file_utils import ( SingleFileComponentError, convert_animatediff_checkpoint_to_diffusers, convert_auraflow_transformer_checkpoint_to_diffusers, convert_autoencoder_dc_checkpoint_to_diffusers, convert_chroma_transformer_checkpoint_to_diffusers, convert_controlnet_checkpoint, convert_cosmos_transformer_checkpoint_to_diffusers, convert_flux_transformer_checkpoint_to_diffusers, convert_hidream_transformer_to_diffusers, convert_hunyuan_video_transformer_to_diffusers, convert_ldm_unet_checkpoint, convert_ldm_vae_checkpoint, convert_ltx_transformer_checkpoint_to_diffusers, convert_ltx_vae_checkpoint_to_diffusers, convert_lumina2_to_diffusers, convert_mochi_transformer_checkpoint_to_diffusers, convert_sana_transformer_to_diffusers, convert_sd3_transformer_checkpoint_to_diffusers, convert_stable_cascade_unet_single_file_to_diffusers, convert_wan_transformer_to_diffusers, convert_wan_vae_to_diffusers, create_controlnet_diffusers_config_from_ldm, create_unet_diffusers_config_from_ldm, create_vae_diffusers_config_from_ldm, fetch_diffusers_config, fetch_original_config, load_single_file_checkpoint, ) logger = logging.get_logger(__name__) if is_accelerate_available(): from accelerate import dispatch_model, init_empty_weights from ..models.model_loading_utils import load_model_dict_into_meta if is_torch_version(">=", "1.9.0") and is_accelerate_available(): _LOW_CPU_MEM_USAGE_DEFAULT = True else: _LOW_CPU_MEM_USAGE_DEFAULT = False SINGLE_FILE_LOADABLE_CLASSES = { "StableCascadeUNet": { "checkpoint_mapping_fn": convert_stable_cascade_unet_single_file_to_diffusers, }, "UNet2DConditionModel": { "checkpoint_mapping_fn": convert_ldm_unet_checkpoint, "config_mapping_fn": create_unet_diffusers_config_from_ldm, "default_subfolder": "unet", "legacy_kwargs": { "num_in_channels": "in_channels", # Legacy kwargs supported by `from_single_file` mapped to new args }, }, "AutoencoderKL": { "checkpoint_mapping_fn": convert_ldm_vae_checkpoint, "config_mapping_fn": create_vae_diffusers_config_from_ldm, "default_subfolder": "vae", }, "ControlNetModel": { "checkpoint_mapping_fn": convert_controlnet_checkpoint, "config_mapping_fn": create_controlnet_diffusers_config_from_ldm, }, "SD3Transformer2DModel": { "checkpoint_mapping_fn": convert_sd3_transformer_checkpoint_to_diffusers, "default_subfolder": "transformer", }, "MotionAdapter": { "checkpoint_mapping_fn": convert_animatediff_checkpoint_to_diffusers, }, "SparseControlNetModel": { "checkpoint_mapping_fn": convert_animatediff_checkpoint_to_diffusers, }, 
"FluxTransformer2DModel": { "checkpoint_mapping_fn": convert_flux_transformer_checkpoint_to_diffusers, "default_subfolder": "transformer", }, "ChromaTransformer2DModel": { "checkpoint_mapping_fn": convert_chroma_transformer_checkpoint_to_diffusers, "default_subfolder": "transformer", }, "LTXVideoTransformer3DModel": { "checkpoint_mapping_fn": convert_ltx_transformer_checkpoint_to_diffusers, "default_subfolder": "transformer", }, "AutoencoderKLLTXVideo": { "checkpoint_mapping_fn": convert_ltx_vae_checkpoint_to_diffusers, "default_subfolder": "vae", }, "AutoencoderDC": {"checkpoint_mapping_fn": convert_autoencoder_dc_checkpoint_to_diffusers}, "MochiTransformer3DModel": { "checkpoint_mapping_fn": convert_mochi_transformer_checkpoint_to_diffusers, "default_subfolder": "transformer", }, "HunyuanVideoTransformer3DModel": { "checkpoint_mapping_fn": convert_hunyuan_video_transformer_to_diffusers, "default_subfolder": "transformer", }, "AuraFlowTransformer2DModel": { "checkpoint_mapping_fn": convert_auraflow_transformer_checkpoint_to_diffusers, "default_subfolder": "transformer", }, "Lumina2Transformer2DModel": { "checkpoint_mapping_fn": convert_lumina2_to_diffusers, "default_subfolder": "transformer", }, "SanaTransformer2DModel": { "checkpoint_mapping_fn": convert_sana_transformer_to_diffusers, "default_subfolder": "transformer", }, "WanTransformer3DModel": { "checkpoint_mapping_fn": convert_wan_transformer_to_diffusers, "default_subfolder": "transformer", }, "WanVACETransformer3DModel": { "checkpoint_mapping_fn": convert_wan_transformer_to_diffusers, "default_subfolder": "transformer", }, "AutoencoderKLWan": { "checkpoint_mapping_fn": convert_wan_vae_to_diffusers, "default_subfolder": "vae", }, "HiDreamImageTransformer2DModel": { "checkpoint_mapping_fn": convert_hidream_transformer_to_diffusers, "default_subfolder": "transformer", }, "CosmosTransformer3DModel": { "checkpoint_mapping_fn": convert_cosmos_transformer_checkpoint_to_diffusers, "default_subfolder": "transformer", }, "QwenImageTransformer2DModel": { "checkpoint_mapping_fn": lambda x: x, "default_subfolder": "transformer", }, } def _should_convert_state_dict_to_diffusers(model_state_dict, checkpoint_state_dict): return not set(model_state_dict.keys()).issubset(set(checkpoint_state_dict.keys())) def _get_single_file_loadable_mapping_class(cls): diffusers_module = importlib.import_module(__name__.split(".")[0]) for loadable_class_str in SINGLE_FILE_LOADABLE_CLASSES: loadable_class = getattr(diffusers_module, loadable_class_str) if issubclass(cls, loadable_class): return loadable_class_str return None def _get_mapping_function_kwargs(mapping_fn, **kwargs): parameters = inspect.signature(mapping_fn).parameters mapping_kwargs = {} for parameter in parameters: if parameter in kwargs: mapping_kwargs[parameter] = kwargs[parameter] return mapping_kwargs class FromOriginalModelMixin: """ Load pretrained weights saved in the `.ckpt` or `.safetensors` format into a model. """ @classmethod @validate_hf_hub_args def from_single_file(cls, pretrained_model_link_or_path_or_dict: Optional[str] = None, **kwargs) -> Self: r""" Instantiate a model from pretrained weights saved in the original `.ckpt` or `.safetensors` format. The model is set in evaluation mode (`model.eval()`) by default. Parameters: pretrained_model_link_or_path_or_dict (`str`, *optional*): Can be either: - A link to the `.safetensors` or `.ckpt` file (for example `"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.safetensors"`) on the Hub. 
- A path to a local *file* containing the weights of the component model. - A state dict containing the component model weights. config (`str`, *optional*): - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline hosted on the Hub. - A path to a *directory* (for example `./my_pipeline_directory/`) containing the pipeline component configs in Diffusers format. subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. original_config (`str`, *optional*): Dict or path to a yaml file containing the configuration for the model in its original format. If a dict is provided, it will be used to initialize the model configuration. torch_dtype (`torch.dtype`, *optional*): Override the default `torch.dtype` and load the model with another dtype. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 and is_accelerate_available() else `False`): Speed up model loading only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to `True` will raise an error. disable_mmap ('bool', *optional*, defaults to 'False'): Whether to disable mmap when loading a Safetensors model. This option can perform better when the model is on a network mount or hard drive, which may not handle the seeky-ness of mmap very well. kwargs (remaining dictionary of keyword arguments, *optional*): Can be used to overwrite load and saveable variables (for example the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipelines `__init__` method. See example below for more information. 
```py >>> from diffusers import StableCascadeUNet >>> ckpt_path = "https://huggingface.co/stabilityai/stable-cascade/blob/main/stage_b_lite.safetensors" >>> model = StableCascadeUNet.from_single_file(ckpt_path) ``` """ mapping_class_name = _get_single_file_loadable_mapping_class(cls) # if class_name not in SINGLE_FILE_LOADABLE_CLASSES: if mapping_class_name is None: raise ValueError( f"FromOriginalModelMixin is currently only compatible with {', '.join(SINGLE_FILE_LOADABLE_CLASSES.keys())}" ) pretrained_model_link_or_path = kwargs.get("pretrained_model_link_or_path", None) if pretrained_model_link_or_path is not None: deprecation_message = ( "Please use `pretrained_model_link_or_path_or_dict` argument instead for model classes" ) deprecate("pretrained_model_link_or_path", "1.0.0", deprecation_message) pretrained_model_link_or_path_or_dict = pretrained_model_link_or_path config = kwargs.pop("config", None) original_config = kwargs.pop("original_config", None) if config is not None and original_config is not None: raise ValueError( "`from_single_file` cannot accept both `config` and `original_config` arguments. Please provide only one of these arguments" ) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) token = kwargs.pop("token", None) cache_dir = kwargs.pop("cache_dir", None) local_files_only = kwargs.pop("local_files_only", None) subfolder = kwargs.pop("subfolder", None) revision = kwargs.pop("revision", None) config_revision = kwargs.pop("config_revision", None) torch_dtype = kwargs.pop("torch_dtype", None) quantization_config = kwargs.pop("quantization_config", None) low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT) device = kwargs.pop("device", None) disable_mmap = kwargs.pop("disable_mmap", False) user_agent = {"diffusers": __version__, "file_type": "single_file", "framework": "pytorch"} # In order to ensure popular quantization methods are supported. Can be disable with `disable_telemetry` if quantization_config is not None: user_agent["quant"] = quantization_config.quant_method.value if torch_dtype is not None and not isinstance(torch_dtype, torch.dtype): torch_dtype = torch.float32 logger.warning( f"Passed `torch_dtype` {torch_dtype} is not a `torch.dtype`. Defaulting to `torch.float32`." 
) if isinstance(pretrained_model_link_or_path_or_dict, dict): checkpoint = pretrained_model_link_or_path_or_dict else: checkpoint = load_single_file_checkpoint( pretrained_model_link_or_path_or_dict, force_download=force_download, proxies=proxies, token=token, cache_dir=cache_dir, local_files_only=local_files_only, revision=revision, disable_mmap=disable_mmap, user_agent=user_agent, ) if quantization_config is not None: hf_quantizer = DiffusersAutoQuantizer.from_config(quantization_config) hf_quantizer.validate_environment() torch_dtype = hf_quantizer.update_torch_dtype(torch_dtype) else: hf_quantizer = None mapping_functions = SINGLE_FILE_LOADABLE_CLASSES[mapping_class_name] checkpoint_mapping_fn = mapping_functions["checkpoint_mapping_fn"] if original_config is not None: if "config_mapping_fn" in mapping_functions: config_mapping_fn = mapping_functions["config_mapping_fn"] else: config_mapping_fn = None if config_mapping_fn is None: raise ValueError( ( f"`original_config` has been provided for {mapping_class_name} but no mapping function" "was found to convert the original config to a Diffusers config in" "`diffusers.loaders.single_file_utils`" ) ) if isinstance(original_config, str): # If original_config is a URL or filepath fetch the original_config dict original_config = fetch_original_config(original_config, local_files_only=local_files_only) config_mapping_kwargs = _get_mapping_function_kwargs(config_mapping_fn, **kwargs) diffusers_model_config = config_mapping_fn( original_config=original_config, checkpoint=checkpoint, **config_mapping_kwargs ) else: if config is not None: if isinstance(config, str): default_pretrained_model_config_name = config else: raise ValueError( ( "Invalid `config` argument. Please provide a string representing a repo id" "or path to a local Diffusers model repo." ) ) else: config = fetch_diffusers_config(checkpoint) default_pretrained_model_config_name = config["pretrained_model_name_or_path"] if "default_subfolder" in mapping_functions: subfolder = mapping_functions["default_subfolder"] subfolder = subfolder or config.pop( "subfolder", None ) # some configs contain a subfolder key, e.g. StableCascadeUNet diffusers_model_config = cls.load_config( pretrained_model_name_or_path=default_pretrained_model_config_name, subfolder=subfolder, local_files_only=local_files_only, token=token, revision=config_revision, ) expected_kwargs, optional_kwargs = cls._get_signature_keys(cls) # Map legacy kwargs to new kwargs if "legacy_kwargs" in mapping_functions: legacy_kwargs = mapping_functions["legacy_kwargs"] for legacy_key, new_key in legacy_kwargs.items(): if legacy_key in kwargs: kwargs[new_key] = kwargs.pop(legacy_key) model_kwargs = {k: kwargs.get(k) for k in kwargs if k in expected_kwargs or k in optional_kwargs} diffusers_model_config.update(model_kwargs) ctx = init_empty_weights if low_cpu_mem_usage else nullcontext with ctx(): model = cls.from_config(diffusers_model_config) checkpoint_mapping_kwargs = _get_mapping_function_kwargs(checkpoint_mapping_fn, **kwargs) if _should_convert_state_dict_to_diffusers(model.state_dict(), checkpoint): diffusers_format_checkpoint = checkpoint_mapping_fn( config=diffusers_model_config, checkpoint=checkpoint, **checkpoint_mapping_kwargs ) else: diffusers_format_checkpoint = checkpoint if not diffusers_format_checkpoint: raise SingleFileComponentError( f"Failed to load {mapping_class_name}. Weights for this component appear to be missing in the checkpoint." 
) # Check if `_keep_in_fp32_modules` is not None use_keep_in_fp32_modules = (cls._keep_in_fp32_modules is not None) and ( (torch_dtype == torch.float16) or hasattr(hf_quantizer, "use_keep_in_fp32_modules") ) if use_keep_in_fp32_modules: keep_in_fp32_modules = cls._keep_in_fp32_modules if not isinstance(keep_in_fp32_modules, list): keep_in_fp32_modules = [keep_in_fp32_modules] else: keep_in_fp32_modules = [] if hf_quantizer is not None: hf_quantizer.preprocess_model( model=model, device_map=None, state_dict=diffusers_format_checkpoint, keep_in_fp32_modules=keep_in_fp32_modules, ) device_map = None if low_cpu_mem_usage: param_device = torch.device(device) if device else torch.device("cpu") empty_state_dict = model.state_dict() unexpected_keys = [ param_name for param_name in diffusers_format_checkpoint if param_name not in empty_state_dict ] device_map = {"": param_device} load_model_dict_into_meta( model, diffusers_format_checkpoint, dtype=torch_dtype, device_map=device_map, hf_quantizer=hf_quantizer, keep_in_fp32_modules=keep_in_fp32_modules, unexpected_keys=unexpected_keys, ) empty_device_cache() else: _, unexpected_keys = model.load_state_dict(diffusers_format_checkpoint, strict=False) if model._keys_to_ignore_on_load_unexpected is not None: for pat in model._keys_to_ignore_on_load_unexpected: unexpected_keys = [k for k in unexpected_keys if re.search(pat, k) is None] if len(unexpected_keys) > 0: logger.warning( f"Some weights of the model checkpoint were not used when initializing {cls.__name__}: \n {[', '.join(unexpected_keys)]}" ) if hf_quantizer is not None: hf_quantizer.postprocess_model(model) model.hf_quantizer = hf_quantizer if torch_dtype is not None and hf_quantizer is None: model.to(torch_dtype) model.eval() if device_map is not None: device_map_kwargs = {"device_map": device_map} dispatch_model(model, **device_map_kwargs) return model
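# ---------------------------------------------------------------------------
# Minimal usage sketch for FromOriginalModelMixin.from_single_file defined
# above; it is not part of the original module. The checkpoint URL points at
# the gated FLUX.1-dev single-file weights and is an assumption: substitute
# any single-file checkpoint you can access, and pass `config`/`subfolder`
# explicitly only if automatic config detection does not cover your file.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import torch

    from diffusers import FluxTransformer2DModel

    ckpt_url = (
        "https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/flux1-dev.safetensors"  # assumed URL
    )
    transformer = FluxTransformer2DModel.from_single_file(
        ckpt_url,
        torch_dtype=torch.bfloat16,
        # config="black-forest-labs/FLUX.1-dev", subfolder="transformer",  # optional explicit config
    )
    print(transformer.config)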
# Source: diffusers/src/diffusers/loaders/single_file_model.py
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os from typing import Optional, Union from huggingface_hub.utils import validate_hf_hub_args from ..configuration_utils import ConfigMixin from ..utils import logging logger = logging.get_logger(__name__) class AutoModel(ConfigMixin): config_name = "config.json" def __init__(self, *args, **kwargs): raise EnvironmentError( f"{self.__class__.__name__} is designed to be instantiated " f"using the `{self.__class__.__name__}.from_pretrained(pretrained_model_name_or_path)` or " f"`{self.__class__.__name__}.from_pipe(pipeline)` methods." ) @classmethod @validate_hf_hub_args def from_pretrained(cls, pretrained_model_or_path: Optional[Union[str, os.PathLike]] = None, **kwargs): r""" Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - `model.eval()` - by default, and dropout modules are deactivated. To train the model, set it back in training mode with `model.train()`. Parameters: pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): Can be either: - A string, the *model id* (for example `google/ddpm-celebahq-256`) of a pretrained model hosted on the Hub. - A path to a *directory* (for example `./my_model_directory`) containing the model weights saved with [`~ModelMixin.save_pretrained`]. cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. torch_dtype (`torch.dtype`, *optional*): Override the default `torch.dtype` and load the model with another dtype. force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. output_loading_info (`bool`, *optional*, defaults to `False`): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. from_flax (`bool`, *optional*, defaults to `False`): Load the model weights from a Flax checkpoint save file. 
subfolder (`str`, *optional*, defaults to `""`): The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (`str`, *optional*): Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information. device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): A map that specifies where each submodule should go. It doesn't need to be defined for each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device. Defaults to `None`, meaning that the model will be loaded on CPU. Set `device_map="auto"` to have 🤗 Accelerate automatically compute the most optimized `device_map`. For more information about each option see [designing a device map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). max_memory (`Dict`, *optional*): A dictionary device identifier for the maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. offload_folder (`str` or `os.PathLike`, *optional*): The path to offload weights if `device_map` contains the value `"disk"`. offload_state_dict (`bool`, *optional*): If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True` when there is some disk offload. low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): Speed up model loading only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to `True` will raise an error. variant (`str`, *optional*): Load weights from a specified `variant` filename such as `"fp16"` or `"ema"`. This is ignored when loading `from_flax`. use_safetensors (`bool`, *optional*, defaults to `None`): If set to `None`, the `safetensors` weights are downloaded if they're available **and** if the `safetensors` library is installed. If set to `True`, the model is forcibly loaded from `safetensors` weights. If set to `False`, `safetensors` weights are not loaded. disable_mmap ('bool', *optional*, defaults to 'False'): Whether to disable mmap when loading a Safetensors model. This option can perform better when the model is on a network mount or hard drive, which may not handle the seeky-ness of mmap very well. <Tip> To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf auth login`. You can also activate the special ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use this method in a firewalled environment. 
</Tip> Example: ```py from diffusers import AutoModel unet = AutoModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") ``` If you get the error message below, you need to finetune the weights for your downstream task: ```bash Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: - conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` """ cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) token = kwargs.pop("token", None) local_files_only = kwargs.pop("local_files_only", False) revision = kwargs.pop("revision", None) subfolder = kwargs.pop("subfolder", None) load_config_kwargs = { "cache_dir": cache_dir, "force_download": force_download, "proxies": proxies, "token": token, "local_files_only": local_files_only, "revision": revision, } library = None orig_class_name = None # Always attempt to fetch model_index.json first try: cls.config_name = "model_index.json" config = cls.load_config(pretrained_model_or_path, **load_config_kwargs) if subfolder is not None and subfolder in config: library, orig_class_name = config[subfolder] load_config_kwargs.update({"subfolder": subfolder}) except EnvironmentError as e: logger.debug(e) # Unable to load from model_index.json so fallback to loading from config if library is None and orig_class_name is None: cls.config_name = "config.json" config = cls.load_config(pretrained_model_or_path, subfolder=subfolder, **load_config_kwargs) if "_class_name" in config: # If we find a class name in the config, we can try to load the model as a diffusers model orig_class_name = config["_class_name"] library = "diffusers" load_config_kwargs.update({"subfolder": subfolder}) elif "model_type" in config: orig_class_name = "AutoModel" library = "transformers" load_config_kwargs.update({"subfolder": "" if subfolder is None else subfolder}) else: raise ValueError(f"Couldn't find model associated with the config file at {pretrained_model_or_path}.") from ..pipelines.pipeline_loading_utils import ALL_IMPORTABLE_CLASSES, get_class_obj_and_candidates model_cls, _ = get_class_obj_and_candidates( library_name=library, class_name=orig_class_name, importable_classes=ALL_IMPORTABLE_CLASSES, pipelines=None, is_pipeline_module=False, ) if model_cls is None: raise ValueError(f"AutoModel can't find a model linked to {orig_class_name}.") kwargs = {**load_config_kwargs, **kwargs} return model_cls.from_pretrained(pretrained_model_or_path, **kwargs)
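# ---------------------------------------------------------------------------
# Minimal sketch of the dispatch logic implemented above; it is not part of
# the original module. AutoModel first tries the repo-level model_index.json
# to resolve a (library, class) pair for the requested subfolder, and falls
# back to the subfolder's config.json otherwise. The repo id below is an
# assumption; any Diffusers-format repo with a model_index.json should
# resolve the same way.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    import torch

    from diffusers import AutoModel

    repo_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"  # assumed repo id

    # Resolved via model_index.json -> ("diffusers", "UNet2DConditionModel")
    unet = AutoModel.from_pretrained(repo_id, subfolder="unet", torch_dtype=torch.float16)

    # Resolved via model_index.json -> ("transformers", "CLIPTextModel")
    text_encoder = AutoModel.from_pretrained(repo_id, subfolder="text_encoder", torch_dtype=torch.float16)

    print(type(unet).__name__, type(text_encoder).__name__)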
# Source: diffusers/src/diffusers/models/auto_model.py
# Copyright 2025 Ollin Boer Bohan and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Optional, Tuple, Union import torch from ...configuration_utils import ConfigMixin, register_to_config from ...utils import BaseOutput from ...utils.accelerate_utils import apply_forward_hook from ..modeling_utils import ModelMixin from .vae import DecoderOutput, DecoderTiny, EncoderTiny @dataclass class AutoencoderTinyOutput(BaseOutput): """ Output of AutoencoderTiny encoding method. Args: latents (`torch.Tensor`): Encoded outputs of the `Encoder`. """ latents: torch.Tensor class AutoencoderTiny(ModelMixin, ConfigMixin): r""" A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. [`AutoencoderTiny`] is a wrapper around the original implementation of `TAESD`. This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving). Parameters: in_channels (`int`, *optional*, defaults to 3): Number of channels in the input image. out_channels (`int`, *optional*, defaults to 3): Number of channels in the output. encoder_block_out_channels (`Tuple[int]`, *optional*, defaults to `(64, 64, 64, 64)`): Tuple of integers representing the number of output channels for each encoder block. The length of the tuple should be equal to the number of encoder blocks. decoder_block_out_channels (`Tuple[int]`, *optional*, defaults to `(64, 64, 64, 64)`): Tuple of integers representing the number of output channels for each decoder block. The length of the tuple should be equal to the number of decoder blocks. act_fn (`str`, *optional*, defaults to `"relu"`): Activation function to be used throughout the model. latent_channels (`int`, *optional*, defaults to 4): Number of channels in the latent representation. The latent space acts as a compressed representation of the input image. upsampling_scaling_factor (`int`, *optional*, defaults to 2): Scaling factor for upsampling in the decoder. It determines the size of the output image during the upsampling process. num_encoder_blocks (`Tuple[int]`, *optional*, defaults to `(1, 3, 3, 3)`): Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The length of the tuple should be equal to the number of stages in the encoder. Each stage has a different number of encoder blocks. num_decoder_blocks (`Tuple[int]`, *optional*, defaults to `(3, 3, 3, 1)`): Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The length of the tuple should be equal to the number of stages in the decoder. Each stage has a different number of decoder blocks. latent_magnitude (`float`, *optional*, defaults to 3.0): Magnitude of the latent representation. This parameter scales the latent representation values to control the extent of information preservation. 
latent_shift (float, *optional*, defaults to 0.5): Shift applied to the latent representation. This parameter controls the center of the latent space. scaling_factor (`float`, *optional*, defaults to 1.0): The component-wise standard deviation of the trained latent space computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1 / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper. For this Autoencoder, however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (`bool`, *optional*, default to `False`): If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE can be fine-tuned / trained to a lower range without losing too much precision, in which case `force_upcast` can be set to `False` (see this fp16-friendly [AutoEncoder](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)). """ _supports_gradient_checkpointing = True @register_to_config def __init__( self, in_channels: int = 3, out_channels: int = 3, encoder_block_out_channels: Tuple[int, ...] = (64, 64, 64, 64), decoder_block_out_channels: Tuple[int, ...] = (64, 64, 64, 64), act_fn: str = "relu", upsample_fn: str = "nearest", latent_channels: int = 4, upsampling_scaling_factor: int = 2, num_encoder_blocks: Tuple[int, ...] = (1, 3, 3, 3), num_decoder_blocks: Tuple[int, ...] = (3, 3, 3, 1), latent_magnitude: int = 3, latent_shift: float = 0.5, force_upcast: bool = False, scaling_factor: float = 1.0, shift_factor: float = 0.0, ): super().__init__() if len(encoder_block_out_channels) != len(num_encoder_blocks): raise ValueError("`encoder_block_out_channels` should have the same length as `num_encoder_blocks`.") if len(decoder_block_out_channels) != len(num_decoder_blocks): raise ValueError("`decoder_block_out_channels` should have the same length as `num_decoder_blocks`.") self.encoder = EncoderTiny( in_channels=in_channels, out_channels=latent_channels, num_blocks=num_encoder_blocks, block_out_channels=encoder_block_out_channels, act_fn=act_fn, ) self.decoder = DecoderTiny( in_channels=latent_channels, out_channels=out_channels, num_blocks=num_decoder_blocks, block_out_channels=decoder_block_out_channels, upsampling_scaling_factor=upsampling_scaling_factor, act_fn=act_fn, upsample_fn=upsample_fn, ) self.latent_magnitude = latent_magnitude self.latent_shift = latent_shift self.scaling_factor = scaling_factor self.use_slicing = False self.use_tiling = False # only relevant if vae tiling is enabled self.spatial_scale_factor = 2**out_channels self.tile_overlap_factor = 0.125 self.tile_sample_min_size = 512 self.tile_latent_min_size = self.tile_sample_min_size // self.spatial_scale_factor self.register_to_config(block_out_channels=decoder_block_out_channels) self.register_to_config(force_upcast=False) def scale_latents(self, x: torch.Tensor) -> torch.Tensor: """raw latents -> [0, 1]""" return x.div(2 * self.latent_magnitude).add(self.latent_shift).clamp(0, 1) def unscale_latents(self, x: torch.Tensor) -> torch.Tensor: """[0, 1] -> raw latents""" return x.sub(self.latent_shift).mul(2 * self.latent_magnitude) def enable_slicing(self) -> None: 
r""" Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. """ self.use_slicing = True def disable_slicing(self) -> None: r""" Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing decoding in one step. """ self.use_slicing = False def enable_tiling(self, use_tiling: bool = True) -> None: r""" Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images. """ self.use_tiling = use_tiling def disable_tiling(self) -> None: r""" Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing decoding in one step. """ self.enable_tiling(False) def _tiled_encode(self, x: torch.Tensor) -> torch.Tensor: r"""Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several steps. This is useful to keep memory use constant regardless of image size. To avoid tiling artifacts, the tiles overlap and are blended together to form a smooth output. Args: x (`torch.Tensor`): Input batch of images. Returns: `torch.Tensor`: Encoded batch of images. """ # scale of encoder output relative to input sf = self.spatial_scale_factor tile_size = self.tile_sample_min_size # number of pixels to blend and to traverse between tile blend_size = int(tile_size * self.tile_overlap_factor) traverse_size = tile_size - blend_size # tiles index (up/left) ti = range(0, x.shape[-2], traverse_size) tj = range(0, x.shape[-1], traverse_size) # mask for blending blend_masks = torch.stack( torch.meshgrid([torch.arange(tile_size / sf) / (blend_size / sf - 1)] * 2, indexing="ij") ) blend_masks = blend_masks.clamp(0, 1).to(x.device) # output array out = torch.zeros(x.shape[0], 4, x.shape[-2] // sf, x.shape[-1] // sf, device=x.device) for i in ti: for j in tj: tile_in = x[..., i : i + tile_size, j : j + tile_size] # tile result tile_out = out[..., i // sf : (i + tile_size) // sf, j // sf : (j + tile_size) // sf] tile = self.encoder(tile_in) h, w = tile.shape[-2], tile.shape[-1] # blend tile result into output blend_mask_i = torch.ones_like(blend_masks[0]) if i == 0 else blend_masks[0] blend_mask_j = torch.ones_like(blend_masks[1]) if j == 0 else blend_masks[1] blend_mask = blend_mask_i * blend_mask_j tile, blend_mask = tile[..., :h, :w], blend_mask[..., :h, :w] tile_out.copy_(blend_mask * tile + (1 - blend_mask) * tile_out) return out def _tiled_decode(self, x: torch.Tensor) -> torch.Tensor: r"""Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several steps. This is useful to keep memory use constant regardless of image size. To avoid tiling artifacts, the tiles overlap and are blended together to form a smooth output. Args: x (`torch.Tensor`): Input batch of images. Returns: `torch.Tensor`: Encoded batch of images. 
""" # scale of decoder output relative to input sf = self.spatial_scale_factor tile_size = self.tile_latent_min_size # number of pixels to blend and to traverse between tiles blend_size = int(tile_size * self.tile_overlap_factor) traverse_size = tile_size - blend_size # tiles index (up/left) ti = range(0, x.shape[-2], traverse_size) tj = range(0, x.shape[-1], traverse_size) # mask for blending blend_masks = torch.stack( torch.meshgrid([torch.arange(tile_size * sf) / (blend_size * sf - 1)] * 2, indexing="ij") ) blend_masks = blend_masks.clamp(0, 1).to(x.device) # output array out = torch.zeros(x.shape[0], 3, x.shape[-2] * sf, x.shape[-1] * sf, device=x.device) for i in ti: for j in tj: tile_in = x[..., i : i + tile_size, j : j + tile_size] # tile result tile_out = out[..., i * sf : (i + tile_size) * sf, j * sf : (j + tile_size) * sf] tile = self.decoder(tile_in) h, w = tile.shape[-2], tile.shape[-1] # blend tile result into output blend_mask_i = torch.ones_like(blend_masks[0]) if i == 0 else blend_masks[0] blend_mask_j = torch.ones_like(blend_masks[1]) if j == 0 else blend_masks[1] blend_mask = (blend_mask_i * blend_mask_j)[..., :h, :w] tile_out.copy_(blend_mask * tile + (1 - blend_mask) * tile_out) return out @apply_forward_hook def encode(self, x: torch.Tensor, return_dict: bool = True) -> Union[AutoencoderTinyOutput, Tuple[torch.Tensor]]: if self.use_slicing and x.shape[0] > 1: output = [ self._tiled_encode(x_slice) if self.use_tiling else self.encoder(x_slice) for x_slice in x.split(1) ] output = torch.cat(output) else: output = self._tiled_encode(x) if self.use_tiling else self.encoder(x) if not return_dict: return (output,) return AutoencoderTinyOutput(latents=output) @apply_forward_hook def decode( self, x: torch.Tensor, generator: Optional[torch.Generator] = None, return_dict: bool = True ) -> Union[DecoderOutput, Tuple[torch.Tensor]]: if self.use_slicing and x.shape[0] > 1: output = [ self._tiled_decode(x_slice) if self.use_tiling else self.decoder(x_slice) for x_slice in x.split(1) ] output = torch.cat(output) else: output = self._tiled_decode(x) if self.use_tiling else self.decoder(x) if not return_dict: return (output,) return DecoderOutput(sample=output) def forward( self, sample: torch.Tensor, return_dict: bool = True, ) -> Union[DecoderOutput, Tuple[torch.Tensor]]: r""" Args: sample (`torch.Tensor`): Input sample. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`DecoderOutput`] instead of a plain tuple. """ enc = self.encode(sample).latents # scale latents to be in [0, 1], then quantize latents to a byte tensor, # as if we were storing the latents in an RGBA uint8 image. scaled_enc = self.scale_latents(enc).mul_(255).round_().byte() # unquantize latents back into [0, 1], then unscale latents back to their original range, # as if we were loading the latents from an RGBA uint8 image. unscaled_enc = self.unscale_latents(scaled_enc / 255.0) dec = self.decode(unscaled_enc).sample if not return_dict: return (dec,) return DecoderOutput(sample=dec)
# Source: diffusers/src/diffusers/models/autoencoders/autoencoder_tiny.py
# Copyright 2025 Stability AI, The HuggingFace Team and The InstantX Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Any, Dict, List, Optional, Tuple, Union import torch import torch.nn as nn from ...configuration_utils import ConfigMixin, register_to_config from ...loaders import FromOriginalModelMixin, PeftAdapterMixin from ...utils import USE_PEFT_BACKEND, logging, scale_lora_layers, unscale_lora_layers from ..attention import JointTransformerBlock from ..attention_processor import Attention, AttentionProcessor, FusedJointAttnProcessor2_0 from ..embeddings import CombinedTimestepTextProjEmbeddings, PatchEmbed from ..modeling_outputs import Transformer2DModelOutput from ..modeling_utils import ModelMixin from ..transformers.transformer_sd3 import SD3SingleTransformerBlock from .controlnet import BaseOutput, zero_module logger = logging.get_logger(__name__) # pylint: disable=invalid-name @dataclass class SD3ControlNetOutput(BaseOutput): controlnet_block_samples: Tuple[torch.Tensor] class SD3ControlNetModel(ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin): r""" ControlNet model for [Stable Diffusion 3](https://huggingface.co/papers/2403.03206). Parameters: sample_size (`int`, defaults to `128`): The width/height of the latents. This is fixed during training since it is used to learn a number of position embeddings. patch_size (`int`, defaults to `2`): Patch size to turn the input data into small patches. in_channels (`int`, defaults to `16`): The number of latent channels in the input. num_layers (`int`, defaults to `18`): The number of layers of transformer blocks to use. attention_head_dim (`int`, defaults to `64`): The number of channels in each head. num_attention_heads (`int`, defaults to `18`): The number of heads to use for multi-head attention. joint_attention_dim (`int`, defaults to `4096`): The embedding dimension to use for joint text-image attention. caption_projection_dim (`int`, defaults to `1152`): The embedding dimension of caption embeddings. pooled_projection_dim (`int`, defaults to `2048`): The embedding dimension of pooled text projections. out_channels (`int`, defaults to `16`): The number of latent channels in the output. pos_embed_max_size (`int`, defaults to `96`): The maximum latent height/width of positional embeddings. extra_conditioning_channels (`int`, defaults to `0`): The number of extra channels to use for conditioning for patch embedding. dual_attention_layers (`Tuple[int, ...]`, defaults to `()`): The number of dual-stream transformer blocks to use. qk_norm (`str`, *optional*, defaults to `None`): The normalization to use for query and key in the attention layer. If `None`, no normalization is used. pos_embed_type (`str`, defaults to `"sincos"`): The type of positional embedding to use. Choose between `"sincos"` and `None`. use_pos_embed (`bool`, defaults to `True`): Whether to use positional embeddings. 
force_zeros_for_pooled_projection (`bool`, defaults to `True`): Whether to force zeros for pooled projection embeddings. This is handled in the pipelines by reading the config value of the ControlNet model. """ _supports_gradient_checkpointing = True @register_to_config def __init__( self, sample_size: int = 128, patch_size: int = 2, in_channels: int = 16, num_layers: int = 18, attention_head_dim: int = 64, num_attention_heads: int = 18, joint_attention_dim: int = 4096, caption_projection_dim: int = 1152, pooled_projection_dim: int = 2048, out_channels: int = 16, pos_embed_max_size: int = 96, extra_conditioning_channels: int = 0, dual_attention_layers: Tuple[int, ...] = (), qk_norm: Optional[str] = None, pos_embed_type: Optional[str] = "sincos", use_pos_embed: bool = True, force_zeros_for_pooled_projection: bool = True, ): super().__init__() default_out_channels = in_channels self.out_channels = out_channels if out_channels is not None else default_out_channels self.inner_dim = num_attention_heads * attention_head_dim if use_pos_embed: self.pos_embed = PatchEmbed( height=sample_size, width=sample_size, patch_size=patch_size, in_channels=in_channels, embed_dim=self.inner_dim, pos_embed_max_size=pos_embed_max_size, pos_embed_type=pos_embed_type, ) else: self.pos_embed = None self.time_text_embed = CombinedTimestepTextProjEmbeddings( embedding_dim=self.inner_dim, pooled_projection_dim=pooled_projection_dim ) if joint_attention_dim is not None: self.context_embedder = nn.Linear(joint_attention_dim, caption_projection_dim) # `attention_head_dim` is doubled to account for the mixing. # It needs to crafted when we get the actual checkpoints. self.transformer_blocks = nn.ModuleList( [ JointTransformerBlock( dim=self.inner_dim, num_attention_heads=num_attention_heads, attention_head_dim=attention_head_dim, context_pre_only=False, qk_norm=qk_norm, use_dual_attention=True if i in dual_attention_layers else False, ) for i in range(num_layers) ] ) else: self.context_embedder = None self.transformer_blocks = nn.ModuleList( [ SD3SingleTransformerBlock( dim=self.inner_dim, num_attention_heads=num_attention_heads, attention_head_dim=attention_head_dim, ) for _ in range(num_layers) ] ) # controlnet_blocks self.controlnet_blocks = nn.ModuleList([]) for _ in range(len(self.transformer_blocks)): controlnet_block = nn.Linear(self.inner_dim, self.inner_dim) controlnet_block = zero_module(controlnet_block) self.controlnet_blocks.append(controlnet_block) pos_embed_input = PatchEmbed( height=sample_size, width=sample_size, patch_size=patch_size, in_channels=in_channels + extra_conditioning_channels, embed_dim=self.inner_dim, pos_embed_type=None, ) self.pos_embed_input = zero_module(pos_embed_input) self.gradient_checkpointing = False # Copied from diffusers.models.unets.unet_3d_condition.UNet3DConditionModel.enable_forward_chunking def enable_forward_chunking(self, chunk_size: Optional[int] = None, dim: int = 0) -> None: """ Sets the attention processor to use [feed forward chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers). Parameters: chunk_size (`int`, *optional*): The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually over each tensor of dim=`dim`. dim (`int`, *optional*, defaults to `0`): The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length). 
""" if dim not in [0, 1]: raise ValueError(f"Make sure to set `dim` to either 0 or 1, not {dim}") # By default chunk size is 1 chunk_size = chunk_size or 1 def fn_recursive_feed_forward(module: torch.nn.Module, chunk_size: int, dim: int): if hasattr(module, "set_chunk_feed_forward"): module.set_chunk_feed_forward(chunk_size=chunk_size, dim=dim) for child in module.children(): fn_recursive_feed_forward(child, chunk_size, dim) for module in self.children(): fn_recursive_feed_forward(module, chunk_size, dim) @property # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.attn_processors def attn_processors(self) -> Dict[str, AttentionProcessor]: r""" Returns: `dict` of attention processors: A dictionary containing all attention processors used in the model with indexed by its weight name. """ # set recursively processors = {} def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): if hasattr(module, "get_processor"): processors[f"{name}.processor"] = module.get_processor() for sub_name, child in module.named_children(): fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) return processors for name, module in self.named_children(): fn_recursive_add_processors(name, module, processors) return processors # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_attn_processor def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): r""" Sets the attention processor to use to compute attention. Parameters: processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): The instantiated processor class or a dictionary of processor classes that will be set as the processor for **all** `Attention` layers. If `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. """ count = len(self.attn_processors.keys()) if isinstance(processor, dict) and len(processor) != count: raise ValueError( f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" f" number of attention layers: {count}. Please make sure to pass {count} processor classes." ) def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): if hasattr(module, "set_processor"): if not isinstance(processor, dict): module.set_processor(processor) else: module.set_processor(processor.pop(f"{name}.processor")) for sub_name, child in module.named_children(): fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) for name, module in self.named_children(): fn_recursive_attn_processor(name, module, processor) # Copied from diffusers.models.transformers.transformer_sd3.SD3Transformer2DModel.fuse_qkv_projections def fuse_qkv_projections(self): """ Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused. <Tip warning={true}> This API is 🧪 experimental. 
</Tip> """ self.original_attn_processors = None for _, attn_processor in self.attn_processors.items(): if "Added" in str(attn_processor.__class__.__name__): raise ValueError("`fuse_qkv_projections()` is not supported for models having added KV projections.") self.original_attn_processors = self.attn_processors for module in self.modules(): if isinstance(module, Attention): module.fuse_projections(fuse=True) self.set_attn_processor(FusedJointAttnProcessor2_0()) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.unfuse_qkv_projections def unfuse_qkv_projections(self): """Disables the fused QKV projection if enabled. <Tip warning={true}> This API is 🧪 experimental. </Tip> """ if self.original_attn_processors is not None: self.set_attn_processor(self.original_attn_processors) # Notes: This is for SD3.5 8b controlnet, which shares the pos_embed with the transformer # we should have handled this in conversion script def _get_pos_embed_from_transformer(self, transformer): pos_embed = PatchEmbed( height=transformer.config.sample_size, width=transformer.config.sample_size, patch_size=transformer.config.patch_size, in_channels=transformer.config.in_channels, embed_dim=transformer.inner_dim, pos_embed_max_size=transformer.config.pos_embed_max_size, ) pos_embed.load_state_dict(transformer.pos_embed.state_dict(), strict=True) return pos_embed @classmethod def from_transformer( cls, transformer, num_layers=12, num_extra_conditioning_channels=1, load_weights_from_transformer=True ): config = transformer.config config["num_layers"] = num_layers or config.num_layers config["extra_conditioning_channels"] = num_extra_conditioning_channels controlnet = cls.from_config(config) if load_weights_from_transformer: controlnet.pos_embed.load_state_dict(transformer.pos_embed.state_dict()) controlnet.time_text_embed.load_state_dict(transformer.time_text_embed.state_dict()) controlnet.context_embedder.load_state_dict(transformer.context_embedder.state_dict()) controlnet.transformer_blocks.load_state_dict(transformer.transformer_blocks.state_dict(), strict=False) controlnet.pos_embed_input = zero_module(controlnet.pos_embed_input) return controlnet def forward( self, hidden_states: torch.Tensor, controlnet_cond: torch.Tensor, conditioning_scale: float = 1.0, encoder_hidden_states: torch.Tensor = None, pooled_projections: torch.Tensor = None, timestep: torch.LongTensor = None, joint_attention_kwargs: Optional[Dict[str, Any]] = None, return_dict: bool = True, ) -> Union[torch.Tensor, Transformer2DModelOutput]: """ The [`SD3Transformer2DModel`] forward method. Args: hidden_states (`torch.Tensor` of shape `(batch size, channel, height, width)`): Input `hidden_states`. controlnet_cond (`torch.Tensor`): The conditional input tensor of shape `(batch_size, sequence_length, hidden_size)`. conditioning_scale (`float`, defaults to `1.0`): The scale factor for ControlNet outputs. encoder_hidden_states (`torch.Tensor` of shape `(batch size, sequence_len, embed_dims)`): Conditional embeddings (embeddings computed from the input conditions such as prompts) to use. pooled_projections (`torch.Tensor` of shape `(batch_size, projection_dim)`): Embeddings projected from the embeddings of input conditions. timestep ( `torch.LongTensor`): Used to indicate denoising step. 
joint_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~models.transformer_2d.Transformer2DModelOutput`] instead of a plain tuple. Returns: If `return_dict` is True, an [`~models.transformer_2d.Transformer2DModelOutput`] is returned, otherwise a `tuple` where the first element is the sample tensor. """ if joint_attention_kwargs is not None: joint_attention_kwargs = joint_attention_kwargs.copy() lora_scale = joint_attention_kwargs.pop("scale", 1.0) else: lora_scale = 1.0 if USE_PEFT_BACKEND: # weight the lora layers by setting `lora_scale` for each PEFT layer scale_lora_layers(self, lora_scale) else: if joint_attention_kwargs is not None and joint_attention_kwargs.get("scale", None) is not None: logger.warning( "Passing `scale` via `joint_attention_kwargs` when not using the PEFT backend is ineffective." ) if self.pos_embed is not None and hidden_states.ndim != 4: raise ValueError("hidden_states must be 4D when pos_embed is used") # SD3.5 8b controlnet does not have a `pos_embed`, # it use the `pos_embed` from the transformer to process input before passing to controlnet elif self.pos_embed is None and hidden_states.ndim != 3: raise ValueError("hidden_states must be 3D when pos_embed is not used") if self.context_embedder is not None and encoder_hidden_states is None: raise ValueError("encoder_hidden_states must be provided when context_embedder is used") # SD3.5 8b controlnet does not have a `context_embedder`, it does not use `encoder_hidden_states` elif self.context_embedder is None and encoder_hidden_states is not None: raise ValueError("encoder_hidden_states should not be provided when context_embedder is not used") if self.pos_embed is not None: hidden_states = self.pos_embed(hidden_states) # takes care of adding positional embeddings too. temb = self.time_text_embed(timestep, pooled_projections) if self.context_embedder is not None: encoder_hidden_states = self.context_embedder(encoder_hidden_states) # add hidden_states = hidden_states + self.pos_embed_input(controlnet_cond) block_res_samples = () for block in self.transformer_blocks: if torch.is_grad_enabled() and self.gradient_checkpointing: if self.context_embedder is not None: encoder_hidden_states, hidden_states = self._gradient_checkpointing_func( block, hidden_states, encoder_hidden_states, temb, ) else: # SD3.5 8b controlnet use single transformer block, which does not use `encoder_hidden_states` hidden_states = self._gradient_checkpointing_func(block, hidden_states, temb) else: if self.context_embedder is not None: encoder_hidden_states, hidden_states = block( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, temb=temb ) else: # SD3.5 8b controlnet use single transformer block, which does not use `encoder_hidden_states` hidden_states = block(hidden_states, temb) block_res_samples = block_res_samples + (hidden_states,) controlnet_block_res_samples = () for block_res_sample, controlnet_block in zip(block_res_samples, self.controlnet_blocks): block_res_sample = controlnet_block(block_res_sample) controlnet_block_res_samples = controlnet_block_res_samples + (block_res_sample,) # 6. 
scaling controlnet_block_res_samples = [sample * conditioning_scale for sample in controlnet_block_res_samples] if USE_PEFT_BACKEND: # remove `lora_scale` from each PEFT layer unscale_lora_layers(self, lora_scale) if not return_dict: return (controlnet_block_res_samples,) return SD3ControlNetOutput(controlnet_block_samples=controlnet_block_res_samples) class SD3MultiControlNetModel(ModelMixin): r""" `SD3ControlNetModel` wrapper class for Multi-SD3ControlNet This module is a wrapper for multiple instances of the `SD3ControlNetModel`. The `forward()` API is designed to be compatible with `SD3ControlNetModel`. Args: controlnets (`List[SD3ControlNetModel]`): Provides additional conditioning to the unet during the denoising process. You must set multiple `SD3ControlNetModel` as a list. """ def __init__(self, controlnets): super().__init__() self.nets = nn.ModuleList(controlnets) def forward( self, hidden_states: torch.Tensor, controlnet_cond: List[torch.tensor], conditioning_scale: List[float], pooled_projections: torch.Tensor, encoder_hidden_states: torch.Tensor = None, timestep: torch.LongTensor = None, joint_attention_kwargs: Optional[Dict[str, Any]] = None, return_dict: bool = True, ) -> Union[SD3ControlNetOutput, Tuple]: for i, (image, scale, controlnet) in enumerate(zip(controlnet_cond, conditioning_scale, self.nets)): block_samples = controlnet( hidden_states=hidden_states, timestep=timestep, encoder_hidden_states=encoder_hidden_states, pooled_projections=pooled_projections, controlnet_cond=image, conditioning_scale=scale, joint_attention_kwargs=joint_attention_kwargs, return_dict=return_dict, ) # merge samples if i == 0: control_block_samples = block_samples else: control_block_samples = [ control_block_sample + block_sample for control_block_sample, block_sample in zip(control_block_samples[0], block_samples[0]) ] control_block_samples = (tuple(control_block_samples),) return control_block_samples
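# ---------------------------------------------------------------------------
# Illustrative usage sketch (hypothetical toy configuration). Released SD3
# ControlNets are normally created with `SD3ControlNetModel.from_pretrained(...)`
# or `SD3ControlNetModel.from_transformer(...)`; the tiny dimensions below are
# chosen only so the shape flow can be checked quickly on CPU and do not match
# any shipped checkpoint.
import torch

from diffusers import SD3ControlNetModel

controlnet = SD3ControlNetModel(
    sample_size=16,             # latent height/width used for the positional-embedding grid
    patch_size=2,
    in_channels=16,
    num_layers=2,               # real checkpoints use far more blocks
    attention_head_dim=8,
    num_attention_heads=4,      # inner_dim = 4 * 8 = 32
    joint_attention_dim=32,
    caption_projection_dim=32,  # must equal inner_dim so the context stream matches the blocks
    pooled_projection_dim=64,
    pos_embed_max_size=16,
)

latents = torch.randn(1, 16, 16, 16)        # (batch, in_channels, height, width)
control_cond = torch.randn(1, 16, 16, 16)   # conditioning image already encoded to latent space
prompt_embeds = torch.randn(1, 8, 32)       # (batch, seq_len, joint_attention_dim)
pooled_prompt = torch.randn(1, 64)          # (batch, pooled_projection_dim)
timestep = torch.tensor([500])

with torch.no_grad():
    output = controlnet(
        hidden_states=latents,
        controlnet_cond=control_cond,
        conditioning_scale=1.0,
        encoder_hidden_states=prompt_embeds,
        pooled_projections=pooled_prompt,
        timestep=timestep,
    )

# One zero-initialized residual per transformer block; during denoising the SD3
# transformer adds these to its own block outputs.
for residual in output.controlnet_block_samples:
    print(residual.shape)  # torch.Size([1, 64, 32]) -> (batch, num_patches, inner_dim)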
# Source file: diffusers/src/diffusers/models/controlnets/controlnet_sd3.py
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import numbers from typing import Dict, Optional, Tuple import torch import torch.nn as nn import torch.nn.functional as F from ..utils import is_torch_npu_available, is_torch_version from .activations import get_activation from .embeddings import CombinedTimestepLabelEmbeddings, PixArtAlphaCombinedTimestepSizeEmbeddings class AdaLayerNorm(nn.Module): r""" Norm layer modified to incorporate timestep embeddings. Parameters: embedding_dim (`int`): The size of each embedding vector. num_embeddings (`int`, *optional*): The size of the embeddings dictionary. output_dim (`int`, *optional*): norm_elementwise_affine (`bool`, defaults to `False): norm_eps (`bool`, defaults to `False`): chunk_dim (`int`, defaults to `0`): """ def __init__( self, embedding_dim: int, num_embeddings: Optional[int] = None, output_dim: Optional[int] = None, norm_elementwise_affine: bool = False, norm_eps: float = 1e-5, chunk_dim: int = 0, ): super().__init__() self.chunk_dim = chunk_dim output_dim = output_dim or embedding_dim * 2 if num_embeddings is not None: self.emb = nn.Embedding(num_embeddings, embedding_dim) else: self.emb = None self.silu = nn.SiLU() self.linear = nn.Linear(embedding_dim, output_dim) self.norm = nn.LayerNorm(output_dim // 2, norm_eps, norm_elementwise_affine) def forward( self, x: torch.Tensor, timestep: Optional[torch.Tensor] = None, temb: Optional[torch.Tensor] = None ) -> torch.Tensor: if self.emb is not None: temb = self.emb(timestep) temb = self.linear(self.silu(temb)) if self.chunk_dim == 1: # This is a bit weird why we have the order of "shift, scale" here and "scale, shift" in the # other if-branch. This branch is specific to CogVideoX and OmniGen for now. shift, scale = temb.chunk(2, dim=1) shift = shift[:, None, :] scale = scale[:, None, :] else: scale, shift = temb.chunk(2, dim=0) x = self.norm(x) * (1 + scale) + shift return x class FP32LayerNorm(nn.LayerNorm): def forward(self, inputs: torch.Tensor) -> torch.Tensor: origin_dtype = inputs.dtype return F.layer_norm( inputs.float(), self.normalized_shape, self.weight.float() if self.weight is not None else None, self.bias.float() if self.bias is not None else None, self.eps, ).to(origin_dtype) class SD35AdaLayerNormZeroX(nn.Module): r""" Norm layer adaptive layer norm zero (AdaLN-Zero). Parameters: embedding_dim (`int`): The size of each embedding vector. num_embeddings (`int`): The size of the embeddings dictionary. """ def __init__(self, embedding_dim: int, norm_type: str = "layer_norm", bias: bool = True) -> None: super().__init__() self.silu = nn.SiLU() self.linear = nn.Linear(embedding_dim, 9 * embedding_dim, bias=bias) if norm_type == "layer_norm": self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False, eps=1e-6) else: raise ValueError(f"Unsupported `norm_type` ({norm_type}) provided. 
Supported ones are: 'layer_norm'.") def forward( self, hidden_states: torch.Tensor, emb: Optional[torch.Tensor] = None, ) -> Tuple[torch.Tensor, ...]: emb = self.linear(self.silu(emb)) shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp, shift_msa2, scale_msa2, gate_msa2 = emb.chunk( 9, dim=1 ) norm_hidden_states = self.norm(hidden_states) hidden_states = norm_hidden_states * (1 + scale_msa[:, None]) + shift_msa[:, None] norm_hidden_states2 = norm_hidden_states * (1 + scale_msa2[:, None]) + shift_msa2[:, None] return hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp, norm_hidden_states2, gate_msa2 class AdaLayerNormZero(nn.Module): r""" Norm layer adaptive layer norm zero (adaLN-Zero). Parameters: embedding_dim (`int`): The size of each embedding vector. num_embeddings (`int`): The size of the embeddings dictionary. """ def __init__(self, embedding_dim: int, num_embeddings: Optional[int] = None, norm_type="layer_norm", bias=True): super().__init__() if num_embeddings is not None: self.emb = CombinedTimestepLabelEmbeddings(num_embeddings, embedding_dim) else: self.emb = None self.silu = nn.SiLU() self.linear = nn.Linear(embedding_dim, 6 * embedding_dim, bias=bias) if norm_type == "layer_norm": self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False, eps=1e-6) elif norm_type == "fp32_layer_norm": self.norm = FP32LayerNorm(embedding_dim, elementwise_affine=False, bias=False) else: raise ValueError( f"Unsupported `norm_type` ({norm_type}) provided. Supported ones are: 'layer_norm', 'fp32_layer_norm'." ) def forward( self, x: torch.Tensor, timestep: Optional[torch.Tensor] = None, class_labels: Optional[torch.LongTensor] = None, hidden_dtype: Optional[torch.dtype] = None, emb: Optional[torch.Tensor] = None, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: if self.emb is not None: emb = self.emb(timestep, class_labels, hidden_dtype=hidden_dtype) emb = self.linear(self.silu(emb)) shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = emb.chunk(6, dim=1) x = self.norm(x) * (1 + scale_msa[:, None]) + shift_msa[:, None] return x, gate_msa, shift_mlp, scale_mlp, gate_mlp class AdaLayerNormZeroSingle(nn.Module): r""" Norm layer adaptive layer norm zero (adaLN-Zero). Parameters: embedding_dim (`int`): The size of each embedding vector. num_embeddings (`int`): The size of the embeddings dictionary. """ def __init__(self, embedding_dim: int, norm_type="layer_norm", bias=True): super().__init__() self.silu = nn.SiLU() self.linear = nn.Linear(embedding_dim, 3 * embedding_dim, bias=bias) if norm_type == "layer_norm": self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False, eps=1e-6) else: raise ValueError( f"Unsupported `norm_type` ({norm_type}) provided. Supported ones are: 'layer_norm', 'fp32_layer_norm'." ) def forward( self, x: torch.Tensor, emb: Optional[torch.Tensor] = None, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: emb = self.linear(self.silu(emb)) shift_msa, scale_msa, gate_msa = emb.chunk(3, dim=1) x = self.norm(x) * (1 + scale_msa[:, None]) + shift_msa[:, None] return x, gate_msa class LuminaRMSNormZero(nn.Module): """ Norm layer adaptive RMS normalization zero. Parameters: embedding_dim (`int`): The size of each embedding vector. 
""" def __init__(self, embedding_dim: int, norm_eps: float, norm_elementwise_affine: bool): super().__init__() self.silu = nn.SiLU() self.linear = nn.Linear( min(embedding_dim, 1024), 4 * embedding_dim, bias=True, ) self.norm = RMSNorm(embedding_dim, eps=norm_eps) def forward( self, x: torch.Tensor, emb: Optional[torch.Tensor] = None, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: emb = self.linear(self.silu(emb)) scale_msa, gate_msa, scale_mlp, gate_mlp = emb.chunk(4, dim=1) x = self.norm(x) * (1 + scale_msa[:, None]) return x, gate_msa, scale_mlp, gate_mlp class AdaLayerNormSingle(nn.Module): r""" Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://huggingface.co/papers/2310.00426; Section 2.3). Parameters: embedding_dim (`int`): The size of each embedding vector. use_additional_conditions (`bool`): To use additional conditions for normalization or not. """ def __init__(self, embedding_dim: int, use_additional_conditions: bool = False): super().__init__() self.emb = PixArtAlphaCombinedTimestepSizeEmbeddings( embedding_dim, size_emb_dim=embedding_dim // 3, use_additional_conditions=use_additional_conditions ) self.silu = nn.SiLU() self.linear = nn.Linear(embedding_dim, 6 * embedding_dim, bias=True) def forward( self, timestep: torch.Tensor, added_cond_kwargs: Optional[Dict[str, torch.Tensor]] = None, batch_size: Optional[int] = None, hidden_dtype: Optional[torch.dtype] = None, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: # No modulation happening here. added_cond_kwargs = added_cond_kwargs or {"resolution": None, "aspect_ratio": None} embedded_timestep = self.emb(timestep, **added_cond_kwargs, batch_size=batch_size, hidden_dtype=hidden_dtype) return self.linear(self.silu(embedded_timestep)), embedded_timestep class AdaGroupNorm(nn.Module): r""" GroupNorm layer modified to incorporate timestep embeddings. Parameters: embedding_dim (`int`): The size of each embedding vector. num_embeddings (`int`): The size of the embeddings dictionary. num_groups (`int`): The number of groups to separate the channels into. act_fn (`str`, *optional*, defaults to `None`): The activation function to use. eps (`float`, *optional*, defaults to `1e-5`): The epsilon value to use for numerical stability. """ def __init__( self, embedding_dim: int, out_dim: int, num_groups: int, act_fn: Optional[str] = None, eps: float = 1e-5 ): super().__init__() self.num_groups = num_groups self.eps = eps if act_fn is None: self.act = None else: self.act = get_activation(act_fn) self.linear = nn.Linear(embedding_dim, out_dim * 2) def forward(self, x: torch.Tensor, emb: torch.Tensor) -> torch.Tensor: if self.act: emb = self.act(emb) emb = self.linear(emb) emb = emb[:, :, None, None] scale, shift = emb.chunk(2, dim=1) x = F.group_norm(x, self.num_groups, eps=self.eps) x = x * (1 + scale) + shift return x class AdaLayerNormContinuous(nn.Module): r""" Adaptive normalization layer with a norm layer (layer_norm or rms_norm). Args: embedding_dim (`int`): Embedding dimension to use during projection. conditioning_embedding_dim (`int`): Dimension of the input condition. elementwise_affine (`bool`, defaults to `True`): Boolean flag to denote if affine transformation should be applied. eps (`float`, defaults to 1e-5): Epsilon factor. bias (`bias`, defaults to `True`): Boolean flag to denote if bias should be use. norm_type (`str`, defaults to `"layer_norm"`): Normalization layer to use. Values supported: "layer_norm", "rms_norm". 
""" def __init__( self, embedding_dim: int, conditioning_embedding_dim: int, # NOTE: It is a bit weird that the norm layer can be configured to have scale and shift parameters # because the output is immediately scaled and shifted by the projected conditioning embeddings. # Note that AdaLayerNorm does not let the norm layer have scale and shift parameters. # However, this is how it was implemented in the original code, and it's rather likely you should # set `elementwise_affine` to False. elementwise_affine=True, eps=1e-5, bias=True, norm_type="layer_norm", ): super().__init__() self.silu = nn.SiLU() self.linear = nn.Linear(conditioning_embedding_dim, embedding_dim * 2, bias=bias) if norm_type == "layer_norm": self.norm = LayerNorm(embedding_dim, eps, elementwise_affine, bias) elif norm_type == "rms_norm": self.norm = RMSNorm(embedding_dim, eps, elementwise_affine) else: raise ValueError(f"unknown norm_type {norm_type}") def forward(self, x: torch.Tensor, conditioning_embedding: torch.Tensor) -> torch.Tensor: # convert back to the original dtype in case `conditioning_embedding`` is upcasted to float32 (needed for hunyuanDiT) emb = self.linear(self.silu(conditioning_embedding).to(x.dtype)) scale, shift = torch.chunk(emb, 2, dim=1) x = self.norm(x) * (1 + scale)[:, None, :] + shift[:, None, :] return x class LuminaLayerNormContinuous(nn.Module): def __init__( self, embedding_dim: int, conditioning_embedding_dim: int, # NOTE: It is a bit weird that the norm layer can be configured to have scale and shift parameters # because the output is immediately scaled and shifted by the projected conditioning embeddings. # Note that AdaLayerNorm does not let the norm layer have scale and shift parameters. # However, this is how it was implemented in the original code, and it's rather likely you should # set `elementwise_affine` to False. elementwise_affine=True, eps=1e-5, bias=True, norm_type="layer_norm", out_dim: Optional[int] = None, ): super().__init__() # AdaLN self.silu = nn.SiLU() self.linear_1 = nn.Linear(conditioning_embedding_dim, embedding_dim, bias=bias) if norm_type == "layer_norm": self.norm = LayerNorm(embedding_dim, eps, elementwise_affine, bias) elif norm_type == "rms_norm": self.norm = RMSNorm(embedding_dim, eps=eps, elementwise_affine=elementwise_affine) else: raise ValueError(f"unknown norm_type {norm_type}") self.linear_2 = None if out_dim is not None: self.linear_2 = nn.Linear(embedding_dim, out_dim, bias=bias) def forward( self, x: torch.Tensor, conditioning_embedding: torch.Tensor, ) -> torch.Tensor: # convert back to the original dtype in case `conditioning_embedding`` is upcasted to float32 (needed for hunyuanDiT) emb = self.linear_1(self.silu(conditioning_embedding).to(x.dtype)) scale = emb x = self.norm(x) * (1 + scale)[:, None, :] if self.linear_2 is not None: x = self.linear_2(x) return x class CogView3PlusAdaLayerNormZeroTextImage(nn.Module): r""" Norm layer adaptive layer norm zero (adaLN-Zero). Parameters: embedding_dim (`int`): The size of each embedding vector. num_embeddings (`int`): The size of the embeddings dictionary. 
""" def __init__(self, embedding_dim: int, dim: int): super().__init__() self.silu = nn.SiLU() self.linear = nn.Linear(embedding_dim, 12 * dim, bias=True) self.norm_x = nn.LayerNorm(dim, elementwise_affine=False, eps=1e-5) self.norm_c = nn.LayerNorm(dim, elementwise_affine=False, eps=1e-5) def forward( self, x: torch.Tensor, context: torch.Tensor, emb: Optional[torch.Tensor] = None, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: emb = self.linear(self.silu(emb)) ( shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp, c_shift_msa, c_scale_msa, c_gate_msa, c_shift_mlp, c_scale_mlp, c_gate_mlp, ) = emb.chunk(12, dim=1) normed_x = self.norm_x(x) normed_context = self.norm_c(context) x = normed_x * (1 + scale_msa[:, None]) + shift_msa[:, None] context = normed_context * (1 + c_scale_msa[:, None]) + c_shift_msa[:, None] return x, gate_msa, shift_mlp, scale_mlp, gate_mlp, context, c_gate_msa, c_shift_mlp, c_scale_mlp, c_gate_mlp class CogVideoXLayerNormZero(nn.Module): def __init__( self, conditioning_dim: int, embedding_dim: int, elementwise_affine: bool = True, eps: float = 1e-5, bias: bool = True, ) -> None: super().__init__() self.silu = nn.SiLU() self.linear = nn.Linear(conditioning_dim, 6 * embedding_dim, bias=bias) self.norm = nn.LayerNorm(embedding_dim, eps=eps, elementwise_affine=elementwise_affine) def forward( self, hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor, temb: torch.Tensor ) -> Tuple[torch.Tensor, torch.Tensor]: shift, scale, gate, enc_shift, enc_scale, enc_gate = self.linear(self.silu(temb)).chunk(6, dim=1) hidden_states = self.norm(hidden_states) * (1 + scale)[:, None, :] + shift[:, None, :] encoder_hidden_states = self.norm(encoder_hidden_states) * (1 + enc_scale)[:, None, :] + enc_shift[:, None, :] return hidden_states, encoder_hidden_states, gate[:, None, :], enc_gate[:, None, :] if is_torch_version(">=", "2.1.0"): LayerNorm = nn.LayerNorm else: # Has optional bias parameter compared to torch layer norm # TODO: replace with torch layernorm once min required torch version >= 2.1 class LayerNorm(nn.Module): r""" LayerNorm with the bias parameter. Args: dim (`int`): Dimensionality to use for the parameters. eps (`float`, defaults to 1e-5): Epsilon factor. elementwise_affine (`bool`, defaults to `True`): Boolean flag to denote if affine transformation should be applied. bias (`bias`, defaults to `True`): Boolean flag to denote if bias should be use. """ def __init__(self, dim, eps: float = 1e-5, elementwise_affine: bool = True, bias: bool = True): super().__init__() self.eps = eps if isinstance(dim, numbers.Integral): dim = (dim,) self.dim = torch.Size(dim) if elementwise_affine: self.weight = nn.Parameter(torch.ones(dim)) self.bias = nn.Parameter(torch.zeros(dim)) if bias else None else: self.weight = None self.bias = None def forward(self, input): return F.layer_norm(input, self.dim, self.weight, self.bias, self.eps) class RMSNorm(nn.Module): r""" RMS Norm as introduced in https://huggingface.co/papers/1910.07467 by Zhang et al. Args: dim (`int`): Number of dimensions to use for `weights`. Only effective when `elementwise_affine` is True. eps (`float`): Small value to use when calculating the reciprocal of the square-root. elementwise_affine (`bool`, defaults to `True`): Boolean flag to denote if affine transformation should be applied. bias (`bool`, defaults to False): If also training the `bias` param. 
""" def __init__(self, dim, eps: float, elementwise_affine: bool = True, bias: bool = False): super().__init__() self.eps = eps self.elementwise_affine = elementwise_affine if isinstance(dim, numbers.Integral): dim = (dim,) self.dim = torch.Size(dim) self.weight = None self.bias = None if elementwise_affine: self.weight = nn.Parameter(torch.ones(dim)) if bias: self.bias = nn.Parameter(torch.zeros(dim)) def forward(self, hidden_states): if is_torch_npu_available(): import torch_npu if self.weight is not None: # convert into half-precision if necessary if self.weight.dtype in [torch.float16, torch.bfloat16]: hidden_states = hidden_states.to(self.weight.dtype) hidden_states = torch_npu.npu_rms_norm(hidden_states, self.weight, epsilon=self.eps)[0] if self.bias is not None: hidden_states = hidden_states + self.bias else: input_dtype = hidden_states.dtype variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) hidden_states = hidden_states * torch.rsqrt(variance + self.eps) if self.weight is not None: # convert into half-precision if necessary if self.weight.dtype in [torch.float16, torch.bfloat16]: hidden_states = hidden_states.to(self.weight.dtype) hidden_states = hidden_states * self.weight if self.bias is not None: hidden_states = hidden_states + self.bias else: hidden_states = hidden_states.to(input_dtype) return hidden_states # TODO: (Dhruv) This can be replaced with regular RMSNorm in Mochi once `_keep_in_fp32_modules` is supported # for sharded checkpoints, see: https://github.com/huggingface/diffusers/issues/10013 class MochiRMSNorm(nn.Module): def __init__(self, dim, eps: float, elementwise_affine: bool = True): super().__init__() self.eps = eps if isinstance(dim, numbers.Integral): dim = (dim,) self.dim = torch.Size(dim) if elementwise_affine: self.weight = nn.Parameter(torch.ones(dim)) else: self.weight = None def forward(self, hidden_states): input_dtype = hidden_states.dtype variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) hidden_states = hidden_states * torch.rsqrt(variance + self.eps) if self.weight is not None: hidden_states = hidden_states * self.weight hidden_states = hidden_states.to(input_dtype) return hidden_states class GlobalResponseNorm(nn.Module): r""" Global response normalization as introduced in ConvNeXt-v2 (https://huggingface.co/papers/2301.00808). Args: dim (`int`): Number of dimensions to use for the `gamma` and `beta`. 
""" # Taken from https://github.com/facebookresearch/ConvNeXt-V2/blob/3608f67cc1dae164790c5d0aead7bf2d73d9719b/models/utils.py#L105 def __init__(self, dim): super().__init__() self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim)) self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim)) def forward(self, x): gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True) nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6) return self.gamma * (x * nx) + self.beta + x class LpNorm(nn.Module): def __init__(self, p: int = 2, dim: int = -1, eps: float = 1e-12): super().__init__() self.p = p self.dim = dim self.eps = eps def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: return F.normalize(hidden_states, p=self.p, dim=self.dim, eps=self.eps) def get_normalization( norm_type: str = "batch_norm", num_features: Optional[int] = None, eps: float = 1e-5, elementwise_affine: bool = True, bias: bool = True, ) -> nn.Module: if norm_type == "rms_norm": norm = RMSNorm(num_features, eps=eps, elementwise_affine=elementwise_affine, bias=bias) elif norm_type == "layer_norm": norm = nn.LayerNorm(num_features, eps=eps, elementwise_affine=elementwise_affine, bias=bias) elif norm_type == "batch_norm": norm = nn.BatchNorm2d(num_features, eps=eps, affine=elementwise_affine) else: raise ValueError(f"{norm_type=} is not supported.") return norm
# Source file: diffusers/src/diffusers/models/normalization.py
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from typing import Optional, Tuple import torch from torch import nn from ...configuration_utils import ConfigMixin, register_to_config from ..attention_processor import Attention from ..embeddings import get_timestep_embedding from ..modeling_utils import ModelMixin class T5FilmDecoder(ModelMixin, ConfigMixin): r""" T5 style decoder with FiLM conditioning. Args: input_dims (`int`, *optional*, defaults to `128`): The number of input dimensions. targets_length (`int`, *optional*, defaults to `256`): The length of the targets. d_model (`int`, *optional*, defaults to `768`): Size of the input hidden states. num_layers (`int`, *optional*, defaults to `12`): The number of `DecoderLayer`'s to use. num_heads (`int`, *optional*, defaults to `12`): The number of attention heads to use. d_kv (`int`, *optional*, defaults to `64`): Size of the key-value projection vectors. d_ff (`int`, *optional*, defaults to `2048`): The number of dimensions in the intermediate feed-forward layer of `DecoderLayer`'s. dropout_rate (`float`, *optional*, defaults to `0.1`): Dropout probability. """ @register_to_config def __init__( self, input_dims: int = 128, targets_length: int = 256, max_decoder_noise_time: float = 2000.0, d_model: int = 768, num_layers: int = 12, num_heads: int = 12, d_kv: int = 64, d_ff: int = 2048, dropout_rate: float = 0.1, ): super().__init__() self.conditioning_emb = nn.Sequential( nn.Linear(d_model, d_model * 4, bias=False), nn.SiLU(), nn.Linear(d_model * 4, d_model * 4, bias=False), nn.SiLU(), ) self.position_encoding = nn.Embedding(targets_length, d_model) self.position_encoding.weight.requires_grad = False self.continuous_inputs_projection = nn.Linear(input_dims, d_model, bias=False) self.dropout = nn.Dropout(p=dropout_rate) self.decoders = nn.ModuleList() for lyr_num in range(num_layers): # FiLM conditional T5 decoder lyr = DecoderLayer(d_model=d_model, d_kv=d_kv, num_heads=num_heads, d_ff=d_ff, dropout_rate=dropout_rate) self.decoders.append(lyr) self.decoder_norm = T5LayerNorm(d_model) self.post_dropout = nn.Dropout(p=dropout_rate) self.spec_out = nn.Linear(d_model, input_dims, bias=False) def encoder_decoder_mask(self, query_input: torch.Tensor, key_input: torch.Tensor) -> torch.Tensor: mask = torch.mul(query_input.unsqueeze(-1), key_input.unsqueeze(-2)) return mask.unsqueeze(-3) def forward(self, encodings_and_masks, decoder_input_tokens, decoder_noise_time): batch, _, _ = decoder_input_tokens.shape assert decoder_noise_time.shape == (batch,) # decoder_noise_time is in [0, 1), so rescale to expected timing range. 
time_steps = get_timestep_embedding( decoder_noise_time * self.config.max_decoder_noise_time, embedding_dim=self.config.d_model, max_period=self.config.max_decoder_noise_time, ).to(dtype=self.dtype) conditioning_emb = self.conditioning_emb(time_steps).unsqueeze(1) assert conditioning_emb.shape == (batch, 1, self.config.d_model * 4) seq_length = decoder_input_tokens.shape[1] # If we want to use relative positions for audio context, we can just offset # this sequence by the length of encodings_and_masks. decoder_positions = torch.broadcast_to( torch.arange(seq_length, device=decoder_input_tokens.device), (batch, seq_length), ) position_encodings = self.position_encoding(decoder_positions) inputs = self.continuous_inputs_projection(decoder_input_tokens) inputs += position_encodings y = self.dropout(inputs) # decoder: No padding present. decoder_mask = torch.ones( decoder_input_tokens.shape[:2], device=decoder_input_tokens.device, dtype=inputs.dtype ) # Translate encoding masks to encoder-decoder masks. encodings_and_encdec_masks = [(x, self.encoder_decoder_mask(decoder_mask, y)) for x, y in encodings_and_masks] # cross attend style: concat encodings encoded = torch.cat([x[0] for x in encodings_and_encdec_masks], dim=1) encoder_decoder_mask = torch.cat([x[1] for x in encodings_and_encdec_masks], dim=-1) for lyr in self.decoders: y = lyr( y, conditioning_emb=conditioning_emb, encoder_hidden_states=encoded, encoder_attention_mask=encoder_decoder_mask, )[0] y = self.decoder_norm(y) y = self.post_dropout(y) spec_out = self.spec_out(y) return spec_out class DecoderLayer(nn.Module): r""" T5 decoder layer. Args: d_model (`int`): Size of the input hidden states. d_kv (`int`): Size of the key-value projection vectors. num_heads (`int`): Number of attention heads. d_ff (`int`): Size of the intermediate feed-forward layer. dropout_rate (`float`): Dropout probability. layer_norm_epsilon (`float`, *optional*, defaults to `1e-6`): A small value used for numerical stability to avoid dividing by zero. 
""" def __init__( self, d_model: int, d_kv: int, num_heads: int, d_ff: int, dropout_rate: float, layer_norm_epsilon: float = 1e-6 ): super().__init__() self.layer = nn.ModuleList() # cond self attention: layer 0 self.layer.append( T5LayerSelfAttentionCond(d_model=d_model, d_kv=d_kv, num_heads=num_heads, dropout_rate=dropout_rate) ) # cross attention: layer 1 self.layer.append( T5LayerCrossAttention( d_model=d_model, d_kv=d_kv, num_heads=num_heads, dropout_rate=dropout_rate, layer_norm_epsilon=layer_norm_epsilon, ) ) # Film Cond MLP + dropout: last layer self.layer.append( T5LayerFFCond(d_model=d_model, d_ff=d_ff, dropout_rate=dropout_rate, layer_norm_epsilon=layer_norm_epsilon) ) def forward( self, hidden_states: torch.Tensor, conditioning_emb: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, encoder_decoder_position_bias=None, ) -> Tuple[torch.Tensor]: hidden_states = self.layer[0]( hidden_states, conditioning_emb=conditioning_emb, attention_mask=attention_mask, ) if encoder_hidden_states is not None: encoder_extended_attention_mask = torch.where(encoder_attention_mask > 0, 0, -1e10).to( encoder_hidden_states.dtype ) hidden_states = self.layer[1]( hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_extended_attention_mask, ) # Apply Film Conditional Feed Forward layer hidden_states = self.layer[-1](hidden_states, conditioning_emb) return (hidden_states,) class T5LayerSelfAttentionCond(nn.Module): r""" T5 style self-attention layer with conditioning. Args: d_model (`int`): Size of the input hidden states. d_kv (`int`): Size of the key-value projection vectors. num_heads (`int`): Number of attention heads. dropout_rate (`float`): Dropout probability. """ def __init__(self, d_model: int, d_kv: int, num_heads: int, dropout_rate: float): super().__init__() self.layer_norm = T5LayerNorm(d_model) self.FiLMLayer = T5FiLMLayer(in_features=d_model * 4, out_features=d_model) self.attention = Attention(query_dim=d_model, heads=num_heads, dim_head=d_kv, out_bias=False, scale_qk=False) self.dropout = nn.Dropout(dropout_rate) def forward( self, hidden_states: torch.Tensor, conditioning_emb: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: # pre_self_attention_layer_norm normed_hidden_states = self.layer_norm(hidden_states) if conditioning_emb is not None: normed_hidden_states = self.FiLMLayer(normed_hidden_states, conditioning_emb) # Self-attention block attention_output = self.attention(normed_hidden_states) hidden_states = hidden_states + self.dropout(attention_output) return hidden_states class T5LayerCrossAttention(nn.Module): r""" T5 style cross-attention layer. Args: d_model (`int`): Size of the input hidden states. d_kv (`int`): Size of the key-value projection vectors. num_heads (`int`): Number of attention heads. dropout_rate (`float`): Dropout probability. layer_norm_epsilon (`float`): A small value used for numerical stability to avoid dividing by zero. 
""" def __init__(self, d_model: int, d_kv: int, num_heads: int, dropout_rate: float, layer_norm_epsilon: float): super().__init__() self.attention = Attention(query_dim=d_model, heads=num_heads, dim_head=d_kv, out_bias=False, scale_qk=False) self.layer_norm = T5LayerNorm(d_model, eps=layer_norm_epsilon) self.dropout = nn.Dropout(dropout_rate) def forward( self, hidden_states: torch.Tensor, key_value_states: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, ) -> torch.Tensor: normed_hidden_states = self.layer_norm(hidden_states) attention_output = self.attention( normed_hidden_states, encoder_hidden_states=key_value_states, attention_mask=attention_mask.squeeze(1), ) layer_output = hidden_states + self.dropout(attention_output) return layer_output class T5LayerFFCond(nn.Module): r""" T5 style feed-forward conditional layer. Args: d_model (`int`): Size of the input hidden states. d_ff (`int`): Size of the intermediate feed-forward layer. dropout_rate (`float`): Dropout probability. layer_norm_epsilon (`float`): A small value used for numerical stability to avoid dividing by zero. """ def __init__(self, d_model: int, d_ff: int, dropout_rate: float, layer_norm_epsilon: float): super().__init__() self.DenseReluDense = T5DenseGatedActDense(d_model=d_model, d_ff=d_ff, dropout_rate=dropout_rate) self.film = T5FiLMLayer(in_features=d_model * 4, out_features=d_model) self.layer_norm = T5LayerNorm(d_model, eps=layer_norm_epsilon) self.dropout = nn.Dropout(dropout_rate) def forward(self, hidden_states: torch.Tensor, conditioning_emb: Optional[torch.Tensor] = None) -> torch.Tensor: forwarded_states = self.layer_norm(hidden_states) if conditioning_emb is not None: forwarded_states = self.film(forwarded_states, conditioning_emb) forwarded_states = self.DenseReluDense(forwarded_states) hidden_states = hidden_states + self.dropout(forwarded_states) return hidden_states class T5DenseGatedActDense(nn.Module): r""" T5 style feed-forward layer with gated activations and dropout. Args: d_model (`int`): Size of the input hidden states. d_ff (`int`): Size of the intermediate feed-forward layer. dropout_rate (`float`): Dropout probability. """ def __init__(self, d_model: int, d_ff: int, dropout_rate: float): super().__init__() self.wi_0 = nn.Linear(d_model, d_ff, bias=False) self.wi_1 = nn.Linear(d_model, d_ff, bias=False) self.wo = nn.Linear(d_ff, d_model, bias=False) self.dropout = nn.Dropout(dropout_rate) self.act = NewGELUActivation() def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_gelu = self.act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states class T5LayerNorm(nn.Module): r""" T5 style layer normalization module. Args: hidden_size (`int`): Size of the input hidden states. eps (`float`, `optional`, defaults to `1e-6`): A small value used for numerical stability to avoid dividing by zero. """ def __init__(self, hidden_size: int, eps: float = 1e-6): """ Construct a layernorm module in the T5 style. No bias and no subtraction of mean. 
""" super().__init__() self.weight = nn.Parameter(torch.ones(hidden_size)) self.variance_epsilon = eps def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: # T5 uses a layer_norm which only scales and doesn't shift, which is also known as Root Mean # Square Layer Normalization https://huggingface.co/papers/1910.07467 thus variance is calculated # w/o mean and there is no bias. Additionally we want to make sure that the accumulation for # half-precision inputs is done in fp32 variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) # convert into half-precision if necessary if self.weight.dtype in [torch.float16, torch.bfloat16]: hidden_states = hidden_states.to(self.weight.dtype) return self.weight * hidden_states class NewGELUActivation(nn.Module): """ Implementation of the GELU activation function currently in Google BERT repo (identical to OpenAI GPT). Also see the Gaussian Error Linear Units paper: https://huggingface.co/papers/1606.08415 """ def forward(self, input: torch.Tensor) -> torch.Tensor: return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) class T5FiLMLayer(nn.Module): """ T5 style FiLM Layer. Args: in_features (`int`): Number of input features. out_features (`int`): Number of output features. """ def __init__(self, in_features: int, out_features: int): super().__init__() self.scale_bias = nn.Linear(in_features, out_features * 2, bias=False) def forward(self, x: torch.Tensor, conditioning_emb: torch.Tensor) -> torch.Tensor: emb = self.scale_bias(conditioning_emb) scale, shift = torch.chunk(emb, 2, -1) x = x * (1 + scale) + shift return x
# Source file: diffusers/src/diffusers/models/transformers/t5_film_transformer.py
# Copyright 2025 OmniGen team and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from typing import Dict, List, Optional, Tuple, Union import torch import torch.nn as nn import torch.nn.functional as F from ...configuration_utils import ConfigMixin, register_to_config from ...utils import logging from ..attention_processor import Attention from ..embeddings import TimestepEmbedding, Timesteps, get_2d_sincos_pos_embed from ..modeling_outputs import Transformer2DModelOutput from ..modeling_utils import ModelMixin from ..normalization import AdaLayerNorm, RMSNorm logger = logging.get_logger(__name__) # pylint: disable=invalid-name class OmniGenFeedForward(nn.Module): def __init__(self, hidden_size: int, intermediate_size: int): super().__init__() self.gate_up_proj = nn.Linear(hidden_size, 2 * intermediate_size, bias=False) self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False) self.activation_fn = nn.SiLU() def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: up_states = self.gate_up_proj(hidden_states) gate, up_states = up_states.chunk(2, dim=-1) up_states = up_states * self.activation_fn(gate) return self.down_proj(up_states) class OmniGenPatchEmbed(nn.Module): def __init__( self, patch_size: int = 2, in_channels: int = 4, embed_dim: int = 768, bias: bool = True, interpolation_scale: float = 1, pos_embed_max_size: int = 192, base_size: int = 64, ): super().__init__() self.output_image_proj = nn.Conv2d( in_channels, embed_dim, kernel_size=(patch_size, patch_size), stride=patch_size, bias=bias ) self.input_image_proj = nn.Conv2d( in_channels, embed_dim, kernel_size=(patch_size, patch_size), stride=patch_size, bias=bias ) self.patch_size = patch_size self.interpolation_scale = interpolation_scale self.pos_embed_max_size = pos_embed_max_size pos_embed = get_2d_sincos_pos_embed( embed_dim, self.pos_embed_max_size, base_size=base_size, interpolation_scale=self.interpolation_scale, output_type="pt", ) self.register_buffer("pos_embed", pos_embed.float().unsqueeze(0), persistent=True) def _cropped_pos_embed(self, height, width): """Crops positional embeddings for SD3 compatibility.""" if self.pos_embed_max_size is None: raise ValueError("`pos_embed_max_size` must be set for cropping.") height = height // self.patch_size width = width // self.patch_size if height > self.pos_embed_max_size: raise ValueError( f"Height ({height}) cannot be greater than `pos_embed_max_size`: {self.pos_embed_max_size}." ) if width > self.pos_embed_max_size: raise ValueError( f"Width ({width}) cannot be greater than `pos_embed_max_size`: {self.pos_embed_max_size}." 
) top = (self.pos_embed_max_size - height) // 2 left = (self.pos_embed_max_size - width) // 2 spatial_pos_embed = self.pos_embed.reshape(1, self.pos_embed_max_size, self.pos_embed_max_size, -1) spatial_pos_embed = spatial_pos_embed[:, top : top + height, left : left + width, :] spatial_pos_embed = spatial_pos_embed.reshape(1, -1, spatial_pos_embed.shape[-1]) return spatial_pos_embed def _patch_embeddings(self, hidden_states: torch.Tensor, is_input_image: bool) -> torch.Tensor: if is_input_image: hidden_states = self.input_image_proj(hidden_states) else: hidden_states = self.output_image_proj(hidden_states) hidden_states = hidden_states.flatten(2).transpose(1, 2) return hidden_states def forward( self, hidden_states: torch.Tensor, is_input_image: bool, padding_latent: torch.Tensor = None ) -> torch.Tensor: if isinstance(hidden_states, list): if padding_latent is None: padding_latent = [None] * len(hidden_states) patched_latents = [] for sub_latent, padding in zip(hidden_states, padding_latent): height, width = sub_latent.shape[-2:] sub_latent = self._patch_embeddings(sub_latent, is_input_image) pos_embed = self._cropped_pos_embed(height, width) sub_latent = sub_latent + pos_embed if padding is not None: sub_latent = torch.cat([sub_latent, padding.to(sub_latent.device)], dim=-2) patched_latents.append(sub_latent) else: height, width = hidden_states.shape[-2:] pos_embed = self._cropped_pos_embed(height, width) hidden_states = self._patch_embeddings(hidden_states, is_input_image) patched_latents = hidden_states + pos_embed return patched_latents class OmniGenSuScaledRotaryEmbedding(nn.Module): def __init__( self, dim, max_position_embeddings=131072, original_max_position_embeddings=4096, base=10000, rope_scaling=None ): super().__init__() self.dim = dim self.max_position_embeddings = max_position_embeddings self.base = base inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2, dtype=torch.int64).float() / self.dim)) self.register_buffer("inv_freq", tensor=inv_freq, persistent=False) self.short_factor = rope_scaling["short_factor"] self.long_factor = rope_scaling["long_factor"] self.original_max_position_embeddings = original_max_position_embeddings def forward(self, hidden_states, position_ids): seq_len = torch.max(position_ids) + 1 if seq_len > self.original_max_position_embeddings: ext_factors = torch.tensor(self.long_factor, dtype=torch.float32, device=hidden_states.device) else: ext_factors = torch.tensor(self.short_factor, dtype=torch.float32, device=hidden_states.device) inv_freq_shape = ( torch.arange(0, self.dim, 2, dtype=torch.int64, device=hidden_states.device).float() / self.dim ) self.inv_freq = 1.0 / (ext_factors * self.base**inv_freq_shape) inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1) position_ids_expanded = position_ids[:, None, :].float() # Force float32 since bfloat16 loses precision on long contexts # See https://github.com/huggingface/transformers/pull/29285 device_type = hidden_states.device.type device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu" with torch.autocast(device_type=device_type, enabled=False): freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2) emb = torch.cat((freqs, freqs), dim=-1)[0] scale = self.max_position_embeddings / self.original_max_position_embeddings if scale <= 1.0: scaling_factor = 1.0 else: scaling_factor = math.sqrt(1 + math.log(scale) / math.log(self.original_max_position_embeddings)) cos = emb.cos() * 
scaling_factor sin = emb.sin() * scaling_factor return cos, sin class OmniGenAttnProcessor2_0: r""" Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is used in the OmniGen model. """ def __init__(self): if not hasattr(F, "scaled_dot_product_attention"): raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.") def __call__( self, attn: Attention, hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, image_rotary_emb: Optional[torch.Tensor] = None, ) -> torch.Tensor: batch_size, sequence_length, _ = hidden_states.shape # Get Query-Key-Value Pair query = attn.to_q(hidden_states) key = attn.to_k(encoder_hidden_states) value = attn.to_v(encoder_hidden_states) bsz, q_len, query_dim = query.size() inner_dim = key.shape[-1] head_dim = query_dim // attn.heads # Get key-value heads kv_heads = inner_dim // head_dim query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2) key = key.view(batch_size, -1, kv_heads, head_dim).transpose(1, 2) value = value.view(batch_size, -1, kv_heads, head_dim).transpose(1, 2) # Apply RoPE if needed if image_rotary_emb is not None: from ..embeddings import apply_rotary_emb query = apply_rotary_emb(query, image_rotary_emb, use_real_unbind_dim=-2) key = apply_rotary_emb(key, image_rotary_emb, use_real_unbind_dim=-2) hidden_states = F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask) hidden_states = hidden_states.transpose(1, 2).type_as(query) hidden_states = hidden_states.reshape(bsz, q_len, attn.out_dim) hidden_states = attn.to_out[0](hidden_states) return hidden_states class OmniGenBlock(nn.Module): def __init__( self, hidden_size: int, num_attention_heads: int, num_key_value_heads: int, intermediate_size: int, rms_norm_eps: float, ) -> None: super().__init__() self.input_layernorm = RMSNorm(hidden_size, eps=rms_norm_eps) self.self_attn = Attention( query_dim=hidden_size, cross_attention_dim=hidden_size, dim_head=hidden_size // num_attention_heads, heads=num_attention_heads, kv_heads=num_key_value_heads, bias=False, out_dim=hidden_size, out_bias=False, processor=OmniGenAttnProcessor2_0(), ) self.post_attention_layernorm = RMSNorm(hidden_size, eps=rms_norm_eps) self.mlp = OmniGenFeedForward(hidden_size, intermediate_size) def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, image_rotary_emb: torch.Tensor ) -> torch.Tensor: # 1. Attention norm_hidden_states = self.input_layernorm(hidden_states) attn_output = self.self_attn( hidden_states=norm_hidden_states, encoder_hidden_states=norm_hidden_states, attention_mask=attention_mask, image_rotary_emb=image_rotary_emb, ) hidden_states = hidden_states + attn_output # 2. Feed Forward norm_hidden_states = self.post_attention_layernorm(hidden_states) ff_output = self.mlp(norm_hidden_states) hidden_states = hidden_states + ff_output return hidden_states class OmniGenTransformer2DModel(ModelMixin, ConfigMixin): """ The Transformer model introduced in OmniGen (https://huggingface.co/papers/2409.11340). Parameters: in_channels (`int`, defaults to `4`): The number of channels in the input. patch_size (`int`, defaults to `2`): The size of the spatial patches to use in the patch embedding layer. hidden_size (`int`, defaults to `3072`): The dimensionality of the hidden layers in the model. rms_norm_eps (`float`, defaults to `1e-5`): Eps for RMSNorm layer. 
num_attention_heads (`int`, defaults to `32`): The number of heads to use for multi-head attention. num_key_value_heads (`int`, defaults to `32`): The number of heads to use for keys and values in multi-head attention. intermediate_size (`int`, defaults to `8192`): Dimension of the hidden layer in FeedForward layers. num_layers (`int`, default to `32`): The number of layers of transformer blocks to use. pad_token_id (`int`, default to `32000`): The id of the padding token. vocab_size (`int`, default to `32064`): The size of the vocabulary of the embedding vocabulary. rope_base (`int`, default to `10000`): The default theta value to use when creating RoPE. rope_scaling (`Dict`, optional): The scaling factors for the RoPE. Must contain `short_factor` and `long_factor`. pos_embed_max_size (`int`, default to `192`): The maximum size of the positional embeddings. time_step_dim (`int`, default to `256`): Output dimension of timestep embeddings. flip_sin_to_cos (`bool`, default to `True`): Whether to flip the sin and cos in the positional embeddings when preparing timestep embeddings. downscale_freq_shift (`int`, default to `0`): The frequency shift to use when downscaling the timestep embeddings. timestep_activation_fn (`str`, default to `silu`): The activation function to use for the timestep embeddings. """ _supports_gradient_checkpointing = True _no_split_modules = ["OmniGenBlock"] _skip_layerwise_casting_patterns = ["patch_embedding", "embed_tokens", "norm"] @register_to_config def __init__( self, in_channels: int = 4, patch_size: int = 2, hidden_size: int = 3072, rms_norm_eps: float = 1e-5, num_attention_heads: int = 32, num_key_value_heads: int = 32, intermediate_size: int = 8192, num_layers: int = 32, pad_token_id: int = 32000, vocab_size: int = 32064, max_position_embeddings: int = 131072, original_max_position_embeddings: int = 4096, rope_base: int = 10000, rope_scaling: Dict = None, pos_embed_max_size: int = 192, time_step_dim: int = 256, flip_sin_to_cos: bool = True, downscale_freq_shift: int = 0, timestep_activation_fn: str = "silu", ): super().__init__() self.in_channels = in_channels self.out_channels = in_channels self.patch_embedding = OmniGenPatchEmbed( patch_size=patch_size, in_channels=in_channels, embed_dim=hidden_size, pos_embed_max_size=pos_embed_max_size, ) self.time_proj = Timesteps(time_step_dim, flip_sin_to_cos, downscale_freq_shift) self.time_token = TimestepEmbedding(time_step_dim, hidden_size, timestep_activation_fn) self.t_embedder = TimestepEmbedding(time_step_dim, hidden_size, timestep_activation_fn) self.embed_tokens = nn.Embedding(vocab_size, hidden_size, pad_token_id) self.rope = OmniGenSuScaledRotaryEmbedding( hidden_size // num_attention_heads, max_position_embeddings=max_position_embeddings, original_max_position_embeddings=original_max_position_embeddings, base=rope_base, rope_scaling=rope_scaling, ) self.layers = nn.ModuleList( [ OmniGenBlock(hidden_size, num_attention_heads, num_key_value_heads, intermediate_size, rms_norm_eps) for _ in range(num_layers) ] ) self.norm = RMSNorm(hidden_size, eps=rms_norm_eps) self.norm_out = AdaLayerNorm(hidden_size, norm_elementwise_affine=False, norm_eps=1e-6, chunk_dim=1) self.proj_out = nn.Linear(hidden_size, patch_size * patch_size * self.out_channels, bias=True) self.gradient_checkpointing = False def _get_multimodal_embeddings( self, input_ids: torch.Tensor, input_img_latents: List[torch.Tensor], input_image_sizes: Dict ) -> Optional[torch.Tensor]: if input_ids is None: return None input_img_latents = 
[x.to(self.dtype) for x in input_img_latents] condition_tokens = self.embed_tokens(input_ids) input_img_inx = 0 input_image_tokens = self.patch_embedding(input_img_latents, is_input_image=True) for b_inx in input_image_sizes.keys(): for start_inx, end_inx in input_image_sizes[b_inx]: # replace the placeholder in text tokens with the image embedding. condition_tokens[b_inx, start_inx:end_inx] = input_image_tokens[input_img_inx].to( condition_tokens.dtype ) input_img_inx += 1 return condition_tokens def forward( self, hidden_states: torch.Tensor, timestep: Union[int, float, torch.FloatTensor], input_ids: torch.Tensor, input_img_latents: List[torch.Tensor], input_image_sizes: Dict[int, List[int]], attention_mask: torch.Tensor, position_ids: torch.Tensor, return_dict: bool = True, ) -> Union[Transformer2DModelOutput, Tuple[torch.Tensor]]: batch_size, num_channels, height, width = hidden_states.shape p = self.config.patch_size post_patch_height, post_patch_width = height // p, width // p # 1. Patch & Timestep & Conditional Embedding hidden_states = self.patch_embedding(hidden_states, is_input_image=False) num_tokens_for_output_image = hidden_states.size(1) timestep_proj = self.time_proj(timestep).type_as(hidden_states) time_token = self.time_token(timestep_proj).unsqueeze(1) temb = self.t_embedder(timestep_proj) condition_tokens = self._get_multimodal_embeddings(input_ids, input_img_latents, input_image_sizes) if condition_tokens is not None: hidden_states = torch.cat([condition_tokens, time_token, hidden_states], dim=1) else: hidden_states = torch.cat([time_token, hidden_states], dim=1) seq_length = hidden_states.size(1) position_ids = position_ids.view(-1, seq_length).long() # 2. Attention mask preprocessing if attention_mask is not None and attention_mask.dim() == 3: dtype = hidden_states.dtype min_dtype = torch.finfo(dtype).min attention_mask = (1 - attention_mask) * min_dtype attention_mask = attention_mask.unsqueeze(1).type_as(hidden_states) # 3. Rotary position embedding image_rotary_emb = self.rope(hidden_states, position_ids) # 4. Transformer blocks for block in self.layers: if torch.is_grad_enabled() and self.gradient_checkpointing: hidden_states = self._gradient_checkpointing_func( block, hidden_states, attention_mask, image_rotary_emb ) else: hidden_states = block(hidden_states, attention_mask=attention_mask, image_rotary_emb=image_rotary_emb) # 5. Output norm & projection hidden_states = self.norm(hidden_states) hidden_states = hidden_states[:, -num_tokens_for_output_image:] hidden_states = self.norm_out(hidden_states, temb=temb) hidden_states = self.proj_out(hidden_states) hidden_states = hidden_states.reshape(batch_size, post_patch_height, post_patch_width, p, p, -1) output = hidden_states.permute(0, 5, 1, 3, 2, 4).flatten(4, 5).flatten(2, 3) if not return_dict: return (output,) return Transformer2DModelOutput(sample=output)
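A quick way to sanity-check the patch bookkeeping at the end of `OmniGenTransformer2DModel.forward` is to reproduce the unpatchify reshape on a dummy tensor. The sketch below is illustrative only and not part of the diffusers source; it builds tokens with the same `(p, p, channels)` layout that `proj_out` emits and checks that the final `reshape`/`permute`/`flatten` chain recovers the original `(batch, channels, height, width)` latent.

```python
import torch

# Toy dimensions (assumed for illustration only).
batch_size, channels, height, width = 2, 4, 8, 8
p = 2  # patch_size
post_h, post_w = height // p, width // p

latent = torch.arange(batch_size * channels * height * width, dtype=torch.float32).reshape(
    batch_size, channels, height, width
)

# "Patchify": split the latent into non-overlapping p x p patches and flatten each patch
# into a token of size p * p * channels, matching the layout produced by proj_out.
tokens = (
    latent.reshape(batch_size, channels, post_h, p, post_w, p)
    .permute(0, 2, 4, 3, 5, 1)
    .reshape(batch_size, post_h * post_w, p * p * channels)
)

# "Unpatchify" exactly as in the last lines of OmniGenTransformer2DModel.forward.
recon = tokens.reshape(batch_size, post_h, post_w, p, p, -1)
recon = recon.permute(0, 5, 1, 3, 2, 4).flatten(4, 5).flatten(2, 3)

assert recon.shape == latent.shape
assert torch.equal(recon, latent)  # the reshape chain exactly inverts the patch layout above
```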
diffusers/src/diffusers/models/transformers/transformer_omnigen.py/0
{ "file_path": "diffusers/src/diffusers/models/transformers/transformer_omnigen.py", "repo_id": "diffusers", "token_count": 8867 }
163
# Copyright 2025 Alibaba DAMO-VILAB and The HuggingFace Team. All rights reserved. # Copyright 2025 The ModelScope Team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Any, Dict, List, Optional, Tuple, Union import torch import torch.nn as nn import torch.utils.checkpoint from ...configuration_utils import ConfigMixin, register_to_config from ...loaders import UNet2DConditionLoadersMixin from ...utils import BaseOutput, logging from ..activations import get_activation from ..attention_processor import ( ADDED_KV_ATTENTION_PROCESSORS, CROSS_ATTENTION_PROCESSORS, Attention, AttentionProcessor, AttnAddedKVProcessor, AttnProcessor, FusedAttnProcessor2_0, ) from ..embeddings import TimestepEmbedding, Timesteps from ..modeling_utils import ModelMixin from ..transformers.transformer_temporal import TransformerTemporalModel from .unet_3d_blocks import ( UNetMidBlock3DCrossAttn, get_down_block, get_up_block, ) logger = logging.get_logger(__name__) # pylint: disable=invalid-name @dataclass class UNet3DConditionOutput(BaseOutput): """ The output of [`UNet3DConditionModel`]. Args: sample (`torch.Tensor` of shape `(batch_size, num_channels, num_frames, height, width)`): The hidden states output conditioned on `encoder_hidden_states` input. Output of last layer of model. """ sample: torch.Tensor class UNet3DConditionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin): r""" A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample shaped output. This model inherits from [`ModelMixin`]. Check the superclass documentation for it's generic methods implemented for all models (such as downloading or saving). Parameters: sample_size (`int` or `Tuple[int, int]`, *optional*, defaults to `None`): Height and width of input/output sample. in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D")`): The tuple of downsample blocks to use. up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D")`): The tuple of upsample blocks to use. block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): The tuple of output channels for each block. layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. 
If `None`, normalization and activation layers is skipped in post-processing. norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. cross_attention_dim (`int`, *optional*, defaults to 1024): The dimension of the cross attention features. attention_head_dim (`int`, *optional*, defaults to 64): The dimension of the attention heads. num_attention_heads (`int`, *optional*): The number of attention heads. time_cond_proj_dim (`int`, *optional*, defaults to `None`): The dimension of `cond_proj` layer in the timestep embedding. """ _supports_gradient_checkpointing = False _skip_layerwise_casting_patterns = ["norm", "time_embedding"] @register_to_config def __init__( self, sample_size: Optional[int] = None, in_channels: int = 4, out_channels: int = 4, down_block_types: Tuple[str, ...] = ( "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D", ), up_block_types: Tuple[str, ...] = ( "UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", ), block_out_channels: Tuple[int, ...] = (320, 640, 1280, 1280), layers_per_block: int = 2, downsample_padding: int = 1, mid_block_scale_factor: float = 1, act_fn: str = "silu", norm_num_groups: Optional[int] = 32, norm_eps: float = 1e-5, cross_attention_dim: int = 1024, attention_head_dim: Union[int, Tuple[int]] = 64, num_attention_heads: Optional[Union[int, Tuple[int]]] = None, time_cond_proj_dim: Optional[int] = None, ): super().__init__() self.sample_size = sample_size if num_attention_heads is not None: raise NotImplementedError( "At the moment it is not possible to define the number of attention heads via `num_attention_heads` because of a naming issue as described in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131. Passing `num_attention_heads` will only be supported in diffusers v0.19." ) # If `num_attention_heads` is not defined (which is the case for most models) # it will default to `attention_head_dim`. This looks weird upon first reading it and it is. # The reason for this behavior is to correct for incorrectly named variables that were introduced # when this library was created. The incorrect naming was only discovered much later in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131 # Changing `attention_head_dim` to `num_attention_heads` for 40,000+ configurations is too backwards breaking # which is why we correct for the naming here. num_attention_heads = num_attention_heads or attention_head_dim # Check inputs if len(down_block_types) != len(up_block_types): raise ValueError( f"Must provide the same number of `down_block_types` as `up_block_types`. `down_block_types`: {down_block_types}. `up_block_types`: {up_block_types}." ) if len(block_out_channels) != len(down_block_types): raise ValueError( f"Must provide the same number of `block_out_channels` as `down_block_types`. `block_out_channels`: {block_out_channels}. `down_block_types`: {down_block_types}." ) if not isinstance(num_attention_heads, int) and len(num_attention_heads) != len(down_block_types): raise ValueError( f"Must provide the same number of `num_attention_heads` as `down_block_types`. `num_attention_heads`: {num_attention_heads}. `down_block_types`: {down_block_types}." 
) # input conv_in_kernel = 3 conv_out_kernel = 3 conv_in_padding = (conv_in_kernel - 1) // 2 self.conv_in = nn.Conv2d( in_channels, block_out_channels[0], kernel_size=conv_in_kernel, padding=conv_in_padding ) # time time_embed_dim = block_out_channels[0] * 4 self.time_proj = Timesteps(block_out_channels[0], True, 0) timestep_input_dim = block_out_channels[0] self.time_embedding = TimestepEmbedding( timestep_input_dim, time_embed_dim, act_fn=act_fn, cond_proj_dim=time_cond_proj_dim, ) self.transformer_in = TransformerTemporalModel( num_attention_heads=8, attention_head_dim=attention_head_dim, in_channels=block_out_channels[0], num_layers=1, norm_num_groups=norm_num_groups, ) # class embedding self.down_blocks = nn.ModuleList([]) self.up_blocks = nn.ModuleList([]) if isinstance(num_attention_heads, int): num_attention_heads = (num_attention_heads,) * len(down_block_types) # down output_channel = block_out_channels[0] for i, down_block_type in enumerate(down_block_types): input_channel = output_channel output_channel = block_out_channels[i] is_final_block = i == len(block_out_channels) - 1 down_block = get_down_block( down_block_type, num_layers=layers_per_block, in_channels=input_channel, out_channels=output_channel, temb_channels=time_embed_dim, add_downsample=not is_final_block, resnet_eps=norm_eps, resnet_act_fn=act_fn, resnet_groups=norm_num_groups, cross_attention_dim=cross_attention_dim, num_attention_heads=num_attention_heads[i], downsample_padding=downsample_padding, dual_cross_attention=False, ) self.down_blocks.append(down_block) # mid self.mid_block = UNetMidBlock3DCrossAttn( in_channels=block_out_channels[-1], temb_channels=time_embed_dim, resnet_eps=norm_eps, resnet_act_fn=act_fn, output_scale_factor=mid_block_scale_factor, cross_attention_dim=cross_attention_dim, num_attention_heads=num_attention_heads[-1], resnet_groups=norm_num_groups, dual_cross_attention=False, ) # count how many layers upsample the images self.num_upsamplers = 0 # up reversed_block_out_channels = list(reversed(block_out_channels)) reversed_num_attention_heads = list(reversed(num_attention_heads)) output_channel = reversed_block_out_channels[0] for i, up_block_type in enumerate(up_block_types): is_final_block = i == len(block_out_channels) - 1 prev_output_channel = output_channel output_channel = reversed_block_out_channels[i] input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] # add upsample block for all BUT final layer if not is_final_block: add_upsample = True self.num_upsamplers += 1 else: add_upsample = False up_block = get_up_block( up_block_type, num_layers=layers_per_block + 1, in_channels=input_channel, out_channels=output_channel, prev_output_channel=prev_output_channel, temb_channels=time_embed_dim, add_upsample=add_upsample, resnet_eps=norm_eps, resnet_act_fn=act_fn, resnet_groups=norm_num_groups, cross_attention_dim=cross_attention_dim, num_attention_heads=reversed_num_attention_heads[i], dual_cross_attention=False, resolution_idx=i, ) self.up_blocks.append(up_block) prev_output_channel = output_channel # out if norm_num_groups is not None: self.conv_norm_out = nn.GroupNorm( num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps ) self.conv_act = get_activation("silu") else: self.conv_norm_out = None self.conv_act = None conv_out_padding = (conv_out_kernel - 1) // 2 self.conv_out = nn.Conv2d( block_out_channels[0], out_channels, kernel_size=conv_out_kernel, padding=conv_out_padding ) @property # Copied from 
diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.attn_processors def attn_processors(self) -> Dict[str, AttentionProcessor]: r""" Returns: `dict` of attention processors: A dictionary containing all attention processors used in the model with indexed by its weight name. """ # set recursively processors = {} def fn_recursive_add_processors(name: str, module: torch.nn.Module, processors: Dict[str, AttentionProcessor]): if hasattr(module, "get_processor"): processors[f"{name}.processor"] = module.get_processor() for sub_name, child in module.named_children(): fn_recursive_add_processors(f"{name}.{sub_name}", child, processors) return processors for name, module in self.named_children(): fn_recursive_add_processors(name, module, processors) return processors # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_attention_slice def set_attention_slice(self, slice_size: Union[str, int, List[int]]) -> None: r""" Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. This is useful for saving some memory in exchange for a small decrease in speed. Args: slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`): When `"auto"`, input to the attention heads is halved, so attention is computed in two steps. If `"max"`, maximum amount of memory is saved by running only one slice at a time. If a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim` must be a multiple of `slice_size`. """ sliceable_head_dims = [] def fn_recursive_retrieve_sliceable_dims(module: torch.nn.Module): if hasattr(module, "set_attention_slice"): sliceable_head_dims.append(module.sliceable_head_dim) for child in module.children(): fn_recursive_retrieve_sliceable_dims(child) # retrieve number of attention layers for module in self.children(): fn_recursive_retrieve_sliceable_dims(module) num_sliceable_layers = len(sliceable_head_dims) if slice_size == "auto": # half the attention head size is usually a good trade-off between # speed and memory slice_size = [dim // 2 for dim in sliceable_head_dims] elif slice_size == "max": # make smallest slice possible slice_size = num_sliceable_layers * [1] slice_size = num_sliceable_layers * [slice_size] if not isinstance(slice_size, list) else slice_size if len(slice_size) != len(sliceable_head_dims): raise ValueError( f"You have provided {len(slice_size)}, but {self.config} has {len(sliceable_head_dims)} different" f" attention layers. Make sure to match `len(slice_size)` to be {len(sliceable_head_dims)}." ) for i in range(len(slice_size)): size = slice_size[i] dim = sliceable_head_dims[i] if size is not None and size > dim: raise ValueError(f"size {size} has to be smaller or equal to {dim}.") # Recursively walk through all the children. 
# Any children which exposes the set_attention_slice method # gets the message def fn_recursive_set_attention_slice(module: torch.nn.Module, slice_size: List[int]): if hasattr(module, "set_attention_slice"): module.set_attention_slice(slice_size.pop()) for child in module.children(): fn_recursive_set_attention_slice(child, slice_size) reversed_slice_size = list(reversed(slice_size)) for module in self.children(): fn_recursive_set_attention_slice(module, reversed_slice_size) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_attn_processor def set_attn_processor(self, processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]]): r""" Sets the attention processor to use to compute attention. Parameters: processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`): The instantiated processor class or a dictionary of processor classes that will be set as the processor for **all** `Attention` layers. If `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. """ count = len(self.attn_processors.keys()) if isinstance(processor, dict) and len(processor) != count: raise ValueError( f"A dict of processors was passed, but the number of processors {len(processor)} does not match the" f" number of attention layers: {count}. Please make sure to pass {count} processor classes." ) def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor): if hasattr(module, "set_processor"): if not isinstance(processor, dict): module.set_processor(processor) else: module.set_processor(processor.pop(f"{name}.processor")) for sub_name, child in module.named_children(): fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor) for name, module in self.named_children(): fn_recursive_attn_processor(name, module, processor) def enable_forward_chunking(self, chunk_size: Optional[int] = None, dim: int = 0) -> None: """ Sets the attention processor to use [feed forward chunking](https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers). Parameters: chunk_size (`int`, *optional*): The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually over each tensor of dim=`dim`. dim (`int`, *optional*, defaults to `0`): The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length). 
""" if dim not in [0, 1]: raise ValueError(f"Make sure to set `dim` to either 0 or 1, not {dim}") # By default chunk size is 1 chunk_size = chunk_size or 1 def fn_recursive_feed_forward(module: torch.nn.Module, chunk_size: int, dim: int): if hasattr(module, "set_chunk_feed_forward"): module.set_chunk_feed_forward(chunk_size=chunk_size, dim=dim) for child in module.children(): fn_recursive_feed_forward(child, chunk_size, dim) for module in self.children(): fn_recursive_feed_forward(module, chunk_size, dim) def disable_forward_chunking(self): def fn_recursive_feed_forward(module: torch.nn.Module, chunk_size: int, dim: int): if hasattr(module, "set_chunk_feed_forward"): module.set_chunk_feed_forward(chunk_size=chunk_size, dim=dim) for child in module.children(): fn_recursive_feed_forward(child, chunk_size, dim) for module in self.children(): fn_recursive_feed_forward(module, None, 0) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.set_default_attn_processor def set_default_attn_processor(self): """ Disables custom attention processors and sets the default attention implementation. """ if all(proc.__class__ in ADDED_KV_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): processor = AttnAddedKVProcessor() elif all(proc.__class__ in CROSS_ATTENTION_PROCESSORS for proc in self.attn_processors.values()): processor = AttnProcessor() else: raise ValueError( f"Cannot call `set_default_attn_processor` when attention processors are of type {next(iter(self.attn_processors.values()))}" ) self.set_attn_processor(processor) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.enable_freeu def enable_freeu(self, s1, s2, b1, b2): r"""Enables the FreeU mechanism from https://huggingface.co/papers/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. Args: s1 (`float`): Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process. s2 (`float`): Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process. b1 (`float`): Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (`float`): Scaling factor for stage 2 to amplify the contributions of backbone features. """ for i, upsample_block in enumerate(self.up_blocks): setattr(upsample_block, "s1", s1) setattr(upsample_block, "s2", s2) setattr(upsample_block, "b1", b1) setattr(upsample_block, "b2", b2) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.disable_freeu def disable_freeu(self): """Disables the FreeU mechanism.""" freeu_keys = {"s1", "s2", "b1", "b2"} for i, upsample_block in enumerate(self.up_blocks): for k in freeu_keys: if hasattr(upsample_block, k) or getattr(upsample_block, k, None) is not None: setattr(upsample_block, k, None) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.fuse_qkv_projections def fuse_qkv_projections(self): """ Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused. 
<Tip warning={true}> This API is 🧪 experimental. </Tip> """ self.original_attn_processors = None for _, attn_processor in self.attn_processors.items(): if "Added" in str(attn_processor.__class__.__name__): raise ValueError("`fuse_qkv_projections()` is not supported for models having added KV projections.") self.original_attn_processors = self.attn_processors for module in self.modules(): if isinstance(module, Attention): module.fuse_projections(fuse=True) self.set_attn_processor(FusedAttnProcessor2_0()) # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.unfuse_qkv_projections def unfuse_qkv_projections(self): """Disables the fused QKV projection if enabled. <Tip warning={true}> This API is 🧪 experimental. </Tip> """ if self.original_attn_processors is not None: self.set_attn_processor(self.original_attn_processors) def forward( self, sample: torch.Tensor, timestep: Union[torch.Tensor, float, int], encoder_hidden_states: torch.Tensor, class_labels: Optional[torch.Tensor] = None, timestep_cond: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, cross_attention_kwargs: Optional[Dict[str, Any]] = None, down_block_additional_residuals: Optional[Tuple[torch.Tensor]] = None, mid_block_additional_residual: Optional[torch.Tensor] = None, return_dict: bool = True, ) -> Union[UNet3DConditionOutput, Tuple[torch.Tensor]]: r""" The [`UNet3DConditionModel`] forward method. Args: sample (`torch.Tensor`): The noisy input tensor with the following shape `(batch, num_channels, num_frames, height, width`. timestep (`torch.Tensor` or `float` or `int`): The number of timesteps to denoise an input. encoder_hidden_states (`torch.Tensor`): The encoder hidden states with shape `(batch, sequence_length, feature_dim)`. class_labels (`torch.Tensor`, *optional*, defaults to `None`): Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond: (`torch.Tensor`, *optional*, defaults to `None`): Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed through the `self.time_embedding` layer to obtain the timestep embeddings. attention_mask (`torch.Tensor`, *optional*, defaults to `None`): An attention mask of shape `(batch, key_tokens)` is applied to `encoder_hidden_states`. If `1` the mask is kept, otherwise if `0` it is discarded. Mask will be converted into a bias, which adds large negative values to the attention scores corresponding to "discard" tokens. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). down_block_additional_residuals: (`tuple` of `torch.Tensor`, *optional*): A tuple of tensors that if specified are added to the residuals of down unet blocks. mid_block_additional_residual: (`torch.Tensor`, *optional*): A tensor that if specified is added to the residual of the middle unet block. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~models.unets.unet_3d_condition.UNet3DConditionOutput`] instead of a plain tuple. cross_attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the [`AttnProcessor`]. 
Returns: [`~models.unets.unet_3d_condition.UNet3DConditionOutput`] or `tuple`: If `return_dict` is True, an [`~models.unets.unet_3d_condition.UNet3DConditionOutput`] is returned, otherwise a `tuple` is returned where the first element is the sample tensor. """ # By default samples have to be AT least a multiple of the overall upsampling factor. # The overall upsampling factor is equal to 2 ** (# num of upsampling layears). # However, the upsampling interpolation output size can be forced to fit any upsampling size # on the fly if necessary. default_overall_up_factor = 2**self.num_upsamplers # upsample size should be forwarded when sample is not a multiple of `default_overall_up_factor` forward_upsample_size = False upsample_size = None if any(s % default_overall_up_factor != 0 for s in sample.shape[-2:]): logger.info("Forward upsample size to force interpolation output size.") forward_upsample_size = True # prepare attention_mask if attention_mask is not None: attention_mask = (1 - attention_mask.to(sample.dtype)) * -10000.0 attention_mask = attention_mask.unsqueeze(1) # 1. time timesteps = timestep if not torch.is_tensor(timesteps): # TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensors if you can # This would be a good case for the `match` statement (Python 3.10+) is_mps = sample.device.type == "mps" is_npu = sample.device.type == "npu" if isinstance(timestep, float): dtype = torch.float32 if (is_mps or is_npu) else torch.float64 else: dtype = torch.int32 if (is_mps or is_npu) else torch.int64 timesteps = torch.tensor([timesteps], dtype=dtype, device=sample.device) elif len(timesteps.shape) == 0: timesteps = timesteps[None].to(sample.device) # broadcast to batch dimension in a way that's compatible with ONNX/Core ML num_frames = sample.shape[2] timesteps = timesteps.expand(sample.shape[0]) t_emb = self.time_proj(timesteps) # timesteps does not contain any weights and will always return f32 tensors # but time_embedding might actually be running in fp16. so we need to cast here. # there might be better ways to encapsulate this. t_emb = t_emb.to(dtype=self.dtype) emb = self.time_embedding(t_emb, timestep_cond) emb = emb.repeat_interleave(num_frames, dim=0, output_size=emb.shape[0] * num_frames) encoder_hidden_states = encoder_hidden_states.repeat_interleave( num_frames, dim=0, output_size=encoder_hidden_states.shape[0] * num_frames ) # 2. pre-process sample = sample.permute(0, 2, 1, 3, 4).reshape((sample.shape[0] * num_frames, -1) + sample.shape[3:]) sample = self.conv_in(sample) sample = self.transformer_in( sample, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, return_dict=False, )[0] # 3. 
down down_block_res_samples = (sample,) for downsample_block in self.down_blocks: if hasattr(downsample_block, "has_cross_attention") and downsample_block.has_cross_attention: sample, res_samples = downsample_block( hidden_states=sample, temb=emb, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, ) else: sample, res_samples = downsample_block(hidden_states=sample, temb=emb, num_frames=num_frames) down_block_res_samples += res_samples if down_block_additional_residuals is not None: new_down_block_res_samples = () for down_block_res_sample, down_block_additional_residual in zip( down_block_res_samples, down_block_additional_residuals ): down_block_res_sample = down_block_res_sample + down_block_additional_residual new_down_block_res_samples += (down_block_res_sample,) down_block_res_samples = new_down_block_res_samples # 4. mid if self.mid_block is not None: sample = self.mid_block( sample, emb, encoder_hidden_states=encoder_hidden_states, attention_mask=attention_mask, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, ) if mid_block_additional_residual is not None: sample = sample + mid_block_additional_residual # 5. up for i, upsample_block in enumerate(self.up_blocks): is_final_block = i == len(self.up_blocks) - 1 res_samples = down_block_res_samples[-len(upsample_block.resnets) :] down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] # if we have not reached the final block and need to forward the # upsample size, we do it here if not is_final_block and forward_upsample_size: upsample_size = down_block_res_samples[-1].shape[2:] if hasattr(upsample_block, "has_cross_attention") and upsample_block.has_cross_attention: sample = upsample_block( hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, encoder_hidden_states=encoder_hidden_states, upsample_size=upsample_size, attention_mask=attention_mask, num_frames=num_frames, cross_attention_kwargs=cross_attention_kwargs, ) else: sample = upsample_block( hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size, num_frames=num_frames, ) # 6. post-process if self.conv_norm_out: sample = self.conv_norm_out(sample) sample = self.conv_act(sample) sample = self.conv_out(sample) # reshape to (batch, channel, framerate, width, height) sample = sample[None, :].reshape((-1, num_frames) + sample.shape[1:]).permute(0, 2, 1, 3, 4) if not return_dict: return (sample,) return UNet3DConditionOutput(sample=sample)
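For reference, `UNet3DConditionModel` can be exercised end to end with a toy configuration. The sketch below is a minimal smoke test under assumed hyperparameters (tiny channel counts and spatial size chosen purely for illustration, not a trained checkpoint); it only demonstrates the expected input and output shapes of `forward`.

```python
import torch
from diffusers import UNet3DConditionModel

# Toy configuration (assumption: chosen small enough to run on CPU, not a real checkpoint).
unet = UNet3DConditionModel(
    sample_size=32,
    in_channels=4,
    out_channels=4,
    down_block_types=("CrossAttnDownBlock3D", "DownBlock3D"),
    up_block_types=("UpBlock3D", "CrossAttnUpBlock3D"),
    block_out_channels=(32, 64),
    layers_per_block=1,
    norm_num_groups=8,
    cross_attention_dim=32,
    attention_head_dim=8,
)

batch, frames = 1, 2
sample = torch.randn(batch, 4, frames, 32, 32)       # (batch, channels, num_frames, height, width)
encoder_hidden_states = torch.randn(batch, 77, 32)   # (batch, sequence_length, cross_attention_dim)

with torch.no_grad():
    out = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states).sample

print(out.shape)  # expected: torch.Size([1, 4, 2, 32, 32])
```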
diffusers/src/diffusers/models/unets/unet_3d_condition.py/0
{ "file_path": "diffusers/src/diffusers/models/unets/unet_3d_condition.py", "repo_id": "diffusers", "token_count": 14999 }
164
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import html from typing import List, Optional, Union import regex as re import torch from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5TokenizerFast from ...configuration_utils import FrozenDict from ...image_processor import VaeImageProcessor from ...loaders import FluxLoraLoaderMixin, TextualInversionLoaderMixin from ...models import AutoencoderKL from ...utils import USE_PEFT_BACKEND, is_ftfy_available, logging, scale_lora_layers, unscale_lora_layers from ..modular_pipeline import ModularPipelineBlocks, PipelineState from ..modular_pipeline_utils import ComponentSpec, ConfigSpec, InputParam, OutputParam from .modular_pipeline import FluxModularPipeline if is_ftfy_available(): import ftfy logger = logging.get_logger(__name__) # pylint: disable=invalid-name def basic_clean(text): text = ftfy.fix_text(text) text = html.unescape(html.unescape(text)) return text.strip() def whitespace_clean(text): text = re.sub(r"\s+", " ", text) text = text.strip() return text def prompt_clean(text): text = whitespace_clean(basic_clean(text)) return text # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.retrieve_latents def retrieve_latents( encoder_output: torch.Tensor, generator: Optional[torch.Generator] = None, sample_mode: str = "sample" ): if hasattr(encoder_output, "latent_dist") and sample_mode == "sample": return encoder_output.latent_dist.sample(generator) elif hasattr(encoder_output, "latent_dist") and sample_mode == "argmax": return encoder_output.latent_dist.mode() elif hasattr(encoder_output, "latents"): return encoder_output.latents else: raise AttributeError("Could not access latents of provided encoder_output") class FluxVaeEncoderStep(ModularPipelineBlocks): model_name = "flux" @property def description(self) -> str: return "Vae Encoder step that encode the input image into a latent representation" @property def expected_components(self) -> List[ComponentSpec]: return [ ComponentSpec("vae", AutoencoderKL), ComponentSpec( "image_processor", VaeImageProcessor, config=FrozenDict({"vae_scale_factor": 16, "vae_latent_channels": 16}), default_creation_method="from_config", ), ] @property def inputs(self) -> List[InputParam]: return [ InputParam("image", required=True), InputParam("height"), InputParam("width"), InputParam("generator"), InputParam("dtype", type_hint=torch.dtype, description="Data type of model tensor inputs"), InputParam( "preprocess_kwargs", type_hint=Optional[dict], description="A kwargs dictionary that if specified is passed along to the `ImageProcessor` as defined under `self.image_processor` in [diffusers.image_processor.VaeImageProcessor]", ), ] @property def intermediate_outputs(self) -> List[OutputParam]: return [ OutputParam( "image_latents", type_hint=torch.Tensor, description="The latents representing the reference image for image-to-image/inpainting generation", ) ] @staticmethod # Copied from 
diffusers.pipelines.stable_diffusion_3.pipeline_stable_diffusion_3_inpaint.StableDiffusion3InpaintPipeline._encode_vae_image with self.vae->vae def _encode_vae_image(vae, image: torch.Tensor, generator: torch.Generator): if isinstance(generator, list): image_latents = [ retrieve_latents(vae.encode(image[i : i + 1]), generator=generator[i]) for i in range(image.shape[0]) ] image_latents = torch.cat(image_latents, dim=0) else: image_latents = retrieve_latents(vae.encode(image), generator=generator) image_latents = (image_latents - vae.config.shift_factor) * vae.config.scaling_factor return image_latents @torch.no_grad() def __call__(self, components: FluxModularPipeline, state: PipelineState) -> PipelineState: block_state = self.get_block_state(state) block_state.preprocess_kwargs = block_state.preprocess_kwargs or {} block_state.device = components._execution_device block_state.dtype = block_state.dtype if block_state.dtype is not None else components.vae.dtype block_state.image = components.image_processor.preprocess( block_state.image, height=block_state.height, width=block_state.width, **block_state.preprocess_kwargs ) block_state.image = block_state.image.to(device=block_state.device, dtype=block_state.dtype) block_state.batch_size = block_state.image.shape[0] # if generator is a list, make sure the length of it matches the length of images (both should be batch_size) if isinstance(block_state.generator, list) and len(block_state.generator) != block_state.batch_size: raise ValueError( f"You have passed a list of generators of length {len(block_state.generator)}, but requested an effective batch" f" size of {block_state.batch_size}. Make sure the batch size matches the length of the generators." ) block_state.image_latents = self._encode_vae_image( components.vae, image=block_state.image, generator=block_state.generator ) self.set_block_state(state, block_state) return components, state class FluxTextEncoderStep(ModularPipelineBlocks): model_name = "flux" @property def description(self) -> str: return "Text Encoder step that generate text_embeddings to guide the video generation" @property def expected_components(self) -> List[ComponentSpec]: return [ ComponentSpec("text_encoder", CLIPTextModel), ComponentSpec("tokenizer", CLIPTokenizer), ComponentSpec("text_encoder_2", T5EncoderModel), ComponentSpec("tokenizer_2", T5TokenizerFast), ] @property def expected_configs(self) -> List[ConfigSpec]: return [] @property def inputs(self) -> List[InputParam]: return [ InputParam("prompt"), InputParam("prompt_2"), InputParam("joint_attention_kwargs"), ] @property def intermediate_outputs(self) -> List[OutputParam]: return [ OutputParam( "prompt_embeds", type_hint=torch.Tensor, description="text embeddings used to guide the image generation", ), OutputParam( "pooled_prompt_embeds", type_hint=torch.Tensor, description="pooled text embeddings used to guide the image generation", ), OutputParam( "text_ids", type_hint=torch.Tensor, description="ids from the text sequence for RoPE", ), ] @staticmethod def check_inputs(block_state): for prompt in [block_state.prompt, block_state.prompt_2]: if prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` or `prompt_2` has to be of type `str` or `list` but is {type(prompt)}") @staticmethod def _get_t5_prompt_embeds( components, prompt: Union[str, List[str]], num_images_per_prompt: int, max_sequence_length: int, device: torch.device, ): dtype = components.text_encoder_2.dtype prompt = [prompt] if 
isinstance(prompt, str) else prompt batch_size = len(prompt) if isinstance(components, TextualInversionLoaderMixin): prompt = components.maybe_convert_prompt(prompt, components.tokenizer_2) text_inputs = components.tokenizer_2( prompt, padding="max_length", max_length=max_sequence_length, truncation=True, return_length=False, return_overflowing_tokens=False, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = components.tokenizer_2(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): removed_text = components.tokenizer_2.batch_decode(untruncated_ids[:, max_sequence_length - 1 : -1]) logger.warning( "The following part of your input was truncated because `max_sequence_length` is set to " f" {max_sequence_length} tokens: {removed_text}" ) prompt_embeds = components.text_encoder_2(text_input_ids.to(device), output_hidden_states=False)[0] prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) _, seq_len, _ = prompt_embeds.shape # duplicate text embeddings and attention mask for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) prompt_embeds = prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) return prompt_embeds @staticmethod def _get_clip_prompt_embeds( components, prompt: Union[str, List[str]], num_images_per_prompt: int, device: torch.device, ): prompt = [prompt] if isinstance(prompt, str) else prompt batch_size = len(prompt) if isinstance(components, TextualInversionLoaderMixin): prompt = components.maybe_convert_prompt(prompt, components.tokenizer) text_inputs = components.tokenizer( prompt, padding="max_length", max_length=components.tokenizer.model_max_length, truncation=True, return_overflowing_tokens=False, return_length=False, return_tensors="pt", ) text_input_ids = text_inputs.input_ids tokenizer_max_length = components.tokenizer.model_max_length untruncated_ids = components.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): removed_text = components.tokenizer.batch_decode(untruncated_ids[:, tokenizer_max_length - 1 : -1]) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {tokenizer_max_length} tokens: {removed_text}" ) prompt_embeds = components.text_encoder(text_input_ids.to(device), output_hidden_states=False) # Use pooled output of CLIPTextModel prompt_embeds = prompt_embeds.pooler_output prompt_embeds = prompt_embeds.to(dtype=components.text_encoder.dtype, device=device) # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt) prompt_embeds = prompt_embeds.view(batch_size * num_images_per_prompt, -1) return prompt_embeds @staticmethod def encode_prompt( components, prompt: Union[str, List[str]], prompt_2: Union[str, List[str]], device: Optional[torch.device] = None, num_images_per_prompt: int = 1, prompt_embeds: Optional[torch.FloatTensor] = None, pooled_prompt_embeds: Optional[torch.FloatTensor] = None, max_sequence_length: int = 512, lora_scale: Optional[float] = None, ): r""" Encodes the prompt into text encoder hidden states. 
Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in all text-encoders device: (`torch.device`): torch device num_images_per_prompt (`int`): number of images that should be generated per prompt prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. pooled_prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled text embeddings will be generated from `prompt` input argument. lora_scale (`float`, *optional*): A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. """ device = device or components._execution_device # set lora scale so that monkey patched LoRA # function of text encoder can correctly access it if lora_scale is not None and isinstance(components, FluxLoraLoaderMixin): components._lora_scale = lora_scale # dynamically adjust the LoRA scale if components.text_encoder is not None and USE_PEFT_BACKEND: scale_lora_layers(components.text_encoder, lora_scale) if components.text_encoder_2 is not None and USE_PEFT_BACKEND: scale_lora_layers(components.text_encoder_2, lora_scale) prompt = [prompt] if isinstance(prompt, str) else prompt if prompt_embeds is None: prompt_2 = prompt_2 or prompt prompt_2 = [prompt_2] if isinstance(prompt_2, str) else prompt_2 # We only use the pooled prompt output from the CLIPTextModel pooled_prompt_embeds = FluxTextEncoderStep._get_clip_prompt_embeds( components, prompt=prompt, device=device, num_images_per_prompt=num_images_per_prompt, ) prompt_embeds = FluxTextEncoderStep._get_t5_prompt_embeds( components, prompt=prompt_2, num_images_per_prompt=num_images_per_prompt, max_sequence_length=max_sequence_length, device=device, ) if components.text_encoder is not None: if isinstance(components, FluxLoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(components.text_encoder, lora_scale) if components.text_encoder_2 is not None: if isinstance(components, FluxLoraLoaderMixin) and USE_PEFT_BACKEND: # Retrieve the original scale by scaling back the LoRA layers unscale_lora_layers(components.text_encoder_2, lora_scale) dtype = components.text_encoder.dtype if components.text_encoder is not None else torch.bfloat16 text_ids = torch.zeros(prompt_embeds.shape[1], 3).to(device=device, dtype=dtype) return prompt_embeds, pooled_prompt_embeds, text_ids @torch.no_grad() def __call__(self, components: FluxModularPipeline, state: PipelineState) -> PipelineState: # Get inputs and intermediates block_state = self.get_block_state(state) self.check_inputs(block_state) block_state.device = components._execution_device # Encode input prompt block_state.text_encoder_lora_scale = ( block_state.joint_attention_kwargs.get("scale", None) if block_state.joint_attention_kwargs is not None else None ) (block_state.prompt_embeds, block_state.pooled_prompt_embeds, block_state.text_ids) = self.encode_prompt( components, prompt=block_state.prompt, prompt_2=None, prompt_embeds=None, pooled_prompt_embeds=None, device=block_state.device, num_images_per_prompt=1, # TODO: hardcoded for now. 
lora_scale=block_state.text_encoder_lora_scale, ) # Add outputs self.set_block_state(state, block_state) return components, state
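The only numerical transform `FluxVaeEncoderStep._encode_vae_image` applies on top of the VAE encoder is a shift-and-scale normalization of the latents. The standalone sketch below illustrates that normalization and its inverse; the shift and scale values are placeholders standing in for `vae.config.shift_factor` and `vae.config.scaling_factor`, which in the real pipeline come from the loaded VAE config.

```python
import torch

# Placeholder values (assumption): the real numbers are read from vae.config.
shift_factor, scaling_factor = 0.1159, 0.3611

# Pretend this is what retrieve_latents(vae.encode(image), generator) returned.
raw_latents = torch.randn(1, 16, 64, 64)

# Normalization applied by _encode_vae_image before the latents enter the denoising loop.
image_latents = (raw_latents - shift_factor) * scaling_factor

# A decode step would undo it before calling vae.decode.
recovered = image_latents / scaling_factor + shift_factor
assert torch.allclose(recovered, raw_latents, atol=1e-6)
```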
diffusers/src/diffusers/modular_pipelines/flux/encoders.py/0
{ "file_path": "diffusers/src/diffusers/modular_pipelines/flux/encoders.py", "repo_id": "diffusers", "token_count": 7338 }
165
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Any, List, Tuple import torch from ...configuration_utils import FrozenDict from ...guiders import ClassifierFreeGuidance from ...models import WanTransformer3DModel from ...schedulers import UniPCMultistepScheduler from ...utils import logging from ..modular_pipeline import ( BlockState, LoopSequentialPipelineBlocks, ModularPipelineBlocks, PipelineState, ) from ..modular_pipeline_utils import ComponentSpec, InputParam, OutputParam from .modular_pipeline import WanModularPipeline logger = logging.get_logger(__name__) # pylint: disable=invalid-name class WanLoopDenoiser(ModularPipelineBlocks): model_name = "wan" @property def expected_components(self) -> List[ComponentSpec]: return [ ComponentSpec( "guider", ClassifierFreeGuidance, config=FrozenDict({"guidance_scale": 5.0}), default_creation_method="from_config", ), ComponentSpec("transformer", WanTransformer3DModel), ] @property def description(self) -> str: return ( "Step within the denoising loop that denoise the latents with guidance. " "This block should be used to compose the `sub_blocks` attribute of a `LoopSequentialPipelineBlocks` " "object (e.g. `WanDenoiseLoopWrapper`)" ) @property def inputs(self) -> List[Tuple[str, Any]]: return [ InputParam("attention_kwargs"), ] @property def intermediate_inputs(self) -> List[str]: return [ InputParam( "latents", required=True, type_hint=torch.Tensor, description="The initial latents to use for the denoising process. Can be generated in prepare_latent step.", ), InputParam( "num_inference_steps", required=True, type_hint=int, description="The number of inference steps to use for the denoising process. Can be generated in set_timesteps step.", ), InputParam( kwargs_type="guider_input_fields", description=( "All conditional model inputs that need to be prepared with guider. " "It should contain prompt_embeds/negative_prompt_embeds. " "Please add `kwargs_type=guider_input_fields` to their parameter spec (`OutputParam`) when they are created and added to the pipeline state" ), ), ] @torch.no_grad() def __call__( self, components: WanModularPipeline, block_state: BlockState, i: int, t: torch.Tensor ) -> PipelineState: # Map the keys we'll see on each `guider_state_batch` (e.g. guider_state_batch.prompt_embeds) # to the corresponding (cond, uncond) fields on block_state. (e.g. block_state.prompt_embeds, block_state.negative_prompt_embeds) guider_input_fields = { "prompt_embeds": ("prompt_embeds", "negative_prompt_embeds"), } transformer_dtype = components.transformer.dtype components.guider.set_state(step=i, num_inference_steps=block_state.num_inference_steps, timestep=t) # Prepare mini‐batches according to guidance method and `guider_input_fields` # Each guider_state_batch will have .prompt_embeds, .time_ids, text_embeds, image_embeds. # e.g. 
for CFG, we prepare two batches: one for uncond, one for cond # for first batch, guider_state_batch.prompt_embeds correspond to block_state.prompt_embeds # for second batch, guider_state_batch.prompt_embeds correspond to block_state.negative_prompt_embeds guider_state = components.guider.prepare_inputs(block_state, guider_input_fields) # run the denoiser for each guidance batch for guider_state_batch in guider_state: components.guider.prepare_models(components.transformer) cond_kwargs = guider_state_batch.as_dict() cond_kwargs = {k: v for k, v in cond_kwargs.items() if k in guider_input_fields} prompt_embeds = cond_kwargs.pop("prompt_embeds") # Predict the noise residual # store the noise_pred in guider_state_batch so that we can apply guidance across all batches guider_state_batch.noise_pred = components.transformer( hidden_states=block_state.latents.to(transformer_dtype), timestep=t.flatten(), encoder_hidden_states=prompt_embeds, attention_kwargs=block_state.attention_kwargs, return_dict=False, )[0] components.guider.cleanup_models(components.transformer) # Perform guidance block_state.noise_pred, block_state.scheduler_step_kwargs = components.guider(guider_state) return components, block_state class WanLoopAfterDenoiser(ModularPipelineBlocks): model_name = "wan" @property def expected_components(self) -> List[ComponentSpec]: return [ ComponentSpec("scheduler", UniPCMultistepScheduler), ] @property def description(self) -> str: return ( "step within the denoising loop that update the latents. " "This block should be used to compose the `sub_blocks` attribute of a `LoopSequentialPipelineBlocks` " "object (e.g. `WanDenoiseLoopWrapper`)" ) @property def inputs(self) -> List[Tuple[str, Any]]: return [] @property def intermediate_inputs(self) -> List[str]: return [ InputParam("generator"), ] @property def intermediate_outputs(self) -> List[OutputParam]: return [OutputParam("latents", type_hint=torch.Tensor, description="The denoised latents")] @torch.no_grad() def __call__(self, components: WanModularPipeline, block_state: BlockState, i: int, t: torch.Tensor): # Perform scheduler step using the predicted output latents_dtype = block_state.latents.dtype block_state.latents = components.scheduler.step( block_state.noise_pred.float(), t, block_state.latents.float(), **block_state.scheduler_step_kwargs, return_dict=False, )[0] if block_state.latents.dtype != latents_dtype: block_state.latents = block_state.latents.to(latents_dtype) return components, block_state class WanDenoiseLoopWrapper(LoopSequentialPipelineBlocks): model_name = "wan" @property def description(self) -> str: return ( "Pipeline block that iteratively denoise the latents over `timesteps`. " "The specific steps with each iteration can be customized with `sub_blocks` attributes" ) @property def loop_expected_components(self) -> List[ComponentSpec]: return [ ComponentSpec( "guider", ClassifierFreeGuidance, config=FrozenDict({"guidance_scale": 5.0}), default_creation_method="from_config", ), ComponentSpec("scheduler", UniPCMultistepScheduler), ComponentSpec("transformer", WanTransformer3DModel), ] @property def loop_intermediate_inputs(self) -> List[InputParam]: return [ InputParam( "timesteps", required=True, type_hint=torch.Tensor, description="The timesteps to use for the denoising process. Can be generated in set_timesteps step.", ), InputParam( "num_inference_steps", required=True, type_hint=int, description="The number of inference steps to use for the denoising process. 
Can be generated in set_timesteps step.",
            ),
        ]

    @torch.no_grad()
    def __call__(self, components: WanModularPipeline, state: PipelineState) -> PipelineState:
        block_state = self.get_block_state(state)

        block_state.num_warmup_steps = max(
            len(block_state.timesteps) - block_state.num_inference_steps * components.scheduler.order, 0
        )

        with self.progress_bar(total=block_state.num_inference_steps) as progress_bar:
            for i, t in enumerate(block_state.timesteps):
                components, block_state = self.loop_step(components, block_state, i=i, t=t)
                if i == len(block_state.timesteps) - 1 or (
                    (i + 1) > block_state.num_warmup_steps and (i + 1) % components.scheduler.order == 0
                ):
                    progress_bar.update()

        self.set_block_state(state, block_state)

        return components, state


class WanDenoiseStep(WanDenoiseLoopWrapper):
    block_classes = [
        WanLoopDenoiser,
        WanLoopAfterDenoiser,
    ]
    block_names = ["denoiser", "after_denoiser"]

    @property
    def description(self) -> str:
        return (
            "Denoise step that iteratively denoises the latents. \n"
            "Its loop logic is defined in `WanDenoiseLoopWrapper.__call__` method \n"
            "At each iteration, it runs blocks defined in `sub_blocks` sequentially:\n"
            " - `WanLoopDenoiser`\n"
            " - `WanLoopAfterDenoiser`\n"
            "This block supports text2vid tasks."
        )
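Conceptually, the `ClassifierFreeGuidance` guider used by `WanLoopDenoiser` combines the conditional and unconditional noise predictions gathered inside the loop. The sketch below shows the standard classifier-free guidance formula on dummy tensors; it is a simplified stand-in rather than the actual `diffusers.guiders` implementation, and the latent shape is hypothetical.

```python
import torch

def cfg_combine(noise_pred_cond: torch.Tensor, noise_pred_uncond: torch.Tensor, guidance_scale: float = 5.0):
    # Standard classifier-free guidance: move the prediction away from the unconditional
    # direction, scaled by guidance_scale (5.0 is the default in the guider ComponentSpec above).
    return noise_pred_uncond + guidance_scale * (noise_pred_cond - noise_pred_uncond)

cond = torch.randn(1, 16, 9, 60, 104)   # hypothetical Wan video-latent-shaped prediction
uncond = torch.randn_like(cond)
guided = cfg_combine(cond, uncond)
print(guided.shape)
```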
diffusers/src/diffusers/modular_pipelines/wan/denoise.py/0
{ "file_path": "diffusers/src/diffusers/modular_pipelines/wan/denoise.py", "repo_id": "diffusers", "token_count": 4355 }
166
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Optional, Tuple, Union import torch import torch.utils.checkpoint from torch import nn from transformers import BertTokenizer from transformers.activations import QuickGELUActivation as QuickGELU from transformers.modeling_outputs import ( BaseModelOutputWithPastAndCrossAttentions, BaseModelOutputWithPooling, BaseModelOutputWithPoolingAndCrossAttentions, ) from transformers.models.blip_2.configuration_blip_2 import Blip2Config, Blip2VisionConfig from transformers.models.blip_2.modeling_blip_2 import ( Blip2Encoder, Blip2PreTrainedModel, Blip2QFormerAttention, Blip2QFormerIntermediate, Blip2QFormerOutput, ) from transformers.pytorch_utils import apply_chunking_to_forward from transformers.utils import ( logging, replace_return_docstrings, ) logger = logging.get_logger(__name__) # There is an implementation of Blip2 in `transformers` : https://github.com/huggingface/transformers/blob/main/src/transformers/models/blip_2/modeling_blip_2.py. # But it doesn't support getting multimodal embeddings. So, this module can be # replaced with a future `transformers` version supports that. class Blip2TextEmbeddings(nn.Module): """Construct the embeddings from word and position embeddings.""" def __init__(self, config): super().__init__() self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load # any TensorFlow checkpoint file self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout_prob) # position_ids (1, len position emb) is contiguous in memory and exported when serialized self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") self.config = config def forward( self, input_ids=None, position_ids=None, query_embeds=None, past_key_values_length=0, ): if input_ids is not None: seq_length = input_ids.size()[1] else: seq_length = 0 if position_ids is None: position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length].clone() if input_ids is not None: embeddings = self.word_embeddings(input_ids) if self.position_embedding_type == "absolute": position_embeddings = self.position_embeddings(position_ids) embeddings = embeddings + position_embeddings if query_embeds is not None: batch_size = embeddings.shape[0] # repeat the query embeddings for batch size query_embeds = query_embeds.repeat(batch_size, 1, 1) embeddings = torch.cat((query_embeds, embeddings), dim=1) else: embeddings = query_embeds embeddings = embeddings.to(query_embeds.dtype) embeddings = self.LayerNorm(embeddings) embeddings = self.dropout(embeddings) return 
embeddings # Copy-pasted from transformers.models.blip.modeling_blip.BlipVisionEmbeddings with Blip->Blip2 class Blip2VisionEmbeddings(nn.Module): def __init__(self, config: Blip2VisionConfig): super().__init__() self.config = config self.embed_dim = config.hidden_size self.image_size = config.image_size self.patch_size = config.patch_size self.class_embedding = nn.Parameter(torch.randn(1, 1, self.embed_dim)) self.patch_embedding = nn.Conv2d( in_channels=3, out_channels=self.embed_dim, kernel_size=self.patch_size, stride=self.patch_size, bias=False ) self.num_patches = (self.image_size // self.patch_size) ** 2 self.num_positions = self.num_patches + 1 self.position_embedding = nn.Parameter(torch.randn(1, self.num_positions, self.embed_dim)) def forward(self, pixel_values: torch.Tensor) -> torch.Tensor: batch_size = pixel_values.shape[0] target_dtype = self.patch_embedding.weight.dtype patch_embeds = self.patch_embedding(pixel_values.to(dtype=target_dtype)) # shape = [*, width, grid, grid] patch_embeds = patch_embeds.flatten(2).transpose(1, 2) class_embeds = self.class_embedding.expand(batch_size, 1, -1).to(target_dtype) embeddings = torch.cat([class_embeds, patch_embeds], dim=1) embeddings = embeddings + self.position_embedding[:, : embeddings.size(1), :].to(target_dtype) return embeddings # The Qformer encoder, which takes the visual embeddings, and the text input, to get multimodal embeddings class Blip2QFormerEncoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.layer = nn.ModuleList( [Blip2QFormerLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)] ) self.gradient_checkpointing = False def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=False, output_hidden_states=False, return_dict=True, query_length=0, ): all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions else None next_decoder_cache = () if use_cache else None for i in range(self.config.num_hidden_layers): layer_module = self.layer[i] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_head_mask = head_mask[i] if head_mask is not None else None past_key_value = past_key_values[i] if past_key_values is not None else None if getattr(self.config, "gradient_checkpointing", False) and torch.is_grad_enabled(): if use_cache: logger.warning( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
) use_cache = False layer_outputs = self._gradient_checkpointing_func( layer_module, hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions, query_length, ) else: layer_outputs = layer_module( hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions, query_length, ) hidden_states = layer_outputs[0] if use_cache: next_decoder_cache += (layer_outputs[-1],) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) if layer_module.has_cross_attention: all_cross_attentions = all_cross_attentions + (layer_outputs[2],) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [ hidden_states, next_decoder_cache, all_hidden_states, all_self_attentions, all_cross_attentions, ] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=next_decoder_cache, hidden_states=all_hidden_states, attentions=all_self_attentions, cross_attentions=all_cross_attentions, ) # The layers making up the Qformer encoder class Blip2QFormerLayer(nn.Module): def __init__(self, config, layer_idx): super().__init__() self.chunk_size_feed_forward = config.chunk_size_feed_forward self.seq_len_dim = 1 self.attention = Blip2QFormerAttention(config) self.layer_idx = layer_idx if layer_idx % config.cross_attention_frequency == 0: self.crossattention = Blip2QFormerAttention(config, is_cross_attention=True) self.has_cross_attention = True else: self.has_cross_attention = False self.intermediate = Blip2QFormerIntermediate(config) self.intermediate_query = Blip2QFormerIntermediate(config) self.output_query = Blip2QFormerOutput(config) self.output = Blip2QFormerOutput(config) def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_value=None, output_attentions=False, query_length=0, ): # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None self_attention_outputs = self.attention( hidden_states, attention_mask, head_mask, output_attentions=output_attentions, past_key_value=self_attn_past_key_value, ) attention_output = self_attention_outputs[0] outputs = self_attention_outputs[1:-1] present_key_value = self_attention_outputs[-1] if query_length > 0: query_attention_output = attention_output[:, :query_length, :] if self.has_cross_attention: if encoder_hidden_states is None: raise ValueError("encoder_hidden_states must be given for cross-attention layers") cross_attention_outputs = self.crossattention( query_attention_output, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions=output_attentions, ) query_attention_output = cross_attention_outputs[0] # add cross attentions if we output attention weights outputs = outputs + cross_attention_outputs[1:-1] layer_output = apply_chunking_to_forward( self.feed_forward_chunk_query, self.chunk_size_feed_forward, self.seq_len_dim, query_attention_output, ) if attention_output.shape[1] > query_length: layer_output_text = apply_chunking_to_forward( self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output[:, query_length:, :], ) layer_output = torch.cat([layer_output, layer_output_text], dim=1) else: layer_output = 
apply_chunking_to_forward( self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output, ) outputs = (layer_output,) + outputs outputs = outputs + (present_key_value,) return outputs def feed_forward_chunk(self, attention_output): intermediate_output = self.intermediate(attention_output) layer_output = self.output(intermediate_output, attention_output) return layer_output def feed_forward_chunk_query(self, attention_output): intermediate_output = self.intermediate_query(attention_output) layer_output = self.output_query(intermediate_output, attention_output) return layer_output # ProjLayer used to project the multimodal Blip2 embeddings to be used in the text encoder class ProjLayer(nn.Module): def __init__(self, in_dim, out_dim, hidden_dim, drop_p=0.1, eps=1e-12): super().__init__() # Dense1 -> Act -> Dense2 -> Drop -> Res -> Norm self.dense1 = nn.Linear(in_dim, hidden_dim) self.act_fn = QuickGELU() self.dense2 = nn.Linear(hidden_dim, out_dim) self.dropout = nn.Dropout(drop_p) self.LayerNorm = nn.LayerNorm(out_dim, eps=eps) def forward(self, x): x_in = x x = self.LayerNorm(x) x = self.dropout(self.dense2(self.act_fn(self.dense1(x)))) + x_in return x # Copy-pasted from transformers.models.blip.modeling_blip.BlipVisionModel with Blip->Blip2, BLIP->BLIP_2 class Blip2VisionModel(Blip2PreTrainedModel): main_input_name = "pixel_values" config_class = Blip2VisionConfig def __init__(self, config: Blip2VisionConfig): super().__init__(config) self.config = config embed_dim = config.hidden_size self.embeddings = Blip2VisionEmbeddings(config) self.pre_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) self.encoder = Blip2Encoder(config) self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) self.post_init() @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=Blip2VisionConfig) def forward( self, pixel_values: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, BaseModelOutputWithPooling]: r""" Returns: """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if pixel_values is None: raise ValueError("You have to specify pixel_values") hidden_states = self.embeddings(pixel_values) hidden_states = self.pre_layernorm(hidden_states) encoder_outputs = self.encoder( inputs_embeds=hidden_states, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = encoder_outputs[0] last_hidden_state = self.post_layernorm(last_hidden_state) pooled_output = last_hidden_state[:, 0, :] pooled_output = self.post_layernorm(pooled_output) if not return_dict: return (last_hidden_state, pooled_output) + encoder_outputs[1:] return BaseModelOutputWithPooling( last_hidden_state=last_hidden_state, pooler_output=pooled_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) def get_input_embeddings(self): return self.embeddings # Qformer model, used to get multimodal embeddings from the text and image inputs class Blip2QFormerModel(Blip2PreTrainedModel): """ Querying Transformer (Q-Former), used in BLIP-2. 
""" def __init__(self, config: Blip2Config): super().__init__(config) self.config = config self.embeddings = Blip2TextEmbeddings(config.qformer_config) self.visual_encoder = Blip2VisionModel(config.vision_config) self.query_tokens = nn.Parameter(torch.zeros(1, config.num_query_tokens, config.qformer_config.hidden_size)) if not hasattr(config, "tokenizer") or config.tokenizer is None: self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", truncation_side="right") else: self.tokenizer = BertTokenizer.from_pretrained(config.tokenizer, truncation_side="right") self.tokenizer.add_special_tokens({"bos_token": "[DEC]"}) self.proj_layer = ProjLayer( in_dim=config.qformer_config.hidden_size, out_dim=config.qformer_config.hidden_size, hidden_dim=config.qformer_config.hidden_size * 4, drop_p=0.1, eps=1e-12, ) self.encoder = Blip2QFormerEncoder(config.qformer_config) self.post_init() def get_input_embeddings(self): return self.embeddings.word_embeddings def set_input_embeddings(self, value): self.embeddings.word_embeddings = value def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads) def get_extended_attention_mask( self, attention_mask: torch.Tensor, input_shape: Tuple[int], device: torch.device, has_query: bool = False, ) -> torch.Tensor: """ Makes broadcastable attention and causal masks so that future and masked tokens are ignored. Arguments: attention_mask (`torch.Tensor`): Mask with ones indicating tokens to attend to, zeros for tokens to ignore. input_shape (`Tuple[int]`): The shape of the input to the model. device (`torch.device`): The device of the input to the model. Returns: `torch.Tensor` The extended attention mask, with a the same dtype as `attention_mask.dtype`. """ # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] # ourselves in which case we just need to make it broadcastable to all heads. if attention_mask.dim() == 3: extended_attention_mask = attention_mask[:, None, :, :] elif attention_mask.dim() == 2: # Provided a padding mask of dimensions [batch_size, seq_length] # - the model is an encoder, so make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length] extended_attention_mask = attention_mask[:, None, None, :] else: raise ValueError( "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format( input_shape, attention_mask.shape ) ) # Since attention_mask is 1.0 for positions we want to attend and 0.0 for # masked positions, this operation will create a tensor which is 0.0 for # positions we want to attend and -10000.0 for masked positions. # Since we are adding it to the raw scores before the softmax, this is # effectively the same as removing these entirely. extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 return extended_attention_mask def forward( self, text_input=None, image_input=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" encoder_hidden_states (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, `optional`): Sequence of hidden-states at the output of the last layer of the encoder. 
Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, `optional`): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. past_key_values (`tuple(tuple(torch.Tensor))` of length `config.n_layers` with each tuple having 4 tensors of: shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. use_cache (`bool`, `optional`): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). """ text = self.tokenizer(text_input, return_tensors="pt", padding=True) text = text.to(self.device) input_ids = text.input_ids batch_size = input_ids.shape[0] query_atts = torch.ones((batch_size, self.query_tokens.size()[1]), dtype=torch.long).to(self.device) attention_mask = torch.cat([query_atts, text.attention_mask], dim=1) output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict # past_key_values_length past_key_values_length = ( past_key_values[0][0].shape[2] - self.config.query_length if past_key_values is not None else 0 ) query_length = self.query_tokens.shape[1] embedding_output = self.embeddings( input_ids=input_ids, query_embeds=self.query_tokens, past_key_values_length=past_key_values_length, ) # embedding_output = self.layernorm(query_embeds) # embedding_output = self.dropout(embedding_output) input_shape = embedding_output.size()[:-1] batch_size, seq_length = input_shape device = embedding_output.device image_embeds_frozen = self.visual_encoder(image_input).last_hidden_state # image_embeds_frozen = torch.ones_like(image_embeds_frozen) encoder_hidden_states = image_embeds_frozen if attention_mask is None: attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] # ourselves in which case we just need to make it broadcastable to all heads. 
extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape, device) # If a 2D or 3D attention mask is provided for the cross-attention # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] if encoder_hidden_states is not None: if isinstance(encoder_hidden_states, list): encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[0].size() else: encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) if isinstance(encoder_attention_mask, list): encoder_extended_attention_mask = [self.invert_attention_mask(mask) for mask in encoder_attention_mask] elif encoder_attention_mask is None: encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: encoder_extended_attention_mask = None # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] head_mask = self.get_head_mask(head_mask, self.config.qformer_config.num_hidden_layers) encoder_outputs = self.encoder( embedding_output, attention_mask=extended_attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, query_length=query_length, ) sequence_output = encoder_outputs[0] pooled_output = sequence_output[:, 0, :] if not return_dict: return self.proj_layer(sequence_output[:, :query_length, :]) return BaseModelOutputWithPoolingAndCrossAttentions( last_hidden_state=sequence_output, pooler_output=pooled_output, past_key_values=encoder_outputs.past_key_values, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, cross_attentions=encoder_outputs.cross_attentions, )
diffusers/src/diffusers/pipelines/blip_diffusion/modeling_blip2.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/blip_diffusion/modeling_blip2.py", "repo_id": "diffusers", "token_count": 12021 }
167
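The Q-Former defined above is not consumed through the `transformers` BLIP-2 classes; the BLIP-Diffusion pipelines query it directly via `forward(image_input=..., text_input=..., return_dict=False)`. A minimal usage sketch, assuming the `Salesforce/blipdiffusion-controlnet` checkpoint and image URL taken from the ControlNet pipeline's example docstring in the next file:

```python
import torch
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image

pipe = BlipDiffusionControlNetPipeline.from_pretrained(
    "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16
).to("cuda")

# Preprocess a subject image the same way the pipeline does before it calls
# `get_query_embeddings` (dtype casting happens inside Blip2VisionEmbeddings).
style_image = load_image(
    "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg"
)
pixel_values = pipe.image_processor.preprocess(
    style_image, image_mean=pipe.config.mean, image_std=pipe.config.std, return_tensors="pt"
)["pixel_values"].to("cuda")

# With `return_dict=False` the Q-Former returns the projected query embeddings,
# which the pipeline later injects into the CLIP text encoder as context tokens.
with torch.no_grad():
    query_embeds = pipe.qformer(image_input=pixel_values, text_input=["flower"], return_dict=False)
print(query_embeds.shape)  # (batch_size, num_query_tokens, qformer hidden size)
```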
# Copyright 2025 Salesforce.com, inc. # Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import List, Optional, Union import PIL.Image import torch from transformers import CLIPTokenizer from ...models import AutoencoderKL, ControlNetModel, UNet2DConditionModel from ...schedulers import PNDMScheduler from ...utils import ( is_torch_xla_available, logging, replace_example_docstring, ) from ...utils.torch_utils import randn_tensor from ..blip_diffusion.blip_image_processing import BlipImageProcessor from ..blip_diffusion.modeling_blip2 import Blip2QFormerModel from ..blip_diffusion.modeling_ctx_clip import ContextCLIPTextModel from ..pipeline_utils import DeprecatedPipelineMixin, DiffusionPipeline, ImagePipelineOutput if is_torch_xla_available(): import torch_xla.core.xla_model as xm XLA_AVAILABLE = True else: XLA_AVAILABLE = False logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline >>> from diffusers.utils import load_image >>> from controlnet_aux import CannyDetector >>> import torch >>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( ... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 ... ).to("cuda") >>> style_subject = "flower" >>> tgt_subject = "teapot" >>> text_prompt = "on a marble table" >>> cldm_cond_image = load_image( ... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" ... ).resize((512, 512)) >>> canny = CannyDetector() >>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") >>> style_image = load_image( ... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" ... ) >>> guidance_scale = 7.5 >>> num_inference_steps = 50 >>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" >>> output = blip_diffusion_pipe( ... text_prompt, ... style_image, ... cldm_cond_image, ... style_subject, ... tgt_subject, ... guidance_scale=guidance_scale, ... num_inference_steps=num_inference_steps, ... neg_prompt=negative_prompt, ... height=512, ... width=512, ... ).images >>> output[0].save("image.png") ``` """ class BlipDiffusionControlNetPipeline(DeprecatedPipelineMixin, DiffusionPipeline): """ Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
Args: tokenizer ([`CLIPTokenizer`]): Tokenizer for the text encoder text_encoder ([`ContextCLIPTextModel`]): Text encoder to encode the text prompt vae ([`AutoencoderKL`]): VAE model to map the latents to the image unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the image embedding. scheduler ([`PNDMScheduler`]): A scheduler to be used in combination with `unet` to generate image latents. qformer ([`Blip2QFormerModel`]): QFormer model to get multi-modal embeddings from the text and image. controlnet ([`ControlNetModel`]): ControlNet model to get the conditioning image embedding. image_processor ([`BlipImageProcessor`]): Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, `optional`, defaults to 2): Position of the context token in the text encoder. """ _last_supported_version = "0.33.1" model_cpu_offload_seq = "qformer->text_encoder->unet->vae" def __init__( self, tokenizer: CLIPTokenizer, text_encoder: ContextCLIPTextModel, vae: AutoencoderKL, unet: UNet2DConditionModel, scheduler: PNDMScheduler, qformer: Blip2QFormerModel, controlnet: ControlNetModel, image_processor: BlipImageProcessor, ctx_begin_pos: int = 2, mean: List[float] = None, std: List[float] = None, ): super().__init__() self.register_modules( tokenizer=tokenizer, text_encoder=text_encoder, vae=vae, unet=unet, scheduler=scheduler, qformer=qformer, controlnet=controlnet, image_processor=image_processor, ) self.register_to_config(ctx_begin_pos=ctx_begin_pos, mean=mean, std=std) def get_query_embeddings(self, input_image, src_subject): return self.qformer(image_input=input_image, text_input=src_subject, return_dict=False) # from the original Blip Diffusion code, specifies the target subject and augments the prompt by repeating it def _build_prompt(self, prompts, tgt_subjects, prompt_strength=1.0, prompt_reps=20): rv = [] for prompt, tgt_subject in zip(prompts, tgt_subjects): prompt = f"a {tgt_subject} {prompt.strip()}" # a trick to amplify the prompt rv.append(", ".join([prompt] * int(prompt_strength * prompt_reps))) return rv # Copied from diffusers.pipelines.consistency_models.pipeline_consistency_models.ConsistencyModelPipeline.prepare_latents def prepare_latents(self, batch_size, num_channels, height, width, dtype, device, generator, latents=None): shape = (batch_size, num_channels, height, width) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device=device, dtype=dtype) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents def encode_prompt(self, query_embeds, prompt, device=None): device = device or self._execution_device # embeddings for prompt, with query_embeds as context max_len = self.text_encoder.text_model.config.max_position_embeddings max_len -= self.qformer.config.num_query_tokens tokenized_prompt = self.tokenizer( prompt, padding="max_length", truncation=True, max_length=max_len, return_tensors="pt", ).to(device) batch_size = query_embeds.shape[0] ctx_begin_pos = [self.config.ctx_begin_pos] * batch_size text_embeddings = self.text_encoder( input_ids=tokenized_prompt.input_ids, ctx_embeddings=query_embeds, ctx_begin_pos=ctx_begin_pos, )[0] return text_embeddings # Adapted from diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline.prepare_image def prepare_control_image( self, image, width, height, batch_size, num_images_per_prompt, device, dtype, do_classifier_free_guidance=False, ): image = self.image_processor.preprocess( image, size={"width": width, "height": height}, do_rescale=True, do_center_crop=False, do_normalize=False, return_tensors="pt", )["pixel_values"].to(device) image_batch_size = image.shape[0] if image_batch_size == 1: repeat_by = batch_size else: # image batch size is the same as prompt batch size repeat_by = num_images_per_prompt image = image.repeat_interleave(repeat_by, dim=0) image = image.to(device=device, dtype=dtype) if do_classifier_free_guidance: image = torch.cat([image] * 2) return image @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, prompt: List[str], reference_image: PIL.Image.Image, condtioning_image: PIL.Image.Image, source_subject_category: List[str], target_subject_category: List[str], latents: Optional[torch.Tensor] = None, guidance_scale: float = 7.5, height: int = 512, width: int = 512, num_inference_steps: int = 50, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, neg_prompt: Optional[str] = "", prompt_strength: float = 1.0, prompt_reps: int = 20, output_type: Optional[str] = "pil", return_dict: bool = True, ): """ Function invoked when calling the pipeline for generation. Args: prompt (`List[str]`): The prompt or prompts to guide the image generation. reference_image (`PIL.Image.Image`): The reference image to condition the generation on. condtioning_image (`PIL.Image.Image`): The conditioning canny edge image to condition the generation on. source_subject_category (`List[str]`): The source subject category. target_subject_category (`List[str]`): The target subject category. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by random sampling. guidance_scale (`float`, *optional*, defaults to 7.5): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. 
Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. height (`int`, *optional*, defaults to 512): The height of the generated image. width (`int`, *optional*, defaults to 512): The width of the generated image. seed (`int`, *optional*, defaults to 42): The seed to use for random generation. num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. neg_prompt (`str`, *optional*, defaults to ""): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). prompt_strength (`float`, *optional*, defaults to 1.0): The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps to amplify the prompt. prompt_reps (`int`, *optional*, defaults to 20): The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Examples: Returns: [`~pipelines.ImagePipelineOutput`] or `tuple` """ device = self._execution_device reference_image = self.image_processor.preprocess( reference_image, image_mean=self.config.mean, image_std=self.config.std, return_tensors="pt" )["pixel_values"] reference_image = reference_image.to(device) if isinstance(prompt, str): prompt = [prompt] if isinstance(source_subject_category, str): source_subject_category = [source_subject_category] if isinstance(target_subject_category, str): target_subject_category = [target_subject_category] batch_size = len(prompt) prompt = self._build_prompt( prompts=prompt, tgt_subjects=target_subject_category, prompt_strength=prompt_strength, prompt_reps=prompt_reps, ) query_embeds = self.get_query_embeddings(reference_image, source_subject_category) text_embeddings = self.encode_prompt(query_embeds, prompt, device) # 3. unconditional embedding do_classifier_free_guidance = guidance_scale > 1.0 if do_classifier_free_guidance: max_length = self.text_encoder.text_model.config.max_position_embeddings uncond_input = self.tokenizer( [neg_prompt] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt", ) uncond_embeddings = self.text_encoder( input_ids=uncond_input.input_ids.to(device), ctx_embeddings=None, )[0] # For classifier free guidance, we need to do two forward passes. 
# Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) scale_down_factor = 2 ** (len(self.unet.config.block_out_channels) - 1) latents = self.prepare_latents( batch_size=batch_size, num_channels=self.unet.config.in_channels, height=height // scale_down_factor, width=width // scale_down_factor, generator=generator, latents=latents, dtype=self.unet.dtype, device=device, ) # set timesteps extra_set_kwargs = {} self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) cond_image = self.prepare_control_image( image=condtioning_image, width=width, height=height, batch_size=batch_size, num_images_per_prompt=1, device=device, dtype=self.controlnet.dtype, do_classifier_free_guidance=do_classifier_free_guidance, ) for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)): # expand the latents if we are doing classifier free guidance do_classifier_free_guidance = guidance_scale > 1.0 latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents down_block_res_samples, mid_block_res_sample = self.controlnet( latent_model_input, t, encoder_hidden_states=text_embeddings, controlnet_cond=cond_image, return_dict=False, ) noise_pred = self.unet( latent_model_input, timestep=t, encoder_hidden_states=text_embeddings, down_block_additional_residuals=down_block_res_samples, mid_block_additional_residual=mid_block_res_sample, )["sample"] # perform guidance if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) latents = self.scheduler.step( noise_pred, t, latents, )["prev_sample"] if XLA_AVAILABLE: xm.mark_step() image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] image = self.image_processor.postprocess(image, output_type=output_type) # Offload all models self.maybe_free_model_hooks() if not return_dict: return (image,) return ImagePipelineOutput(images=image)
diffusers/src/diffusers/pipelines/controlnet/pipeline_controlnet_blip_diffusion.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/controlnet/pipeline_controlnet_blip_diffusion.py", "repo_id": "diffusers", "token_count": 7781 }
168
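One non-obvious detail in the pipeline above is `_build_prompt`, which prefixes the prompt with the target subject and then repeats it to amplify its effect. A tiny self-contained sketch of that behaviour (the input strings and numbers are chosen for illustration only):

```python
def build_prompt(prompt: str, tgt_subject: str, prompt_strength: float = 1.0, prompt_reps: int = 20) -> str:
    # Prefix with the target subject, then repeat the prompt to amplify it,
    # mirroring `_build_prompt` in BlipDiffusionControlNetPipeline.
    prompt = f"a {tgt_subject} {prompt.strip()}"
    return ", ".join([prompt] * int(prompt_strength * prompt_reps))


print(build_prompt("on a marble table", "teapot", prompt_strength=0.2, prompt_reps=20))
# a teapot on a marble table, a teapot on a marble table, a teapot on a marble table, a teapot on a marble table
```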
from typing import TYPE_CHECKING

from ....utils import DIFFUSERS_SLOW_IMPORT, _LazyModule

_import_structure = {
    "mel": ["Mel"],
    "pipeline_audio_diffusion": ["AudioDiffusionPipeline"],
}

if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT:
    from .mel import Mel
    from .pipeline_audio_diffusion import AudioDiffusionPipeline

else:
    import sys

    sys.modules[__name__] = _LazyModule(
        __name__,
        globals()["__file__"],
        _import_structure,
        module_spec=__spec__,
    )
diffusers/src/diffusers/pipelines/deprecated/audio_diffusion/__init__.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/deprecated/audio_diffusion/__init__.py", "repo_id": "diffusers", "token_count": 212 }
169
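This `__init__` only declares where the public names live; the submodules are imported lazily. A short sketch of how it is consumed in practice (nothing heavy is loaded until the names are resolved, unless `DIFFUSERS_SLOW_IMPORT` is set):

```python
# Resolving the attributes is what triggers the import of the underlying
# `mel` / `pipeline_audio_diffusion` submodules via `_LazyModule.__getattr__`.
from diffusers.pipelines.deprecated.audio_diffusion import AudioDiffusionPipeline, Mel

print(AudioDiffusionPipeline, Mel)
```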
from typing import TYPE_CHECKING

from ....utils import (
    DIFFUSERS_SLOW_IMPORT,
    OptionalDependencyNotAvailable,
    _LazyModule,
    get_objects_from_module,
    is_torch_available,
    is_transformers_available,
)

_dummy_objects = {}
_import_structure = {}

try:
    if not (is_transformers_available() and is_torch_available()):
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    from ....utils import dummy_torch_and_transformers_objects

    _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
else:
    _import_structure["pipeline_cycle_diffusion"] = ["CycleDiffusionPipeline"]
    _import_structure["pipeline_stable_diffusion_inpaint_legacy"] = ["StableDiffusionInpaintPipelineLegacy"]
    _import_structure["pipeline_stable_diffusion_model_editing"] = ["StableDiffusionModelEditingPipeline"]
    _import_structure["pipeline_stable_diffusion_paradigms"] = ["StableDiffusionParadigmsPipeline"]
    _import_structure["pipeline_stable_diffusion_pix2pix_zero"] = ["StableDiffusionPix2PixZeroPipeline"]

if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT:
    try:
        if not (is_transformers_available() and is_torch_available()):
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        from ....utils.dummy_torch_and_transformers_objects import *
    else:
        from .pipeline_cycle_diffusion import CycleDiffusionPipeline
        from .pipeline_stable_diffusion_inpaint_legacy import StableDiffusionInpaintPipelineLegacy
        from .pipeline_stable_diffusion_model_editing import StableDiffusionModelEditingPipeline
        from .pipeline_stable_diffusion_paradigms import StableDiffusionParadigmsPipeline
        from .pipeline_stable_diffusion_pix2pix_zero import StableDiffusionPix2PixZeroPipeline

else:
    import sys

    sys.modules[__name__] = _LazyModule(
        __name__,
        globals()["__file__"],
        _import_structure,
        module_spec=__spec__,
    )

    for name, value in _dummy_objects.items():
        setattr(sys.modules[__name__], name, value)
diffusers/src/diffusers/pipelines/deprecated/stable_diffusion_variants/__init__.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/deprecated/stable_diffusion_variants/__init__.py", "repo_id": "diffusers", "token_count": 817 }
170
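The extra `try/except OptionalDependencyNotAvailable` layer exists so that importing the package never fails when `torch` or `transformers` are missing; the public names are instead backed by dummy objects that raise a helpful error on use. A simplified, self-contained sketch of that pattern (the exception class, helper, and dummy below are illustrative stand-ins, not the actual `diffusers.utils` implementations):

```python
class OptionalDependencyNotAvailable(Exception):
    """Raised when an optional backend needed by a pipeline is not installed."""


def is_torch_available() -> bool:
    try:
        import torch  # noqa: F401
        return True
    except ImportError:
        return False


class _DummyCycleDiffusionPipeline:
    # Stand-in exposed instead of the real pipeline when backends are missing.
    def __init__(self, *args, **kwargs):
        raise ImportError("CycleDiffusionPipeline requires `torch` and `transformers` to be installed.")


try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    CycleDiffusionPipeline = _DummyCycleDiffusionPipeline
else:
    from diffusers.pipelines.deprecated.stable_diffusion_variants import CycleDiffusionPipeline
```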
# Copyright 2025 Microsoft and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Callable, List, Optional, Tuple, Union import torch from transformers import CLIPTextModel, CLIPTokenizer from ....configuration_utils import ConfigMixin, register_to_config from ....models import ModelMixin, Transformer2DModel, VQModel from ....schedulers import VQDiffusionScheduler from ....utils import logging from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput logger = logging.get_logger(__name__) # pylint: disable=invalid-name class LearnedClassifierFreeSamplingEmbeddings(ModelMixin, ConfigMixin): """ Utility class for storing learned text embeddings for classifier free sampling """ @register_to_config def __init__(self, learnable: bool, hidden_size: Optional[int] = None, length: Optional[int] = None): super().__init__() self.learnable = learnable if self.learnable: assert hidden_size is not None, "learnable=True requires `hidden_size` to be set" assert length is not None, "learnable=True requires `length` to be set" embeddings = torch.zeros(length, hidden_size) else: embeddings = None self.embeddings = torch.nn.Parameter(embeddings) class VQDiffusionPipeline(DiffusionPipeline): r""" Pipeline for text-to-image generation using VQ Diffusion. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). Args: vqvae ([`VQModel`]): Vector Quantized Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder ([`~transformers.CLIPTextModel`]): Frozen text-encoder ([clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)). tokenizer ([`~transformers.CLIPTokenizer`]): A `CLIPTokenizer` to tokenize text. transformer ([`Transformer2DModel`]): A conditional `Transformer2DModel` to denoise the encoded image latents. scheduler ([`VQDiffusionScheduler`]): A scheduler to be used in combination with `transformer` to denoise the encoded image latents. 
""" vqvae: VQModel text_encoder: CLIPTextModel tokenizer: CLIPTokenizer transformer: Transformer2DModel learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings scheduler: VQDiffusionScheduler def __init__( self, vqvae: VQModel, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, transformer: Transformer2DModel, scheduler: VQDiffusionScheduler, learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings, ): super().__init__() self.register_modules( vqvae=vqvae, transformer=transformer, text_encoder=text_encoder, tokenizer=tokenizer, scheduler=scheduler, learned_classifier_free_sampling_embeddings=learned_classifier_free_sampling_embeddings, ) def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance): batch_size = len(prompt) if isinstance(prompt, list) else 1 # get prompt text embeddings text_inputs = self.tokenizer( prompt, padding="max_length", max_length=self.tokenizer.model_max_length, return_tensors="pt", ) text_input_ids = text_inputs.input_ids if text_input_ids.shape[-1] > self.tokenizer.model_max_length: removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {self.tokenizer.model_max_length} tokens: {removed_text}" ) text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] prompt_embeds = self.text_encoder(text_input_ids.to(self.device))[0] # NOTE: This additional step of normalizing the text embeddings is from VQ-Diffusion. # While CLIP does normalize the pooled output of the text transformer when combining # the image and text embeddings, CLIP does not directly normalize the last hidden state. # # CLIP normalizing the pooled output. # https://github.com/huggingface/transformers/blob/d92e22d1f28324f513f3080e5c47c071a3916721/src/transformers/models/clip/modeling_clip.py#L1052-L1053 prompt_embeds = prompt_embeds / prompt_embeds.norm(dim=-1, keepdim=True) # duplicate text embeddings for each generation per prompt prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0) if do_classifier_free_guidance: if self.learned_classifier_free_sampling_embeddings.learnable: negative_prompt_embeds = self.learned_classifier_free_sampling_embeddings.embeddings negative_prompt_embeds = negative_prompt_embeds.unsqueeze(0).repeat(batch_size, 1, 1) else: uncond_tokens = [""] * batch_size max_length = text_input_ids.shape[-1] uncond_input = self.tokenizer( uncond_tokens, padding="max_length", max_length=max_length, truncation=True, return_tensors="pt", ) negative_prompt_embeds = self.text_encoder(uncond_input.input_ids.to(self.device))[0] # See comment for normalizing text embeddings negative_prompt_embeds = negative_prompt_embeds / negative_prompt_embeds.norm(dim=-1, keepdim=True) # duplicate unconditional embeddings for each generation per prompt, using mps friendly method seq_len = negative_prompt_embeds.shape[1] negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) # For classifier free guidance, we need to do two forward passes. 
# Here we concatenate the unconditional and text embeddings into a single batch # to avoid doing two forward passes prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) return prompt_embeds @torch.no_grad() def __call__( self, prompt: Union[str, List[str]], num_inference_steps: int = 100, guidance_scale: float = 5.0, truncation_rate: float = 1.0, num_images_per_prompt: int = 1, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, ) -> Union[ImagePipelineOutput, Tuple]: """ The call function to the pipeline for generation. Args: prompt (`str` or `List[str]`): The prompt or prompts to guide image generation. num_inference_steps (`int`, *optional*, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. truncation_rate (`float`, *optional*, defaults to 1.0 (equivalent to no truncation)): Used to "truncate" the predicted classes for x_0 such that the cumulative probability for a pixel is at most `truncation_rate`. The lowest probabilities that would increase the cumulative probability above `truncation_rate` are set to zero. num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. generator (`torch.Generator`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor` of shape (batch), *optional*): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Must be valid embedding indices.If not provided, a latents tensor will be generated of completely masked latent pixels. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. callback (`Callable`, *optional*): A function that calls every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step. Returns: [`~pipelines.ImagePipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images. 
""" if isinstance(prompt, str): batch_size = 1 elif isinstance(prompt, list): batch_size = len(prompt) else: raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") batch_size = batch_size * num_images_per_prompt do_classifier_free_guidance = guidance_scale > 1.0 prompt_embeds = self._encode_prompt(prompt, num_images_per_prompt, do_classifier_free_guidance) if (callback_steps is None) or ( callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) ): raise ValueError( f"`callback_steps` has to be a positive integer but is {callback_steps} of type" f" {type(callback_steps)}." ) # get the initial completely masked latents unless the user supplied it latents_shape = (batch_size, self.transformer.num_latent_pixels) if latents is None: mask_class = self.transformer.num_vector_embeds - 1 latents = torch.full(latents_shape, mask_class).to(self.device) else: if latents.shape != latents_shape: raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") if (latents < 0).any() or (latents >= self.transformer.num_vector_embeds).any(): raise ValueError( "Unexpected latents value(s). All latents be valid embedding indices i.e. in the range 0," f" {self.transformer.num_vector_embeds - 1} (inclusive)." ) latents = latents.to(self.device) # set timesteps self.scheduler.set_timesteps(num_inference_steps, device=self.device) timesteps_tensor = self.scheduler.timesteps.to(self.device) sample = latents for i, t in enumerate(self.progress_bar(timesteps_tensor)): # expand the sample if we are doing classifier free guidance latent_model_input = torch.cat([sample] * 2) if do_classifier_free_guidance else sample # predict the un-noised image # model_output == `log_p_x_0` model_output = self.transformer(latent_model_input, encoder_hidden_states=prompt_embeds, timestep=t).sample if do_classifier_free_guidance: model_output_uncond, model_output_text = model_output.chunk(2) model_output = model_output_uncond + guidance_scale * (model_output_text - model_output_uncond) model_output -= torch.logsumexp(model_output, dim=1, keepdim=True) model_output = self.truncate(model_output, truncation_rate) # remove `log(0)`'s (`-inf`s) model_output = model_output.clamp(-70) # compute the previous noisy sample x_t -> x_t-1 sample = self.scheduler.step(model_output, timestep=t, sample=sample, generator=generator).prev_sample # call the callback, if provided if callback is not None and i % callback_steps == 0: callback(i, t, sample) embedding_channels = self.vqvae.config.vq_embed_dim embeddings_shape = (batch_size, self.transformer.height, self.transformer.width, embedding_channels) embeddings = self.vqvae.quantize.get_codebook_entry(sample, shape=embeddings_shape) image = self.vqvae.decode(embeddings, force_not_quantize=True).sample image = (image / 2 + 0.5).clamp(0, 1) image = image.cpu().permute(0, 2, 3, 1).numpy() if output_type == "pil": image = self.numpy_to_pil(image) if not return_dict: return (image,) return ImagePipelineOutput(images=image) def truncate(self, log_p_x_0: torch.Tensor, truncation_rate: float) -> torch.Tensor: """ Truncates `log_p_x_0` such that for each column vector, the total cumulative probability is `truncation_rate` The lowest probabilities that would increase the cumulative probability above `truncation_rate` are set to zero. 
""" sorted_log_p_x_0, indices = torch.sort(log_p_x_0, 1, descending=True) sorted_p_x_0 = torch.exp(sorted_log_p_x_0) keep_mask = sorted_p_x_0.cumsum(dim=1) < truncation_rate # Ensure that at least the largest probability is not zeroed out all_true = torch.full_like(keep_mask[:, 0:1, :], True) keep_mask = torch.cat((all_true, keep_mask), dim=1) keep_mask = keep_mask[:, :-1, :] keep_mask = keep_mask.gather(1, indices.argsort(1)) rv = log_p_x_0.clone() rv[~keep_mask] = -torch.inf # -inf = log(0) return rv
diffusers/src/diffusers/pipelines/deprecated/vq_diffusion/pipeline_vq_diffusion.py/0
{ "file_path": "diffusers/src/diffusers/pipelines/deprecated/vq_diffusion/pipeline_vq_diffusion.py", "repo_id": "diffusers", "token_count": 6456 }
171
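To make the effect of `truncate` concrete, here is a toy run of the same logic as a standalone function (the body mirrors the method above; the probabilities are made up for illustration):

```python
import torch


def truncate(log_p_x_0: torch.Tensor, truncation_rate: float) -> torch.Tensor:
    # Keep only the most probable classes whose cumulative probability stays
    # below `truncation_rate`, always keeping the single most probable class,
    # and set the rest to log(0).
    sorted_log_p, indices = torch.sort(log_p_x_0, 1, descending=True)
    sorted_p = torch.exp(sorted_log_p)
    keep_mask = sorted_p.cumsum(dim=1) < truncation_rate
    all_true = torch.full_like(keep_mask[:, 0:1, :], True)
    keep_mask = torch.cat((all_true, keep_mask), dim=1)[:, :-1, :]
    keep_mask = keep_mask.gather(1, indices.argsort(1))
    rv = log_p_x_0.clone()
    rv[~keep_mask] = -torch.inf
    return rv


# One batch, four classes, one latent pixel: probabilities 0.5, 0.3, 0.15, 0.05.
log_p = torch.log(torch.tensor([[[0.5], [0.3], [0.15], [0.05]]]))
print(truncate(log_p, truncation_rate=0.9).exp().squeeze(-1))
# tensor([[0.5000, 0.3000, 0.1500, 0.0000]]) -> only the 0.05 class is dropped
```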
# Copyright 2025 The Framepack Team, The HunyuanVideo Team and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import math from enum import Enum from typing import Any, Callable, Dict, List, Optional, Tuple, Union import numpy as np import torch from transformers import ( CLIPTextModel, CLIPTokenizer, LlamaModel, LlamaTokenizerFast, SiglipImageProcessor, SiglipVisionModel, ) from ...callbacks import MultiPipelineCallbacks, PipelineCallback from ...image_processor import PipelineImageInput from ...loaders import HunyuanVideoLoraLoaderMixin from ...models import AutoencoderKLHunyuanVideo, HunyuanVideoFramepackTransformer3DModel from ...schedulers import FlowMatchEulerDiscreteScheduler from ...utils import is_torch_xla_available, logging, replace_example_docstring from ...utils.torch_utils import randn_tensor from ...video_processor import VideoProcessor from ..pipeline_utils import DiffusionPipeline from .pipeline_output import HunyuanVideoFramepackPipelineOutput if is_torch_xla_available(): import torch_xla.core.xla_model as xm XLA_AVAILABLE = True else: XLA_AVAILABLE = False logger = logging.get_logger(__name__) # pylint: disable=invalid-name # TODO(yiyi): We can pack the checkpoints nicely with modular loader EXAMPLE_DOC_STRING = """ Examples: ##### Image-to-Video ```python >>> import torch >>> from diffusers import HunyuanVideoFramepackPipeline, HunyuanVideoFramepackTransformer3DModel >>> from diffusers.utils import export_to_video, load_image >>> from transformers import SiglipImageProcessor, SiglipVisionModel >>> transformer = HunyuanVideoFramepackTransformer3DModel.from_pretrained( ... "lllyasviel/FramePackI2V_HY", torch_dtype=torch.bfloat16 ... ) >>> feature_extractor = SiglipImageProcessor.from_pretrained( ... "lllyasviel/flux_redux_bfl", subfolder="feature_extractor" ... ) >>> image_encoder = SiglipVisionModel.from_pretrained( ... "lllyasviel/flux_redux_bfl", subfolder="image_encoder", torch_dtype=torch.float16 ... ) >>> pipe = HunyuanVideoFramepackPipeline.from_pretrained( ... "hunyuanvideo-community/HunyuanVideo", ... transformer=transformer, ... feature_extractor=feature_extractor, ... image_encoder=image_encoder, ... torch_dtype=torch.float16, ... ) >>> pipe.vae.enable_tiling() >>> pipe.to("cuda") >>> image = load_image( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/penguin.png" ... ) >>> output = pipe( ... image=image, ... prompt="A penguin dancing in the snow", ... height=832, ... width=480, ... num_frames=91, ... num_inference_steps=30, ... guidance_scale=9.0, ... generator=torch.Generator().manual_seed(0), ... sampling_type="inverted_anti_drifting", ... 
).frames[0] >>> export_to_video(output, "output.mp4", fps=30) ``` ##### First and Last Image-to-Video ```python >>> import torch >>> from diffusers import HunyuanVideoFramepackPipeline, HunyuanVideoFramepackTransformer3DModel >>> from diffusers.utils import export_to_video, load_image >>> from transformers import SiglipImageProcessor, SiglipVisionModel >>> transformer = HunyuanVideoFramepackTransformer3DModel.from_pretrained( ... "lllyasviel/FramePackI2V_HY", torch_dtype=torch.bfloat16 ... ) >>> feature_extractor = SiglipImageProcessor.from_pretrained( ... "lllyasviel/flux_redux_bfl", subfolder="feature_extractor" ... ) >>> image_encoder = SiglipVisionModel.from_pretrained( ... "lllyasviel/flux_redux_bfl", subfolder="image_encoder", torch_dtype=torch.float16 ... ) >>> pipe = HunyuanVideoFramepackPipeline.from_pretrained( ... "hunyuanvideo-community/HunyuanVideo", ... transformer=transformer, ... feature_extractor=feature_extractor, ... image_encoder=image_encoder, ... torch_dtype=torch.float16, ... ) >>> pipe.to("cuda") >>> prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective." >>> first_image = load_image( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png" ... ) >>> last_image = load_image( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_last_frame.png" ... ) >>> output = pipe( ... image=first_image, ... last_image=last_image, ... prompt=prompt, ... height=512, ... width=512, ... num_frames=91, ... num_inference_steps=30, ... guidance_scale=9.0, ... generator=torch.Generator().manual_seed(0), ... sampling_type="inverted_anti_drifting", ... ).frames[0] >>> export_to_video(output, "output.mp4", fps=30) ``` """ DEFAULT_PROMPT_TEMPLATE = { "template": ( "<|start_header_id|>system<|end_header_id|>\n\nDescribe the video by detailing the following aspects: " "1. The main content and theme of the video." "2. The color, shape, size, texture, quantity, text, and spatial relationships of the objects." "3. Actions, events, behaviors temporal relationships, physical movement changes of the objects." "4. background environment, light, style and atmosphere." "5. camera angles, movements, and transitions used in the video:<|eot_id|>" "<|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|>" ), "crop_start": 95, } # Copied from diffusers.pipelines.flux.pipeline_flux.calculate_shift def calculate_shift( image_seq_len, base_seq_len: int = 256, max_seq_len: int = 4096, base_shift: float = 0.5, max_shift: float = 1.15, ): m = (max_shift - base_shift) / (max_seq_len - base_seq_len) b = base_shift - m * base_seq_len mu = image_seq_len * m + b return mu # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps def retrieve_timesteps( scheduler, num_inference_steps: Optional[int] = None, device: Optional[Union[str, torch.device]] = None, timesteps: Optional[List[int]] = None, sigmas: Optional[List[float]] = None, **kwargs, ): r""" Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`. 
Args: scheduler (`SchedulerMixin`): The scheduler to get timesteps from. num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps` must be `None`. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. timesteps (`List[int]`, *optional*): Custom timesteps used to override the timestep spacing strategy of the scheduler. If `timesteps` is passed, `num_inference_steps` and `sigmas` must be `None`. sigmas (`List[float]`, *optional*): Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed, `num_inference_steps` and `timesteps` must be `None`. Returns: `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the second element is the number of inference steps. """ if timesteps is not None and sigmas is not None: raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values") if timesteps is not None: accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accepts_timesteps: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" timestep schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) elif sigmas is not None: accept_sigmas = "sigmas" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accept_sigmas: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" sigmas schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) else: scheduler.set_timesteps(num_inference_steps, device=device, **kwargs) timesteps = scheduler.timesteps return timesteps, num_inference_steps class FramepackSamplingType(str, Enum): VANILLA = "vanilla" INVERTED_ANTI_DRIFTING = "inverted_anti_drifting" class HunyuanVideoFramepackPipeline(DiffusionPipeline, HunyuanVideoLoraLoaderMixin): r""" Pipeline for text-to-video generation using HunyuanVideo. This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). Args: text_encoder ([`LlamaModel`]): [Llava Llama3-8B](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers). tokenizer (`LlamaTokenizer`): Tokenizer from [Llava Llama3-8B](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers). transformer ([`HunyuanVideoTransformer3DModel`]): Conditional Transformer to denoise the encoded image latents. scheduler ([`FlowMatchEulerDiscreteScheduler`]): A scheduler to be used in combination with `transformer` to denoise the encoded image latents. vae ([`AutoencoderKLHunyuanVideo`]): Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. text_encoder_2 ([`CLIPTextModel`]): [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. 
tokenizer_2 (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer). """ model_cpu_offload_seq = "text_encoder->text_encoder_2->image_encoder->transformer->vae" _callback_tensor_inputs = ["latents", "prompt_embeds"] def __init__( self, text_encoder: LlamaModel, tokenizer: LlamaTokenizerFast, transformer: HunyuanVideoFramepackTransformer3DModel, vae: AutoencoderKLHunyuanVideo, scheduler: FlowMatchEulerDiscreteScheduler, text_encoder_2: CLIPTextModel, tokenizer_2: CLIPTokenizer, image_encoder: SiglipVisionModel, feature_extractor: SiglipImageProcessor, ): super().__init__() self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, transformer=transformer, scheduler=scheduler, text_encoder_2=text_encoder_2, tokenizer_2=tokenizer_2, image_encoder=image_encoder, feature_extractor=feature_extractor, ) self.vae_scale_factor_temporal = self.vae.temporal_compression_ratio if getattr(self, "vae", None) else 4 self.vae_scale_factor_spatial = self.vae.spatial_compression_ratio if getattr(self, "vae", None) else 8 self.video_processor = VideoProcessor(vae_scale_factor=self.vae_scale_factor_spatial) # Copied from diffusers.pipelines.hunyuan_video.pipeline_hunyuan_video.HunyuanVideoPipeline._get_llama_prompt_embeds def _get_llama_prompt_embeds( self, prompt: Union[str, List[str]], prompt_template: Dict[str, Any], num_videos_per_prompt: int = 1, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, max_sequence_length: int = 256, num_hidden_layers_to_skip: int = 2, ) -> Tuple[torch.Tensor, torch.Tensor]: device = device or self._execution_device dtype = dtype or self.text_encoder.dtype prompt = [prompt] if isinstance(prompt, str) else prompt batch_size = len(prompt) prompt = [prompt_template["template"].format(p) for p in prompt] crop_start = prompt_template.get("crop_start", None) if crop_start is None: prompt_template_input = self.tokenizer( prompt_template["template"], padding="max_length", return_tensors="pt", return_length=False, return_overflowing_tokens=False, return_attention_mask=False, ) crop_start = prompt_template_input["input_ids"].shape[-1] # Remove <|eot_id|> token and placeholder {} crop_start -= 2 max_sequence_length += crop_start text_inputs = self.tokenizer( prompt, max_length=max_sequence_length, padding="max_length", truncation=True, return_tensors="pt", return_length=False, return_overflowing_tokens=False, return_attention_mask=True, ) text_input_ids = text_inputs.input_ids.to(device=device) prompt_attention_mask = text_inputs.attention_mask.to(device=device) prompt_embeds = self.text_encoder( input_ids=text_input_ids, attention_mask=prompt_attention_mask, output_hidden_states=True, ).hidden_states[-(num_hidden_layers_to_skip + 1)] prompt_embeds = prompt_embeds.to(dtype=dtype) if crop_start is not None and crop_start > 0: prompt_embeds = prompt_embeds[:, crop_start:] prompt_attention_mask = prompt_attention_mask[:, crop_start:] # duplicate text embeddings for each generation per prompt, using mps friendly method _, seq_len, _ = prompt_embeds.shape prompt_embeds = prompt_embeds.repeat(1, num_videos_per_prompt, 1) prompt_embeds = prompt_embeds.view(batch_size * num_videos_per_prompt, seq_len, -1) prompt_attention_mask = prompt_attention_mask.repeat(1, num_videos_per_prompt) prompt_attention_mask = prompt_attention_mask.view(batch_size * num_videos_per_prompt, seq_len) return prompt_embeds, prompt_attention_mask # Copied from 
diffusers.pipelines.hunyuan_video.pipeline_hunyuan_video.HunyuanVideoPipeline._get_clip_prompt_embeds def _get_clip_prompt_embeds( self, prompt: Union[str, List[str]], num_videos_per_prompt: int = 1, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, max_sequence_length: int = 77, ) -> torch.Tensor: device = device or self._execution_device dtype = dtype or self.text_encoder_2.dtype prompt = [prompt] if isinstance(prompt, str) else prompt batch_size = len(prompt) text_inputs = self.tokenizer_2( prompt, padding="max_length", max_length=max_sequence_length, truncation=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer_2(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): removed_text = self.tokenizer_2.batch_decode(untruncated_ids[:, max_sequence_length - 1 : -1]) logger.warning( "The following part of your input was truncated because CLIP can only handle sequences up to" f" {max_sequence_length} tokens: {removed_text}" ) prompt_embeds = self.text_encoder_2(text_input_ids.to(device), output_hidden_states=False).pooler_output # duplicate text embeddings for each generation per prompt, using mps friendly method prompt_embeds = prompt_embeds.repeat(1, num_videos_per_prompt) prompt_embeds = prompt_embeds.view(batch_size * num_videos_per_prompt, -1) return prompt_embeds # Copied from diffusers.pipelines.hunyuan_video.pipeline_hunyuan_video.HunyuanVideoPipeline.encode_prompt def encode_prompt( self, prompt: Union[str, List[str]], prompt_2: Union[str, List[str]] = None, prompt_template: Dict[str, Any] = DEFAULT_PROMPT_TEMPLATE, num_videos_per_prompt: int = 1, prompt_embeds: Optional[torch.Tensor] = None, pooled_prompt_embeds: Optional[torch.Tensor] = None, prompt_attention_mask: Optional[torch.Tensor] = None, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, max_sequence_length: int = 256, ): if prompt_embeds is None: prompt_embeds, prompt_attention_mask = self._get_llama_prompt_embeds( prompt, prompt_template, num_videos_per_prompt, device=device, dtype=dtype, max_sequence_length=max_sequence_length, ) if pooled_prompt_embeds is None: if prompt_2 is None: prompt_2 = prompt pooled_prompt_embeds = self._get_clip_prompt_embeds( prompt, num_videos_per_prompt, device=device, dtype=dtype, max_sequence_length=77, ) return prompt_embeds, pooled_prompt_embeds, prompt_attention_mask def encode_image( self, image: torch.Tensor, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None ): device = device or self._execution_device image = (image + 1) / 2.0 # [-1, 1] -> [0, 1] image = self.feature_extractor(images=image, return_tensors="pt", do_rescale=False).to( device=device, dtype=self.image_encoder.dtype ) image_embeds = self.image_encoder(**image).last_hidden_state return image_embeds.to(dtype=dtype) def check_inputs( self, prompt, prompt_2, height, width, prompt_embeds=None, callback_on_step_end_tensor_inputs=None, prompt_template=None, image=None, image_latents=None, last_image=None, last_image_latents=None, sampling_type=None, ): if height % 16 != 0 or width % 16 != 0: raise ValueError(f"`height` and `width` have to be divisible by 16 but are {height} and {width}.") if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be 
in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt_2 is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)): raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}") if prompt_template is not None: if not isinstance(prompt_template, dict): raise ValueError(f"`prompt_template` has to be of type `dict` but is {type(prompt_template)}") if "template" not in prompt_template: raise ValueError( f"`prompt_template` has to contain a key `template` but only found {prompt_template.keys()}" ) sampling_types = [x.value for x in FramepackSamplingType.__members__.values()] if sampling_type not in sampling_types: raise ValueError(f"`sampling_type` has to be one of '{sampling_types}' but is '{sampling_type}'") if image is not None and image_latents is not None: raise ValueError("Only one of `image` or `image_latents` can be passed.") if last_image is not None and last_image_latents is not None: raise ValueError("Only one of `last_image` or `last_image_latents` can be passed.") if sampling_type != FramepackSamplingType.INVERTED_ANTI_DRIFTING and ( last_image is not None or last_image_latents is not None ): raise ValueError( 'Only `"inverted_anti_drifting"` inference type supports `last_image` or `last_image_latents`.' ) def prepare_latents( self, batch_size: int = 1, num_channels_latents: int = 16, height: int = 720, width: int = 1280, num_frames: int = 129, dtype: Optional[torch.dtype] = None, device: Optional[torch.device] = None, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, ) -> torch.Tensor: if latents is not None: return latents.to(device=device, dtype=dtype) shape = ( batch_size, num_channels_latents, (num_frames - 1) // self.vae_scale_factor_temporal + 1, int(height) // self.vae_scale_factor_spatial, int(width) // self.vae_scale_factor_spatial, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) return latents def prepare_image_latents( self, image: torch.Tensor, dtype: Optional[torch.dtype] = None, device: Optional[torch.device] = None, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, ) -> torch.Tensor: device = device or self._execution_device if latents is None: image = image.unsqueeze(2).to(device=device, dtype=self.vae.dtype) latents = self.vae.encode(image).latent_dist.sample(generator=generator) latents = latents * self.vae.config.scaling_factor return latents.to(device=device, dtype=dtype) def enable_vae_slicing(self): r""" Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. """ self.vae.enable_slicing() def disable_vae_slicing(self): r""" Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to computing decoding in one step. """ self.vae.disable_slicing() def enable_vae_tiling(self): r""" Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images. """ self.vae.enable_tiling() def disable_vae_tiling(self): r""" Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to computing decoding in one step. """ self.vae.disable_tiling() @property def guidance_scale(self): return self._guidance_scale @property def num_timesteps(self): return self._num_timesteps @property def attention_kwargs(self): return self._attention_kwargs @property def current_timestep(self): return self._current_timestep @property def interrupt(self): return self._interrupt @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, image: PipelineImageInput, last_image: Optional[PipelineImageInput] = None, prompt: Union[str, List[str]] = None, prompt_2: Union[str, List[str]] = None, negative_prompt: Union[str, List[str]] = None, negative_prompt_2: Union[str, List[str]] = None, height: int = 720, width: int = 1280, num_frames: int = 129, latent_window_size: int = 9, num_inference_steps: int = 50, sigmas: List[float] = None, true_cfg_scale: float = 1.0, guidance_scale: float = 6.0, num_videos_per_prompt: Optional[int] = 1, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, image_latents: Optional[torch.Tensor] = None, last_image_latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, pooled_prompt_embeds: Optional[torch.Tensor] = None, prompt_attention_mask: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, negative_pooled_prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_attention_mask: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, attention_kwargs: Optional[Dict[str, Any]] = None, callback_on_step_end: Optional[ Union[Callable[[int, int, Dict], None], PipelineCallback, MultiPipelineCallbacks] ] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], prompt_template: Dict[str, Any] = DEFAULT_PROMPT_TEMPLATE, max_sequence_length: int = 256, sampling_type: FramepackSamplingType = FramepackSamplingType.INVERTED_ANTI_DRIFTING, ): r""" The call function to the 
pipeline for generation. Args: image (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`): The image to be used as the starting point for the video generation. last_image (`PIL.Image.Image` or `np.ndarray` or `torch.Tensor`, *optional*): The optional last image to be used as the ending point for the video generation. This is useful for generating transitions between two images. prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is will be used instead. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `true_cfg_scale` is not greater than `1`). negative_prompt_2 (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in all the text-encoders. height (`int`, defaults to `720`): The height in pixels of the generated image. width (`int`, defaults to `1280`): The width in pixels of the generated image. num_frames (`int`, defaults to `129`): The number of frames in the generated video. num_inference_steps (`int`, defaults to `50`): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. sigmas (`List[float]`, *optional*): Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed will be used. true_cfg_scale (`float`, *optional*, defaults to 1.0): When > 1.0 and a provided `negative_prompt`, enables true classifier-free guidance. guidance_scale (`float`, defaults to `6.0`): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. Note that the only available HunyuanVideo model is CFG-distilled, which means that traditional guidance between unconditional and conditional latent is not applied. num_videos_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. image_latents (`torch.Tensor`, *optional*): Pre-encoded image latents. If not provided, the image will be encoded using the VAE. last_image_latents (`torch.Tensor`, *optional*): Pre-encoded last image latents. If not provided, the last image will be encoded using the VAE. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. pooled_prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated pooled text embeddings. 
Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt` input argument. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between `PIL.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`HunyuanVideoFramepackPipelineOutput`] instead of a plain tuple. attention_kwargs (`dict`, *optional*): A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). clip_skip (`int`, *optional*): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, *optional*): A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of each denoising step during the inference. with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. Examples: Returns: [`~HunyuanVideoFramepackPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`HunyuanVideoFramepackPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated images and the second element is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content. """ if isinstance(callback_on_step_end, (PipelineCallback, MultiPipelineCallbacks)): callback_on_step_end_tensor_inputs = callback_on_step_end.tensor_inputs # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, prompt_2, height, width, prompt_embeds, callback_on_step_end_tensor_inputs, prompt_template, image, image_latents, last_image, last_image_latents, sampling_type, ) has_neg_prompt = negative_prompt is not None or ( negative_prompt_embeds is not None and negative_pooled_prompt_embeds is not None ) do_true_cfg = true_cfg_scale > 1 and has_neg_prompt self._guidance_scale = guidance_scale self._attention_kwargs = attention_kwargs self._current_timestep = None self._interrupt = False device = self._execution_device transformer_dtype = self.transformer.dtype vae_dtype = self.vae.dtype # 2. 
Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] # 3. Encode input prompt transformer_dtype = self.transformer.dtype prompt_embeds, pooled_prompt_embeds, prompt_attention_mask = self.encode_prompt( prompt=prompt, prompt_2=prompt_2, prompt_template=prompt_template, num_videos_per_prompt=num_videos_per_prompt, prompt_embeds=prompt_embeds, pooled_prompt_embeds=pooled_prompt_embeds, prompt_attention_mask=prompt_attention_mask, device=device, max_sequence_length=max_sequence_length, ) prompt_embeds = prompt_embeds.to(transformer_dtype) prompt_attention_mask = prompt_attention_mask.to(transformer_dtype) pooled_prompt_embeds = pooled_prompt_embeds.to(transformer_dtype) if do_true_cfg: negative_prompt_embeds, negative_pooled_prompt_embeds, negative_prompt_attention_mask = self.encode_prompt( prompt=negative_prompt, prompt_2=negative_prompt_2, prompt_template=prompt_template, num_videos_per_prompt=num_videos_per_prompt, prompt_embeds=negative_prompt_embeds, pooled_prompt_embeds=negative_pooled_prompt_embeds, prompt_attention_mask=negative_prompt_attention_mask, device=device, max_sequence_length=max_sequence_length, ) negative_prompt_embeds = negative_prompt_embeds.to(transformer_dtype) negative_prompt_attention_mask = negative_prompt_attention_mask.to(transformer_dtype) negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.to(transformer_dtype) # 4. Prepare image image = self.video_processor.preprocess(image, height, width) image_embeds = self.encode_image(image, device=device).to(transformer_dtype) if last_image is not None: # Credits: https://github.com/lllyasviel/FramePack/pull/167 # Users can modify the weighting strategy applied here last_image = self.video_processor.preprocess(last_image, height, width) last_image_embeds = self.encode_image(last_image, device=device).to(transformer_dtype) last_image_embeds = (image_embeds + last_image_embeds) / 2 # 5. 
Prepare latent variables num_channels_latents = self.transformer.config.in_channels window_num_frames = (latent_window_size - 1) * self.vae_scale_factor_temporal + 1 num_latent_sections = max(1, (num_frames + window_num_frames - 1) // window_num_frames) history_video = None total_generated_latent_frames = 0 image_latents = self.prepare_image_latents( image, dtype=torch.float32, device=device, generator=generator, latents=image_latents ) if last_image is not None: last_image_latents = self.prepare_image_latents( last_image, dtype=torch.float32, device=device, generator=generator ) # Specific to the released checkpoints: # - https://huggingface.co/lllyasviel/FramePackI2V_HY # - https://huggingface.co/lllyasviel/FramePack_F1_I2V_HY_20250503 # TODO: find a more generic way in future if there are more checkpoints if sampling_type == FramepackSamplingType.INVERTED_ANTI_DRIFTING: history_sizes = [1, 2, 16] history_latents = torch.zeros( batch_size, num_channels_latents, sum(history_sizes), height // self.vae_scale_factor_spatial, width // self.vae_scale_factor_spatial, device=device, dtype=torch.float32, ) elif sampling_type == FramepackSamplingType.VANILLA: history_sizes = [16, 2, 1] history_latents = torch.zeros( batch_size, num_channels_latents, sum(history_sizes), height // self.vae_scale_factor_spatial, width // self.vae_scale_factor_spatial, device=device, dtype=torch.float32, ) history_latents = torch.cat([history_latents, image_latents], dim=2) total_generated_latent_frames += 1 else: assert False # 6. Prepare guidance condition guidance = torch.tensor([guidance_scale] * batch_size, dtype=transformer_dtype, device=device) * 1000.0 # 7. Denoising loop for k in range(num_latent_sections): if sampling_type == FramepackSamplingType.INVERTED_ANTI_DRIFTING: latent_paddings = list(reversed(range(num_latent_sections))) if num_latent_sections > 4: latent_paddings = [3] + [2] * (num_latent_sections - 3) + [1, 0] is_first_section = k == 0 is_last_section = k == num_latent_sections - 1 latent_padding_size = latent_paddings[k] * latent_window_size indices = torch.arange(0, sum([1, latent_padding_size, latent_window_size, *history_sizes])) ( indices_prefix, indices_padding, indices_latents, indices_latents_history_1x, indices_latents_history_2x, indices_latents_history_4x, ) = indices.split([1, latent_padding_size, latent_window_size, *history_sizes], dim=0) # Inverted anti-drifting sampling: Figure 2(c) in the paper indices_clean_latents = torch.cat([indices_prefix, indices_latents_history_1x], dim=0) latents_prefix = image_latents latents_history_1x, latents_history_2x, latents_history_4x = history_latents[ :, :, : sum(history_sizes) ].split(history_sizes, dim=2) if last_image is not None and is_first_section: latents_history_1x = last_image_latents latents_clean = torch.cat([latents_prefix, latents_history_1x], dim=2) elif sampling_type == FramepackSamplingType.VANILLA: indices = torch.arange(0, sum([1, *history_sizes, latent_window_size])) ( indices_prefix, indices_latents_history_4x, indices_latents_history_2x, indices_latents_history_1x, indices_latents, ) = indices.split([1, *history_sizes, latent_window_size], dim=0) indices_clean_latents = torch.cat([indices_prefix, indices_latents_history_1x], dim=0) latents_prefix = image_latents latents_history_4x, latents_history_2x, latents_history_1x = history_latents[ :, :, -sum(history_sizes) : ].split(history_sizes, dim=2) latents_clean = torch.cat([latents_prefix, latents_history_1x], dim=2) else: assert False latents = self.prepare_latents( 
batch_size, num_channels_latents, height, width, window_num_frames, dtype=torch.float32, device=device, generator=generator, latents=None, ) sigmas = np.linspace(1.0, 0.0, num_inference_steps + 1)[:-1] if sigmas is None else sigmas image_seq_len = ( latents.shape[2] * latents.shape[3] * latents.shape[4] / self.transformer.config.patch_size**2 ) exp_max = 7.0 mu = calculate_shift( image_seq_len, self.scheduler.config.get("base_image_seq_len", 256), self.scheduler.config.get("max_image_seq_len", 4096), self.scheduler.config.get("base_shift", 0.5), self.scheduler.config.get("max_shift", 1.15), ) mu = min(mu, math.log(exp_max)) timesteps, num_inference_steps = retrieve_timesteps( self.scheduler, num_inference_steps, device, sigmas=sigmas, mu=mu ) num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order self._num_timesteps = len(timesteps) with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): if self.interrupt: continue self._current_timestep = t timestep = t.expand(latents.shape[0]) noise_pred = self.transformer( hidden_states=latents.to(transformer_dtype), timestep=timestep, encoder_hidden_states=prompt_embeds, encoder_attention_mask=prompt_attention_mask, pooled_projections=pooled_prompt_embeds, image_embeds=image_embeds, indices_latents=indices_latents, guidance=guidance, latents_clean=latents_clean.to(transformer_dtype), indices_latents_clean=indices_clean_latents, latents_history_2x=latents_history_2x.to(transformer_dtype), indices_latents_history_2x=indices_latents_history_2x, latents_history_4x=latents_history_4x.to(transformer_dtype), indices_latents_history_4x=indices_latents_history_4x, attention_kwargs=attention_kwargs, return_dict=False, )[0] if do_true_cfg: neg_noise_pred = self.transformer( hidden_states=latents.to(transformer_dtype), timestep=timestep, encoder_hidden_states=negative_prompt_embeds, encoder_attention_mask=negative_prompt_attention_mask, pooled_projections=negative_pooled_prompt_embeds, image_embeds=image_embeds, indices_latents=indices_latents, guidance=guidance, latents_clean=latents_clean.to(transformer_dtype), indices_latents_clean=indices_clean_latents, latents_history_2x=latents_history_2x.to(transformer_dtype), indices_latents_history_2x=indices_latents_history_2x, latents_history_4x=latents_history_4x.to(transformer_dtype), indices_latents_history_4x=indices_latents_history_4x, attention_kwargs=attention_kwargs, return_dict=False, )[0] noise_pred = neg_noise_pred + true_cfg_scale * (noise_pred - neg_noise_pred) # compute the previous noisy sample x_t -> x_t-1 latents = self.scheduler.step(noise_pred.float(), t, latents, return_dict=False)[0] if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) latents = callback_outputs.pop("latents", latents) prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if XLA_AVAILABLE: xm.mark_step() if sampling_type == FramepackSamplingType.INVERTED_ANTI_DRIFTING: if is_last_section: latents = torch.cat([image_latents, latents], dim=2) total_generated_latent_frames += latents.shape[2] history_latents = torch.cat([latents, history_latents], dim=2) real_history_latents = history_latents[:, :, :total_generated_latent_frames] section_latent_frames 
= ( (latent_window_size * 2 + 1) if is_last_section else (latent_window_size * 2) ) index_slice = (slice(None), slice(None), slice(0, section_latent_frames)) elif sampling_type == FramepackSamplingType.VANILLA: total_generated_latent_frames += latents.shape[2] history_latents = torch.cat([history_latents, latents], dim=2) real_history_latents = history_latents[:, :, -total_generated_latent_frames:] section_latent_frames = latent_window_size * 2 index_slice = (slice(None), slice(None), slice(-section_latent_frames, None)) else: assert False if history_video is None: if not output_type == "latent": current_latents = real_history_latents.to(vae_dtype) / self.vae.config.scaling_factor history_video = self.vae.decode(current_latents, return_dict=False)[0] else: history_video = [real_history_latents] else: if not output_type == "latent": overlapped_frames = (latent_window_size - 1) * self.vae_scale_factor_temporal + 1 current_latents = ( real_history_latents[index_slice].to(vae_dtype) / self.vae.config.scaling_factor ) current_video = self.vae.decode(current_latents, return_dict=False)[0] if sampling_type == FramepackSamplingType.INVERTED_ANTI_DRIFTING: history_video = self._soft_append(current_video, history_video, overlapped_frames) elif sampling_type == FramepackSamplingType.VANILLA: history_video = self._soft_append(history_video, current_video, overlapped_frames) else: assert False else: history_video.append(real_history_latents) self._current_timestep = None if not output_type == "latent": generated_frames = history_video.size(2) generated_frames = ( generated_frames - 1 ) // self.vae_scale_factor_temporal * self.vae_scale_factor_temporal + 1 history_video = history_video[:, :, :generated_frames] video = self.video_processor.postprocess_video(history_video, output_type=output_type) else: video = history_video # Offload all models self.maybe_free_model_hooks() if not return_dict: return (video,) return HunyuanVideoFramepackPipelineOutput(frames=video) def _soft_append(self, history: torch.Tensor, current: torch.Tensor, overlap: int = 0): if overlap <= 0: return torch.cat([history, current], dim=2) assert history.shape[2] >= overlap, f"Current length ({history.shape[2]}) must be >= overlap ({overlap})" assert current.shape[2] >= overlap, f"History length ({current.shape[2]}) must be >= overlap ({overlap})" weights = torch.linspace(1, 0, overlap, dtype=history.dtype, device=history.device).view(1, 1, -1, 1, 1) blended = weights * history[:, :, -overlap:] + (1 - weights) * current[:, :, :overlap] output = torch.cat([history[:, :, :-overlap], blended, current[:, :, overlap:]], dim=2) return output.to(history)
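
The section scheduling in the denoising loop above is the core FramePack bookkeeping: for the `inverted_anti_drifting` mode it builds a 1-D grid of positions laid out as `[prefix, padding, current window, 1x/2x/4x history]`, and only the non-padding slices are ever handed to the transformer. The standalone sketch below replays just that index split with made-up sizes (`latent_window_size = 9` and a padding factor of 2, i.e. one possible value of `latent_paddings[k]`); it is an illustration of the layout, not pipeline code.

```python
import torch

latent_window_size = 9
history_sizes = [1, 2, 16]                      # inverted anti-drifting layout from the pipeline
latent_padding_size = 2 * latent_window_size    # hypothetical: latent_paddings[k] == 2

indices = torch.arange(0, sum([1, latent_padding_size, latent_window_size, *history_sizes]))
(
    indices_prefix,                 # 1 position reserved for the conditioning image latent
    indices_padding,                # placeholder positions that are not passed to the transformer
    indices_latents,                # positions of the window currently being denoised
    indices_latents_history_1x,     # most recent clean latent frame
    indices_latents_history_2x,     # 2 frames of 2x-compressed history
    indices_latents_history_4x,     # 16 frames of 4x-compressed history
) = indices.split([1, latent_padding_size, latent_window_size, *history_sizes], dim=0)

# The "clean" conditioning positions are the prefix plus the 1x history slot.
indices_clean_latents = torch.cat([indices_prefix, indices_latents_history_1x], dim=0)

print(indices_latents.tolist())        # [19, 20, ..., 27] for this hypothetical padding
print(indices_clean_latents.tolist())  # [0, 28]
```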
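Before each section is denoised, the loop derives a flow-matching shift `mu` from the token count with `calculate_shift` (a straight line between `base_shift` and `max_shift` as the latent sequence length grows from `base_image_seq_len` to `max_image_seq_len`) and clamps it at `log(7.0)`. A minimal sketch of that arithmetic, assuming the default scheduler constants, a hypothetical 512x512 window of 9 latent frames, and a transformer patch size of 2:

```python
import math

# Defaults mirrored from the scheduler-config lookups in the denoising loop above.
base_seq_len, max_seq_len = 256, 4096
base_shift, max_shift = 0.5, 1.15

# Hypothetical window: 9 latent frames on a 64x64 latent grid, 2x2 patches.
image_seq_len = 9 * 64 * 64 / 2**2      # = 9216 tokens

m = (max_shift - base_shift) / (max_seq_len - base_seq_len)
b = base_shift - m * base_seq_len
mu = image_seq_len * m + b              # ~2.0167 for this example
mu = min(mu, math.log(7.0))             # pipeline clamps at log(exp_max), ~1.9459
print(mu)
```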
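Finally, `_soft_append` stitches each newly decoded section onto the running video with a linear cross-fade over the overlapping frames. The snippet below replays the same blending on tiny dummy tensors so the weighting is easy to inspect; the shapes and values are invented for illustration and are not part of the pipeline.

```python
import torch

# Two dummy "videos" with layout (batch, channels, frames, height, width).
history = torch.ones(1, 3, 4, 2, 2)     # older frames, all 1.0
current = torch.zeros(1, 3, 4, 2, 2)    # newer frames, all 0.0
overlap = 3

# Linear cross-fade weights over the overlapping frames, broadcast over B, C, H, W.
weights = torch.linspace(1, 0, overlap, dtype=history.dtype).view(1, 1, -1, 1, 1)
blended = weights * history[:, :, -overlap:] + (1 - weights) * current[:, :, :overlap]
output = torch.cat([history[:, :, :-overlap], blended, current[:, :, overlap:]], dim=2)

print(output.shape)           # torch.Size([1, 3, 5, 2, 2]) -> 4 + 4 - 3 frames
print(output[0, 0, :, 0, 0])  # tensor([1.0, 1.0, 0.5, 0.0, 0.0]): history fades into current
```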
diffusers/src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Callable, Dict, List, Optional, Union import PIL.Image import torch from transformers import CLIPImageProcessor, CLIPTextModelWithProjection, CLIPTokenizer, CLIPVisionModelWithProjection from ...models import PriorTransformer, UNet2DConditionModel, VQModel from ...schedulers import DDPMScheduler, UnCLIPScheduler from ...utils import deprecate, logging, replace_example_docstring from ..pipeline_utils import DiffusionPipeline from .pipeline_kandinsky2_2 import KandinskyV22Pipeline from .pipeline_kandinsky2_2_img2img import KandinskyV22Img2ImgPipeline from .pipeline_kandinsky2_2_inpainting import KandinskyV22InpaintPipeline from .pipeline_kandinsky2_2_prior import KandinskyV22PriorPipeline logger = logging.get_logger(__name__) # pylint: disable=invalid-name TEXT2IMAGE_EXAMPLE_DOC_STRING = """ Examples: ```py from diffusers import AutoPipelineForText2Image import torch pipe = AutoPipelineForText2Image.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 ) pipe.enable_model_cpu_offload() prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" image = pipe(prompt=prompt, num_inference_steps=25).images[0] ``` """ IMAGE2IMAGE_EXAMPLE_DOC_STRING = """ Examples: ```py from diffusers import AutoPipelineForImage2Image import torch import requests from io import BytesIO from PIL import Image import os pipe = AutoPipelineForImage2Image.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 ) pipe.enable_model_cpu_offload() prompt = "A fantasy landscape, Cinematic lighting" negative_prompt = "low quality, bad quality" url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" response = requests.get(url) image = Image.open(BytesIO(response.content)).convert("RGB") image.thumbnail((768, 768)) image = pipe(prompt=prompt, image=original_image, num_inference_steps=25).images[0] ``` """ INPAINT_EXAMPLE_DOC_STRING = """ Examples: ```py from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image import torch import numpy as np pipe = AutoPipelineForInpainting.from_pretrained( "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 ) pipe.enable_model_cpu_offload() prompt = "A fantasy landscape, Cinematic lighting" negative_prompt = "low quality, bad quality" original_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" ) mask = np.zeros((768, 768), dtype=np.float32) # Let's mask out an area above the cat's head mask[:250, 250:-250] = 1 image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] ``` """ class KandinskyV22CombinedPipeline(DiffusionPipeline): """ Combined Pipeline for text-to-image generation using Kandinsky This model inherits from 
[`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: scheduler (Union[`DDIMScheduler`,`DDPMScheduler`]): A scheduler to be used in combination with `unet` to generate image latents. unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the image embedding. movq ([`VQModel`]): MoVQ Decoder to generate the image from the latents. prior_prior ([`PriorTransformer`]): The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder ([`CLIPVisionModelWithProjection`]): Frozen image-encoder. prior_text_encoder ([`CLIPTextModelWithProjection`]): Frozen text-encoder. prior_tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). prior_scheduler ([`UnCLIPScheduler`]): A scheduler to be used in combination with `prior` to generate image embedding. prior_image_processor ([`CLIPImageProcessor`]): A image_processor to be used to preprocess image from clip. """ model_cpu_offload_seq = "prior_text_encoder->prior_image_encoder->unet->movq" _load_connected_pipes = True _exclude_from_cpu_offload = ["prior_prior"] def __init__( self, unet: UNet2DConditionModel, scheduler: DDPMScheduler, movq: VQModel, prior_prior: PriorTransformer, prior_image_encoder: CLIPVisionModelWithProjection, prior_text_encoder: CLIPTextModelWithProjection, prior_tokenizer: CLIPTokenizer, prior_scheduler: UnCLIPScheduler, prior_image_processor: CLIPImageProcessor, ): super().__init__() self.register_modules( unet=unet, scheduler=scheduler, movq=movq, prior_prior=prior_prior, prior_image_encoder=prior_image_encoder, prior_text_encoder=prior_text_encoder, prior_tokenizer=prior_tokenizer, prior_scheduler=prior_scheduler, prior_image_processor=prior_image_processor, ) self.prior_pipe = KandinskyV22PriorPipeline( prior=prior_prior, image_encoder=prior_image_encoder, text_encoder=prior_text_encoder, tokenizer=prior_tokenizer, scheduler=prior_scheduler, image_processor=prior_image_processor, ) self.decoder_pipe = KandinskyV22Pipeline( unet=unet, scheduler=scheduler, movq=movq, ) def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None): self.decoder_pipe.enable_xformers_memory_efficient_attention(attention_op) def enable_sequential_cpu_offload(self, gpu_id: Optional[int] = None, device: Union[torch.device, str] = None): r""" Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. Note that offloading happens on a submodule basis. Memory savings are higher than with `enable_model_cpu_offload`, but performance is lower. 
""" self.prior_pipe.enable_sequential_cpu_offload(gpu_id=gpu_id, device=device) self.decoder_pipe.enable_sequential_cpu_offload(gpu_id=gpu_id, device=device) def progress_bar(self, iterable=None, total=None): self.prior_pipe.progress_bar(iterable=iterable, total=total) self.decoder_pipe.progress_bar(iterable=iterable, total=total) self.decoder_pipe.enable_model_cpu_offload() def set_progress_bar_config(self, **kwargs): self.prior_pipe.set_progress_bar_config(**kwargs) self.decoder_pipe.set_progress_bar_config(**kwargs) @torch.no_grad() @replace_example_docstring(TEXT2IMAGE_EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]], negative_prompt: Optional[Union[str, List[str]]] = None, num_inference_steps: int = 100, guidance_scale: float = 4.0, num_images_per_prompt: int = 1, height: int = 512, width: int = 512, prior_guidance_scale: float = 4.0, prior_num_inference_steps: int = 25, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, return_dict: bool = True, prior_callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, prior_callback_on_step_end_tensor_inputs: List[str] = ["latents"], callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], ): """ Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`): The prompt or prompts to guide the image generation. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. num_inference_steps (`int`, *optional*, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. height (`int`, *optional*, defaults to 512): The height in pixels of the generated image. width (`int`, *optional*, defaults to 512): The width in pixels of the generated image. prior_guidance_scale (`float`, *optional*, defaults to 4.0): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. prior_num_inference_steps (`int`, *optional*, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (`float`, *optional*, defaults to 4.0): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. 
generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied random `generator`. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` (`np.array`) or `"pt"` (`torch.Tensor`). return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. prior_callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference of the prior pipeline. The function is called with the following arguments: `prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. prior_callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `prior_callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your prior pipeline class. callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference of the decoder pipeline. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. 
Examples: Returns: [`~pipelines.ImagePipelineOutput`] or `tuple` """ prior_outputs = self.prior_pipe( prompt=prompt, negative_prompt=negative_prompt, num_images_per_prompt=num_images_per_prompt, num_inference_steps=prior_num_inference_steps, generator=generator, latents=latents, guidance_scale=prior_guidance_scale, output_type="pt", return_dict=False, callback_on_step_end=prior_callback_on_step_end, callback_on_step_end_tensor_inputs=prior_callback_on_step_end_tensor_inputs, ) image_embeds = prior_outputs[0] negative_image_embeds = prior_outputs[1] prompt = [prompt] if not isinstance(prompt, (list, tuple)) else prompt if len(prompt) < image_embeds.shape[0] and image_embeds.shape[0] % len(prompt) == 0: prompt = (image_embeds.shape[0] // len(prompt)) * prompt outputs = self.decoder_pipe( image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, width=width, height=height, num_inference_steps=num_inference_steps, generator=generator, guidance_scale=guidance_scale, output_type=output_type, callback=callback, callback_steps=callback_steps, return_dict=return_dict, callback_on_step_end=callback_on_step_end, callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs, ) self.maybe_free_model_hooks() return outputs class KandinskyV22Img2ImgCombinedPipeline(DiffusionPipeline): """ Combined Pipeline for image-to-image generation using Kandinsky This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: scheduler (Union[`DDIMScheduler`,`DDPMScheduler`]): A scheduler to be used in combination with `unet` to generate image latents. unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the image embedding. movq ([`VQModel`]): MoVQ Decoder to generate the image from the latents. prior_prior ([`PriorTransformer`]): The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder ([`CLIPVisionModelWithProjection`]): Frozen image-encoder. prior_text_encoder ([`CLIPTextModelWithProjection`]): Frozen text-encoder. prior_tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). prior_scheduler ([`UnCLIPScheduler`]): A scheduler to be used in combination with `prior` to generate image embedding. prior_image_processor ([`CLIPImageProcessor`]): A image_processor to be used to preprocess image from clip. 
""" model_cpu_offload_seq = "prior_text_encoder->prior_image_encoder->unet->movq" _load_connected_pipes = True _exclude_from_cpu_offload = ["prior_prior"] def __init__( self, unet: UNet2DConditionModel, scheduler: DDPMScheduler, movq: VQModel, prior_prior: PriorTransformer, prior_image_encoder: CLIPVisionModelWithProjection, prior_text_encoder: CLIPTextModelWithProjection, prior_tokenizer: CLIPTokenizer, prior_scheduler: UnCLIPScheduler, prior_image_processor: CLIPImageProcessor, ): super().__init__() self.register_modules( unet=unet, scheduler=scheduler, movq=movq, prior_prior=prior_prior, prior_image_encoder=prior_image_encoder, prior_text_encoder=prior_text_encoder, prior_tokenizer=prior_tokenizer, prior_scheduler=prior_scheduler, prior_image_processor=prior_image_processor, ) self.prior_pipe = KandinskyV22PriorPipeline( prior=prior_prior, image_encoder=prior_image_encoder, text_encoder=prior_text_encoder, tokenizer=prior_tokenizer, scheduler=prior_scheduler, image_processor=prior_image_processor, ) self.decoder_pipe = KandinskyV22Img2ImgPipeline( unet=unet, scheduler=scheduler, movq=movq, ) def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None): self.decoder_pipe.enable_xformers_memory_efficient_attention(attention_op) def enable_model_cpu_offload(self, gpu_id: Optional[int] = None, device: Union[torch.device, str] = None): r""" Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. """ self.prior_pipe.enable_model_cpu_offload(gpu_id=gpu_id, device=device) self.decoder_pipe.enable_model_cpu_offload(gpu_id=gpu_id, device=device) def enable_sequential_cpu_offload(self, gpu_id: Optional[int] = None, device: Union[torch.device, str] = None): r""" Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. Note that offloading happens on a submodule basis. Memory savings are higher than with `enable_model_cpu_offload`, but performance is lower. 
""" self.prior_pipe.enable_sequential_cpu_offload(gpu_id=gpu_id, device=device) self.decoder_pipe.enable_sequential_cpu_offload(gpu_id=gpu_id, device=device) def progress_bar(self, iterable=None, total=None): self.prior_pipe.progress_bar(iterable=iterable, total=total) self.decoder_pipe.progress_bar(iterable=iterable, total=total) self.decoder_pipe.enable_model_cpu_offload() def set_progress_bar_config(self, **kwargs): self.prior_pipe.set_progress_bar_config(**kwargs) self.decoder_pipe.set_progress_bar_config(**kwargs) @torch.no_grad() @replace_example_docstring(IMAGE2IMAGE_EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]], image: Union[torch.Tensor, PIL.Image.Image, List[torch.Tensor], List[PIL.Image.Image]], negative_prompt: Optional[Union[str, List[str]]] = None, num_inference_steps: int = 100, guidance_scale: float = 4.0, strength: float = 0.3, num_images_per_prompt: int = 1, height: int = 512, width: int = 512, prior_guidance_scale: float = 4.0, prior_num_inference_steps: int = 25, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", callback: Optional[Callable[[int, int, torch.Tensor], None]] = None, callback_steps: int = 1, return_dict: bool = True, prior_callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, prior_callback_on_step_end_tensor_inputs: List[str] = ["latents"], callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], ): """ Function invoked when calling the pipeline for generation. Args: prompt (`str` or `List[str]`): The prompt or prompts to guide the image generation. image (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`): `Image`, or tensor representing an image batch, that will be used as the starting point for the process. Can also accept image latents as `image`, if passing latents directly, it will not be encoded again. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. guidance_scale (`float`, *optional*, defaults to 4.0): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. strength (`float`, *optional*, defaults to 0.3): Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` will be used as a starting point, adding more noise to it the larger the `strength`. The number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. num_inference_steps (`int`, *optional*, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. 
height (`int`, *optional*, defaults to 512): The height in pixels of the generated image. width (`int`, *optional*, defaults to 512): The width in pixels of the generated image. prior_guidance_scale (`float`, *optional*, defaults to 4.0): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2. of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, usually at the expense of lower image quality. prior_num_inference_steps (`int`, *optional*, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied random `generator`. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` (`np.array`) or `"pt"` (`torch.Tensor`). callback (`Callable`, *optional*): A function that calls every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.Tensor)`. callback_steps (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. 
Examples: Returns: [`~pipelines.ImagePipelineOutput`] or `tuple` """ prior_outputs = self.prior_pipe( prompt=prompt, negative_prompt=negative_prompt, num_images_per_prompt=num_images_per_prompt, num_inference_steps=prior_num_inference_steps, generator=generator, latents=latents, guidance_scale=prior_guidance_scale, output_type="pt", return_dict=False, callback_on_step_end=prior_callback_on_step_end, callback_on_step_end_tensor_inputs=prior_callback_on_step_end_tensor_inputs, ) image_embeds = prior_outputs[0] negative_image_embeds = prior_outputs[1] prompt = [prompt] if not isinstance(prompt, (list, tuple)) else prompt image = [image] if isinstance(image, PIL.Image.Image) else image if len(prompt) < image_embeds.shape[0] and image_embeds.shape[0] % len(prompt) == 0: prompt = (image_embeds.shape[0] // len(prompt)) * prompt if ( isinstance(image, (list, tuple)) and len(image) < image_embeds.shape[0] and image_embeds.shape[0] % len(image) == 0 ): image = (image_embeds.shape[0] // len(image)) * image outputs = self.decoder_pipe( image=image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, width=width, height=height, strength=strength, num_inference_steps=num_inference_steps, generator=generator, guidance_scale=guidance_scale, output_type=output_type, callback=callback, callback_steps=callback_steps, return_dict=return_dict, callback_on_step_end=callback_on_step_end, callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs, ) self.maybe_free_model_hooks() return outputs class KandinskyV22InpaintCombinedPipeline(DiffusionPipeline): """ Combined Pipeline for inpainting generation using Kandinsky This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) Args: scheduler (Union[`DDIMScheduler`,`DDPMScheduler`]): A scheduler to be used in combination with `unet` to generate image latents. unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the image embedding. movq ([`VQModel`]): MoVQ Decoder to generate the image from the latents. prior_prior ([`PriorTransformer`]): The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder ([`CLIPVisionModelWithProjection`]): Frozen image-encoder. prior_text_encoder ([`CLIPTextModelWithProjection`]): Frozen text-encoder. prior_tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). prior_scheduler ([`UnCLIPScheduler`]): A scheduler to be used in combination with `prior` to generate image embedding. prior_image_processor ([`CLIPImageProcessor`]): A image_processor to be used to preprocess image from clip. 
""" model_cpu_offload_seq = "prior_text_encoder->prior_image_encoder->unet->movq" _load_connected_pipes = True _exclude_from_cpu_offload = ["prior_prior"] def __init__( self, unet: UNet2DConditionModel, scheduler: DDPMScheduler, movq: VQModel, prior_prior: PriorTransformer, prior_image_encoder: CLIPVisionModelWithProjection, prior_text_encoder: CLIPTextModelWithProjection, prior_tokenizer: CLIPTokenizer, prior_scheduler: UnCLIPScheduler, prior_image_processor: CLIPImageProcessor, ): super().__init__() self.register_modules( unet=unet, scheduler=scheduler, movq=movq, prior_prior=prior_prior, prior_image_encoder=prior_image_encoder, prior_text_encoder=prior_text_encoder, prior_tokenizer=prior_tokenizer, prior_scheduler=prior_scheduler, prior_image_processor=prior_image_processor, ) self.prior_pipe = KandinskyV22PriorPipeline( prior=prior_prior, image_encoder=prior_image_encoder, text_encoder=prior_text_encoder, tokenizer=prior_tokenizer, scheduler=prior_scheduler, image_processor=prior_image_processor, ) self.decoder_pipe = KandinskyV22InpaintPipeline( unet=unet, scheduler=scheduler, movq=movq, ) def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None): self.decoder_pipe.enable_xformers_memory_efficient_attention(attention_op) def enable_sequential_cpu_offload(self, gpu_id: Optional[int] = None, device: Union[torch.device, str] = None): r""" Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. Note that offloading happens on a submodule basis. Memory savings are higher than with `enable_model_cpu_offload`, but performance is lower. """ self.prior_pipe.enable_sequential_cpu_offload(gpu_id=gpu_id, device=device) self.decoder_pipe.enable_sequential_cpu_offload(gpu_id=gpu_id, device=device) def progress_bar(self, iterable=None, total=None): self.prior_pipe.progress_bar(iterable=iterable, total=total) self.decoder_pipe.progress_bar(iterable=iterable, total=total) self.decoder_pipe.enable_model_cpu_offload() def set_progress_bar_config(self, **kwargs): self.prior_pipe.set_progress_bar_config(**kwargs) self.decoder_pipe.set_progress_bar_config(**kwargs) @torch.no_grad() @replace_example_docstring(INPAINT_EXAMPLE_DOC_STRING) def __call__( self, prompt: Union[str, List[str]], image: Union[torch.Tensor, PIL.Image.Image, List[torch.Tensor], List[PIL.Image.Image]], mask_image: Union[torch.Tensor, PIL.Image.Image, List[torch.Tensor], List[PIL.Image.Image]], negative_prompt: Optional[Union[str, List[str]]] = None, num_inference_steps: int = 100, guidance_scale: float = 4.0, num_images_per_prompt: int = 1, height: int = 512, width: int = 512, prior_guidance_scale: float = 4.0, prior_num_inference_steps: int = 25, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, prior_callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, prior_callback_on_step_end_tensor_inputs: List[str] = ["latents"], callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], **kwargs, ): """ Function invoked when calling the pipeline for generation. 
Args: prompt (`str` or `List[str]`): The prompt or prompts to guide the image generation. image (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`): `Image`, or tensor representing an image batch, that will be used as the starting point for the process. Can also accept image latents as `image`; if passing latents directly, they will not be encoded again. mask_image (`np.array`): Tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`. negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). num_images_per_prompt (`int`, *optional*, defaults to 1): The number of images to generate per prompt. guidance_scale (`float`, *optional*, defaults to 4.0): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2 of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`, usually at the expense of lower image quality. num_inference_steps (`int`, *optional*, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. height (`int`, *optional*, defaults to 512): The height in pixels of the generated image. width (`int`, *optional*, defaults to 512): The width in pixels of the generated image. prior_guidance_scale (`float`, *optional*, defaults to 4.0): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2 of [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`, usually at the expense of lower image quality. prior_num_inference_steps (`int`, *optional*, defaults to 25): The number of denoising steps in the prior. More denoising steps usually lead to a higher quality image at the expense of slower inference. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random `generator`. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` (`np.array`) or `"pt"` (`torch.Tensor`). return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. 
prior_callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference. The function is called with the following arguments: `prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. prior_callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `prior_callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. Examples: Returns: [`~pipelines.ImagePipelineOutput`] or `tuple` """ prior_kwargs = {} if kwargs.get("prior_callback", None) is not None: prior_kwargs["callback"] = kwargs.pop("prior_callback") deprecate( "prior_callback", "1.0.0", "Passing `prior_callback` as an input argument to `__call__` is deprecated, consider use `prior_callback_on_step_end`", ) if kwargs.get("prior_callback_steps", None) is not None: deprecate( "prior_callback_steps", "1.0.0", "Passing `prior_callback_steps` as an input argument to `__call__` is deprecated, consider use `prior_callback_on_step_end`", ) prior_kwargs["callback_steps"] = kwargs.pop("prior_callback_steps") prior_outputs = self.prior_pipe( prompt=prompt, negative_prompt=negative_prompt, num_images_per_prompt=num_images_per_prompt, num_inference_steps=prior_num_inference_steps, generator=generator, latents=latents, guidance_scale=prior_guidance_scale, output_type="pt", return_dict=False, callback_on_step_end=prior_callback_on_step_end, callback_on_step_end_tensor_inputs=prior_callback_on_step_end_tensor_inputs, **prior_kwargs, ) image_embeds = prior_outputs[0] negative_image_embeds = prior_outputs[1] prompt = [prompt] if not isinstance(prompt, (list, tuple)) else prompt image = [image] if isinstance(image, PIL.Image.Image) else image mask_image = [mask_image] if isinstance(mask_image, PIL.Image.Image) else mask_image if len(prompt) < image_embeds.shape[0] and image_embeds.shape[0] % len(prompt) == 0: prompt = (image_embeds.shape[0] // len(prompt)) * prompt if ( isinstance(image, (list, tuple)) and len(image) < image_embeds.shape[0] and image_embeds.shape[0] % len(image) == 0 ): image = (image_embeds.shape[0] // len(image)) * image if ( isinstance(mask_image, (list, tuple)) and len(mask_image) < image_embeds.shape[0] and image_embeds.shape[0] % len(mask_image) == 0 ): mask_image = (image_embeds.shape[0] // len(mask_image)) * mask_image outputs = self.decoder_pipe( image=image, mask_image=mask_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, width=width, height=height, num_inference_steps=num_inference_steps, generator=generator, guidance_scale=guidance_scale, output_type=output_type, return_dict=return_dict, 
callback_on_step_end=callback_on_step_end, callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs, **kwargs, ) self.maybe_free_model_hooks() return outputs
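# --- Editor's note: minimal usage sketch, not part of the original module. ---
# The combined inpaint pipeline above chains the prior (text -> CLIP image embedding)
# and the decoder (embedding + masked image -> image) behind a single `__call__`.
# The loading route, checkpoint id, and image URL below are illustrative assumptions;
# any Kandinsky 2.2 decoder-inpaint checkpoint with connected prior weights should work.
if __name__ == "__main__":
    import numpy as np
    import torch

    from diffusers import AutoPipelineForInpainting
    from diffusers.utils import load_image

    # AutoPipelineForInpainting resolves this checkpoint to the combined pipeline
    # defined above (assumed Hub repo id).
    pipe = AutoPipelineForInpainting.from_pretrained(
        "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
    )
    pipe.enable_model_cpu_offload()

    init_image = load_image(
        "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
    )  # assumed example image
    # White (1.0) pixels in the mask are repainted, black (0.0) pixels are preserved.
    mask = np.zeros((768, 768), dtype=np.float32)
    mask[:250, 250:-250] = 1

    result = pipe(
        prompt="a hat",
        image=init_image,
        mask_image=mask,
        height=768,
        width=768,
        num_inference_steps=50,
        prior_guidance_scale=4.0,
        guidance_scale=4.0,
    ).images[0]
    result.save("kandinsky_inpaint.png")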
diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py
# Copyright 2025 ChatGLM3-6B Model Team, Kwai-Kolors Team and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os import re from typing import Dict, List, Optional, Union from sentencepiece import SentencePieceProcessor from transformers import PreTrainedTokenizer from transformers.tokenization_utils_base import BatchEncoding, EncodedInput from transformers.utils import PaddingStrategy class SPTokenizer: def __init__(self, model_path: str): # reload tokenizer assert os.path.isfile(model_path), model_path self.sp_model = SentencePieceProcessor(model_file=model_path) # BOS / EOS token IDs self.n_words: int = self.sp_model.vocab_size() self.bos_id: int = self.sp_model.bos_id() self.eos_id: int = self.sp_model.eos_id() self.pad_id: int = self.sp_model.unk_id() assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens self.special_tokens = {} self.index_special_tokens = {} for token in special_tokens: self.special_tokens[token] = self.n_words self.index_special_tokens[self.n_words] = token self.n_words += 1 self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) def tokenize(self, s: str, encode_special_tokens=False): if encode_special_tokens: last_index = 0 t = [] for match in re.finditer(self.role_special_token_expression, s): if last_index < match.start(): t.extend(self.sp_model.EncodeAsPieces(s[last_index : match.start()])) t.append(s[match.start() : match.end()]) last_index = match.end() if last_index < len(s): t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) return t else: return self.sp_model.EncodeAsPieces(s) def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: assert isinstance(s, str) t = self.sp_model.encode(s) if bos: t = [self.bos_id] + t if eos: t = t + [self.eos_id] return t def decode(self, t: List[int]) -> str: text, buffer = "", [] for token in t: if token in self.index_special_tokens: if buffer: text += self.sp_model.decode(buffer) buffer = [] text += self.index_special_tokens[token] else: buffer.append(token) if buffer: text += self.sp_model.decode(buffer) return text def decode_tokens(self, tokens: List[str]) -> str: text = self.sp_model.DecodePieces(tokens) return text def convert_token_to_id(self, token): """Converts a token (str) in an id using the vocab.""" if token in self.special_tokens: return self.special_tokens[token] return self.sp_model.PieceToId(token) def convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" if index in self.index_special_tokens: return self.index_special_tokens[index] if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0: return "" return self.sp_model.IdToPiece(index) class ChatGLMTokenizer(PreTrainedTokenizer): vocab_files_names = {"vocab_file": "tokenizer.model"} model_input_names = 
["input_ids", "attention_mask", "position_ids"] def __init__( self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, **kwargs, ): self.name = "GLMTokenizer" self.vocab_file = vocab_file self.tokenizer = SPTokenizer(vocab_file) self.special_tokens = { "<bos>": self.tokenizer.bos_id, "<eos>": self.tokenizer.eos_id, "<pad>": self.tokenizer.pad_id, } self.encode_special_tokens = encode_special_tokens super().__init__( padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, encode_special_tokens=encode_special_tokens, **kwargs, ) def get_command(self, token): if token in self.special_tokens: return self.special_tokens[token] assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" return self.tokenizer.special_tokens[token] @property def unk_token(self) -> str: return "<unk>" @unk_token.setter def unk_token(self, value: str): self._unk_token = value @property def pad_token(self) -> str: return "<unk>" @pad_token.setter def pad_token(self, value: str): self._pad_token = value @property def pad_token_id(self): return self.get_command("<pad>") @property def eos_token(self) -> str: return "</s>" @eos_token.setter def eos_token(self, value: str): self._eos_token = value @property def eos_token_id(self): return self.get_command("<eos>") @property def vocab_size(self): return self.tokenizer.n_words def get_vocab(self): """Returns vocab as a dict""" vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} vocab.update(self.added_tokens_encoder) return vocab def _tokenize(self, text, **kwargs): return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) def _convert_token_to_id(self, token): """Converts a token (str) in an id using the vocab.""" return self.tokenizer.convert_token_to_id(token) def _convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" return self.tokenizer.convert_id_to_token(index) def convert_tokens_to_string(self, tokens: List[str]) -> str: return self.tokenizer.decode_tokens(tokens) def save_vocabulary(self, save_directory, filename_prefix=None): """ Save the vocabulary and special tokens file to a directory. Args: save_directory (`str`): The directory in which to save the vocabulary. filename_prefix (`str`, *optional*): An optional prefix to add to the named of the saved files. Returns: `Tuple(str)`: Paths to the files saved. 
""" if os.path.isdir(save_directory): vocab_file = os.path.join(save_directory, self.vocab_files_names["vocab_file"]) else: vocab_file = save_directory with open(self.vocab_file, "rb") as fin: proto_str = fin.read() with open(vocab_file, "wb") as writer: writer.write(proto_str) return (vocab_file,) def get_prefix_tokens(self): prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] return prefix_tokens def build_single_message(self, role, metadata, message): assert role in ["system", "user", "assistant", "observation"], role role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") message_tokens = self.tokenizer.encode(message) tokens = role_tokens + message_tokens return tokens def build_chat_input(self, query, history=None, role="user"): if history is None: history = [] input_ids = [] for item in history: content = item["content"] if item["role"] == "system" and "tools" in item: content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) input_ids.extend(self.build_single_message(role, "", query)) input_ids.extend([self.get_command("<|assistant|>")]) return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: """ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format: - single sequence: `[CLS] X [SEP]` - pair of sequences: `[CLS] A [SEP] B [SEP]` Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. """ prefix_tokens = self.get_prefix_tokens() token_ids_0 = prefix_tokens + token_ids_0 if token_ids_1 is not None: token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] return token_ids_0 def _pad( self, encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], max_length: Optional[int] = None, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, padding_side: Optional[bool] = None, ) -> dict: """ Pad encoded inputs (on left/right and up to predefined length or max length in the batch) Args: encoded_inputs: Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). max_length: maximum length of the returned list and optionally padding length (see below). Will truncate by taking into account the special tokens. padding_strategy: PaddingStrategy to use for padding. - PaddingStrategy.LONGEST Pad to the longest sequence in the batch - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) - PaddingStrategy.DO_NOT_PAD: Do not pad The tokenizer padding sides are defined in self.padding_side: - 'left': pads on the left of the sequences - 'right': pads on the right of the sequences pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability `>= 7.5` (Volta). padding_side (`str`, *optional*): The side on which the model should have padding applied. 
Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name. return_attention_mask: (optional) Set to False to avoid returning attention mask (default: set to model specifics) """ # Load from model defaults assert self.padding_side == "left" required_input = encoded_inputs[self.model_input_names[0]] seq_length = len(required_input) if padding_strategy == PaddingStrategy.LONGEST: max_length = len(required_input) if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length # Initialize attention mask if not present. if "attention_mask" not in encoded_inputs: encoded_inputs["attention_mask"] = [1] * seq_length if "position_ids" not in encoded_inputs: encoded_inputs["position_ids"] = list(range(seq_length)) if needs_to_be_padded: difference = max_length - len(required_input) if "attention_mask" in encoded_inputs: encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] if "position_ids" in encoded_inputs: encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input return encoded_inputs
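# --- Editor's note: minimal usage sketch, not part of the original module. ---
# ChatGLMTokenizer wraps a SentencePiece model and adds the GLM role/special tokens;
# Kolors uses it to produce the `input_ids`, `attention_mask` and `position_ids`
# declared in `model_input_names`. The "tokenizer.model" path below is an assumption:
# point it at the SentencePiece file shipped with a ChatGLM/Kolors checkpoint.
if __name__ == "__main__":
    tokenizer = ChatGLMTokenizer(vocab_file="tokenizer.model")  # assumed local file

    # Plain text encoding: left-padded to `max_length`, returning tensors.
    batch = tokenizer(
        ["a photo of an astronaut riding a horse"],
        padding="max_length",
        max_length=32,
        truncation=True,
        return_tensors="pt",
    )
    print(batch["input_ids"].shape, batch["attention_mask"].shape)

    # Chat-style encoding: prepends the [gMASK]/sop prefix tokens and role markers.
    chat = tokenizer.build_chat_input("Describe the style of this image.", history=[], role="user")
    print(chat["input_ids"].shape)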
diffusers/src/diffusers/pipelines/kolors/tokenizer.py
# Copyright 2025 Lightricks and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect from dataclasses import dataclass from typing import Any, Callable, Dict, List, Optional, Tuple, Union import PIL.Image import torch from transformers import T5EncoderModel, T5TokenizerFast from ...callbacks import MultiPipelineCallbacks, PipelineCallback from ...image_processor import PipelineImageInput from ...loaders import FromSingleFileMixin, LTXVideoLoraLoaderMixin from ...models.autoencoders import AutoencoderKLLTXVideo from ...models.transformers import LTXVideoTransformer3DModel from ...schedulers import FlowMatchEulerDiscreteScheduler from ...utils import is_torch_xla_available, logging, replace_example_docstring from ...utils.torch_utils import randn_tensor from ...video_processor import VideoProcessor from ..pipeline_utils import DiffusionPipeline from .pipeline_output import LTXPipelineOutput if is_torch_xla_available(): import torch_xla.core.xla_model as xm XLA_AVAILABLE = True else: XLA_AVAILABLE = False logger = logging.get_logger(__name__) # pylint: disable=invalid-name EXAMPLE_DOC_STRING = """ Examples: ```py >>> import torch >>> from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXConditionPipeline, LTXVideoCondition >>> from diffusers.utils import export_to_video, load_video, load_image >>> pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16) >>> pipe.to("cuda") >>> # Load input image and video >>> video = load_video( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4" ... ) >>> image = load_image( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input.jpg" ... ) >>> # Create conditioning objects >>> condition1 = LTXVideoCondition( ... image=image, ... frame_index=0, ... ) >>> condition2 = LTXVideoCondition( ... video=video, ... frame_index=80, ... ) >>> prompt = "The video depicts a long, straight highway stretching into the distance, flanked by metal guardrails. The road is divided into multiple lanes, with a few vehicles visible in the far distance. The surrounding landscape features dry, grassy fields on one side and rolling hills on the other. The sky is mostly clear with a few scattered clouds, suggesting a bright, sunny day. And then the camera switch to a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region." 
>>> negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted" >>> # Generate video >>> generator = torch.Generator("cuda").manual_seed(0) >>> # Text-only conditioning is also supported without the need to pass `conditions` >>> video = pipe( ... conditions=[condition1, condition2], ... prompt=prompt, ... negative_prompt=negative_prompt, ... width=768, ... height=512, ... num_frames=161, ... num_inference_steps=40, ... generator=generator, ... ).frames[0] >>> export_to_video(video, "output.mp4", fps=24) ``` """ @dataclass class LTXVideoCondition: """ Defines a single frame-conditioning item for LTX Video - a single frame or a sequence of frames. Attributes: image (`PIL.Image.Image`): The image to condition the video on. video (`List[PIL.Image.Image]`): The video to condition the video on. frame_index (`int`): The frame index at which the image or video will conditionally effect the video generation. strength (`float`, defaults to `1.0`): The strength of the conditioning effect. A value of `1.0` means the conditioning effect is fully applied. """ image: Optional[PIL.Image.Image] = None video: Optional[List[PIL.Image.Image]] = None frame_index: int = 0 strength: float = 1.0 # from LTX-Video/ltx_video/schedulers/rf.py def linear_quadratic_schedule(num_steps, threshold_noise=0.025, linear_steps=None): if linear_steps is None: linear_steps = num_steps // 2 if num_steps < 2: return torch.tensor([1.0]) linear_sigma_schedule = [i * threshold_noise / linear_steps for i in range(linear_steps)] threshold_noise_step_diff = linear_steps - threshold_noise * num_steps quadratic_steps = num_steps - linear_steps quadratic_coef = threshold_noise_step_diff / (linear_steps * quadratic_steps**2) linear_coef = threshold_noise / linear_steps - 2 * threshold_noise_step_diff / (quadratic_steps**2) const = quadratic_coef * (linear_steps**2) quadratic_sigma_schedule = [ quadratic_coef * (i**2) + linear_coef * i + const for i in range(linear_steps, num_steps) ] sigma_schedule = linear_sigma_schedule + quadratic_sigma_schedule + [1.0] sigma_schedule = [1.0 - x for x in sigma_schedule] return torch.tensor(sigma_schedule[:-1]) # Copied from diffusers.pipelines.flux.pipeline_flux.calculate_shift def calculate_shift( image_seq_len, base_seq_len: int = 256, max_seq_len: int = 4096, base_shift: float = 0.5, max_shift: float = 1.15, ): m = (max_shift - base_shift) / (max_seq_len - base_seq_len) b = base_shift - m * base_seq_len mu = image_seq_len * m + b return mu # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps def retrieve_timesteps( scheduler, num_inference_steps: Optional[int] = None, device: Optional[Union[str, torch.device]] = None, timesteps: Optional[List[int]] = None, sigmas: Optional[List[float]] = None, **kwargs, ): r""" Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`. Args: scheduler (`SchedulerMixin`): The scheduler to get timesteps from. num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps` must be `None`. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. timesteps (`List[int]`, *optional*): Custom timesteps used to override the timestep spacing strategy of the scheduler. 
If `timesteps` is passed, `num_inference_steps` and `sigmas` must be `None`. sigmas (`List[float]`, *optional*): Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed, `num_inference_steps` and `timesteps` must be `None`. Returns: `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the second element is the number of inference steps. """ if timesteps is not None and sigmas is not None: raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values") if timesteps is not None: accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accepts_timesteps: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" timestep schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) elif sigmas is not None: accept_sigmas = "sigmas" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accept_sigmas: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" sigmas schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) else: scheduler.set_timesteps(num_inference_steps, device=device, **kwargs) timesteps = scheduler.timesteps return timesteps, num_inference_steps # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.retrieve_latents def retrieve_latents( encoder_output: torch.Tensor, generator: Optional[torch.Generator] = None, sample_mode: str = "sample" ): if hasattr(encoder_output, "latent_dist") and sample_mode == "sample": return encoder_output.latent_dist.sample(generator) elif hasattr(encoder_output, "latent_dist") and sample_mode == "argmax": return encoder_output.latent_dist.mode() elif hasattr(encoder_output, "latents"): return encoder_output.latents else: raise AttributeError("Could not access latents of provided encoder_output") # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0): r""" Rescales `noise_cfg` tensor based on `guidance_rescale` to improve image quality and fix overexposure. Based on Section 3.4 from [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891). Args: noise_cfg (`torch.Tensor`): The predicted noise tensor for the guided diffusion process. noise_pred_text (`torch.Tensor`): The predicted noise tensor for the text-guided diffusion process. guidance_rescale (`float`, *optional*, defaults to 0.0): A rescale factor applied to the noise predictions. Returns: noise_cfg (`torch.Tensor`): The rescaled noise prediction tensor. 
""" std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True) std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True) # rescale the results from guidance (fixes overexposure) noise_pred_rescaled = noise_cfg * (std_text / std_cfg) # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg return noise_cfg class LTXConditionPipeline(DiffusionPipeline, FromSingleFileMixin, LTXVideoLoraLoaderMixin): r""" Pipeline for text/image/video-to-video generation. Reference: https://github.com/Lightricks/LTX-Video Args: transformer ([`LTXVideoTransformer3DModel`]): Conditional Transformer architecture to denoise the encoded video latents. scheduler ([`FlowMatchEulerDiscreteScheduler`]): A scheduler to be used in combination with `transformer` to denoise the encoded image latents. vae ([`AutoencoderKLLTXVideo`]): Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder ([`T5EncoderModel`]): [T5](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5EncoderModel), specifically the [google/t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl) variant. tokenizer (`CLIPTokenizer`): Tokenizer of class [CLIPTokenizer](https://huggingface.co/docs/transformers/en/model_doc/clip#transformers.CLIPTokenizer). tokenizer (`T5TokenizerFast`): Second Tokenizer of class [T5TokenizerFast](https://huggingface.co/docs/transformers/en/model_doc/t5#transformers.T5TokenizerFast). """ model_cpu_offload_seq = "text_encoder->transformer->vae" _optional_components = [] _callback_tensor_inputs = ["latents", "prompt_embeds", "negative_prompt_embeds"] def __init__( self, scheduler: FlowMatchEulerDiscreteScheduler, vae: AutoencoderKLLTXVideo, text_encoder: T5EncoderModel, tokenizer: T5TokenizerFast, transformer: LTXVideoTransformer3DModel, ): super().__init__() self.register_modules( vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, transformer=transformer, scheduler=scheduler, ) self.vae_spatial_compression_ratio = ( self.vae.spatial_compression_ratio if getattr(self, "vae", None) is not None else 32 ) self.vae_temporal_compression_ratio = ( self.vae.temporal_compression_ratio if getattr(self, "vae", None) is not None else 8 ) self.transformer_spatial_patch_size = ( self.transformer.config.patch_size if getattr(self, "transformer", None) is not None else 1 ) self.transformer_temporal_patch_size = ( self.transformer.config.patch_size_t if getattr(self, "transformer") is not None else 1 ) self.video_processor = VideoProcessor(vae_scale_factor=self.vae_spatial_compression_ratio) self.tokenizer_max_length = ( self.tokenizer.model_max_length if getattr(self, "tokenizer", None) is not None else 128 ) self.default_height = 512 self.default_width = 704 self.default_frames = 121 def _get_t5_prompt_embeds( self, prompt: Union[str, List[str]] = None, num_videos_per_prompt: int = 1, max_sequence_length: int = 256, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, ): device = device or self._execution_device dtype = dtype or self.text_encoder.dtype prompt = [prompt] if isinstance(prompt, str) else prompt batch_size = len(prompt) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=max_sequence_length, truncation=True, add_special_tokens=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids prompt_attention_mask = 
text_inputs.attention_mask prompt_attention_mask = prompt_attention_mask.bool().to(device) untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_sequence_length - 1 : -1]) logger.warning( "The following part of your input was truncated because `max_sequence_length` is set to " f" {max_sequence_length} tokens: {removed_text}" ) prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=prompt_attention_mask)[0] prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) # duplicate text embeddings for each generation per prompt, using mps friendly method _, seq_len, _ = prompt_embeds.shape prompt_embeds = prompt_embeds.repeat(1, num_videos_per_prompt, 1) prompt_embeds = prompt_embeds.view(batch_size * num_videos_per_prompt, seq_len, -1) prompt_attention_mask = prompt_attention_mask.view(batch_size, -1) prompt_attention_mask = prompt_attention_mask.repeat(num_videos_per_prompt, 1) return prompt_embeds, prompt_attention_mask # Copied from diffusers.pipelines.mochi.pipeline_mochi.MochiPipeline.encode_prompt def encode_prompt( self, prompt: Union[str, List[str]], negative_prompt: Optional[Union[str, List[str]]] = None, do_classifier_free_guidance: bool = True, num_videos_per_prompt: int = 1, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, prompt_attention_mask: Optional[torch.Tensor] = None, negative_prompt_attention_mask: Optional[torch.Tensor] = None, max_sequence_length: int = 256, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, ): r""" Encodes the prompt into text encoder hidden states. Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): Whether to use classifier free guidance or not. num_videos_per_prompt (`int`, *optional*, defaults to 1): Number of videos that should be generated per prompt. torch device to place the resulting embeddings on prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. 
device: (`torch.device`, *optional*): torch device dtype: (`torch.dtype`, *optional*): torch dtype """ device = device or self._execution_device prompt = [prompt] if isinstance(prompt, str) else prompt if prompt is not None: batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: prompt_embeds, prompt_attention_mask = self._get_t5_prompt_embeds( prompt=prompt, num_videos_per_prompt=num_videos_per_prompt, max_sequence_length=max_sequence_length, device=device, dtype=dtype, ) if do_classifier_free_guidance and negative_prompt_embeds is None: negative_prompt = negative_prompt or "" negative_prompt = batch_size * [negative_prompt] if isinstance(negative_prompt, str) else negative_prompt if prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." ) elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) negative_prompt_embeds, negative_prompt_attention_mask = self._get_t5_prompt_embeds( prompt=negative_prompt, num_videos_per_prompt=num_videos_per_prompt, max_sequence_length=max_sequence_length, device=device, dtype=dtype, ) return prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask def check_inputs( self, prompt, conditions, image, video, frame_index, strength, denoise_strength, height, width, callback_on_step_end_tensor_inputs=None, prompt_embeds=None, negative_prompt_embeds=None, prompt_attention_mask=None, negative_prompt_attention_mask=None, ): if height % 32 != 0 or width % 32 != 0: raise ValueError(f"`height` and `width` have to be divisible by 32 but are {height} and {width}.") if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if prompt_embeds is not None and prompt_attention_mask is None: raise ValueError("Must provide `prompt_attention_mask` when specifying `prompt_embeds`.") if negative_prompt_embeds is not None and negative_prompt_attention_mask is None: raise ValueError("Must provide `negative_prompt_attention_mask` when specifying `negative_prompt_embeds`.") if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." 
) if prompt_attention_mask.shape != negative_prompt_attention_mask.shape: raise ValueError( "`prompt_attention_mask` and `negative_prompt_attention_mask` must have the same shape when passed directly, but" f" got: `prompt_attention_mask` {prompt_attention_mask.shape} != `negative_prompt_attention_mask`" f" {negative_prompt_attention_mask.shape}." ) if conditions is not None and (image is not None or video is not None): raise ValueError("If `conditions` is provided, `image` and `video` must not be provided.") if conditions is None: if isinstance(image, list) and isinstance(frame_index, list) and len(image) != len(frame_index): raise ValueError( "If `conditions` is not provided, `image` and `frame_index` must be of the same length." ) elif isinstance(image, list) and isinstance(strength, list) and len(image) != len(strength): raise ValueError("If `conditions` is not provided, `image` and `strength` must be of the same length.") elif isinstance(video, list) and isinstance(frame_index, list) and len(video) != len(frame_index): raise ValueError( "If `conditions` is not provided, `video` and `frame_index` must be of the same length." ) elif isinstance(video, list) and isinstance(strength, list) and len(video) != len(strength): raise ValueError("If `conditions` is not provided, `video` and `strength` must be of the same length.") if denoise_strength < 0 or denoise_strength > 1: raise ValueError(f"The value of strength should in [0.0, 1.0] but is {denoise_strength}") @staticmethod def _prepare_video_ids( batch_size: int, num_frames: int, height: int, width: int, patch_size: int = 1, patch_size_t: int = 1, device: torch.device = None, ) -> torch.Tensor: latent_sample_coords = torch.meshgrid( torch.arange(0, num_frames, patch_size_t, device=device), torch.arange(0, height, patch_size, device=device), torch.arange(0, width, patch_size, device=device), indexing="ij", ) latent_sample_coords = torch.stack(latent_sample_coords, dim=0) latent_coords = latent_sample_coords.unsqueeze(0).repeat(batch_size, 1, 1, 1, 1) latent_coords = latent_coords.reshape(batch_size, -1, num_frames * height * width) return latent_coords @staticmethod def _scale_video_ids( video_ids: torch.Tensor, scale_factor: int = 32, scale_factor_t: int = 8, frame_index: int = 0, device: torch.device = None, ) -> torch.Tensor: scaled_latent_coords = ( video_ids * torch.tensor([scale_factor_t, scale_factor, scale_factor], device=video_ids.device)[None, :, None] ) scaled_latent_coords[:, 0] = (scaled_latent_coords[:, 0] + 1 - scale_factor_t).clamp(min=0) scaled_latent_coords[:, 0] += frame_index return scaled_latent_coords @staticmethod # Copied from diffusers.pipelines.ltx.pipeline_ltx.LTXPipeline._pack_latents def _pack_latents(latents: torch.Tensor, patch_size: int = 1, patch_size_t: int = 1) -> torch.Tensor: # Unpacked latents of shape are [B, C, F, H, W] are patched into tokens of shape [B, C, F // p_t, p_t, H // p, p, W // p, p]. # The patch dimensions are then permuted and collapsed into the channel dimension of shape: # [B, F // p_t * H // p * W // p, C * p_t * p * p] (an ndim=3 tensor). 
# dim=0 is the batch size, dim=1 is the effective video sequence length, dim=2 is the effective number of input features batch_size, num_channels, num_frames, height, width = latents.shape post_patch_num_frames = num_frames // patch_size_t post_patch_height = height // patch_size post_patch_width = width // patch_size latents = latents.reshape( batch_size, -1, post_patch_num_frames, patch_size_t, post_patch_height, patch_size, post_patch_width, patch_size, ) latents = latents.permute(0, 2, 4, 6, 1, 3, 5, 7).flatten(4, 7).flatten(1, 3) return latents @staticmethod # Copied from diffusers.pipelines.ltx.pipeline_ltx.LTXPipeline._unpack_latents def _unpack_latents( latents: torch.Tensor, num_frames: int, height: int, width: int, patch_size: int = 1, patch_size_t: int = 1 ) -> torch.Tensor: # Packed latents of shape [B, S, D] (S is the effective video sequence length, D is the effective feature dimensions) # are unpacked and reshaped into a video tensor of shape [B, C, F, H, W]. This is the inverse operation of # what happens in the `_pack_latents` method. batch_size = latents.size(0) latents = latents.reshape(batch_size, num_frames, height, width, -1, patch_size_t, patch_size, patch_size) latents = latents.permute(0, 4, 1, 5, 2, 6, 3, 7).flatten(6, 7).flatten(4, 5).flatten(2, 3) return latents @staticmethod # Copied from diffusers.pipelines.ltx.pipeline_ltx.LTXPipeline._normalize_latents def _normalize_latents( latents: torch.Tensor, latents_mean: torch.Tensor, latents_std: torch.Tensor, scaling_factor: float = 1.0 ) -> torch.Tensor: # Normalize latents across the channel dimension [B, C, F, H, W] latents_mean = latents_mean.view(1, -1, 1, 1, 1).to(latents.device, latents.dtype) latents_std = latents_std.view(1, -1, 1, 1, 1).to(latents.device, latents.dtype) latents = (latents - latents_mean) * scaling_factor / latents_std return latents @staticmethod # Copied from diffusers.pipelines.ltx.pipeline_ltx.LTXPipeline._denormalize_latents def _denormalize_latents( latents: torch.Tensor, latents_mean: torch.Tensor, latents_std: torch.Tensor, scaling_factor: float = 1.0 ) -> torch.Tensor: # Denormalize latents across the channel dimension [B, C, F, H, W] latents_mean = latents_mean.view(1, -1, 1, 1, 1).to(latents.device, latents.dtype) latents_std = latents_std.view(1, -1, 1, 1, 1).to(latents.device, latents.dtype) latents = latents * latents_std / scaling_factor + latents_mean return latents def trim_conditioning_sequence(self, start_frame: int, sequence_num_frames: int, target_num_frames: int): """ Trim a conditioning sequence to the allowed number of frames. Args: start_frame (int): The target frame number of the first frame in the sequence. sequence_num_frames (int): The number of frames in the sequence. target_num_frames (int): The target number of frames in the generated video. Returns: int: updated sequence length """ scale_factor = self.vae_temporal_compression_ratio num_frames = min(sequence_num_frames, target_num_frames - start_frame) # Trim down to a multiple of temporal_scale_factor frames plus 1 num_frames = (num_frames - 1) // scale_factor * scale_factor + 1 return num_frames @staticmethod def add_noise_to_image_conditioning_latents( t: float, init_latents: torch.Tensor, latents: torch.Tensor, noise_scale: float, conditioning_mask: torch.Tensor, generator, eps=1e-6, ): """ Add timestep-dependent noise to the hard-conditioning latents. This helps with motion continuity, especially when conditioned on a single frame. 
""" noise = randn_tensor( latents.shape, generator=generator, device=latents.device, dtype=latents.dtype, ) # Add noise only to hard-conditioning latents (conditioning_mask = 1.0) need_to_noise = (conditioning_mask > 1.0 - eps).unsqueeze(-1) noised_latents = init_latents + noise_scale * noise * (t**2) latents = torch.where(need_to_noise, noised_latents, latents) return latents def prepare_latents( self, conditions: Optional[List[torch.Tensor]] = None, condition_strength: Optional[List[float]] = None, condition_frame_index: Optional[List[int]] = None, batch_size: int = 1, num_channels_latents: int = 128, height: int = 512, width: int = 704, num_frames: int = 161, num_prefix_latent_frames: int = 2, sigma: Optional[torch.Tensor] = None, latents: Optional[torch.Tensor] = None, generator: Optional[torch.Generator] = None, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, int]: num_latent_frames = (num_frames - 1) // self.vae_temporal_compression_ratio + 1 latent_height = height // self.vae_spatial_compression_ratio latent_width = width // self.vae_spatial_compression_ratio shape = (batch_size, num_channels_latents, num_latent_frames, latent_height, latent_width) noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) if latents is not None and sigma is not None: if latents.shape != shape: raise ValueError( f"Latents shape {latents.shape} does not match expected shape {shape}. Please check the input." ) latents = latents.to(device=device, dtype=dtype) sigma = sigma.to(device=device, dtype=dtype) latents = sigma * noise + (1 - sigma) * latents else: latents = noise if len(conditions) > 0: condition_latent_frames_mask = torch.zeros( (batch_size, num_latent_frames), device=device, dtype=torch.float32 ) extra_conditioning_latents = [] extra_conditioning_video_ids = [] extra_conditioning_mask = [] extra_conditioning_num_latents = 0 for data, strength, frame_index in zip(conditions, condition_strength, condition_frame_index): condition_latents = retrieve_latents(self.vae.encode(data), generator=generator) condition_latents = self._normalize_latents( condition_latents, self.vae.latents_mean, self.vae.latents_std ).to(device, dtype=dtype) num_data_frames = data.size(2) num_cond_frames = condition_latents.size(2) if frame_index == 0: latents[:, :, :num_cond_frames] = torch.lerp( latents[:, :, :num_cond_frames], condition_latents, strength ) condition_latent_frames_mask[:, :num_cond_frames] = strength else: if num_data_frames > 1: if num_cond_frames < num_prefix_latent_frames: raise ValueError( f"Number of latent frames must be at least {num_prefix_latent_frames} but got {num_data_frames}." 
) if num_cond_frames > num_prefix_latent_frames: start_frame = frame_index // self.vae_temporal_compression_ratio + num_prefix_latent_frames end_frame = start_frame + num_cond_frames - num_prefix_latent_frames latents[:, :, start_frame:end_frame] = torch.lerp( latents[:, :, start_frame:end_frame], condition_latents[:, :, num_prefix_latent_frames:], strength, ) condition_latent_frames_mask[:, start_frame:end_frame] = strength condition_latents = condition_latents[:, :, :num_prefix_latent_frames] noise = randn_tensor(condition_latents.shape, generator=generator, device=device, dtype=dtype) condition_latents = torch.lerp(noise, condition_latents, strength) condition_video_ids = self._prepare_video_ids( batch_size, condition_latents.size(2), latent_height, latent_width, patch_size=self.transformer_spatial_patch_size, patch_size_t=self.transformer_temporal_patch_size, device=device, ) condition_video_ids = self._scale_video_ids( condition_video_ids, scale_factor=self.vae_spatial_compression_ratio, scale_factor_t=self.vae_temporal_compression_ratio, frame_index=frame_index, device=device, ) condition_latents = self._pack_latents( condition_latents, self.transformer_spatial_patch_size, self.transformer_temporal_patch_size, ) condition_conditioning_mask = torch.full( condition_latents.shape[:2], strength, device=device, dtype=dtype ) extra_conditioning_latents.append(condition_latents) extra_conditioning_video_ids.append(condition_video_ids) extra_conditioning_mask.append(condition_conditioning_mask) extra_conditioning_num_latents += condition_latents.size(1) video_ids = self._prepare_video_ids( batch_size, num_latent_frames, latent_height, latent_width, patch_size_t=self.transformer_temporal_patch_size, patch_size=self.transformer_spatial_patch_size, device=device, ) if len(conditions) > 0: conditioning_mask = condition_latent_frames_mask.gather(1, video_ids[:, 0]) else: conditioning_mask, extra_conditioning_num_latents = None, 0 video_ids = self._scale_video_ids( video_ids, scale_factor=self.vae_spatial_compression_ratio, scale_factor_t=self.vae_temporal_compression_ratio, frame_index=0, device=device, ) latents = self._pack_latents( latents, self.transformer_spatial_patch_size, self.transformer_temporal_patch_size ) if len(conditions) > 0 and len(extra_conditioning_latents) > 0: latents = torch.cat([*extra_conditioning_latents, latents], dim=1) video_ids = torch.cat([*extra_conditioning_video_ids, video_ids], dim=2) conditioning_mask = torch.cat([*extra_conditioning_mask, conditioning_mask], dim=1) return latents, conditioning_mask, video_ids, extra_conditioning_num_latents def get_timesteps(self, sigmas, timesteps, num_inference_steps, strength): num_steps = min(int(num_inference_steps * strength), num_inference_steps) start_index = max(num_inference_steps - num_steps, 0) sigmas = sigmas[start_index:] timesteps = timesteps[start_index:] return sigmas, timesteps, num_inference_steps - start_index @property def guidance_scale(self): return self._guidance_scale @property def guidance_rescale(self): return self._guidance_rescale @property def do_classifier_free_guidance(self): return self._guidance_scale > 1.0 @property def num_timesteps(self): return self._num_timesteps @property def current_timestep(self): return self._current_timestep @property def attention_kwargs(self): return self._attention_kwargs @property def interrupt(self): return self._interrupt @torch.no_grad() @replace_example_docstring(EXAMPLE_DOC_STRING) def __call__( self, conditions: Union[LTXVideoCondition, 
List[LTXVideoCondition]] = None, image: Union[PipelineImageInput, List[PipelineImageInput]] = None, video: List[PipelineImageInput] = None, frame_index: Union[int, List[int]] = 0, strength: Union[float, List[float]] = 1.0, denoise_strength: float = 1.0, prompt: Union[str, List[str]] = None, negative_prompt: Optional[Union[str, List[str]]] = None, height: int = 512, width: int = 704, num_frames: int = 161, frame_rate: int = 25, num_inference_steps: int = 50, timesteps: List[int] = None, guidance_scale: float = 3, guidance_rescale: float = 0.0, image_cond_noise_scale: float = 0.15, num_videos_per_prompt: Optional[int] = 1, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.Tensor] = None, prompt_embeds: Optional[torch.Tensor] = None, prompt_attention_mask: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_attention_mask: Optional[torch.Tensor] = None, decode_timestep: Union[float, List[float]] = 0.0, decode_noise_scale: Optional[Union[float, List[float]]] = None, output_type: Optional[str] = "pil", return_dict: bool = True, attention_kwargs: Optional[Dict[str, Any]] = None, callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ["latents"], max_sequence_length: int = 256, ): r""" Function invoked when calling the pipeline for generation. Args: conditions (`List[LTXVideoCondition], *optional*`): The list of frame-conditioning items for the video generation.If not provided, conditions will be created using `image`, `video`, `frame_index` and `strength`. image (`PipelineImageInput` or `List[PipelineImageInput]`, *optional*): The image or images to condition the video generation. If not provided, one has to pass `video` or `conditions`. video (`List[PipelineImageInput]`, *optional*): The video to condition the video generation. If not provided, one has to pass `image` or `conditions`. frame_index (`int` or `List[int]`, *optional*): The frame index or frame indices at which the image or video will conditionally effect the video generation. If not provided, one has to pass `conditions`. strength (`float` or `List[float]`, *optional*): The strength or strengths of the conditioning effect. If not provided, one has to pass `conditions`. denoise_strength (`float`, defaults to `1.0`): The strength of the noise added to the latents for editing. Higher strength leads to more noise added to the latents, therefore leading to more differences between original video and generated video. This is useful for video-to-video editing. prompt (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. instead. height (`int`, defaults to `512`): The height in pixels of the generated image. This is set to 480 by default for the best results. width (`int`, defaults to `704`): The width in pixels of the generated image. This is set to 848 by default for the best results. num_frames (`int`, defaults to `161`): The number of video frames to generate num_inference_steps (`int`, *optional*, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. timesteps (`List[int]`, *optional*): Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument in their `set_timesteps` method. 
If not defined, the default behavior when `num_inference_steps` is passed will be used. Must be in descending order. guidance_scale (`float`, defaults to `3`): Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://huggingface.co/papers/2207.12598). `guidance_scale` is defined as `w` of equation 2 of the [Imagen Paper](https://huggingface.co/papers/2205.11487). Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages the model to generate videos that are closely linked to the text `prompt`, usually at the expense of lower video quality. guidance_rescale (`float`, *optional*, defaults to 0.0): Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). `guidance_rescale` is defined as `φ` in equation 16 of [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). Guidance rescale factor should fix overexposure when using zero terminal SNR. num_videos_per_prompt (`int`, *optional*, defaults to 1): The number of videos to generate per prompt. generator (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation deterministic. latents (`torch.Tensor`, *optional*): Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random `generator`. prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument. prompt_attention_mask (`torch.Tensor`, *optional*): Pre-generated attention mask for text embeddings. negative_prompt_embeds (`torch.FloatTensor`, *optional*): Pre-generated negative text embeddings. For PixArt-Sigma this negative prompt should be "". If not provided, negative_prompt_embeds will be generated from the `negative_prompt` input argument. negative_prompt_attention_mask (`torch.FloatTensor`, *optional*): Pre-generated attention mask for negative text embeddings. decode_timestep (`float`, defaults to `0.0`): The timestep at which the generated video is decoded. decode_noise_scale (`float`, defaults to `None`): The interpolation factor between random noise and denoised latents at the decode timestep. output_type (`str`, *optional*, defaults to `"pil"`): The output format of the generated video. Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.ltx.LTXPipelineOutput`] instead of a plain tuple. attention_kwargs (`dict`, *optional*): A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). callback_on_step_end (`Callable`, *optional*): A function that is called at the end of each denoising step during inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`.
`callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. max_sequence_length (`int`, defaults to `256`): Maximum sequence length to use with the `prompt`. Examples: Returns: [`~pipelines.ltx.LTXPipelineOutput`] or `tuple`: If `return_dict` is `True`, [`~pipelines.ltx.LTXPipelineOutput`] is returned, otherwise a `tuple` is returned where the first element is a list with the generated frames. """ if isinstance(callback_on_step_end, (PipelineCallback, MultiPipelineCallbacks)): callback_on_step_end_tensor_inputs = callback_on_step_end.tensor_inputs # 1. Check inputs. Raise error if not correct self.check_inputs( prompt=prompt, conditions=conditions, image=image, video=video, frame_index=frame_index, strength=strength, denoise_strength=denoise_strength, height=height, width=width, callback_on_step_end_tensor_inputs=callback_on_step_end_tensor_inputs, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, prompt_attention_mask=prompt_attention_mask, negative_prompt_attention_mask=negative_prompt_attention_mask, ) self._guidance_scale = guidance_scale self._guidance_rescale = guidance_rescale self._attention_kwargs = attention_kwargs self._interrupt = False self._current_timestep = None # 2. Define call parameters if prompt is not None and isinstance(prompt, str): batch_size = 1 elif prompt is not None and isinstance(prompt, list): batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if conditions is not None: if not isinstance(conditions, list): conditions = [conditions] strength = [condition.strength for condition in conditions] frame_index = [condition.frame_index for condition in conditions] image = [condition.image for condition in conditions] video = [condition.video for condition in conditions] elif image is not None or video is not None: if not isinstance(image, list): image = [image] num_conditions = 1 elif isinstance(image, list): num_conditions = len(image) if not isinstance(video, list): video = [video] num_conditions = 1 elif isinstance(video, list): num_conditions = len(video) if not isinstance(frame_index, list): frame_index = [frame_index] * num_conditions if not isinstance(strength, list): strength = [strength] * num_conditions device = self._execution_device vae_dtype = self.vae.dtype # 3.
Prepare text embeddings & conditioning image/video ( prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask, ) = self.encode_prompt( prompt=prompt, negative_prompt=negative_prompt, do_classifier_free_guidance=self.do_classifier_free_guidance, num_videos_per_prompt=num_videos_per_prompt, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds, prompt_attention_mask=prompt_attention_mask, negative_prompt_attention_mask=negative_prompt_attention_mask, max_sequence_length=max_sequence_length, device=device, ) if self.do_classifier_free_guidance: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0) prompt_attention_mask = torch.cat([negative_prompt_attention_mask, prompt_attention_mask], dim=0) conditioning_tensors = [] is_conditioning_image_or_video = image is not None or video is not None if is_conditioning_image_or_video: for condition_image, condition_video, condition_frame_index, condition_strength in zip( image, video, frame_index, strength ): if condition_image is not None: condition_tensor = ( self.video_processor.preprocess(condition_image, height, width) .unsqueeze(2) .to(device, dtype=vae_dtype) ) elif condition_video is not None: condition_tensor = self.video_processor.preprocess_video(condition_video, height, width) num_frames_input = condition_tensor.size(2) num_frames_output = self.trim_conditioning_sequence( condition_frame_index, num_frames_input, num_frames ) condition_tensor = condition_tensor[:, :, :num_frames_output] condition_tensor = condition_tensor.to(device, dtype=vae_dtype) else: raise ValueError("Either `image` or `video` must be provided for conditioning.") if condition_tensor.size(2) % self.vae_temporal_compression_ratio != 1: raise ValueError( f"Number of frames in the video must be of the form (k * {self.vae_temporal_compression_ratio} + 1) " f"but got {condition_tensor.size(2)} frames." ) conditioning_tensors.append(condition_tensor) # 4. Prepare timesteps latent_num_frames = (num_frames - 1) // self.vae_temporal_compression_ratio + 1 latent_height = height // self.vae_spatial_compression_ratio latent_width = width // self.vae_spatial_compression_ratio if timesteps is None: sigmas = linear_quadratic_schedule(num_inference_steps) timesteps = sigmas * 1000 timesteps, num_inference_steps = retrieve_timesteps(self.scheduler, num_inference_steps, device, timesteps) sigmas = self.scheduler.sigmas num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0) latent_sigma = None if denoise_strength < 1: sigmas, timesteps, num_inference_steps = self.get_timesteps( sigmas, timesteps, num_inference_steps, denoise_strength ) latent_sigma = sigmas[:1].repeat(batch_size * num_videos_per_prompt) self._num_timesteps = len(timesteps) # 5. Prepare latent variables num_channels_latents = self.transformer.config.in_channels latents, conditioning_mask, video_coords, extra_conditioning_num_latents = self.prepare_latents( conditioning_tensors, strength, frame_index, batch_size=batch_size * num_videos_per_prompt, num_channels_latents=num_channels_latents, height=height, width=width, num_frames=num_frames, sigma=latent_sigma, latents=latents, generator=generator, device=device, dtype=torch.float32, ) video_coords = video_coords.float() video_coords[:, 0] = video_coords[:, 0] * (1.0 / frame_rate) init_latents = latents.clone() if is_conditioning_image_or_video else None if self.do_classifier_free_guidance: video_coords = torch.cat([video_coords, video_coords], dim=0) # 6. 
Denoising loop with self.progress_bar(total=num_inference_steps) as progress_bar: for i, t in enumerate(timesteps): if self.interrupt: continue self._current_timestep = t if image_cond_noise_scale > 0 and init_latents is not None: # Add timestep-dependent noise to the hard-conditioning latents # This helps with motion continuity, especially when conditioned on a single frame latents = self.add_noise_to_image_conditioning_latents( t / 1000.0, init_latents, latents, image_cond_noise_scale, conditioning_mask, generator, ) latent_model_input = torch.cat([latents] * 2) if self.do_classifier_free_guidance else latents if is_conditioning_image_or_video: conditioning_mask_model_input = ( torch.cat([conditioning_mask, conditioning_mask]) if self.do_classifier_free_guidance else conditioning_mask ) latent_model_input = latent_model_input.to(prompt_embeds.dtype) # broadcast to batch dimension in a way that's compatible with ONNX/Core ML timestep = t.expand(latent_model_input.shape[0]).unsqueeze(-1).float() if is_conditioning_image_or_video: timestep = torch.min(timestep, (1 - conditioning_mask_model_input) * 1000.0) with self.transformer.cache_context("cond_uncond"): noise_pred = self.transformer( hidden_states=latent_model_input, encoder_hidden_states=prompt_embeds, timestep=timestep, encoder_attention_mask=prompt_attention_mask, video_coords=video_coords, attention_kwargs=attention_kwargs, return_dict=False, )[0] if self.do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond) timestep, _ = timestep.chunk(2) if self.guidance_rescale > 0: # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf noise_pred = rescale_noise_cfg( noise_pred, noise_pred_text, guidance_rescale=self.guidance_rescale ) denoised_latents = self.scheduler.step( -noise_pred, t, latents, per_token_timesteps=timestep, return_dict=False )[0] if is_conditioning_image_or_video: tokens_to_denoise_mask = (t / 1000 - 1e-6 < (1.0 - conditioning_mask)).unsqueeze(-1) latents = torch.where(tokens_to_denoise_mask, denoised_latents, latents) else: latents = denoised_latents if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) latents = callback_outputs.pop("latents", latents) prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) # call the callback, if provided if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if XLA_AVAILABLE: xm.mark_step() if is_conditioning_image_or_video: latents = latents[:, extra_conditioning_num_latents:] latents = self._unpack_latents( latents, latent_num_frames, latent_height, latent_width, self.transformer_spatial_patch_size, self.transformer_temporal_patch_size, ) if output_type == "latent": video = latents else: latents = self._denormalize_latents( latents, self.vae.latents_mean, self.vae.latents_std, self.vae.config.scaling_factor ) latents = latents.to(prompt_embeds.dtype) if not self.vae.config.timestep_conditioning: timestep = None else: noise = randn_tensor(latents.shape, generator=generator, device=device, dtype=latents.dtype) if not isinstance(decode_timestep, list): decode_timestep = [decode_timestep] * batch_size if decode_noise_scale is None: decode_noise_scale = decode_timestep elif not isinstance(decode_noise_scale, list): decode_noise_scale = 
[decode_noise_scale] * batch_size timestep = torch.tensor(decode_timestep, device=device, dtype=latents.dtype) decode_noise_scale = torch.tensor(decode_noise_scale, device=device, dtype=latents.dtype)[ :, None, None, None, None ] latents = (1 - decode_noise_scale) * latents + decode_noise_scale * noise video = self.vae.decode(latents, timestep, return_dict=False)[0] video = self.video_processor.postprocess_video(video, output_type=output_type) # Offload all models self.maybe_free_model_hooks() if not return_dict: return (video,) return LTXPipelineOutput(frames=video)
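The `__call__` method above accepts either raw `image`/`video` inputs or a list of `LTXVideoCondition` items. Below is a minimal, hedged usage sketch of this conditioning API; the checkpoint id, image URL, and prompt are illustrative assumptions rather than values taken from this file.

```python
import torch
from diffusers import LTXConditionPipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_image

# Load the condition pipeline (checkpoint id is illustrative).
pipe = LTXConditionPipeline.from_pretrained("Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Condition the first frame of the generated clip on a single image.
image = load_image("https://example.com/first_frame.png")  # placeholder URL
condition = LTXVideoCondition(image=image, frame_index=0)

video = pipe(
    conditions=[condition],
    prompt="A red fox running through a snowy forest",
    negative_prompt="worst quality, blurry, jittery",
    width=704,
    height=512,
    num_frames=161,
    num_inference_steps=40,
    generator=torch.Generator(device="cuda").manual_seed(0),
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```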
[Source file: diffusers/src/diffusers/pipelines/ltx/pipeline_ltx_condition.py (repo: diffusers)]
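The next file defines the `DiffusionPipeline` base class whose `from_pretrained`, `to`, and `enable_model_cpu_offload` entry points are documented below. As a quick orientation, here is a hedged sketch of how they are commonly combined; the repo id mirrors the example used later in the docstrings.

```python
import torch
from diffusers import DiffusionPipeline

# Load a pipeline; `torch_dtype` may also be a dict mapping component names
# to dtypes, as described in the `from_pretrained` docstring below.
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

# Either place everything on the accelerator...
pipe.to("cuda")
# ...or trade some speed for memory by offloading whole models between calls:
# pipe.enable_model_cpu_offload()

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```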
# coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import fnmatch import importlib import inspect import os import re import sys from dataclasses import dataclass from pathlib import Path from typing import Any, Callable, Dict, List, Optional, Union, get_args, get_origin import numpy as np import PIL.Image import requests import torch from huggingface_hub import ( DDUFEntry, ModelCard, create_repo, hf_hub_download, model_info, read_dduf_file, snapshot_download, ) from huggingface_hub.utils import OfflineModeIsEnabled, validate_hf_hub_args from packaging import version from requests.exceptions import HTTPError from tqdm.auto import tqdm from typing_extensions import Self from .. import __version__ from ..configuration_utils import ConfigMixin from ..models import AutoencoderKL from ..models.attention_processor import FusedAttnProcessor2_0 from ..models.modeling_utils import _LOW_CPU_MEM_USAGE_DEFAULT, ModelMixin from ..quantizers import PipelineQuantizationConfig from ..quantizers.bitsandbytes.utils import _check_bnb_status from ..schedulers.scheduling_utils import SCHEDULER_CONFIG_NAME from ..utils import ( CONFIG_NAME, DEPRECATED_REVISION_ARGS, BaseOutput, PushToHubMixin, _get_detailed_type, _is_valid_type, is_accelerate_available, is_accelerate_version, is_hpu_available, is_torch_npu_available, is_torch_version, is_transformers_version, logging, numpy_to_pil, ) from ..utils.hub_utils import _check_legacy_sharding_variant_format, load_or_create_model_card, populate_model_card from ..utils.torch_utils import empty_device_cache, get_device, is_compiled_module if is_torch_npu_available(): import torch_npu # noqa: F401 from .pipeline_loading_utils import ( ALL_IMPORTABLE_CLASSES, CONNECTED_PIPES_KEYS, CUSTOM_PIPELINE_FILE_NAME, LOADABLE_CLASSES, _download_dduf_file, _fetch_class_library_tuple, _get_custom_components_and_folders, _get_custom_pipeline_class, _get_final_device_map, _get_ignore_patterns, _get_pipeline_class, _identify_model_variants, _maybe_raise_error_for_incorrect_transformers, _maybe_raise_warning_for_inpainting, _maybe_warn_for_wrong_component_in_quant_config, _resolve_custom_pipeline_and_cls, _unwrap_model, _update_init_kwargs_with_connected_pipeline, filter_model_files, load_sub_model, maybe_raise_or_warn, variant_compatible_siblings, warn_deprecated_model_variant, ) if is_accelerate_available(): import accelerate LIBRARIES = [] for library in LOADABLE_CLASSES: LIBRARIES.append(library) SUPPORTED_DEVICE_MAP = ["balanced"] + [get_device()] logger = logging.get_logger(__name__) @dataclass class ImagePipelineOutput(BaseOutput): """ Output class for image pipelines. Args: images (`List[PIL.Image.Image]` or `np.ndarray`) List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, num_channels)`. """ images: Union[List[PIL.Image.Image], np.ndarray] @dataclass class AudioPipelineOutput(BaseOutput): """ Output class for audio pipelines. 
Args: audios (`np.ndarray`) List of denoised audio samples of a NumPy array of shape `(batch_size, num_channels, sample_rate)`. """ audios: np.ndarray class DeprecatedPipelineMixin: """ A mixin that can be used to mark a pipeline as deprecated. Pipelines inheriting from this mixin will raise a warning when instantiated, indicating that they are deprecated and won't receive updates past the specified version. Tests will be skipped for pipelines that inherit from this mixin. Example usage: ```python class MyDeprecatedPipeline(DeprecatedPipelineMixin, DiffusionPipeline): _last_supported_version = "0.20.0" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) ``` """ # Override this in the inheriting class to specify the last version that will support this pipeline _last_supported_version = None def __init__(self, *args, **kwargs): # Get the class name for the warning message class_name = self.__class__.__name__ # Get the last supported version or use the current version if not specified version_info = getattr(self.__class__, "_last_supported_version", __version__) # Raise a warning that this pipeline is deprecated logger.warning( f"The {class_name} has been deprecated and will not receive bug fixes or feature updates after Diffusers version {version_info}. " ) # Call the parent class's __init__ method super().__init__(*args, **kwargs) class DiffusionPipeline(ConfigMixin, PushToHubMixin): r""" Base class for all pipelines. [`DiffusionPipeline`] stores all components (models, schedulers, and processors) for diffusion pipelines and provides methods for loading, downloading and saving models. It also includes methods to: - move all PyTorch modules to the device of your choice - enable/disable the progress bar for the denoising iteration Class attributes: - **config_name** (`str`) -- The configuration filename that stores the class and module names of all the diffusion pipeline's components. - **_optional_components** (`List[str]`) -- List of all optional components that don't have to be passed to the pipeline to function (should be overridden by subclasses). """ config_name = "model_index.json" model_cpu_offload_seq = None hf_device_map = None _optional_components = [] _exclude_from_cpu_offload = [] _load_connected_pipes = False _is_onnx = False def register_modules(self, **kwargs): for name, module in kwargs.items(): # retrieve library if module is None or isinstance(module, (tuple, list)) and module[0] is None: register_dict = {name: (None, None)} else: library, class_name = _fetch_class_library_tuple(module) register_dict = {name: (library, class_name)} # save model index config self.register_to_config(**register_dict) # set models setattr(self, name, module) def __setattr__(self, name: str, value: Any): if name in self.__dict__ and hasattr(self.config, name): # We need to overwrite the config if name exists in config if isinstance(getattr(self.config, name), (tuple, list)): if value is not None and self.config[name][0] is not None: class_library_tuple = _fetch_class_library_tuple(value) else: class_library_tuple = (None, None) self.register_to_config(**{name: class_library_tuple}) else: self.register_to_config(**{name: value}) super().__setattr__(name, value) def save_pretrained( self, save_directory: Union[str, os.PathLike], safe_serialization: bool = True, variant: Optional[str] = None, max_shard_size: Optional[Union[int, str]] = None, push_to_hub: bool = False, **kwargs, ): """ Save all saveable variables of the pipeline to a directory. 
A pipeline variable can be saved and loaded if its class implements both a save and loading method. The pipeline is easily reloaded using the [`~DiffusionPipeline.from_pretrained`] class method. Arguments: save_directory (`str` or `os.PathLike`): Directory to save a pipeline to. Will be created if it doesn't exist. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way with `pickle`. variant (`str`, *optional*): If specified, weights are saved in the format `pytorch_model.<variant>.bin`. max_shard_size (`int` or `str`, defaults to `None`): The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like `"5GB"`). If expressed as an integer, the unit is bytes. Note that this limit will be decreased after a certain period of time (starting from Oct 2024) to allow users to upgrade to the latest version of `diffusers`. This is to establish a common default size for this argument across different libraries in the Hugging Face ecosystem (`transformers`, and `accelerate`, for example). push_to_hub (`bool`, *optional*, defaults to `False`): Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with `repo_id` (will default to the name of `save_directory` in your namespace). kwargs (`Dict[str, Any]`, *optional*): Additional keyword arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method. """ model_index_dict = dict(self.config) model_index_dict.pop("_class_name", None) model_index_dict.pop("_diffusers_version", None) model_index_dict.pop("_module", None) model_index_dict.pop("_name_or_path", None) if push_to_hub: commit_message = kwargs.pop("commit_message", None) private = kwargs.pop("private", None) create_pr = kwargs.pop("create_pr", False) token = kwargs.pop("token", None) repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1]) repo_id = create_repo(repo_id, exist_ok=True, private=private, token=token).repo_id expected_modules, optional_kwargs = self._get_signature_keys(self) def is_saveable_module(name, value): if name not in expected_modules: return False if name in self._optional_components and value[0] is None: return False return True model_index_dict = {k: v for k, v in model_index_dict.items() if is_saveable_module(k, v)} for pipeline_component_name in model_index_dict.keys(): sub_model = getattr(self, pipeline_component_name) model_cls = sub_model.__class__ # Dynamo wraps the original model in a private class. # I didn't find a public API to get the original class. if is_compiled_module(sub_model): sub_model = _unwrap_model(sub_model) model_cls = sub_model.__class__ save_method_name = None # search for the model's base class in LOADABLE_CLASSES for library_name, library_classes in LOADABLE_CLASSES.items(): if library_name in sys.modules: library = importlib.import_module(library_name) else: logger.info( f"{library_name} is not installed. 
Cannot save {pipeline_component_name} as {library_classes} from {library_name}" ) for base_class, save_load_methods in library_classes.items(): class_candidate = getattr(library, base_class, None) if class_candidate is not None and issubclass(model_cls, class_candidate): # if we found a suitable base class in LOADABLE_CLASSES then grab its save method save_method_name = save_load_methods[0] break if save_method_name is not None: break if save_method_name is None: logger.warning( f"self.{pipeline_component_name}={sub_model} of type {type(sub_model)} cannot be saved." ) # make sure that unsaveable components are not tried to be loaded afterward self.register_to_config(**{pipeline_component_name: (None, None)}) continue save_method = getattr(sub_model, save_method_name) # Call the save method with the argument safe_serialization only if it's supported save_method_signature = inspect.signature(save_method) save_method_accept_safe = "safe_serialization" in save_method_signature.parameters save_method_accept_variant = "variant" in save_method_signature.parameters save_method_accept_max_shard_size = "max_shard_size" in save_method_signature.parameters save_kwargs = {} if save_method_accept_safe: save_kwargs["safe_serialization"] = safe_serialization if save_method_accept_variant: save_kwargs["variant"] = variant if save_method_accept_max_shard_size and max_shard_size is not None: # max_shard_size is expected to not be None in ModelMixin save_kwargs["max_shard_size"] = max_shard_size save_method(os.path.join(save_directory, pipeline_component_name), **save_kwargs) # finally save the config self.save_config(save_directory) if push_to_hub: # Create a new empty model card and eventually tag it model_card = load_or_create_model_card(repo_id, token=token, is_pipeline=True) model_card = populate_model_card(model_card) model_card.save(os.path.join(save_directory, "README.md")) self._upload_folder( save_directory, repo_id, token=token, commit_message=commit_message, create_pr=create_pr, ) def to(self, *args, **kwargs) -> Self: r""" Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of `self.to(*args, **kwargs).` <Tip> If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, the returned pipeline is a copy of self with the desired torch.dtype and torch.device. 
</Tip> Here are the ways to call `to`: - `to(dtype, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype) - `to(device, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) - `to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline` to return a pipeline with the specified [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) and [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype) Arguments: dtype (`torch.dtype`, *optional*): Returns a pipeline with the specified [`dtype`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype) device (`torch.Device`, *optional*): Returns a pipeline with the specified [`device`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device) silence_dtype_warnings (`bool`, *optional*, defaults to `False`): Whether to omit warnings if the target `dtype` is not compatible with the target `device`. Returns: [`DiffusionPipeline`]: The pipeline converted to the specified `dtype` and/or `device`. """ dtype = kwargs.pop("dtype", None) device = kwargs.pop("device", None) silence_dtype_warnings = kwargs.pop("silence_dtype_warnings", False) dtype_arg = None device_arg = None if len(args) == 1: if isinstance(args[0], torch.dtype): dtype_arg = args[0] else: device_arg = torch.device(args[0]) if args[0] is not None else None elif len(args) == 2: if isinstance(args[0], torch.dtype): raise ValueError( "When passing two arguments, make sure the first corresponds to `device` and the second to `dtype`." ) device_arg = torch.device(args[0]) if args[0] is not None else None dtype_arg = args[1] elif len(args) > 2: raise ValueError("Please make sure to pass at most two arguments (`device` and `dtype`) to `.to(...)`") if dtype is not None and dtype_arg is not None: raise ValueError( "You have passed `dtype` both as an argument and as a keyword argument. Please only pass one of the two." ) dtype = dtype or dtype_arg if device is not None and device_arg is not None: raise ValueError( "You have passed `device` both as an argument and as a keyword argument. Please only pass one of the two." ) device = device or device_arg device_type = torch.device(device).type if device is not None else None pipeline_has_bnb = any(any((_check_bnb_status(module))) for _, module in self.components.items()) # throw warning if pipeline is in "offloaded"-mode but user tries to manually set to GPU.
def module_is_sequentially_offloaded(module): if not is_accelerate_available() or is_accelerate_version("<", "0.14.0"): return False _, _, is_loaded_in_8bit_bnb = _check_bnb_status(module) if is_loaded_in_8bit_bnb: return False return hasattr(module, "_hf_hook") and ( isinstance(module._hf_hook, accelerate.hooks.AlignDevicesHook) or hasattr(module._hf_hook, "hooks") and isinstance(module._hf_hook.hooks[0], accelerate.hooks.AlignDevicesHook) ) def module_is_offloaded(module): if not is_accelerate_available() or is_accelerate_version("<", "0.17.0.dev0"): return False return hasattr(module, "_hf_hook") and isinstance(module._hf_hook, accelerate.hooks.CpuOffload) # .to("cuda") would raise an error if the pipeline is sequentially offloaded, so we raise our own to make it clearer pipeline_is_sequentially_offloaded = any( module_is_sequentially_offloaded(module) for _, module in self.components.items() ) is_pipeline_device_mapped = self.hf_device_map is not None and len(self.hf_device_map) > 1 if is_pipeline_device_mapped: raise ValueError( "It seems like you have activated a device mapping strategy on the pipeline which doesn't allow explicit device placement using `to()`. You can call `reset_device_map()` to remove the existing device map from the pipeline." ) if device_type in ["cuda", "xpu"]: if pipeline_is_sequentially_offloaded and not pipeline_has_bnb: raise ValueError( "It seems like you have activated sequential model offloading by calling `enable_sequential_cpu_offload`, but are now attempting to move the pipeline to GPU. This is not compatible with offloading. Please, move your pipeline `.to('cpu')` or consider removing the move altogether if you use sequential offloading." ) # PR: https://github.com/huggingface/accelerate/pull/3223/ elif pipeline_has_bnb and is_accelerate_version("<", "1.1.0.dev0"): raise ValueError( "You are trying to call `.to('cuda')` on a pipeline that has models quantized with `bitsandbytes`. Your current `accelerate` installation does not support it. Please upgrade the installation." ) # Display a warning in this case (the operation succeeds but the benefits are lost) pipeline_is_offloaded = any(module_is_offloaded(module) for _, module in self.components.items()) if pipeline_is_offloaded and device_type in ["cuda", "xpu"]: logger.warning( f"It seems like you have activated model offloading by calling `enable_model_cpu_offload`, but are now manually moving the pipeline to GPU. It is strongly recommended against doing so as memory gains from offloading are likely to be lost. Offloading automatically takes care of moving the individual components {', '.join(self.components.keys())} to GPU when needed. To make sure offloading works as expected, you should consider moving the pipeline back to CPU: `pipeline.to('cpu')` or removing the move altogether if you use offloading." 
) # Enable generic support for Intel Gaudi accelerator using GPU/HPU migration if device_type == "hpu" and kwargs.pop("hpu_migration", True) and is_hpu_available(): os.environ["PT_HPU_GPU_MIGRATION"] = "1" logger.debug("Environment variable set: PT_HPU_GPU_MIGRATION=1") import habana_frameworks.torch # noqa: F401 # HPU hardware check if not (hasattr(torch, "hpu") and torch.hpu.is_available()): raise ValueError("You are trying to call `.to('hpu')` but HPU device is unavailable.") os.environ["PT_HPU_MAX_COMPOUND_OP_SIZE"] = "1" logger.debug("Environment variable set: PT_HPU_MAX_COMPOUND_OP_SIZE=1") module_names, _ = self._get_signature_keys(self) modules = [getattr(self, n, None) for n in module_names] modules = [m for m in modules if isinstance(m, torch.nn.Module)] is_offloaded = pipeline_is_offloaded or pipeline_is_sequentially_offloaded for module in modules: _, is_loaded_in_4bit_bnb, is_loaded_in_8bit_bnb = _check_bnb_status(module) is_group_offloaded = self._maybe_raise_error_if_group_offload_active(module=module) if (is_loaded_in_4bit_bnb or is_loaded_in_8bit_bnb) and dtype is not None: logger.warning( f"The module '{module.__class__.__name__}' has been loaded in `bitsandbytes` {'4bit' if is_loaded_in_4bit_bnb else '8bit'} and conversion to {dtype} is not supported. Module is still in {'4bit' if is_loaded_in_4bit_bnb else '8bit'} precision." ) if is_loaded_in_8bit_bnb and device is not None: logger.warning( f"The module '{module.__class__.__name__}' has been loaded in `bitsandbytes` 8bit and moving it to {device} via `.to()` is not supported. Module is still on {module.device}." ) # Note: we also handle this at the ModelMixin level. The reason for doing it here too is that modeling # components can be from outside diffusers too, but still have group offloading enabled. if ( self._maybe_raise_error_if_group_offload_active(raise_error=False, module=module) and device is not None ): logger.warning( f"The module '{module.__class__.__name__}' is group offloaded and moving it to {device} via `.to()` is not supported." ) # This can happen for `transformer` models. CPU placement was added in # https://github.com/huggingface/transformers/pull/33122. So, we guard this accordingly. if is_loaded_in_4bit_bnb and device is not None and is_transformers_version(">", "4.44.0"): module.to(device=device) elif not is_loaded_in_4bit_bnb and not is_loaded_in_8bit_bnb and not is_group_offloaded: module.to(device, dtype) if ( module.dtype == torch.float16 and str(device) in ["cpu"] and not silence_dtype_warnings and not is_offloaded ): logger.warning( "Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It" " is not recommended to move them to `cpu` as running them will fail. Please make" " sure to use an accelerator to run the pipeline in inference, due to the lack of" " support for`float16` operations on this device in PyTorch. Please, remove the" " `torch_dtype=torch.float16` argument, or use another device for inference." ) return self @property def device(self) -> torch.device: r""" Returns: `torch.device`: The torch device on which the pipeline is located. """ module_names, _ = self._get_signature_keys(self) modules = [getattr(self, n, None) for n in module_names] modules = [m for m in modules if isinstance(m, torch.nn.Module)] for module in modules: return module.device return torch.device("cpu") @property def dtype(self) -> torch.dtype: r""" Returns: `torch.dtype`: The torch dtype on which the pipeline is located. 
""" module_names, _ = self._get_signature_keys(self) modules = [getattr(self, n, None) for n in module_names] modules = [m for m in modules if isinstance(m, torch.nn.Module)] for module in modules: return module.dtype return torch.float32 @classmethod @validate_hf_hub_args def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs) -> Self: r""" Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (`model.eval()`) by default. If you get the error message below, you need to finetune the weights for your downstream task: ``` Some weights of UNet2DConditionModel were not initialized from the model checkpoint at stable-diffusion-v1-5/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: - conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` Parameters: pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): Can be either: - A string, the *repo id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline hosted on the Hub. - A path to a *directory* (for example `./my_pipeline_directory/`) containing pipeline weights saved using [`~DiffusionPipeline.save_pretrained`]. - A path to a *directory* (for example `./my_pipeline_directory/`) containing a dduf file torch_dtype (`torch.dtype` or `dict[str, Union[str, torch.dtype]]`, *optional*): Override the default `torch.dtype` and load the model with another dtype. To load submodels with different dtype pass a `dict` (for example `{'transformer': torch.bfloat16, 'vae': torch.float16}`). Set the default dtype for unspecified components with `default` (for example `{'transformer': torch.bfloat16, 'default': torch.float16}`). If a component is not specified and no default is set, `torch.float32` is used. custom_pipeline (`str`, *optional*): <Tip warning={true}> 🧪 This is an experimental feature and may change in the future. </Tip> Can be either: - A string, the *repo id* (for example `hf-internal-testing/diffusers-dummy-pipeline`) of a custom pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines the custom pipeline. - A string, the *file name* of a community pipeline hosted on GitHub under [Community](https://github.com/huggingface/diffusers/tree/main/examples/community). Valid file names must match the file name and not the pipeline script (`clip_guided_stable_diffusion` instead of `clip_guided_stable_diffusion.py`). Community pipelines are always loaded from the current main branch of GitHub. - A path to a directory (`./my_pipeline_directory/`) containing a custom pipeline. The directory must contain a file called `pipeline.py` that defines the custom pipeline. For more information on how to load and create custom pipelines, please have a look at [Loading and Adding Custom Pipelines](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview) force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. cache_dir (`Union[str, os.PathLike]`, *optional*): Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used. 
proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. output_loading_info(`bool`, *optional*, defaults to `False`): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. custom_revision (`str`, *optional*): The specific model version to use. It can be a branch name, a tag name, or a commit id similar to `revision` when loading a custom pipeline from the Hub. Defaults to the latest stable 🤗 Diffusers version. mirror (`str`, *optional*): Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information. device_map (`str`, *optional*): Strategy that dictates how the different components of a pipeline should be placed on available devices. Currently, only "balanced" `device_map` is supported. Check out [this](https://huggingface.co/docs/diffusers/main/en/tutorials/inference_with_big_models#device-placement) to know more. max_memory (`Dict`, *optional*): A dictionary device identifier for the maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. offload_folder (`str` or `os.PathLike`, *optional*): The path to offload weights if device_map contains the value `"disk"`. offload_state_dict (`bool`, *optional*): If `True`, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to `True` when there is some disk offload. low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): Speed up model loading only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to `True` will raise an error. use_safetensors (`bool`, *optional*, defaults to `None`): If set to `None`, the safetensors weights are downloaded if they're available **and** if the safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors weights. If set to `False`, safetensors weights are not loaded. use_onnx (`bool`, *optional*, defaults to `None`): If set to `True`, ONNX weights will always be downloaded if present. If set to `False`, ONNX weights will never be downloaded. By default `use_onnx` defaults to the `_is_onnx` class attribute which is `False` for non-ONNX pipelines and `True` for ONNX pipelines. ONNX weights include both files ending with `.onnx` and `.pb`. 
kwargs (remaining dictionary of keyword arguments, *optional*): Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline class). The overwritten components are passed directly to the pipelines `__init__` method. See example below for more information. variant (`str`, *optional*): Load weights from a specified variant filename such as `"fp16"` or `"ema"`. This is ignored when loading `from_flax`. dduf_file(`str`, *optional*): Load weights from the specified dduf file. <Tip> To use private or [gated](https://huggingface.co/docs/hub/models-gated#gated-models) models, log-in with `hf auth login`. </Tip> Examples: ```py >>> from diffusers import DiffusionPipeline >>> # Download pipeline from huggingface.co and cache. >>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") >>> # Download pipeline that requires an authorization token >>> # For more information on access tokens, please refer to this section >>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) >>> pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5") >>> # Use a different scheduler >>> from diffusers import LMSDiscreteScheduler >>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) >>> pipeline.scheduler = scheduler ``` """ # Copy the kwargs to re-use during loading connected pipeline. kwargs_copied = kwargs.copy() cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) from_flax = kwargs.pop("from_flax", False) torch_dtype = kwargs.pop("torch_dtype", None) custom_pipeline = kwargs.pop("custom_pipeline", None) custom_revision = kwargs.pop("custom_revision", None) provider = kwargs.pop("provider", None) sess_options = kwargs.pop("sess_options", None) provider_options = kwargs.pop("provider_options", None) device_map = kwargs.pop("device_map", None) max_memory = kwargs.pop("max_memory", None) offload_folder = kwargs.pop("offload_folder", None) offload_state_dict = kwargs.pop("offload_state_dict", None) low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT) variant = kwargs.pop("variant", None) dduf_file = kwargs.pop("dduf_file", None) use_safetensors = kwargs.pop("use_safetensors", None) use_onnx = kwargs.pop("use_onnx", None) load_connected_pipeline = kwargs.pop("load_connected_pipeline", False) quantization_config = kwargs.pop("quantization_config", None) if torch_dtype is not None and not isinstance(torch_dtype, dict) and not isinstance(torch_dtype, torch.dtype): torch_dtype = torch.float32 logger.warning( f"Passed `torch_dtype` {torch_dtype} is not a `torch.dtype`. Defaulting to `torch.float32`." ) if low_cpu_mem_usage and not is_accelerate_available(): low_cpu_mem_usage = False logger.warning( "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the" " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install" " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip" " install accelerate\n```\n." 
) if quantization_config is not None and not isinstance(quantization_config, PipelineQuantizationConfig): raise ValueError("`quantization_config` must be an instance of `PipelineQuantizationConfig`.") if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"): raise NotImplementedError( "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set" " `low_cpu_mem_usage=False`." ) if device_map is not None and not is_torch_version(">=", "1.9.0"): raise NotImplementedError( "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set" " `device_map=None`." ) if device_map is not None and not is_accelerate_available(): raise NotImplementedError( "Using `device_map` requires the `accelerate` library. Please install it using: `pip install accelerate`." ) if device_map is not None and not isinstance(device_map, str): raise ValueError("`device_map` must be a string.") if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP: raise NotImplementedError( f"{device_map} not supported. Supported strategies are: {', '.join(SUPPORTED_DEVICE_MAP)}" ) if device_map is not None and device_map in SUPPORTED_DEVICE_MAP: if is_accelerate_version("<", "0.28.0"): raise NotImplementedError("Device placement requires `accelerate` version `0.28.0` or later.") if low_cpu_mem_usage is False and device_map is not None: raise ValueError( f"You cannot set `low_cpu_mem_usage` to False while using device_map={device_map} for loading and" " dispatching. Please make sure to set `low_cpu_mem_usage=True`." ) if dduf_file: if custom_pipeline: raise NotImplementedError("Custom pipelines are not supported with DDUF at the moment.") if load_connected_pipeline: raise NotImplementedError("Connected pipelines are not supported with DDUF at the moment.") # 1. Download the checkpoints and configs # use snapshot download here to get it working from from_pretrained if not os.path.isdir(pretrained_model_name_or_path): if pretrained_model_name_or_path.count("/") > 1: raise ValueError( f'The provided pretrained_model_name_or_path "{pretrained_model_name_or_path}"' " is neither a valid local path nor a valid repo id. Please check the parameter." ) cached_folder = cls.download( pretrained_model_name_or_path, cache_dir=cache_dir, force_download=force_download, proxies=proxies, local_files_only=local_files_only, token=token, revision=revision, from_flax=from_flax, use_safetensors=use_safetensors, use_onnx=use_onnx, custom_pipeline=custom_pipeline, custom_revision=custom_revision, variant=variant, dduf_file=dduf_file, load_connected_pipeline=load_connected_pipeline, **kwargs, ) else: cached_folder = pretrained_model_name_or_path # The variant filenames can have the legacy sharding checkpoint format that we check and throw # a warning if detected. if variant is not None and _check_legacy_sharding_variant_format(folder=cached_folder, variant=variant): warn_msg = ( f"Warning: The repository contains sharded checkpoints for variant '{variant}' maybe in a deprecated format. " "Please check your files carefully:\n\n" "- Correct format example: diffusion_pytorch_model.fp16-00003-of-00003.safetensors\n" "- Deprecated format example: diffusion_pytorch_model-00001-of-00002.fp16.safetensors\n\n" "If you find any files in the deprecated format:\n" "1. Remove all existing checkpoint files for this variant.\n" "2. 
Re-obtain the correct files by running `save_pretrained()`.\n\n" "This will ensure you're using the most up-to-date and compatible checkpoint format." ) logger.warning(warn_msg) dduf_entries = None if dduf_file: dduf_file_path = os.path.join(cached_folder, dduf_file) dduf_entries = read_dduf_file(dduf_file_path) # The reader contains already all the files needed, no need to check it again cached_folder = "" config_dict = cls.load_config(cached_folder, dduf_entries=dduf_entries) if dduf_file: _maybe_raise_error_for_incorrect_transformers(config_dict) # pop out "_ignore_files" as it is only needed for download config_dict.pop("_ignore_files", None) # 2. Define which model components should load variants # We retrieve the information by matching whether variant model checkpoints exist in the subfolders. # Example: `diffusion_pytorch_model.safetensors` -> `diffusion_pytorch_model.fp16.safetensors` # with variant being `"fp16"`. model_variants = _identify_model_variants(folder=cached_folder, variant=variant, config=config_dict) if len(model_variants) == 0 and variant is not None: error_message = f"You are trying to load the model files of the `variant={variant}`, but no such modeling files are available." raise ValueError(error_message) # 3. Load the pipeline class, if using custom module then load it from the hub # if we load from explicit class, let's use it custom_pipeline, custom_class_name = _resolve_custom_pipeline_and_cls( folder=cached_folder, config=config_dict, custom_pipeline=custom_pipeline ) pipeline_class = _get_pipeline_class( cls, config=config_dict, load_connected_pipeline=load_connected_pipeline, custom_pipeline=custom_pipeline, class_name=custom_class_name, cache_dir=cache_dir, revision=custom_revision, ) if device_map is not None and pipeline_class._load_connected_pipes: raise NotImplementedError("`device_map` is not yet supported for connected pipelines.") # DEPRECATED: To be removed in 1.0.0 # we are deprecating the `StableDiffusionInpaintPipelineLegacy` pipeline which gets loaded # when a user requests for a `StableDiffusionInpaintPipeline` with `diffusers` version being <= 0.5.1. _maybe_raise_warning_for_inpainting( pipeline_class=pipeline_class, pretrained_model_name_or_path=pretrained_model_name_or_path, config=config_dict, ) # 4. 
Define expected modules given pipeline signature # and define non-None initialized modules (=`init_kwargs`) # some modules can be passed directly to the init # in this case they are already instantiated in `kwargs` # extract them here expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class) expected_types = pipeline_class._get_signature_types() passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs} init_dict, unused_kwargs, _ = pipeline_class.extract_init_dict(config_dict, **kwargs) # define init kwargs and make sure that optional component modules are filtered out init_kwargs = { k: init_dict.pop(k) for k in optional_kwargs if k in init_dict and k not in pipeline_class._optional_components } init_kwargs = {**init_kwargs, **passed_pipe_kwargs} # remove `null` components def load_module(name, value): if value[0] is None: return False if name in passed_class_obj and passed_class_obj[name] is None: return False return True init_dict = {k: v for k, v in init_dict.items() if load_module(k, v)} # Special case: safety_checker must be loaded separately when using `from_flax` if from_flax and "safety_checker" in init_dict and "safety_checker" not in passed_class_obj: raise NotImplementedError( "The safety checker cannot be automatically loaded when loading weights `from_flax`." " Please, pass `safety_checker=None` to `from_pretrained`, and load the safety checker" " separately if you need it." ) # 5. Throw nice warnings / errors for fast accelerate loading if len(unused_kwargs) > 0: logger.warning( f"Keyword arguments {unused_kwargs} are not expected by {pipeline_class.__name__} and will be ignored." ) # import it here to avoid circular import from diffusers import pipelines # 6. device map delegation final_device_map = None if device_map is not None: final_device_map = _get_final_device_map( device_map=device_map, pipeline_class=pipeline_class, passed_class_obj=passed_class_obj, init_dict=init_dict, library=library, max_memory=max_memory, torch_dtype=torch_dtype, cached_folder=cached_folder, force_download=force_download, proxies=proxies, local_files_only=local_files_only, token=token, revision=revision, ) # 7. 
Load each module in the pipeline current_device_map = None _maybe_warn_for_wrong_component_in_quant_config(init_dict, quantization_config) for name, (library_name, class_name) in logging.tqdm(init_dict.items(), desc="Loading pipeline components..."): # 7.1 device_map shenanigans if final_device_map is not None: if isinstance(final_device_map, dict) and len(final_device_map) > 0: component_device = final_device_map.get(name, None) if component_device is not None: current_device_map = {"": component_device} else: current_device_map = None elif isinstance(final_device_map, str): current_device_map = final_device_map # 7.2 - now that JAX/Flax is an official framework of the library, we might load from Flax names class_name = class_name[4:] if class_name.startswith("Flax") else class_name # 7.3 Define all importable classes is_pipeline_module = hasattr(pipelines, library_name) importable_classes = ALL_IMPORTABLE_CLASSES loaded_sub_model = None # 7.4 Use passed sub model or load class_name from library_name if name in passed_class_obj: # if the model is in a pipeline module, then we load it from the pipeline # check that passed_class_obj has correct parent class maybe_raise_or_warn( library_name, library, class_name, importable_classes, passed_class_obj, name, is_pipeline_module ) loaded_sub_model = passed_class_obj[name] else: # load sub model sub_model_dtype = ( torch_dtype.get(name, torch_dtype.get("default", torch.float32)) if isinstance(torch_dtype, dict) else torch_dtype ) loaded_sub_model = load_sub_model( library_name=library_name, class_name=class_name, importable_classes=importable_classes, pipelines=pipelines, is_pipeline_module=is_pipeline_module, pipeline_class=pipeline_class, torch_dtype=sub_model_dtype, provider=provider, sess_options=sess_options, device_map=current_device_map, max_memory=max_memory, offload_folder=offload_folder, offload_state_dict=offload_state_dict, model_variants=model_variants, name=name, from_flax=from_flax, variant=variant, low_cpu_mem_usage=low_cpu_mem_usage, cached_folder=cached_folder, use_safetensors=use_safetensors, dduf_entries=dduf_entries, provider_options=provider_options, quantization_config=quantization_config, ) logger.info( f"Loaded {name} as {class_name} from `{name}` subfolder of {pretrained_model_name_or_path}." ) init_kwargs[name] = loaded_sub_model # UNet(...), # DiffusionSchedule(...) # 8. Handle connected pipelines. if pipeline_class._load_connected_pipes and os.path.isfile(os.path.join(cached_folder, "README.md")): init_kwargs = _update_init_kwargs_with_connected_pipeline( init_kwargs=init_kwargs, passed_pipe_kwargs=passed_pipe_kwargs, passed_class_objs=passed_class_obj, folder=cached_folder, **kwargs_copied, ) # 9. Potentially add passed objects if expected missing_modules = set(expected_modules) - set(init_kwargs.keys()) passed_modules = list(passed_class_obj.keys()) optional_modules = pipeline_class._optional_components if len(missing_modules) > 0 and missing_modules <= set(passed_modules + optional_modules): for module in missing_modules: init_kwargs[module] = passed_class_obj.get(module, None) elif len(missing_modules) > 0: passed_modules = set(list(init_kwargs.keys()) + list(passed_class_obj.keys())) - set(optional_kwargs) raise ValueError( f"Pipeline {pipeline_class} expected {expected_modules}, but only {passed_modules} were passed." ) # 10. 
Type checking init arguments for kw, arg in init_kwargs.items(): # Too complex to validate with type annotation alone if "scheduler" in kw: continue # Many tokenizer annotations don't include its "Fast" variant, so skip this # e.g T5Tokenizer but not T5TokenizerFast elif "tokenizer" in kw: continue elif ( arg is not None # Skip if None and not expected_types[kw] == (inspect.Signature.empty,) # Skip if no type annotations and not _is_valid_type(arg, expected_types[kw]) # Check type ): logger.warning(f"Expected types for {kw}: {expected_types[kw]}, got {_get_detailed_type(arg)}.") # 11. Instantiate the pipeline model = pipeline_class(**init_kwargs) # 12. Save where the model was instantiated from model.register_to_config(_name_or_path=pretrained_model_name_or_path) if device_map is not None: setattr(model, "hf_device_map", final_device_map) if quantization_config is not None: setattr(model, "quantization_config", quantization_config) return model @property def name_or_path(self) -> str: return getattr(self.config, "_name_or_path", None) @property def _execution_device(self): r""" Returns the device on which the pipeline's models will be executed. After calling [`~DiffusionPipeline.enable_sequential_cpu_offload`] the execution device can only be inferred from Accelerate's module hooks. """ from ..hooks.group_offloading import _get_group_onload_device # When apply group offloading at the leaf_level, we're in the same situation as accelerate's sequential # offloading. We need to return the onload device of the group offloading hooks so that the intermediates # required for computation (latents, prompt embeddings, etc.) can be created on the correct device. for name, model in self.components.items(): if not isinstance(model, torch.nn.Module): continue try: return _get_group_onload_device(model) except ValueError: pass for name, model in self.components.items(): if not isinstance(model, torch.nn.Module) or name in self._exclude_from_cpu_offload: continue if not hasattr(model, "_hf_hook"): return self.device for module in model.modules(): if ( hasattr(module, "_hf_hook") and hasattr(module._hf_hook, "execution_device") and module._hf_hook.execution_device is not None ): return torch.device(module._hf_hook.execution_device) return self.device def remove_all_hooks(self): r""" Removes all hooks that were added when using `enable_sequential_cpu_offload` or `enable_model_cpu_offload`. """ for _, model in self.components.items(): if isinstance(model, torch.nn.Module) and hasattr(model, "_hf_hook"): accelerate.hooks.remove_hook_from_module(model, recurse=True) self._all_hooks = [] def enable_model_cpu_offload(self, gpu_id: Optional[int] = None, device: Union[torch.device, str] = None): r""" Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the accelerator when its `forward` method is called, and the model remains in accelerator until the next model runs. Memory savings are lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. Arguments: gpu_id (`int`, *optional*): The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (`torch.Device` or `str`, *optional*, defaults to None): The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will automatically detect the available accelerator and use. 
""" self._maybe_raise_error_if_group_offload_active(raise_error=True) is_pipeline_device_mapped = self.hf_device_map is not None and len(self.hf_device_map) > 1 if is_pipeline_device_mapped: raise ValueError( "It seems like you have activated a device mapping strategy on the pipeline so calling `enable_model_cpu_offload() isn't allowed. You can call `reset_device_map()` first and then call `enable_model_cpu_offload()`." ) if self.model_cpu_offload_seq is None: raise ValueError( "Model CPU offload cannot be enabled because no `model_cpu_offload_seq` class attribute is set." ) if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): from accelerate import cpu_offload_with_hook else: raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") self.remove_all_hooks() if device is None: device = get_device() if device == "cpu": raise RuntimeError("`enable_model_cpu_offload` requires accelerator, but not found") torch_device = torch.device(device) device_index = torch_device.index if gpu_id is not None and device_index is not None: raise ValueError( f"You have passed both `gpu_id`={gpu_id} and an index as part of the passed device `device`={device}" f"Cannot pass both. Please make sure to either not define `gpu_id` or not pass the index as part of the device: `device`={torch_device.type}" ) # _offload_gpu_id should be set to passed gpu_id (or id in passed `device`) or default to previously set id or default to 0 self._offload_gpu_id = gpu_id or torch_device.index or getattr(self, "_offload_gpu_id", 0) device_type = torch_device.type device = torch.device(f"{device_type}:{self._offload_gpu_id}") self._offload_device = device self.to("cpu", silence_dtype_warnings=True) empty_device_cache(device.type) all_model_components = {k: v for k, v in self.components.items() if isinstance(v, torch.nn.Module)} self._all_hooks = [] hook = None for model_str in self.model_cpu_offload_seq.split("->"): model = all_model_components.pop(model_str, None) if not isinstance(model, torch.nn.Module): continue # This is because the model would already be placed on a CUDA device. _, _, is_loaded_in_8bit_bnb = _check_bnb_status(model) if is_loaded_in_8bit_bnb: logger.info( f"Skipping the hook placement for the {model.__class__.__name__} as it is loaded in `bitsandbytes` 8bit." ) continue _, hook = cpu_offload_with_hook(model, device, prev_module_hook=hook) self._all_hooks.append(hook) # CPU offload models that are not in the seq chain unless they are explicitly excluded # these models will stay on CPU until maybe_free_model_hooks is called # some models cannot be in the seq chain because they are iteratively called, such as controlnet for name, model in all_model_components.items(): if not isinstance(model, torch.nn.Module): continue if name in self._exclude_from_cpu_offload: model.to(device) else: _, hook = cpu_offload_with_hook(model, device) self._all_hooks.append(hook) def maybe_free_model_hooks(self): r""" Method that performs the following: - Offloads all components. - Removes all model hooks that were added when using `enable_model_cpu_offload`, and then applies them again. In case the model has not been offloaded, this function is a no-op. - Resets stateful diffusers hooks of denoiser components if they were added with [`~hooks.HookRegistry.register_hook`]. Make sure to add this function to the end of the `__call__` function of your pipeline so that it functions correctly when applying `enable_model_cpu_offload`. 
""" for component in self.components.values(): if hasattr(component, "_reset_stateful_cache"): component._reset_stateful_cache() if not hasattr(self, "_all_hooks") or len(self._all_hooks) == 0: # `enable_model_cpu_offload` has not be called, so silently do nothing return # make sure the model is in the same state as before calling it self.enable_model_cpu_offload(device=getattr(self, "_offload_device", "cuda")) def enable_sequential_cpu_offload(self, gpu_id: Optional[int] = None, device: Union[torch.device, str] = None): r""" Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state dicts of all `torch.nn.Module` components (except those in `self._exclude_from_cpu_offload`) are saved to CPU and then moved to `torch.device('meta')` and loaded to accelerator only when their specific submodule has its `forward` method called. Offloading happens on a submodule basis. Memory savings are higher than with `enable_model_cpu_offload`, but performance is lower. Arguments: gpu_id (`int`, *optional*): The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (`torch.Device` or `str`, *optional*, defaults to None): The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will automatically detect the available accelerator and use. """ self._maybe_raise_error_if_group_offload_active(raise_error=True) if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"): from accelerate import cpu_offload else: raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher") self.remove_all_hooks() is_pipeline_device_mapped = self.hf_device_map is not None and len(self.hf_device_map) > 1 if is_pipeline_device_mapped: raise ValueError( "It seems like you have activated a device mapping strategy on the pipeline so calling `enable_sequential_cpu_offload() isn't allowed. You can call `reset_device_map()` first and then call `enable_sequential_cpu_offload()`." ) if device is None: device = get_device() if device == "cpu": raise RuntimeError("`enable_sequential_cpu_offload` requires accelerator, but not found") torch_device = torch.device(device) device_index = torch_device.index if gpu_id is not None and device_index is not None: raise ValueError( f"You have passed both `gpu_id`={gpu_id} and an index as part of the passed device `device`={device}" f"Cannot pass both. Please make sure to either not define `gpu_id` or not pass the index as part of the device: `device`={torch_device.type}" ) # _offload_gpu_id should be set to passed gpu_id (or id in passed `device`) or default to previously set id or default to 0 self._offload_gpu_id = gpu_id or torch_device.index or getattr(self, "_offload_gpu_id", 0) device_type = torch_device.type device = torch.device(f"{device_type}:{self._offload_gpu_id}") self._offload_device = device if self.device.type != "cpu": orig_device_type = self.device.type self.to("cpu", silence_dtype_warnings=True) empty_device_cache(orig_device_type) for name, model in self.components.items(): if not isinstance(model, torch.nn.Module): continue if name in self._exclude_from_cpu_offload: model.to(device) else: # make sure to offload buffers if not all high level weights # are of type nn.Module offload_buffers = len(model._parameters) > 0 cpu_offload(model, device, offload_buffers=offload_buffers) def reset_device_map(self): r""" Resets the device maps (if any) to None. 
""" if self.hf_device_map is None: return else: self.remove_all_hooks() for name, component in self.components.items(): if isinstance(component, torch.nn.Module): component.to("cpu") self.hf_device_map = None @classmethod @validate_hf_hub_args def download(cls, pretrained_model_name, **kwargs) -> Union[str, os.PathLike]: r""" Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. Parameters: pretrained_model_name (`str` or `os.PathLike`, *optional*): A string, the *repository id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline hosted on the Hub. custom_pipeline (`str`, *optional*): Can be either: - A string, the *repository id* (for example `CompVis/ldm-text2im-large-256`) of a pretrained pipeline hosted on the Hub. The repository must contain a file called `pipeline.py` that defines the custom pipeline. - A string, the *file name* of a community pipeline hosted on GitHub under [Community](https://github.com/huggingface/diffusers/tree/main/examples/community). Valid file names must match the file name and not the pipeline script (`clip_guided_stable_diffusion` instead of `clip_guided_stable_diffusion.py`). Community pipelines are always loaded from the current `main` branch of GitHub. - A path to a *directory* (`./my_pipeline_directory/`) containing a custom pipeline. The directory must contain a file called `pipeline.py` that defines the custom pipeline. <Tip warning={true}> 🧪 This is an experimental feature and may change in the future. </Tip> For more information on how to load and create custom pipelines, take a look at [How to contribute a community pipeline](https://huggingface.co/docs/diffusers/main/en/using-diffusers/contribute_pipeline). force_download (`bool`, *optional*, defaults to `False`): Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. proxies (`Dict[str, str]`, *optional*): A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. output_loading_info(`bool`, *optional*, defaults to `False`): Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (`bool`, *optional*, defaults to `False`): Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub. token (`str` or *bool*, *optional*): The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used. revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git. custom_revision (`str`, *optional*, defaults to `"main"`): The specific model version to use. It can be a branch name, a tag name, or a commit id similar to `revision` when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a custom pipeline from GitHub, otherwise it defaults to `"main"` when loading from the Hub. mirror (`str`, *optional*): Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information. variant (`str`, *optional*): Load weights from a specified variant filename such as `"fp16"` or `"ema"`. 
This is ignored when loading `from_flax`. dduf_file(`str`, *optional*): Load weights from the specified DDUF file. use_safetensors (`bool`, *optional*, defaults to `None`): If set to `None`, the safetensors weights are downloaded if they're available **and** if the safetensors library is installed. If set to `True`, the model is forcibly loaded from safetensors weights. If set to `False`, safetensors weights are not loaded. use_onnx (`bool`, *optional*, defaults to `False`): If set to `True`, ONNX weights will always be downloaded if present. If set to `False`, ONNX weights will never be downloaded. By default `use_onnx` defaults to the `_is_onnx` class attribute which is `False` for non-ONNX pipelines and `True` for ONNX pipelines. ONNX weights include both files ending with `.onnx` and `.pb`. trust_remote_code (`bool`, *optional*, defaults to `False`): Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. Returns: `os.PathLike`: A path to the downloaded pipeline. <Tip> To use private or [gated models](https://huggingface.co/docs/hub/models-gated#gated-models), log-in with `hf auth login </Tip> """ cache_dir = kwargs.pop("cache_dir", None) force_download = kwargs.pop("force_download", False) proxies = kwargs.pop("proxies", None) local_files_only = kwargs.pop("local_files_only", None) token = kwargs.pop("token", None) revision = kwargs.pop("revision", None) from_flax = kwargs.pop("from_flax", False) custom_pipeline = kwargs.pop("custom_pipeline", None) custom_revision = kwargs.pop("custom_revision", None) variant = kwargs.pop("variant", None) use_safetensors = kwargs.pop("use_safetensors", None) use_onnx = kwargs.pop("use_onnx", None) load_connected_pipeline = kwargs.pop("load_connected_pipeline", False) trust_remote_code = kwargs.pop("trust_remote_code", False) dduf_file: Optional[Dict[str, DDUFEntry]] = kwargs.pop("dduf_file", None) if dduf_file: if custom_pipeline: raise NotImplementedError("Custom pipelines are not supported with DDUF at the moment.") if load_connected_pipeline: raise NotImplementedError("Connected pipelines are not supported with DDUF at the moment.") return _download_dduf_file( pretrained_model_name=pretrained_model_name, dduf_file=dduf_file, pipeline_class_name=cls.__name__, cache_dir=cache_dir, proxies=proxies, local_files_only=local_files_only, token=token, revision=revision, ) allow_pickle = True if (use_safetensors is None or use_safetensors is False) else False use_safetensors = use_safetensors if use_safetensors is not None else True allow_patterns = None ignore_patterns = None model_info_call_error: Optional[Exception] = None if not local_files_only: try: info = model_info(pretrained_model_name, token=token, revision=revision) except (HTTPError, OfflineModeIsEnabled, requests.ConnectionError) as e: logger.warning(f"Couldn't connect to the Hub: {e}.\nWill try to load from local cache.") local_files_only = True model_info_call_error = e # save error to reraise it if model is not cached locally if not local_files_only: config_file = hf_hub_download( pretrained_model_name, cls.config_name, cache_dir=cache_dir, revision=revision, proxies=proxies, force_download=force_download, token=token, ) config_dict = cls._dict_from_json_file(config_file) ignore_filenames = config_dict.pop("_ignore_files", []) filenames = {sibling.rfilename for sibling in 
info.siblings} if variant is not None and _check_legacy_sharding_variant_format(filenames=filenames, variant=variant): warn_msg = ( f"Warning: The repository contains sharded checkpoints for variant '{variant}' maybe in a deprecated format. " "Please check your files carefully:\n\n" "- Correct format example: diffusion_pytorch_model.fp16-00003-of-00003.safetensors\n" "- Deprecated format example: diffusion_pytorch_model-00001-of-00002.fp16.safetensors\n\n" "If you find any files in the deprecated format:\n" "1. Remove all existing checkpoint files for this variant.\n" "2. Re-obtain the correct files by running `save_pretrained()`.\n\n" "This will ensure you're using the most up-to-date and compatible checkpoint format." ) logger.warning(warn_msg) filenames = set(filenames) - set(ignore_filenames) if revision in DEPRECATED_REVISION_ARGS and version.parse( version.parse(__version__).base_version ) >= version.parse("0.22.0"): warn_deprecated_model_variant(pretrained_model_name, token, variant, revision, filenames) custom_components, folder_names = _get_custom_components_and_folders( pretrained_model_name, config_dict, filenames, variant ) custom_class_name = None if custom_pipeline is None and isinstance(config_dict["_class_name"], (list, tuple)): custom_pipeline = config_dict["_class_name"][0] custom_class_name = config_dict["_class_name"][1] load_pipe_from_hub = custom_pipeline is not None and f"{custom_pipeline}.py" in filenames load_components_from_hub = len(custom_components) > 0 if load_pipe_from_hub and not trust_remote_code: raise ValueError( f"The repository for {pretrained_model_name} contains custom code in {custom_pipeline}.py which must be executed to correctly " f"load the model. You can inspect the repository content at https://hf.co/{pretrained_model_name}/blob/main/{custom_pipeline}.py.\n" f"Please pass the argument `trust_remote_code=True` to allow custom code to be run." ) if load_components_from_hub and not trust_remote_code: raise ValueError( f"The repository for {pretrained_model_name} contains custom code in {'.py, '.join([os.path.join(k, v) for k, v in custom_components.items()])} which must be executed to correctly " f"load the model. You can inspect the repository content at {', '.join([f'https://hf.co/{pretrained_model_name}/{k}/{v}.py' for k, v in custom_components.items()])}.\n" f"Please pass the argument `trust_remote_code=True` to allow custom code to be run." 
) # retrieve passed components that should not be downloaded pipeline_class = _get_pipeline_class( cls, config_dict, load_connected_pipeline=load_connected_pipeline, custom_pipeline=custom_pipeline, repo_id=pretrained_model_name if load_pipe_from_hub else None, hub_revision=revision, class_name=custom_class_name, cache_dir=cache_dir, revision=custom_revision, ) expected_components, _ = cls._get_signature_keys(pipeline_class) passed_components = [k for k in expected_components if k in kwargs] # retrieve the names of the folders containing model weights model_folder_names = { os.path.split(f)[0] for f in filter_model_files(filenames) if os.path.split(f)[0] in folder_names } # retrieve all patterns that should not be downloaded and error out when needed ignore_patterns = _get_ignore_patterns( passed_components, model_folder_names, filenames, use_safetensors, from_flax, allow_pickle, use_onnx, pipeline_class._is_onnx, variant, ) model_filenames, variant_filenames = variant_compatible_siblings( filenames, variant=variant, ignore_patterns=ignore_patterns ) # all filenames compatible with variant will be added allow_patterns = list(model_filenames) # allow all patterns from non-model folders # this enables downloading schedulers, tokenizers, ... allow_patterns += [f"{k}/*" for k in folder_names if k not in model_folder_names] # add custom component files allow_patterns += [f"{k}/{f}.py" for k, f in custom_components.items()] # add custom pipeline file allow_patterns += [f"{custom_pipeline}.py"] if f"{custom_pipeline}.py" in filenames else [] # also allow downloading config.json files with the model allow_patterns += [os.path.join(k, "config.json") for k in model_folder_names] allow_patterns += [ SCHEDULER_CONFIG_NAME, CONFIG_NAME, cls.config_name, CUSTOM_PIPELINE_FILE_NAME, ] # Don't download any objects that are passed allow_patterns = [ p for p in allow_patterns if not (len(p.split("/")) == 2 and p.split("/")[0] in passed_components) ] if pipeline_class._load_connected_pipes: allow_patterns.append("README.md") # Don't download index files of forbidden patterns either ignore_patterns = ignore_patterns + [f"{i}.index.*json" for i in ignore_patterns] re_ignore_pattern = [re.compile(fnmatch.translate(p)) for p in ignore_patterns] re_allow_pattern = [re.compile(fnmatch.translate(p)) for p in allow_patterns] expected_files = [f for f in filenames if not any(p.match(f) for p in re_ignore_pattern)] expected_files = [f for f in expected_files if any(p.match(f) for p in re_allow_pattern)] snapshot_folder = Path(config_file).parent pipeline_is_cached = all((snapshot_folder / f).is_file() for f in expected_files) if pipeline_is_cached and not force_download: # if the pipeline is cached, we can directly return it # else call snapshot_download return snapshot_folder user_agent = {"pipeline_class": cls.__name__} if custom_pipeline is not None and not custom_pipeline.endswith(".py"): user_agent["custom_pipeline"] = custom_pipeline # download all allow_patterns - ignore_patterns try: cached_folder = snapshot_download( pretrained_model_name, cache_dir=cache_dir, proxies=proxies, local_files_only=local_files_only, token=token, revision=revision, allow_patterns=allow_patterns, ignore_patterns=ignore_patterns, user_agent=user_agent, ) cls_name = cls.load_config(os.path.join(cached_folder, "model_index.json")).get("_class_name", None) cls_name = cls_name[4:] if isinstance(cls_name, str) and cls_name.startswith("Flax") else cls_name diffusers_module = importlib.import_module(__name__.split(".")[0]) pipeline_class = 
getattr(diffusers_module, cls_name, None) if isinstance(cls_name, str) else None if pipeline_class is not None and pipeline_class._load_connected_pipes: modelcard = ModelCard.load(os.path.join(cached_folder, "README.md")) connected_pipes = sum([getattr(modelcard.data, k, []) for k in CONNECTED_PIPES_KEYS], []) for connected_pipe_repo_id in connected_pipes: download_kwargs = { "cache_dir": cache_dir, "force_download": force_download, "proxies": proxies, "local_files_only": local_files_only, "token": token, "variant": variant, "use_safetensors": use_safetensors, } DiffusionPipeline.download(connected_pipe_repo_id, **download_kwargs) return cached_folder except FileNotFoundError: # Means we tried to load pipeline with `local_files_only=True` but the files have not been found in local cache. # This can happen in two cases: # 1. If the user passed `local_files_only=True` => we raise the error directly # 2. If we forced `local_files_only=True` when `model_info` failed => we raise the initial error if model_info_call_error is None: # 1. user passed `local_files_only=True` raise else: # 2. we forced `local_files_only=True` when `model_info` failed raise EnvironmentError( f"Cannot load model {pretrained_model_name}: model is not cached locally and an error occurred" " while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace" " above." ) from model_info_call_error @classmethod def _get_signature_keys(cls, obj): parameters = inspect.signature(obj.__init__).parameters required_parameters = {k: v for k, v in parameters.items() if v.default == inspect._empty} optional_parameters = set({k for k, v in parameters.items() if v.default != inspect._empty}) expected_modules = set(required_parameters.keys()) - {"self"} optional_names = list(optional_parameters) for name in optional_names: if name in cls._optional_components: expected_modules.add(name) optional_parameters.remove(name) return sorted(expected_modules), sorted(optional_parameters) @classmethod def _get_signature_types(cls): signature_types = {} for k, v in inspect.signature(cls.__init__).parameters.items(): if inspect.isclass(v.annotation): signature_types[k] = (v.annotation,) elif get_origin(v.annotation) == Union: signature_types[k] = get_args(v.annotation) elif get_origin(v.annotation) in [List, Dict, list, dict]: signature_types[k] = (v.annotation,) else: logger.warning(f"cannot get type annotation for Parameter {k} of {cls}.") return signature_types @property def components(self) -> Dict[str, Any]: r""" The `self.components` property can be useful to run different pipelines with the same weights and configurations without reallocating additional memory. Returns (`dict`): A dictionary containing all the modules needed to initialize the pipeline. Examples: ```py >>> from diffusers import ( ... StableDiffusionPipeline, ... StableDiffusionImg2ImgPipeline, ... StableDiffusionInpaintPipeline, ... 
) >>> text2img = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5") >>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) >>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) ``` """ expected_modules, optional_parameters = self._get_signature_keys(self) components = { k: getattr(self, k) for k in self.config.keys() if not k.startswith("_") and k not in optional_parameters } actual = sorted(set(components.keys())) expected = sorted(expected_modules) if actual != expected: raise ValueError( f"{self} has been incorrectly initialized or {self.__class__} is incorrectly implemented. Expected" f" {expected} to be defined, but {actual} are defined." ) return components @staticmethod def numpy_to_pil(images): """ Convert a NumPy image or a batch of images to a PIL image. """ return numpy_to_pil(images) @torch.compiler.disable def progress_bar(self, iterable=None, total=None): if not hasattr(self, "_progress_bar_config"): self._progress_bar_config = {} elif not isinstance(self._progress_bar_config, dict): raise ValueError( f"`self._progress_bar_config` should be of type `dict`, but is {type(self._progress_bar_config)}." ) if iterable is not None: return tqdm(iterable, **self._progress_bar_config) elif total is not None: return tqdm(total=total, **self._progress_bar_config) else: raise ValueError("Either `total` or `iterable` has to be defined.") def set_progress_bar_config(self, **kwargs): self._progress_bar_config = kwargs def enable_xformers_memory_efficient_attention(self, attention_op: Optional[Callable] = None): r""" Enable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed. <Tip warning={true}> ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedent. </Tip> Parameters: attention_op (`Callable`, *optional*): Override the default `None` operator for use as `op` argument to the [`memory_efficient_attention()`](https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention) function of xFormers. Examples: ```py >>> import torch >>> from diffusers import DiffusionPipeline >>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp >>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) >>> pipe = pipe.to("cuda") >>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) >>> # Workaround for not accepting attention shape using VAE for Flash Attention >>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) ``` """ self.set_use_memory_efficient_attention_xformers(True, attention_op) def disable_xformers_memory_efficient_attention(self): r""" Disable memory efficient attention from [xFormers](https://facebookresearch.github.io/xformers/). """ self.set_use_memory_efficient_attention_xformers(False) def set_use_memory_efficient_attention_xformers( self, valid: bool, attention_op: Optional[Callable] = None ) -> None: # Recursively walk through all the children. 
# Any children which exposes the set_use_memory_efficient_attention_xformers method # gets the message def fn_recursive_set_mem_eff(module: torch.nn.Module): if hasattr(module, "set_use_memory_efficient_attention_xformers"): module.set_use_memory_efficient_attention_xformers(valid, attention_op) for child in module.children(): fn_recursive_set_mem_eff(child) module_names, _ = self._get_signature_keys(self) modules = [getattr(self, n, None) for n in module_names] modules = [m for m in modules if isinstance(m, torch.nn.Module)] for module in modules: fn_recursive_set_mem_eff(module) def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): r""" Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. <Tip warning={true}> ⚠️ Don't enable attention slicing if you're already using `scaled_dot_product_attention` (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient so you won't need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! </Tip> Args: slice_size (`str` or `int`, *optional*, defaults to `"auto"`): When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If `"max"`, maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim` must be a multiple of `slice_size`. Examples: ```py >>> import torch >>> from diffusers import StableDiffusionPipeline >>> pipe = StableDiffusionPipeline.from_pretrained( ... "stable-diffusion-v1-5/stable-diffusion-v1-5", ... torch_dtype=torch.float16, ... use_safetensors=True, ... ) >>> prompt = "a photo of an astronaut riding a horse on mars" >>> pipe.enable_attention_slicing() >>> image = pipe(prompt).images[0] ``` """ self.set_attention_slice(slice_size) def disable_attention_slicing(self): r""" Disable sliced attention computation. If `enable_attention_slicing` was previously called, attention is computed in one step. """ # set slice_size = `None` to disable `attention slicing` self.enable_attention_slicing(None) def set_attention_slice(self, slice_size: Optional[int]): module_names, _ = self._get_signature_keys(self) modules = [getattr(self, n, None) for n in module_names] modules = [m for m in modules if isinstance(m, torch.nn.Module) and hasattr(m, "set_attention_slice")] for module in modules: module.set_attention_slice(slice_size) @classmethod def from_pipe(cls, pipeline, **kwargs): r""" Create a new pipeline from a given pipeline. This method is useful to create a new pipeline from the existing pipeline components without reallocating additional memory. Arguments: pipeline (`DiffusionPipeline`): The pipeline from which to create a new pipeline. Returns: `DiffusionPipeline`: A new pipeline with the same weights and configurations as `pipeline`. 
Examples: ```py >>> from diffusers import StableDiffusionPipeline, StableDiffusionSAGPipeline >>> pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5") >>> new_pipe = StableDiffusionSAGPipeline.from_pipe(pipe) ``` """ original_config = dict(pipeline.config) torch_dtype = kwargs.pop("torch_dtype", torch.float32) # derive the pipeline class to instantiate custom_pipeline = kwargs.pop("custom_pipeline", None) custom_revision = kwargs.pop("custom_revision", None) if custom_pipeline is not None: pipeline_class = _get_custom_pipeline_class(custom_pipeline, revision=custom_revision) else: pipeline_class = cls expected_modules, optional_kwargs = cls._get_signature_keys(pipeline_class) # true_optional_modules are optional components with default value in signature so it is ok not to pass them to `__init__` # e.g. `image_encoder` for StableDiffusionPipeline parameters = inspect.signature(cls.__init__).parameters true_optional_modules = set( {k for k, v in parameters.items() if v.default != inspect._empty and k in expected_modules} ) # get the class of each component based on its type hint # e.g. {"unet": UNet2DConditionModel, "text_encoder": CLIPTextMode} component_types = pipeline_class._get_signature_types() pretrained_model_name_or_path = original_config.pop("_name_or_path", None) # allow users pass modules in `kwargs` to override the original pipeline's components passed_class_obj = {k: kwargs.pop(k) for k in expected_modules if k in kwargs} original_class_obj = {} for name, component in pipeline.components.items(): if name in expected_modules and name not in passed_class_obj: # for model components, we will not switch over if the class does not matches the type hint in the new pipeline's signature if ( not isinstance(component, ModelMixin) or type(component) in component_types[name] or (component is None and name in cls._optional_components) ): original_class_obj[name] = component else: logger.warning( f"component {name} is not switched over to new pipeline because type does not match the expected." f" {name} is {type(component)} while the new pipeline expect {component_types[name]}." f" please pass the component of the correct type to the new pipeline. `from_pipe(..., {name}={name})`" ) # allow users pass optional kwargs to override the original pipelines config attribute passed_pipe_kwargs = {k: kwargs.pop(k) for k in optional_kwargs if k in kwargs} original_pipe_kwargs = { k: original_config[k] for k in original_config.keys() if k in optional_kwargs and k not in passed_pipe_kwargs } # config attribute that were not expected by pipeline is stored as its private attribute # (i.e. 
when the original pipeline was also instantiated with `from_pipe` from another pipeline that has this config) # in this case, we will pass them as optional arguments if they can be accepted by the new pipeline additional_pipe_kwargs = [ k[1:] for k in original_config.keys() if k.startswith("_") and k[1:] in optional_kwargs and k[1:] not in passed_pipe_kwargs ] for k in additional_pipe_kwargs: original_pipe_kwargs[k] = original_config.pop(f"_{k}") pipeline_kwargs = { **passed_class_obj, **original_class_obj, **passed_pipe_kwargs, **original_pipe_kwargs, **kwargs, } # store unused config as private attribute in the new pipeline unused_original_config = { f"{'' if k.startswith('_') else '_'}{k}": v for k, v in original_config.items() if k not in pipeline_kwargs } optional_components = ( pipeline._optional_components if hasattr(pipeline, "_optional_components") and pipeline._optional_components else [] ) missing_modules = ( set(expected_modules) - set(optional_components) - set(pipeline_kwargs.keys()) - set(true_optional_modules) ) if len(missing_modules) > 0: raise ValueError( f"Pipeline {pipeline_class} expected {expected_modules}, but only {set(list(passed_class_obj.keys()) + list(original_class_obj.keys()))} were passed" ) new_pipeline = pipeline_class(**pipeline_kwargs) if pretrained_model_name_or_path is not None: new_pipeline.register_to_config(_name_or_path=pretrained_model_name_or_path) new_pipeline.register_to_config(**unused_original_config) if torch_dtype is not None: new_pipeline.to(dtype=torch_dtype) return new_pipeline def _maybe_raise_error_if_group_offload_active( self, raise_error: bool = False, module: Optional[torch.nn.Module] = None ) -> bool: from ..hooks.group_offloading import _is_group_offload_enabled components = self.components.values() if module is None else [module] components = [component for component in components if isinstance(component, torch.nn.Module)] for component in components: if _is_group_offload_enabled(component): if raise_error: raise ValueError( "You are trying to apply model/sequential CPU offloading to a pipeline that contains components " "with group offloading enabled. This is not supported. Please disable group offloading for " "components of the pipeline to use other offloading methods." ) return True return False class StableDiffusionMixin: r""" Helper for DiffusionPipeline with vae and unet.(mainly for LDM such as stable diffusion) """ def enable_vae_slicing(self): r""" Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. """ self.vae.enable_slicing() def disable_vae_slicing(self): r""" Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to computing decoding in one step. """ self.vae.disable_slicing() def enable_vae_tiling(self): r""" Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images. """ self.vae.enable_tiling() def disable_vae_tiling(self): r""" Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to computing decoding in one step. 
""" self.vae.disable_tiling() def enable_freeu(self, s1: float, s2: float, b1: float, b2: float): r"""Enables the FreeU mechanism as in https://huggingface.co/papers/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the [official repository](https://github.com/ChenyangSi/FreeU) for combinations of the values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. Args: s1 (`float`): Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate "oversmoothing effect" in the enhanced denoising process. s2 (`float`): Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate "oversmoothing effect" in the enhanced denoising process. b1 (`float`): Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (`float`): Scaling factor for stage 2 to amplify the contributions of backbone features. """ if not hasattr(self, "unet"): raise ValueError("The pipeline must have `unet` for using FreeU.") self.unet.enable_freeu(s1=s1, s2=s2, b1=b1, b2=b2) def disable_freeu(self): """Disables the FreeU mechanism if enabled.""" self.unet.disable_freeu() def fuse_qkv_projections(self, unet: bool = True, vae: bool = True): """ Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused. <Tip warning={true}> This API is 🧪 experimental. </Tip> Args: unet (`bool`, defaults to `True`): To apply fusion on the UNet. vae (`bool`, defaults to `True`): To apply fusion on the VAE. """ self.fusing_unet = False self.fusing_vae = False if unet: self.fusing_unet = True self.unet.fuse_qkv_projections() self.unet.set_attn_processor(FusedAttnProcessor2_0()) if vae: if not isinstance(self.vae, AutoencoderKL): raise ValueError("`fuse_qkv_projections()` is only supported for the VAE of type `AutoencoderKL`.") self.fusing_vae = True self.vae.fuse_qkv_projections() self.vae.set_attn_processor(FusedAttnProcessor2_0()) def unfuse_qkv_projections(self, unet: bool = True, vae: bool = True): """Disable QKV projection fusion if enabled. <Tip warning={true}> This API is 🧪 experimental. </Tip> Args: unet (`bool`, defaults to `True`): To apply fusion on the UNet. vae (`bool`, defaults to `True`): To apply fusion on the VAE. """ if unet: if not self.fusing_unet: logger.warning("The UNet was not initially fused for QKV projections. Doing nothing.") else: self.unet.unfuse_qkv_projections() self.fusing_unet = False if vae: if not self.fusing_vae: logger.warning("The VAE was not initially fused for QKV projections. Doing nothing.") else: self.vae.unfuse_qkv_projections() self.fusing_vae = False
diffusers/src/diffusers/pipelines/pipeline_utils.py
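The loading, component-reuse, and offloading machinery in `pipeline_utils.py` above is exposed through a small public API. Below is a minimal usage sketch; the checkpoint id, the prompt, and the presence of a CUDA-capable accelerator are illustrative assumptions, not taken from the file above.

```python
import torch

from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionPipeline

# from_pretrained resolves model_index.json, loads every sub-model through
# load_sub_model (step 7 above), and validates the result before instantiating
# the pipeline class.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Reuse the already-loaded components for a second pipeline without reallocating
# memory (the `components` property validated above).
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)

# Model-level CPU offload: each component is moved to the accelerator only while
# its forward pass runs, then handed back (see enable_model_cpu_offload above).
pipe.enable_model_cpu_offload()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```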
from typing import TYPE_CHECKING from ...utils import ( DIFFUSERS_SLOW_IMPORT, OptionalDependencyNotAvailable, _LazyModule, get_objects_from_module, is_torch_available, is_transformers_available, is_transformers_version, ) _dummy_objects = {} _import_structure = {} try: if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.27.0")): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from ...utils import dummy_torch_and_transformers_objects _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects)) else: _import_structure["modeling_stable_audio"] = ["StableAudioProjectionModel"] _import_structure["pipeline_stable_audio"] = ["StableAudioPipeline"] if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT: try: if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.27.0")): raise OptionalDependencyNotAvailable() except OptionalDependencyNotAvailable: from ...utils.dummy_torch_and_transformers_objects import * else: from .modeling_stable_audio import StableAudioProjectionModel from .pipeline_stable_audio import StableAudioPipeline else: import sys sys.modules[__name__] = _LazyModule( __name__, globals()["__file__"], _import_structure, module_spec=__spec__, ) for name, value in _dummy_objects.items(): setattr(sys.modules[__name__], name, value)
diffusers/src/diffusers/pipelines/stable_audio/__init__.py
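The `_LazyModule` boilerplate above defers the heavy `torch`/`transformers` imports until a symbol is first accessed. A short sketch of the observable behaviour, assuming `torch` and `transformers>=4.27.0` are installed (otherwise the same names resolve to dummy placeholder objects from `dummy_torch_and_transformers_objects`):

```python
import diffusers

# The top-level `diffusers` package is itself a _LazyModule, so this attribute
# access is what actually triggers the import of pipeline_stable_audio.
pipeline_cls = diffusers.StableAudioPipeline
print(pipeline_cls.__module__)  # diffusers.pipelines.stable_audio.pipeline_stable_audio

# The usual spelling in user code is equivalent:
from diffusers import StableAudioPipeline, StableAudioProjectionModel
```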
import numpy as np import torch from ...utils import is_invisible_watermark_available if is_invisible_watermark_available(): from imwatermark import WatermarkEncoder # Copied from https://github.com/Stability-AI/generative-models/blob/613af104c6b85184091d42d374fef420eddb356d/scripts/demo/streamlit_helpers.py#L66 WATERMARK_MESSAGE = 0b101100111110110010010000011110111011000110011110 # bin(x)[2:] gives bits of x as str, use int to convert them to 0/1 WATERMARK_BITS = [int(bit) for bit in bin(WATERMARK_MESSAGE)[2:]] class StableDiffusionXLWatermarker: def __init__(self): self.watermark = WATERMARK_BITS self.encoder = WatermarkEncoder() self.encoder.set_watermark("bits", self.watermark) def apply_watermark(self, images: torch.Tensor): # can't encode images that are smaller than 256 if images.shape[-1] < 256: return images images = (255 * (images / 2 + 0.5)).cpu().permute(0, 2, 3, 1).float().numpy() # Convert RGB to BGR, which is the channel order expected by the watermark encoder. images = images[:, :, :, ::-1] # Add watermark and convert BGR back to RGB images = [self.encoder.encode(image, "dwtDct")[:, :, ::-1] for image in images] images = np.array(images) images = torch.from_numpy(images).permute(0, 3, 1, 2) images = torch.clamp(2 * (images / 255 - 0.5), min=-1.0, max=1.0) return images
diffusers/src/diffusers/pipelines/stable_diffusion_xl/watermark.py
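`StableDiffusionXLWatermarker.apply_watermark` expects a batch of RGB images in the `[-1, 1]` range shaped `(batch, channels, height, width)` and returns images narrower than 256 pixels untouched. A small sketch, assuming the optional `invisible-watermark` package is installed:

```python
import torch

from diffusers.pipelines.stable_diffusion_xl.watermark import StableDiffusionXLWatermarker

watermarker = StableDiffusionXLWatermarker()

# Two 512x512 RGB images in [-1, 1], as produced by the pipeline before post-processing.
images = torch.rand(2, 3, 512, 512) * 2 - 1
watermarked = watermarker.apply_watermark(images)
print(watermarked.shape)  # torch.Size([2, 3, 512, 512]), values clamped back to [-1, 1]

# Images narrower than 256 pixels are returned unchanged.
small = torch.rand(2, 3, 128, 128) * 2 - 1
assert watermarker.apply_watermark(small) is small
```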
# Copyright 2025 Kakao Brain and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch from torch import nn from ...configuration_utils import ConfigMixin, register_to_config from ...models import ModelMixin class UnCLIPTextProjModel(ModelMixin, ConfigMixin): """ Utility class for CLIP embeddings. Used to combine the image and text embeddings into a format usable by the decoder. For more details, see the original paper: https://huggingface.co/papers/2204.06125 section 2.1 """ @register_to_config def __init__( self, *, clip_extra_context_tokens: int = 4, clip_embeddings_dim: int = 768, time_embed_dim: int, cross_attention_dim, ): super().__init__() self.learned_classifier_free_guidance_embeddings = nn.Parameter(torch.zeros(clip_embeddings_dim)) # parameters for additional clip time embeddings self.embedding_proj = nn.Linear(clip_embeddings_dim, time_embed_dim) self.clip_image_embeddings_project_to_time_embeddings = nn.Linear(clip_embeddings_dim, time_embed_dim) # parameters for encoder hidden states self.clip_extra_context_tokens = clip_extra_context_tokens self.clip_extra_context_tokens_proj = nn.Linear( clip_embeddings_dim, self.clip_extra_context_tokens * cross_attention_dim ) self.encoder_hidden_states_proj = nn.Linear(clip_embeddings_dim, cross_attention_dim) self.text_encoder_hidden_states_norm = nn.LayerNorm(cross_attention_dim) def forward(self, *, image_embeddings, prompt_embeds, text_encoder_hidden_states, do_classifier_free_guidance): if do_classifier_free_guidance: # Add the classifier free guidance embeddings to the image embeddings image_embeddings_batch_size = image_embeddings.shape[0] classifier_free_guidance_embeddings = self.learned_classifier_free_guidance_embeddings.unsqueeze(0) classifier_free_guidance_embeddings = classifier_free_guidance_embeddings.expand( image_embeddings_batch_size, -1 ) image_embeddings = torch.cat([classifier_free_guidance_embeddings, image_embeddings], dim=0) # The image embeddings batch size and the text embeddings batch size are equal assert image_embeddings.shape[0] == prompt_embeds.shape[0] batch_size = prompt_embeds.shape[0] # "Specifically, we modify the architecture described in Nichol et al. (2021) by projecting and # adding CLIP embeddings to the existing timestep embedding, ... time_projected_prompt_embeds = self.embedding_proj(prompt_embeds) time_projected_image_embeddings = self.clip_image_embeddings_project_to_time_embeddings(image_embeddings) additive_clip_time_embeddings = time_projected_image_embeddings + time_projected_prompt_embeds # ... 
and by projecting CLIP embeddings into four # extra tokens of context that are concatenated to the sequence of outputs from the GLIDE text encoder" clip_extra_context_tokens = self.clip_extra_context_tokens_proj(image_embeddings) clip_extra_context_tokens = clip_extra_context_tokens.reshape(batch_size, -1, self.clip_extra_context_tokens) clip_extra_context_tokens = clip_extra_context_tokens.permute(0, 2, 1) text_encoder_hidden_states = self.encoder_hidden_states_proj(text_encoder_hidden_states) text_encoder_hidden_states = self.text_encoder_hidden_states_norm(text_encoder_hidden_states) text_encoder_hidden_states = torch.cat([clip_extra_context_tokens, text_encoder_hidden_states], dim=1) return text_encoder_hidden_states, additive_clip_time_embeddings
diffusers/src/diffusers/pipelines/unclip/text_proj.py
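To make the tensor bookkeeping in `UnCLIPTextProjModel.forward` concrete, here is an illustrative sketch with random tensors. The `time_embed_dim` and `cross_attention_dim` values are placeholders (in the real pipeline they come from the decoder UNet's configuration), and `do_classifier_free_guidance=False` keeps the image and text batch sizes equal:

```python
import torch

from diffusers.pipelines.unclip.text_proj import UnCLIPTextProjModel

text_proj = UnCLIPTextProjModel(
    clip_extra_context_tokens=4,
    clip_embeddings_dim=768,
    time_embed_dim=1536,       # placeholder value for illustration
    cross_attention_dim=2048,  # placeholder value for illustration
)

batch, seq_len = 2, 77
hidden_states, additive_time_emb = text_proj(
    image_embeddings=torch.randn(batch, 768),
    prompt_embeds=torch.randn(batch, 768),
    text_encoder_hidden_states=torch.randn(batch, seq_len, 768),
    do_classifier_free_guidance=False,
)

# Four extra image-derived context tokens are prepended to the projected text tokens.
print(hidden_states.shape)      # torch.Size([2, 81, 2048])
print(additive_time_emb.shape)  # torch.Size([2, 1536])
```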
# Copyright (c) 2022 Dominic Rampas MIT License # Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Union import torch import torch.nn as nn from ...configuration_utils import ConfigMixin, register_to_config from ...models.autoencoders.vae import DecoderOutput, VectorQuantizer from ...models.modeling_utils import ModelMixin from ...models.vq_model import VQEncoderOutput from ...utils.accelerate_utils import apply_forward_hook class MixingResidualBlock(nn.Module): """ Residual block with mixing used by Paella's VQ-VAE. """ def __init__(self, inp_channels, embed_dim): super().__init__() # depthwise self.norm1 = nn.LayerNorm(inp_channels, elementwise_affine=False, eps=1e-6) self.depthwise = nn.Sequential( nn.ReplicationPad2d(1), nn.Conv2d(inp_channels, inp_channels, kernel_size=3, groups=inp_channels) ) # channelwise self.norm2 = nn.LayerNorm(inp_channels, elementwise_affine=False, eps=1e-6) self.channelwise = nn.Sequential( nn.Linear(inp_channels, embed_dim), nn.GELU(), nn.Linear(embed_dim, inp_channels) ) self.gammas = nn.Parameter(torch.zeros(6), requires_grad=True) def forward(self, x): mods = self.gammas x_temp = self.norm1(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2) * (1 + mods[0]) + mods[1] x = x + self.depthwise(x_temp) * mods[2] x_temp = self.norm2(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2) * (1 + mods[3]) + mods[4] x = x + self.channelwise(x_temp.permute(0, 2, 3, 1)).permute(0, 3, 1, 2) * mods[5] return x class PaellaVQModel(ModelMixin, ConfigMixin): r"""VQ-VAE model from Paella model. This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library implements for all the model (such as downloading or saving, etc.) Parameters: in_channels (int, *optional*, defaults to 3): Number of channels in the input image. out_channels (int, *optional*, defaults to 3): Number of channels in the output. up_down_scale_factor (int, *optional*, defaults to 2): Up and Downscale factor of the input image. levels (int, *optional*, defaults to 2): Number of levels in the model. bottleneck_blocks (int, *optional*, defaults to 12): Number of bottleneck blocks in the model. embed_dim (int, *optional*, defaults to 384): Number of hidden channels in the model. latent_channels (int, *optional*, defaults to 4): Number of latent channels in the VQ-VAE model. num_vq_embeddings (int, *optional*, defaults to 8192): Number of codebook vectors in the VQ-VAE. scale_factor (float, *optional*, defaults to 0.3764): Scaling factor of the latent space. 
""" @register_to_config def __init__( self, in_channels: int = 3, out_channels: int = 3, up_down_scale_factor: int = 2, levels: int = 2, bottleneck_blocks: int = 12, embed_dim: int = 384, latent_channels: int = 4, num_vq_embeddings: int = 8192, scale_factor: float = 0.3764, ): super().__init__() c_levels = [embed_dim // (2**i) for i in reversed(range(levels))] # Encoder blocks self.in_block = nn.Sequential( nn.PixelUnshuffle(up_down_scale_factor), nn.Conv2d(in_channels * up_down_scale_factor**2, c_levels[0], kernel_size=1), ) down_blocks = [] for i in range(levels): if i > 0: down_blocks.append(nn.Conv2d(c_levels[i - 1], c_levels[i], kernel_size=4, stride=2, padding=1)) block = MixingResidualBlock(c_levels[i], c_levels[i] * 4) down_blocks.append(block) down_blocks.append( nn.Sequential( nn.Conv2d(c_levels[-1], latent_channels, kernel_size=1, bias=False), nn.BatchNorm2d(latent_channels), # then normalize them to have mean 0 and std 1 ) ) self.down_blocks = nn.Sequential(*down_blocks) # Vector Quantizer self.vquantizer = VectorQuantizer(num_vq_embeddings, vq_embed_dim=latent_channels, legacy=False, beta=0.25) # Decoder blocks up_blocks = [nn.Sequential(nn.Conv2d(latent_channels, c_levels[-1], kernel_size=1))] for i in range(levels): for j in range(bottleneck_blocks if i == 0 else 1): block = MixingResidualBlock(c_levels[levels - 1 - i], c_levels[levels - 1 - i] * 4) up_blocks.append(block) if i < levels - 1: up_blocks.append( nn.ConvTranspose2d( c_levels[levels - 1 - i], c_levels[levels - 2 - i], kernel_size=4, stride=2, padding=1 ) ) self.up_blocks = nn.Sequential(*up_blocks) self.out_block = nn.Sequential( nn.Conv2d(c_levels[0], out_channels * up_down_scale_factor**2, kernel_size=1), nn.PixelShuffle(up_down_scale_factor), ) @apply_forward_hook def encode(self, x: torch.Tensor, return_dict: bool = True) -> VQEncoderOutput: h = self.in_block(x) h = self.down_blocks(h) if not return_dict: return (h,) return VQEncoderOutput(latents=h) @apply_forward_hook def decode( self, h: torch.Tensor, force_not_quantize: bool = True, return_dict: bool = True ) -> Union[DecoderOutput, torch.Tensor]: if not force_not_quantize: quant, _, _ = self.vquantizer(h) else: quant = h x = self.up_blocks(quant) dec = self.out_block(x) if not return_dict: return (dec,) return DecoderOutput(sample=dec) def forward(self, sample: torch.Tensor, return_dict: bool = True) -> Union[DecoderOutput, torch.Tensor]: r""" Args: sample (`torch.Tensor`): Input sample. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`DecoderOutput`] instead of a plain tuple. """ x = sample h = self.encode(x).latents dec = self.decode(h).sample if not return_dict: return (dec,) return DecoderOutput(sample=dec)
diffusers/src/diffusers/pipelines/wuerstchen/modeling_paella_vq_model.py
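With the default configuration, `PaellaVQModel` downsamples by an overall factor of 4 (a `PixelUnshuffle(2)` followed by one stride-2 convolution for the second level) into 4-channel latents. A rough round-trip sketch with randomly initialized weights, purely to illustrate the shapes (an untrained model will not produce a meaningful reconstruction):

```python
import torch

from diffusers.pipelines.wuerstchen.modeling_paella_vq_model import PaellaVQModel

vq_model = PaellaVQModel()  # defaults: 2 levels, 4 latent channels, 8192 codebook entries

images = torch.randn(1, 3, 256, 256)

with torch.no_grad():
    latents = vq_model.encode(images).latents
    print(latents.shape)  # torch.Size([1, 4, 64, 64])

    # decode() bypasses the vector quantizer by default (force_not_quantize=True),
    # which is how the Wuerstchen decoder pipeline uses it.
    reconstruction = vq_model.decode(latents).sample
    print(reconstruction.shape)  # torch.Size([1, 3, 256, 256])
```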
# Copyright 2025 The HuggingFace Team and City96. All rights reserved. # # # # Licensed under the Apache License, Version 2.0 (the "License"); # # you may not use this file except in compliance with the License. # # You may obtain a copy of the License at # # # # http://www.apache.org/licenses/LICENSE-2.0 # # # # Unless required by applicable law or agreed to in writing, software # # distributed under the License is distributed on an "AS IS" BASIS, # # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # # See the License for the specific language governing permissions and # # limitations under the License. import inspect import os from contextlib import nullcontext import gguf import torch import torch.nn as nn from ...utils import is_accelerate_available, is_kernels_available if is_accelerate_available(): import accelerate from accelerate import init_empty_weights from accelerate.hooks import add_hook_to_module, remove_hook_from_module can_use_cuda_kernels = ( os.getenv("DIFFUSERS_GGUF_CUDA_KERNELS", "false").lower() in ["1", "true", "yes"] and torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 7 ) if can_use_cuda_kernels and is_kernels_available(): from kernels import get_kernel ops = get_kernel("Isotr0py/ggml") else: ops = None UNQUANTIZED_TYPES = {gguf.GGMLQuantizationType.F32, gguf.GGMLQuantizationType.F16, gguf.GGMLQuantizationType.BF16} STANDARD_QUANT_TYPES = { gguf.GGMLQuantizationType.Q4_0, gguf.GGMLQuantizationType.Q4_1, gguf.GGMLQuantizationType.Q5_0, gguf.GGMLQuantizationType.Q5_1, gguf.GGMLQuantizationType.Q8_0, gguf.GGMLQuantizationType.Q8_1, } KQUANT_TYPES = { gguf.GGMLQuantizationType.Q2_K, gguf.GGMLQuantizationType.Q3_K, gguf.GGMLQuantizationType.Q4_K, gguf.GGMLQuantizationType.Q5_K, gguf.GGMLQuantizationType.Q6_K, } IMATRIX_QUANT_TYPES = { gguf.GGMLQuantizationType.IQ1_M, gguf.GGMLQuantizationType.IQ1_S, gguf.GGMLQuantizationType.IQ2_XXS, gguf.GGMLQuantizationType.IQ2_XS, gguf.GGMLQuantizationType.IQ2_S, gguf.GGMLQuantizationType.IQ3_XXS, gguf.GGMLQuantizationType.IQ3_S, gguf.GGMLQuantizationType.IQ4_XS, gguf.GGMLQuantizationType.IQ4_NL, } # TODO(Isotr0py): Currently, we don't have MMQ kernel for I-Matrix quantization. # Consolidate DEQUANT_TYPES, MMVQ_QUANT_TYPES and MMQ_QUANT_TYPES after we add # MMQ kernel for I-Matrix quantization. DEQUANT_TYPES = STANDARD_QUANT_TYPES | KQUANT_TYPES | IMATRIX_QUANT_TYPES MMVQ_QUANT_TYPES = STANDARD_QUANT_TYPES | KQUANT_TYPES | IMATRIX_QUANT_TYPES MMQ_QUANT_TYPES = STANDARD_QUANT_TYPES | KQUANT_TYPES def _fused_mul_mat_gguf(x: torch.Tensor, qweight: torch.Tensor, qweight_type: int) -> torch.Tensor: # there is no need to call any kernel for fp16/bf16 if qweight_type in UNQUANTIZED_TYPES: return x @ qweight.T # TODO(Isotr0py): GGUF's MMQ and MMVQ implementation are designed for # contiguous batching and inefficient with diffusers' batching, # so we disabled it now. # elif qweight_type in MMVQ_QUANT_TYPES: # y = ops.ggml_mul_mat_vec_a8(qweight, x, qweight_type, qweight.shape[0]) # elif qweight_type in MMQ_QUANT_TYPES: # y = ops.ggml_mul_mat_a8(qweight, x, qweight_type, qweight.shape[0]) # If there is no available MMQ kernel, fallback to dequantize if qweight_type in DEQUANT_TYPES: block_size, type_size = gguf.GGML_QUANT_SIZES[qweight_type] shape = (qweight.shape[0], qweight.shape[1] // type_size * block_size) weight = ops.ggml_dequantize(qweight, qweight_type, *shape) y = x @ weight.to(x.dtype).T else: # Raise an error if the quantization type is not supported. 
# Might be useful if llama.cpp adds a new quantization type. # Wrap to GGMLQuantizationType IntEnum to make sure it's a valid type. qweight_type = gguf.GGMLQuantizationType(qweight_type) raise NotImplementedError(f"Unsupported GGUF quantization type: {qweight_type}") return y.as_tensor() # Copied from diffusers.quantizers.bitsandbytes.utils._create_accelerate_new_hook def _create_accelerate_new_hook(old_hook): r""" Creates a new hook based on the old hook. Use it only if you know what you are doing ! This method is a copy of: https://github.com/huggingface/peft/blob/748f7968f3a31ec06a1c2b0328993319ad9a150a/src/peft/utils/other.py#L245 with some changes """ old_hook_cls = getattr(accelerate.hooks, old_hook.__class__.__name__) old_hook_attr = old_hook.__dict__ filtered_old_hook_attr = {} old_hook_init_signature = inspect.signature(old_hook_cls.__init__) for k in old_hook_attr.keys(): if k in old_hook_init_signature.parameters: filtered_old_hook_attr[k] = old_hook_attr[k] new_hook = old_hook_cls(**filtered_old_hook_attr) return new_hook def _replace_with_gguf_linear(model, compute_dtype, state_dict, prefix="", modules_to_not_convert=[]): def _should_convert_to_gguf(state_dict, prefix): weight_key = prefix + "weight" return weight_key in state_dict and isinstance(state_dict[weight_key], GGUFParameter) has_children = list(model.children()) if not has_children: return for name, module in model.named_children(): module_prefix = prefix + name + "." _replace_with_gguf_linear(module, compute_dtype, state_dict, module_prefix, modules_to_not_convert) if ( isinstance(module, nn.Linear) and _should_convert_to_gguf(state_dict, module_prefix) and name not in modules_to_not_convert ): ctx = init_empty_weights if is_accelerate_available() else nullcontext with ctx(): model._modules[name] = GGUFLinear( module.in_features, module.out_features, module.bias is not None, compute_dtype=compute_dtype, ) model._modules[name].source_cls = type(module) # Force requires_grad to False to avoid unexpected errors model._modules[name].requires_grad_(False) return model def _dequantize_gguf_and_restore_linear(model, modules_to_not_convert=[]): for name, module in model.named_children(): if isinstance(module, GGUFLinear) and name not in modules_to_not_convert: device = module.weight.device bias = getattr(module, "bias", None) ctx = init_empty_weights if is_accelerate_available() else nullcontext with ctx(): new_module = nn.Linear( module.in_features, module.out_features, module.bias is not None, device=device, ) new_module.weight = nn.Parameter(dequantize_gguf_tensor(module.weight)) if bias is not None: new_module.bias = bias # Create a new hook and attach it in case we use accelerate if hasattr(module, "_hf_hook"): old_hook = module._hf_hook new_hook = _create_accelerate_new_hook(old_hook) remove_hook_from_module(module) add_hook_to_module(new_module, new_hook) new_module.to(device) model._modules[name] = new_module has_children = list(module.children()) if has_children: _dequantize_gguf_and_restore_linear(module, modules_to_not_convert) return model # dequantize operations based on torch ports of GGUF dequantize_functions # from City96 # more info: https://github.com/city96/ComfyUI-GGUF/blob/main/dequant.py QK_K = 256 K_SCALE_SIZE = 12 def to_uint32(x): x = x.view(torch.uint8).to(torch.int32) return (x[:, 0] | x[:, 1] << 8 | x[:, 2] << 16 | x[:, 3] << 24).unsqueeze(1) def split_block_dims(blocks, *args): n_max = blocks.shape[1] dims = list(args) + [n_max - sum(args)] return torch.split(blocks, dims, dim=1) def 
get_scale_min(scales): n_blocks = scales.shape[0] scales = scales.view(torch.uint8) scales = scales.reshape((n_blocks, 3, 4)) d, m, m_d = torch.split(scales, scales.shape[-2] // 3, dim=-2) sc = torch.cat([d & 0x3F, (m_d & 0x0F) | ((d >> 2) & 0x30)], dim=-1) min = torch.cat([m & 0x3F, (m_d >> 4) | ((m >> 2) & 0x30)], dim=-1) return (sc.reshape((n_blocks, 8)), min.reshape((n_blocks, 8))) def dequantize_blocks_Q8_0(blocks, block_size, type_size, dtype=None): d, x = split_block_dims(blocks, 2) d = d.view(torch.float16).to(dtype) x = x.view(torch.int8) return d * x def dequantize_blocks_Q5_1(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] d, m, qh, qs = split_block_dims(blocks, 2, 2, 4) d = d.view(torch.float16).to(dtype) m = m.view(torch.float16).to(dtype) qh = to_uint32(qh) qh = qh.reshape((n_blocks, 1)) >> torch.arange(32, device=d.device, dtype=torch.int32).reshape(1, 32) ql = qs.reshape((n_blocks, -1, 1, block_size // 2)) >> torch.tensor( [0, 4], device=d.device, dtype=torch.uint8 ).reshape(1, 1, 2, 1) qh = (qh & 1).to(torch.uint8) ql = (ql & 0x0F).reshape((n_blocks, -1)) qs = ql | (qh << 4) return (d * qs) + m def dequantize_blocks_Q5_0(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] d, qh, qs = split_block_dims(blocks, 2, 4) d = d.view(torch.float16).to(dtype) qh = to_uint32(qh) qh = qh.reshape(n_blocks, 1) >> torch.arange(32, device=d.device, dtype=torch.int32).reshape(1, 32) ql = qs.reshape(n_blocks, -1, 1, block_size // 2) >> torch.tensor( [0, 4], device=d.device, dtype=torch.uint8 ).reshape(1, 1, 2, 1) qh = (qh & 1).to(torch.uint8) ql = (ql & 0x0F).reshape(n_blocks, -1) qs = (ql | (qh << 4)).to(torch.int8) - 16 return d * qs def dequantize_blocks_Q4_1(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] d, m, qs = split_block_dims(blocks, 2, 2) d = d.view(torch.float16).to(dtype) m = m.view(torch.float16).to(dtype) qs = qs.reshape((n_blocks, -1, 1, block_size // 2)) >> torch.tensor( [0, 4], device=d.device, dtype=torch.uint8 ).reshape(1, 1, 2, 1) qs = (qs & 0x0F).reshape(n_blocks, -1) return (d * qs) + m def dequantize_blocks_Q4_0(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] d, qs = split_block_dims(blocks, 2) d = d.view(torch.float16).to(dtype) qs = qs.reshape((n_blocks, -1, 1, block_size // 2)) >> torch.tensor( [0, 4], device=d.device, dtype=torch.uint8 ).reshape((1, 1, 2, 1)) qs = (qs & 0x0F).reshape((n_blocks, -1)).to(torch.int8) - 8 return d * qs def dequantize_blocks_Q6_K(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] ( ql, qh, scales, d, ) = split_block_dims(blocks, QK_K // 2, QK_K // 4, QK_K // 16) scales = scales.view(torch.int8).to(dtype) d = d.view(torch.float16).to(dtype) d = (d * scales).reshape((n_blocks, QK_K // 16, 1)) ql = ql.reshape((n_blocks, -1, 1, 64)) >> torch.tensor([0, 4], device=d.device, dtype=torch.uint8).reshape( (1, 1, 2, 1) ) ql = (ql & 0x0F).reshape((n_blocks, -1, 32)) qh = qh.reshape((n_blocks, -1, 1, 32)) >> torch.tensor([0, 2, 4, 6], device=d.device, dtype=torch.uint8).reshape( (1, 1, 4, 1) ) qh = (qh & 0x03).reshape((n_blocks, -1, 32)) q = (ql | (qh << 4)).to(torch.int8) - 32 q = q.reshape((n_blocks, QK_K // 16, -1)) return (d * q).reshape((n_blocks, QK_K)) def dequantize_blocks_Q5_K(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] d, dmin, scales, qh, qs = split_block_dims(blocks, 2, 2, K_SCALE_SIZE, QK_K // 8) d = d.view(torch.float16).to(dtype) dmin = dmin.view(torch.float16).to(dtype) sc, m = 
get_scale_min(scales) d = (d * sc).reshape((n_blocks, -1, 1)) dm = (dmin * m).reshape((n_blocks, -1, 1)) ql = qs.reshape((n_blocks, -1, 1, 32)) >> torch.tensor([0, 4], device=d.device, dtype=torch.uint8).reshape( (1, 1, 2, 1) ) qh = qh.reshape((n_blocks, -1, 1, 32)) >> torch.arange(0, 8, device=d.device, dtype=torch.uint8).reshape( (1, 1, 8, 1) ) ql = (ql & 0x0F).reshape((n_blocks, -1, 32)) qh = (qh & 0x01).reshape((n_blocks, -1, 32)) q = ql | (qh << 4) return (d * q - dm).reshape((n_blocks, QK_K)) def dequantize_blocks_Q4_K(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] d, dmin, scales, qs = split_block_dims(blocks, 2, 2, K_SCALE_SIZE) d = d.view(torch.float16).to(dtype) dmin = dmin.view(torch.float16).to(dtype) sc, m = get_scale_min(scales) d = (d * sc).reshape((n_blocks, -1, 1)) dm = (dmin * m).reshape((n_blocks, -1, 1)) qs = qs.reshape((n_blocks, -1, 1, 32)) >> torch.tensor([0, 4], device=d.device, dtype=torch.uint8).reshape( (1, 1, 2, 1) ) qs = (qs & 0x0F).reshape((n_blocks, -1, 32)) return (d * qs - dm).reshape((n_blocks, QK_K)) def dequantize_blocks_Q3_K(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] hmask, qs, scales, d = split_block_dims(blocks, QK_K // 8, QK_K // 4, 12) d = d.view(torch.float16).to(dtype) lscales, hscales = scales[:, :8], scales[:, 8:] lscales = lscales.reshape((n_blocks, 1, 8)) >> torch.tensor([0, 4], device=d.device, dtype=torch.uint8).reshape( (1, 2, 1) ) lscales = lscales.reshape((n_blocks, 16)) hscales = hscales.reshape((n_blocks, 1, 4)) >> torch.tensor( [0, 2, 4, 6], device=d.device, dtype=torch.uint8 ).reshape((1, 4, 1)) hscales = hscales.reshape((n_blocks, 16)) scales = (lscales & 0x0F) | ((hscales & 0x03) << 4) scales = scales.to(torch.int8) - 32 dl = (d * scales).reshape((n_blocks, 16, 1)) ql = qs.reshape((n_blocks, -1, 1, 32)) >> torch.tensor([0, 2, 4, 6], device=d.device, dtype=torch.uint8).reshape( (1, 1, 4, 1) ) qh = hmask.reshape(n_blocks, -1, 1, 32) >> torch.arange(0, 8, device=d.device, dtype=torch.uint8).reshape( (1, 1, 8, 1) ) ql = ql.reshape((n_blocks, 16, QK_K // 16)) & 3 qh = (qh.reshape((n_blocks, 16, QK_K // 16)) & 1) ^ 1 q = ql.to(torch.int8) - (qh << 2).to(torch.int8) return (dl * q).reshape((n_blocks, QK_K)) def dequantize_blocks_Q2_K(blocks, block_size, type_size, dtype=None): n_blocks = blocks.shape[0] scales, qs, d, dmin = split_block_dims(blocks, QK_K // 16, QK_K // 4, 2) d = d.view(torch.float16).to(dtype) dmin = dmin.view(torch.float16).to(dtype) # (n_blocks, 16, 1) dl = (d * (scales & 0xF)).reshape((n_blocks, QK_K // 16, 1)) ml = (dmin * (scales >> 4)).reshape((n_blocks, QK_K // 16, 1)) shift = torch.tensor([0, 2, 4, 6], device=d.device, dtype=torch.uint8).reshape((1, 1, 4, 1)) qs = (qs.reshape((n_blocks, -1, 1, 32)) >> shift) & 3 qs = qs.reshape((n_blocks, QK_K // 16, 16)) qs = dl * qs - ml return qs.reshape((n_blocks, -1)) def dequantize_blocks_BF16(blocks, block_size, type_size, dtype=None): return (blocks.view(torch.int16).to(torch.int32) << 16).view(torch.float32) GGML_QUANT_SIZES = gguf.GGML_QUANT_SIZES dequantize_functions = { gguf.GGMLQuantizationType.BF16: dequantize_blocks_BF16, gguf.GGMLQuantizationType.Q8_0: dequantize_blocks_Q8_0, gguf.GGMLQuantizationType.Q5_1: dequantize_blocks_Q5_1, gguf.GGMLQuantizationType.Q5_0: dequantize_blocks_Q5_0, gguf.GGMLQuantizationType.Q4_1: dequantize_blocks_Q4_1, gguf.GGMLQuantizationType.Q4_0: dequantize_blocks_Q4_0, gguf.GGMLQuantizationType.Q6_K: dequantize_blocks_Q6_K, gguf.GGMLQuantizationType.Q5_K: dequantize_blocks_Q5_K, 
gguf.GGMLQuantizationType.Q4_K: dequantize_blocks_Q4_K, gguf.GGMLQuantizationType.Q3_K: dequantize_blocks_Q3_K, gguf.GGMLQuantizationType.Q2_K: dequantize_blocks_Q2_K, } SUPPORTED_GGUF_QUANT_TYPES = list(dequantize_functions.keys()) def _quant_shape_from_byte_shape(shape, type_size, block_size): return (*shape[:-1], shape[-1] // type_size * block_size) def dequantize_gguf_tensor(tensor): if not hasattr(tensor, "quant_type"): return tensor quant_type = tensor.quant_type dequant_fn = dequantize_functions[quant_type] block_size, type_size = GGML_QUANT_SIZES[quant_type] tensor = tensor.view(torch.uint8) shape = _quant_shape_from_byte_shape(tensor.shape, type_size, block_size) n_blocks = tensor.numel() // type_size blocks = tensor.reshape((n_blocks, type_size)) dequant = dequant_fn(blocks, block_size, type_size) dequant = dequant.reshape(shape) return dequant.as_tensor() class GGUFParameter(torch.nn.Parameter): def __new__(cls, data, requires_grad=False, quant_type=None): data = data if data is not None else torch.empty(0) self = torch.Tensor._make_subclass(cls, data, requires_grad) self.quant_type = quant_type block_size, type_size = GGML_QUANT_SIZES[quant_type] self.quant_shape = _quant_shape_from_byte_shape(self.shape, type_size, block_size) return self def as_tensor(self): return torch.Tensor._make_subclass(torch.Tensor, self, self.requires_grad) @staticmethod def _extract_quant_type(args): # When converting from original format checkpoints we often use splits, cats etc on tensors # this method ensures that the returned tensor type from those operations remains GGUFParameter # so that we preserve quant_type information for arg in args: if isinstance(arg, list) and isinstance(arg[0], GGUFParameter): return arg[0].quant_type if isinstance(arg, GGUFParameter): return arg.quant_type return None @classmethod def __torch_function__(cls, func, types, args=(), kwargs=None): if kwargs is None: kwargs = {} result = super().__torch_function__(func, types, args, kwargs) if isinstance(result, torch.Tensor): quant_type = cls._extract_quant_type(args) return cls(result, quant_type=quant_type) # Handle tuples and lists elif type(result) in (list, tuple): # Preserve the original type (tuple or list) quant_type = cls._extract_quant_type(args) wrapped = [cls(x, quant_type=quant_type) if isinstance(x, torch.Tensor) else x for x in result] return type(result)(wrapped) else: return result class GGUFLinear(nn.Linear): def __init__( self, in_features, out_features, bias=False, compute_dtype=None, device=None, ) -> None: super().__init__(in_features, out_features, bias, device) self.compute_dtype = compute_dtype self.device = device def forward(self, inputs: torch.Tensor): if ops is not None and self.weight.is_cuda and inputs.is_cuda: return self.forward_cuda(inputs) return self.forward_native(inputs) def forward_native(self, inputs: torch.Tensor): weight = dequantize_gguf_tensor(self.weight) weight = weight.to(self.compute_dtype) bias = self.bias.to(self.compute_dtype) if self.bias is not None else None output = torch.nn.functional.linear(inputs, weight, bias) return output def forward_cuda(self, inputs: torch.Tensor): quant_type = self.weight.quant_type output = _fused_mul_mat_gguf(inputs.to(self.compute_dtype), self.weight, quant_type) if self.bias is not None: output += self.bias.to(self.compute_dtype) return output
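The module above (`diffusers/src/diffusers/quantizers/gguf/utils.py`) is normally reached indirectly, via `GGUFQuantizationConfig` when loading a `.gguf` checkpoint, but the dequantization path can also be exercised on its own. The following is a minimal sketch and not taken from the library: it assumes the `gguf` package is installed and that these helpers are importable from the module path shown, and it hand-builds a single Q8_0 block per output row (a 2-byte float16 scale followed by 32 int8 values, matching `GGML_QUANT_SIZES[Q8_0] == (32, 34)`) before round-tripping it through `GGUFParameter` and `dequantize_gguf_tensor`.

```python
# Minimal sketch, not part of the source file above. Assumes the `gguf` package is installed.
import gguf
import torch

from diffusers.quantizers.gguf.utils import GGUFParameter, dequantize_gguf_tensor

# One Q8_0 block per output row: a 2-byte float16 scale followed by 32 int8 values
# (GGML_QUANT_SIZES[Q8_0] == (32, 34)), so the logical weight shape is (4, 32).
scales = torch.full((4, 1), 0.5, dtype=torch.float16).view(torch.uint8)    # (4, 2) bytes
qvals = torch.randint(-8, 8, (4, 32), dtype=torch.int8).view(torch.uint8)  # (4, 32) bytes
raw_blocks = torch.cat([scales, qvals], dim=1)                             # (4, 34) bytes

weight = GGUFParameter(raw_blocks, quant_type=gguf.GGMLQuantizationType.Q8_0)
dequantized = dequantize_gguf_tensor(weight)  # scale * int8 values, logical shape (4, 32)
print(dequantized.shape, dequantized.dtype)
```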
# Copyright 2025 TSAIL Team and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # DISCLAIMER: This file is strongly influenced by https://github.com/LuChengTHU/dpm-solver and https://github.com/NVlabs/edm import math from typing import List, Optional, Tuple, Union import numpy as np import torch from ..configuration_utils import ConfigMixin, register_to_config from .scheduling_dpmsolver_sde import BrownianTreeNoiseSampler from .scheduling_utils import SchedulerMixin, SchedulerOutput class CosineDPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin): """ Implements a variant of `DPMSolverMultistepScheduler` with cosine schedule, proposed by Nichol and Dhariwal (2021). This scheduler was used in Stable Audio Open [1]. [1] Evans, Parker, et al. "Stable Audio Open" https://huggingface.co/papers/2407.14358 This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving. Args: sigma_min (`float`, *optional*, defaults to 0.3): Minimum noise magnitude in the sigma schedule. This was set to 0.3 in Stable Audio Open [1]. sigma_max (`float`, *optional*, defaults to 500): Maximum noise magnitude in the sigma schedule. This was set to 500 in Stable Audio Open [1]. sigma_data (`float`, *optional*, defaults to 1.0): The standard deviation of the data distribution. This is set to 1.0 in Stable Audio Open [1]. sigma_schedule (`str`, *optional*, defaults to `exponential`): Sigma schedule to compute the `sigmas`. By default, we the schedule introduced in the EDM paper (https://huggingface.co/papers/2206.00364). Other acceptable value is "exponential". The exponential schedule was incorporated in this model: https://huggingface.co/stabilityai/cosxl. num_train_timesteps (`int`, defaults to 1000): The number of diffusion steps to train the model. solver_order (`int`, defaults to 2): The DPMSolver order which can be `1` or `2`. It is recommended to use `solver_order=2`. prediction_type (`str`, defaults to `v_prediction`, *optional*): Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process), `sample` (directly predicts the noisy sample`) or `v_prediction` (see section 2.4 of [Imagen Video](https://imagen.research.google/video/paper.pdf) paper). solver_type (`str`, defaults to `midpoint`): Solver type for the second-order solver; can be `midpoint` or `heun`. The solver type slightly affects the sample quality, especially for a small number of steps. It is recommended to use `midpoint` solvers. lower_order_final (`bool`, defaults to `True`): Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (`bool`, defaults to `False`): Whether to use Euler's method in the final step. It is a trade-off between numerical stability and detail richness. 
This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference steps, but sometimes may result in blurring. final_sigmas_type (`str`, defaults to `"zero"`): The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final sigma is the same as the last sigma in the training schedule. If `zero`, the final sigma is set to 0. """ _compatibles = [] order = 1 @register_to_config def __init__( self, sigma_min: float = 0.3, sigma_max: float = 500, sigma_data: float = 1.0, sigma_schedule: str = "exponential", num_train_timesteps: int = 1000, solver_order: int = 2, prediction_type: str = "v_prediction", rho: float = 7.0, solver_type: str = "midpoint", lower_order_final: bool = True, euler_at_final: bool = False, final_sigmas_type: Optional[str] = "zero", # "zero", "sigma_min" ): if solver_type not in ["midpoint", "heun"]: if solver_type in ["logrho", "bh1", "bh2"]: self.register_to_config(solver_type="midpoint") else: raise NotImplementedError(f"{solver_type} is not implemented for {self.__class__}") ramp = torch.linspace(0, 1, num_train_timesteps) if sigma_schedule == "karras": sigmas = self._compute_karras_sigmas(ramp) elif sigma_schedule == "exponential": sigmas = self._compute_exponential_sigmas(ramp) self.timesteps = self.precondition_noise(sigmas) self.sigmas = torch.cat([sigmas, torch.zeros(1, device=sigmas.device)]) # setable values self.num_inference_steps = None self.model_outputs = [None] * solver_order self.lower_order_nums = 0 self._step_index = None self._begin_index = None self.sigmas = self.sigmas.to("cpu") # to avoid too much CPU/GPU communication @property def init_noise_sigma(self): # standard deviation of the initial noise distribution return (self.config.sigma_max**2 + 1) ** 0.5 @property def step_index(self): """ The index counter for current timestep. It will increase 1 after each scheduler step. """ return self._step_index @property def begin_index(self): """ The index for the first timestep. It should be set from pipeline with `set_begin_index` method. """ return self._begin_index # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.set_begin_index def set_begin_index(self, begin_index: int = 0): """ Sets the begin index for the scheduler. This function should be run from pipeline before the inference. Args: begin_index (`int`): The begin index for the scheduler. 
""" self._begin_index = begin_index # Copied from diffusers.schedulers.scheduling_edm_euler.EDMEulerScheduler.precondition_inputs def precondition_inputs(self, sample, sigma): c_in = self._get_conditioning_c_in(sigma) scaled_sample = sample * c_in return scaled_sample def precondition_noise(self, sigma): if not isinstance(sigma, torch.Tensor): sigma = torch.tensor([sigma]) return sigma.atan() / math.pi * 2 # Copied from diffusers.schedulers.scheduling_edm_euler.EDMEulerScheduler.precondition_outputs def precondition_outputs(self, sample, model_output, sigma): sigma_data = self.config.sigma_data c_skip = sigma_data**2 / (sigma**2 + sigma_data**2) if self.config.prediction_type == "epsilon": c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5 elif self.config.prediction_type == "v_prediction": c_out = -sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5 else: raise ValueError(f"Prediction type {self.config.prediction_type} is not supported.") denoised = c_skip * sample + c_out * model_output return denoised # Copied from diffusers.schedulers.scheduling_edm_euler.EDMEulerScheduler.scale_model_input def scale_model_input(self, sample: torch.Tensor, timestep: Union[float, torch.Tensor]) -> torch.Tensor: """ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm. Args: sample (`torch.Tensor`): The input sample. timestep (`int`, *optional*): The current timestep in the diffusion chain. Returns: `torch.Tensor`: A scaled input sample. """ if self.step_index is None: self._init_step_index(timestep) sigma = self.sigmas[self.step_index] sample = self.precondition_inputs(sample, sigma) self.is_scale_input_called = True return sample def set_timesteps(self, num_inference_steps: int = None, device: Union[str, torch.device] = None): """ Sets the discrete timesteps used for the diffusion chain (to be run before inference). Args: num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. 
""" self.num_inference_steps = num_inference_steps ramp = torch.linspace(0, 1, self.num_inference_steps) if self.config.sigma_schedule == "karras": sigmas = self._compute_karras_sigmas(ramp) elif self.config.sigma_schedule == "exponential": sigmas = self._compute_exponential_sigmas(ramp) sigmas = sigmas.to(dtype=torch.float32, device=device) self.timesteps = self.precondition_noise(sigmas) if self.config.final_sigmas_type == "sigma_min": sigma_last = self.config.sigma_min elif self.config.final_sigmas_type == "zero": sigma_last = 0 else: raise ValueError( f"`final_sigmas_type` must be one of 'zero', or 'sigma_min', but got {self.config.final_sigmas_type}" ) self.sigmas = torch.cat([sigmas, torch.tensor([sigma_last], dtype=torch.float32, device=device)]) self.model_outputs = [ None, ] * self.config.solver_order self.lower_order_nums = 0 # add an index counter for schedulers that allow duplicated timesteps self._step_index = None self._begin_index = None self.sigmas = self.sigmas.to("cpu") # to avoid too much CPU/GPU communication # if a noise sampler is used, reinitialise it self.noise_sampler = None # Copied from diffusers.schedulers.scheduling_edm_euler.EDMEulerScheduler._compute_karras_sigmas def _compute_karras_sigmas(self, ramp, sigma_min=None, sigma_max=None) -> torch.Tensor: """Constructs the noise schedule of Karras et al. (2022).""" sigma_min = sigma_min or self.config.sigma_min sigma_max = sigma_max or self.config.sigma_max rho = self.config.rho min_inv_rho = sigma_min ** (1 / rho) max_inv_rho = sigma_max ** (1 / rho) sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho return sigmas # Copied from diffusers.schedulers.scheduling_edm_euler.EDMEulerScheduler._compute_exponential_sigmas def _compute_exponential_sigmas(self, ramp, sigma_min=None, sigma_max=None) -> torch.Tensor: """Implementation closely follows k-diffusion. https://github.com/crowsonkb/k-diffusion/blob/6ab5146d4a5ef63901326489f31f1d8e7dd36b48/k_diffusion/sampling.py#L26 """ sigma_min = sigma_min or self.config.sigma_min sigma_max = sigma_max or self.config.sigma_max sigmas = torch.linspace(math.log(sigma_min), math.log(sigma_max), len(ramp)).exp().flip(0) return sigmas # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._sigma_to_t def _sigma_to_t(self, sigma, log_sigmas): # get log sigma log_sigma = np.log(np.maximum(sigma, 1e-10)) # get distribution dists = log_sigma - log_sigmas[:, np.newaxis] # get sigmas range low_idx = np.cumsum((dists >= 0), axis=0).argmax(axis=0).clip(max=log_sigmas.shape[0] - 2) high_idx = low_idx + 1 low = log_sigmas[low_idx] high = log_sigmas[high_idx] # interpolate sigmas w = (low - log_sigma) / (low - high) w = np.clip(w, 0, 1) # transform interpolation to time range t = (1 - w) * low_idx + w * high_idx t = t.reshape(sigma.shape) return t def _sigma_to_alpha_sigma_t(self, sigma): alpha_t = torch.tensor(1) # Inputs are pre-scaled before going into unet, so alpha_t = 1 sigma_t = sigma return alpha_t, sigma_t def convert_model_output( self, model_output: torch.Tensor, sample: torch.Tensor = None, ) -> torch.Tensor: """ Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an integral of the data prediction model. <Tip> The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise prediction and data prediction models. 
</Tip> Args: model_output (`torch.Tensor`): The direct output from the learned diffusion model. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. Returns: `torch.Tensor`: The converted model output. """ sigma = self.sigmas[self.step_index] x0_pred = self.precondition_outputs(sample, model_output, sigma) return x0_pred def dpm_solver_first_order_update( self, model_output: torch.Tensor, sample: torch.Tensor = None, noise: Optional[torch.Tensor] = None, ) -> torch.Tensor: """ One step for the first-order DPMSolver (equivalent to DDIM). Args: model_output (`torch.Tensor`): The direct output from the learned diffusion model. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. Returns: `torch.Tensor`: The sample tensor at the previous timestep. """ sigma_t, sigma_s = self.sigmas[self.step_index + 1], self.sigmas[self.step_index] alpha_t, sigma_t = self._sigma_to_alpha_sigma_t(sigma_t) alpha_s, sigma_s = self._sigma_to_alpha_sigma_t(sigma_s) lambda_t = torch.log(alpha_t) - torch.log(sigma_t) lambda_s = torch.log(alpha_s) - torch.log(sigma_s) h = lambda_t - lambda_s assert noise is not None x_t = ( (sigma_t / sigma_s * torch.exp(-h)) * sample + (alpha_t * (1 - torch.exp(-2.0 * h))) * model_output + sigma_t * torch.sqrt(1.0 - torch.exp(-2 * h)) * noise ) return x_t def multistep_dpm_solver_second_order_update( self, model_output_list: List[torch.Tensor], sample: torch.Tensor = None, noise: Optional[torch.Tensor] = None, ) -> torch.Tensor: """ One step for the second-order multistep DPMSolver. Args: model_output_list (`List[torch.Tensor]`): The direct outputs from learned diffusion model at current and latter timesteps. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. Returns: `torch.Tensor`: The sample tensor at the previous timestep. 
""" sigma_t, sigma_s0, sigma_s1 = ( self.sigmas[self.step_index + 1], self.sigmas[self.step_index], self.sigmas[self.step_index - 1], ) alpha_t, sigma_t = self._sigma_to_alpha_sigma_t(sigma_t) alpha_s0, sigma_s0 = self._sigma_to_alpha_sigma_t(sigma_s0) alpha_s1, sigma_s1 = self._sigma_to_alpha_sigma_t(sigma_s1) lambda_t = torch.log(alpha_t) - torch.log(sigma_t) lambda_s0 = torch.log(alpha_s0) - torch.log(sigma_s0) lambda_s1 = torch.log(alpha_s1) - torch.log(sigma_s1) m0, m1 = model_output_list[-1], model_output_list[-2] h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1 r0 = h_0 / h D0, D1 = m0, (1.0 / r0) * (m0 - m1) # sde-dpmsolver++ assert noise is not None if self.config.solver_type == "midpoint": x_t = ( (sigma_t / sigma_s0 * torch.exp(-h)) * sample + (alpha_t * (1 - torch.exp(-2.0 * h))) * D0 + 0.5 * (alpha_t * (1 - torch.exp(-2.0 * h))) * D1 + sigma_t * torch.sqrt(1.0 - torch.exp(-2 * h)) * noise ) elif self.config.solver_type == "heun": x_t = ( (sigma_t / sigma_s0 * torch.exp(-h)) * sample + (alpha_t * (1 - torch.exp(-2.0 * h))) * D0 + (alpha_t * ((1.0 - torch.exp(-2.0 * h)) / (-2.0 * h) + 1.0)) * D1 + sigma_t * torch.sqrt(1.0 - torch.exp(-2 * h)) * noise ) return x_t # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.index_for_timestep def index_for_timestep(self, timestep, schedule_timesteps=None): if schedule_timesteps is None: schedule_timesteps = self.timesteps index_candidates = (schedule_timesteps == timestep).nonzero() if len(index_candidates) == 0: step_index = len(self.timesteps) - 1 # The sigma index that is taken for the **very** first `step` # is always the second index (or the last index if there is only 1) # This way we can ensure we don't accidentally skip a sigma in # case we start in the middle of the denoising schedule (e.g. for image-to-image) elif len(index_candidates) > 1: step_index = index_candidates[1].item() else: step_index = index_candidates[0].item() return step_index # Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler._init_step_index def _init_step_index(self, timestep): """ Initialize the step_index counter for the scheduler. """ if self.begin_index is None: if isinstance(timestep, torch.Tensor): timestep = timestep.to(self.timesteps.device) self._step_index = self.index_for_timestep(timestep) else: self._step_index = self._begin_index def step( self, model_output: torch.Tensor, timestep: Union[int, torch.Tensor], sample: torch.Tensor, generator=None, return_dict: bool = True, ) -> Union[SchedulerOutput, Tuple]: """ Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with the multistep DPMSolver. Args: model_output (`torch.Tensor`): The direct output from learned diffusion model. timestep (`int`): The current discrete timestep in the diffusion chain. sample (`torch.Tensor`): A current instance of a sample created by the diffusion process. generator (`torch.Generator`, *optional*): A random number generator. return_dict (`bool`): Whether or not to return a [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`. Returns: [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`: If return_dict is `True`, [`~schedulers.scheduling_utils.SchedulerOutput`] is returned, otherwise a tuple is returned where the first element is the sample tensor. 
""" if self.num_inference_steps is None: raise ValueError( "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" ) if self.step_index is None: self._init_step_index(timestep) # Improve numerical stability for small number of steps lower_order_final = (self.step_index == len(self.timesteps) - 1) and ( self.config.euler_at_final or (self.config.lower_order_final and len(self.timesteps) < 15) or self.config.final_sigmas_type == "zero" ) lower_order_second = ( (self.step_index == len(self.timesteps) - 2) and self.config.lower_order_final and len(self.timesteps) < 15 ) model_output = self.convert_model_output(model_output, sample=sample) for i in range(self.config.solver_order - 1): self.model_outputs[i] = self.model_outputs[i + 1] self.model_outputs[-1] = model_output if self.noise_sampler is None: seed = None if generator is not None: seed = ( [g.initial_seed() for g in generator] if isinstance(generator, list) else generator.initial_seed() ) self.noise_sampler = BrownianTreeNoiseSampler( model_output, sigma_min=self.config.sigma_min, sigma_max=self.config.sigma_max, seed=seed ) noise = self.noise_sampler(self.sigmas[self.step_index], self.sigmas[self.step_index + 1]).to( model_output.device ) if self.config.solver_order == 1 or self.lower_order_nums < 1 or lower_order_final: prev_sample = self.dpm_solver_first_order_update(model_output, sample=sample, noise=noise) elif self.config.solver_order == 2 or self.lower_order_nums < 2 or lower_order_second: prev_sample = self.multistep_dpm_solver_second_order_update(self.model_outputs, sample=sample, noise=noise) if self.lower_order_nums < self.config.solver_order: self.lower_order_nums += 1 # upon completion increase step index by one self._step_index += 1 if not return_dict: return (prev_sample,) return SchedulerOutput(prev_sample=prev_sample) # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler.add_noise def add_noise( self, original_samples: torch.Tensor, noise: torch.Tensor, timesteps: torch.Tensor, ) -> torch.Tensor: # Make sure sigmas and timesteps have the same device and dtype as original_samples sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): # mps does not support float64 schedule_timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) timesteps = timesteps.to(original_samples.device, dtype=torch.float32) else: schedule_timesteps = self.timesteps.to(original_samples.device) timesteps = timesteps.to(original_samples.device) # self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index if self.begin_index is None: step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timesteps] elif self.step_index is not None: # add_noise is called after first denoising step (for inpainting) step_indices = [self.step_index] * timesteps.shape[0] else: # add noise is called before first denoising step to create initial latent(img2img) step_indices = [self.begin_index] * timesteps.shape[0] sigma = sigmas[step_indices].flatten() while len(sigma.shape) < len(original_samples.shape): sigma = sigma.unsqueeze(-1) noisy_samples = original_samples + noise * sigma return noisy_samples # Copied from diffusers.schedulers.scheduling_edm_euler.EDMEulerScheduler._get_conditioning_c_in def _get_conditioning_c_in(self, sigma): c_in = 1 / ((sigma**2 + self.config.sigma_data**2) ** 0.5) return c_in 
    def __len__(self):
        return self.config.num_train_timesteps
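`CosineDPMSolverMultistepScheduler` follows the usual diffusers scheduler contract (`set_timesteps`, `scale_model_input`, `step`), so it can be driven with the standard sampling loop. Below is a minimal sketch rather than a working pipeline: the zero tensor stands in for a real denoiser (for example the Stable Audio Open transformer), the latent shape is arbitrary, and `torchsde` must be installed because `step` draws noise from a `BrownianTreeNoiseSampler`.

```python
# Minimal sketch, not part of the source file above. Requires `torchsde` for the noise sampler.
import torch

from diffusers import CosineDPMSolverMultistepScheduler

scheduler = CosineDPMSolverMultistepScheduler()
scheduler.set_timesteps(num_inference_steps=20)

generator = torch.manual_seed(0)
# Arbitrary latent shape; scaled by the initial noise sigma as the EDM-style preconditioning expects.
sample = torch.randn(1, 64, 1024, generator=generator) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    model_output = torch.zeros_like(model_input)  # stand-in for denoiser(model_input, t)
    sample = scheduler.step(model_output, t, sample, generator=generator).prev_sample
```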
# Copyright 2025 Katherine Crowson and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Optional, Tuple, Union import flax import jax.numpy as jnp from scipy import integrate from ..configuration_utils import ConfigMixin, register_to_config from .scheduling_utils_flax import ( CommonSchedulerState, FlaxKarrasDiffusionSchedulers, FlaxSchedulerMixin, FlaxSchedulerOutput, broadcast_to_shape_from_left, ) @flax.struct.dataclass class LMSDiscreteSchedulerState: common: CommonSchedulerState # setable values init_noise_sigma: jnp.ndarray timesteps: jnp.ndarray sigmas: jnp.ndarray num_inference_steps: Optional[int] = None # running values derivatives: Optional[jnp.ndarray] = None @classmethod def create( cls, common: CommonSchedulerState, init_noise_sigma: jnp.ndarray, timesteps: jnp.ndarray, sigmas: jnp.ndarray ): return cls(common=common, init_noise_sigma=init_noise_sigma, timesteps=timesteps, sigmas=sigmas) @dataclass class FlaxLMSSchedulerOutput(FlaxSchedulerOutput): state: LMSDiscreteSchedulerState class FlaxLMSDiscreteScheduler(FlaxSchedulerMixin, ConfigMixin): """ Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by Katherine Crowson: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181 [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and [`~SchedulerMixin.from_pretrained`] functions. Args: num_train_timesteps (`int`): number of diffusion steps used to train the model. beta_start (`float`): the starting `beta` value of inference. beta_end (`float`): the final `beta` value. beta_schedule (`str`): the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from `linear` or `scaled_linear`. trained_betas (`jnp.ndarray`, optional): option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. prediction_type (`str`, default `epsilon`, optional): prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 https://imagen.research.google/video/paper.pdf) dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`): the `dtype` used for params and computation. 
""" _compatibles = [e.name for e in FlaxKarrasDiffusionSchedulers] dtype: jnp.dtype @property def has_state(self): return True @register_to_config def __init__( self, num_train_timesteps: int = 1000, beta_start: float = 0.0001, beta_end: float = 0.02, beta_schedule: str = "linear", trained_betas: Optional[jnp.ndarray] = None, prediction_type: str = "epsilon", dtype: jnp.dtype = jnp.float32, ): self.dtype = dtype def create_state(self, common: Optional[CommonSchedulerState] = None) -> LMSDiscreteSchedulerState: if common is None: common = CommonSchedulerState.create(self) timesteps = jnp.arange(0, self.config.num_train_timesteps).round()[::-1] sigmas = ((1 - common.alphas_cumprod) / common.alphas_cumprod) ** 0.5 # standard deviation of the initial noise distribution init_noise_sigma = sigmas.max() return LMSDiscreteSchedulerState.create( common=common, init_noise_sigma=init_noise_sigma, timesteps=timesteps, sigmas=sigmas, ) def scale_model_input(self, state: LMSDiscreteSchedulerState, sample: jnp.ndarray, timestep: int) -> jnp.ndarray: """ Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the K-LMS algorithm. Args: state (`LMSDiscreteSchedulerState`): the `FlaxLMSDiscreteScheduler` state data class instance. sample (`jnp.ndarray`): current instance of sample being created by diffusion process. timestep (`int`): current discrete timestep in the diffusion chain. Returns: `jnp.ndarray`: scaled input sample """ (step_index,) = jnp.where(state.timesteps == timestep, size=1) step_index = step_index[0] sigma = state.sigmas[step_index] sample = sample / ((sigma**2 + 1) ** 0.5) return sample def get_lms_coefficient(self, state: LMSDiscreteSchedulerState, order, t, current_order): """ Compute a linear multistep coefficient. Args: order (TODO): t (TODO): current_order (TODO): """ def lms_derivative(tau): prod = 1.0 for k in range(order): if current_order == k: continue prod *= (tau - state.sigmas[t - k]) / (state.sigmas[t - current_order] - state.sigmas[t - k]) return prod integrated_coeff = integrate.quad(lms_derivative, state.sigmas[t], state.sigmas[t + 1], epsrel=1e-4)[0] return integrated_coeff def set_timesteps( self, state: LMSDiscreteSchedulerState, num_inference_steps: int, shape: Tuple = () ) -> LMSDiscreteSchedulerState: """ Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. Args: state (`LMSDiscreteSchedulerState`): the `FlaxLMSDiscreteScheduler` state data class instance. num_inference_steps (`int`): the number of diffusion steps used when generating samples with a pre-trained model. 
""" timesteps = jnp.linspace(self.config.num_train_timesteps - 1, 0, num_inference_steps, dtype=self.dtype) low_idx = jnp.floor(timesteps).astype(jnp.int32) high_idx = jnp.ceil(timesteps).astype(jnp.int32) frac = jnp.mod(timesteps, 1.0) sigmas = ((1 - state.common.alphas_cumprod) / state.common.alphas_cumprod) ** 0.5 sigmas = (1 - frac) * sigmas[low_idx] + frac * sigmas[high_idx] sigmas = jnp.concatenate([sigmas, jnp.array([0.0], dtype=self.dtype)]) timesteps = timesteps.astype(jnp.int32) # initial running values derivatives = jnp.zeros((0,) + shape, dtype=self.dtype) return state.replace( timesteps=timesteps, sigmas=sigmas, num_inference_steps=num_inference_steps, derivatives=derivatives, ) def step( self, state: LMSDiscreteSchedulerState, model_output: jnp.ndarray, timestep: int, sample: jnp.ndarray, order: int = 4, return_dict: bool = True, ) -> Union[FlaxLMSSchedulerOutput, Tuple]: """ Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise). Args: state (`LMSDiscreteSchedulerState`): the `FlaxLMSDiscreteScheduler` state data class instance. model_output (`jnp.ndarray`): direct output from learned diffusion model. timestep (`int`): current discrete timestep in the diffusion chain. sample (`jnp.ndarray`): current instance of sample being created by diffusion process. order: coefficient for multi-step inference. return_dict (`bool`): option for returning tuple rather than FlaxLMSSchedulerOutput class Returns: [`FlaxLMSSchedulerOutput`] or `tuple`: [`FlaxLMSSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. """ if state.num_inference_steps is None: raise ValueError( "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" ) sigma = state.sigmas[timestep] # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise if self.config.prediction_type == "epsilon": pred_original_sample = sample - sigma * model_output elif self.config.prediction_type == "v_prediction": # * c_out + input * c_skip pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1)) else: raise ValueError( f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" ) # 2. Convert to an ODE derivative derivative = (sample - pred_original_sample) / sigma state = state.replace(derivatives=jnp.append(state.derivatives, derivative)) if len(state.derivatives) > order: state = state.replace(derivatives=jnp.delete(state.derivatives, 0)) # 3. Compute linear multistep coefficients order = min(timestep + 1, order) lms_coeffs = [self.get_lms_coefficient(state, order, timestep, curr_order) for curr_order in range(order)] # 4. Compute previous sample based on the derivatives path prev_sample = sample + sum( coeff * derivative for coeff, derivative in zip(lms_coeffs, reversed(state.derivatives)) ) if not return_dict: return (prev_sample, state) return FlaxLMSSchedulerOutput(prev_sample=prev_sample, state=state) def add_noise( self, state: LMSDiscreteSchedulerState, original_samples: jnp.ndarray, noise: jnp.ndarray, timesteps: jnp.ndarray, ) -> jnp.ndarray: sigma = state.sigmas[timesteps].flatten() sigma = broadcast_to_shape_from_left(sigma, noise.shape) noisy_samples = original_samples + noise * sigma return noisy_samples def __len__(self): return self.config.num_train_timesteps
# This file is autogenerated by the command `make fix-copies`, do not edit.
from ..utils import DummyObject, requires_backends


class OnnxStableDiffusionImg2ImgPipeline(metaclass=DummyObject):
    _backends = ["torch", "transformers", "onnx"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch", "transformers", "onnx"])

    @classmethod
    def from_config(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])


class OnnxStableDiffusionInpaintPipeline(metaclass=DummyObject):
    _backends = ["torch", "transformers", "onnx"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch", "transformers", "onnx"])

    @classmethod
    def from_config(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])


class OnnxStableDiffusionInpaintPipelineLegacy(metaclass=DummyObject):
    _backends = ["torch", "transformers", "onnx"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch", "transformers", "onnx"])

    @classmethod
    def from_config(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])


class OnnxStableDiffusionPipeline(metaclass=DummyObject):
    _backends = ["torch", "transformers", "onnx"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch", "transformers", "onnx"])

    @classmethod
    def from_config(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])


class OnnxStableDiffusionUpscalePipeline(metaclass=DummyObject):
    _backends = ["torch", "transformers", "onnx"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch", "transformers", "onnx"])

    @classmethod
    def from_config(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])


class StableDiffusionOnnxPipeline(metaclass=DummyObject):
    _backends = ["torch", "transformers", "onnx"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch", "transformers", "onnx"])

    @classmethod
    def from_config(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["torch", "transformers", "onnx"])
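These placeholders exist so that `from diffusers import OnnxStableDiffusionPipeline` (and the other ONNX pipelines) never fails at import time; the error is deferred until the class is actually used. The following hedged sketch shows what that looks like in an environment without the `onnx` backend — the checkpoint name is a placeholder and is never resolved, because `requires_backends` raises first.

```python
# Minimal sketch, not part of the source file above. Assumes onnx/onnxruntime are NOT installed,
# so the dummy class defined above is what gets imported.
from diffusers import OnnxStableDiffusionPipeline

try:
    pipe = OnnxStableDiffusionPipeline.from_pretrained("some-org/some-onnx-checkpoint")  # placeholder id
except ImportError as err:
    # requires_backends explains which packages (onnx, onnxruntime, ...) need to be installed.
    print(err)
```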
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import io import json from typing import List, Literal, Optional, Union, cast import requests from .deprecation_utils import deprecate from .import_utils import is_safetensors_available, is_torch_available if is_torch_available(): import torch from ..image_processor import VaeImageProcessor from ..video_processor import VideoProcessor if is_safetensors_available(): import safetensors.torch DTYPE_MAP = { "float16": torch.float16, "float32": torch.float32, "bfloat16": torch.bfloat16, "uint8": torch.uint8, } from PIL import Image def detect_image_type(data: bytes) -> str: if data.startswith(b"\xff\xd8"): return "jpeg" elif data.startswith(b"\x89PNG\r\n\x1a\n"): return "png" elif data.startswith(b"GIF87a") or data.startswith(b"GIF89a"): return "gif" elif data.startswith(b"BM"): return "bmp" return "unknown" def check_inputs_decode( endpoint: str, tensor: "torch.Tensor", processor: Optional[Union["VaeImageProcessor", "VideoProcessor"]] = None, do_scaling: bool = True, scaling_factor: Optional[float] = None, shift_factor: Optional[float] = None, output_type: Literal["mp4", "pil", "pt"] = "pil", return_type: Literal["mp4", "pil", "pt"] = "pil", image_format: Literal["png", "jpg"] = "jpg", partial_postprocess: bool = False, input_tensor_type: Literal["binary"] = "binary", output_tensor_type: Literal["binary"] = "binary", height: Optional[int] = None, width: Optional[int] = None, ): if tensor.ndim == 3 and height is None and width is None: raise ValueError("`height` and `width` required for packed latents.") if ( output_type == "pt" and return_type == "pil" and not partial_postprocess and not isinstance(processor, (VaeImageProcessor, VideoProcessor)) ): raise ValueError("`processor` is required.") if do_scaling and scaling_factor is None: deprecate( "do_scaling", "1.0.0", "`do_scaling` is deprecated, pass `scaling_factor` and `shift_factor` if required.", standard_warn=False, ) def postprocess_decode( response: requests.Response, processor: Optional[Union["VaeImageProcessor", "VideoProcessor"]] = None, output_type: Literal["mp4", "pil", "pt"] = "pil", return_type: Literal["mp4", "pil", "pt"] = "pil", partial_postprocess: bool = False, ): if output_type == "pt" or (output_type == "pil" and processor is not None): output_tensor = response.content parameters = response.headers shape = json.loads(parameters["shape"]) dtype = parameters["dtype"] torch_dtype = DTYPE_MAP[dtype] output_tensor = torch.frombuffer(bytearray(output_tensor), dtype=torch_dtype).reshape(shape) if output_type == "pt": if partial_postprocess: if return_type == "pil": output = [Image.fromarray(image.numpy()) for image in output_tensor] if len(output) == 1: output = output[0] elif return_type == "pt": output = output_tensor else: if processor is None or return_type == "pt": output = output_tensor else: if isinstance(processor, VideoProcessor): output = cast( List[Image.Image], processor.postprocess_video(output_tensor, output_type="pil")[0], ) else: 
output = cast( Image.Image, processor.postprocess(output_tensor, output_type="pil")[0], ) elif output_type == "pil" and return_type == "pil" and processor is None: output = Image.open(io.BytesIO(response.content)).convert("RGB") detected_format = detect_image_type(response.content) output.format = detected_format elif output_type == "pil" and processor is not None: if return_type == "pil": output = [ Image.fromarray(image) for image in (output_tensor.permute(0, 2, 3, 1).float().numpy() * 255).round().astype("uint8") ] elif return_type == "pt": output = output_tensor elif output_type == "mp4" and return_type == "mp4": output = response.content return output def prepare_decode( tensor: "torch.Tensor", processor: Optional[Union["VaeImageProcessor", "VideoProcessor"]] = None, do_scaling: bool = True, scaling_factor: Optional[float] = None, shift_factor: Optional[float] = None, output_type: Literal["mp4", "pil", "pt"] = "pil", image_format: Literal["png", "jpg"] = "jpg", partial_postprocess: bool = False, height: Optional[int] = None, width: Optional[int] = None, ): headers = {} parameters = { "image_format": image_format, "output_type": output_type, "partial_postprocess": partial_postprocess, "shape": list(tensor.shape), "dtype": str(tensor.dtype).split(".")[-1], } if do_scaling and scaling_factor is not None: parameters["scaling_factor"] = scaling_factor if do_scaling and shift_factor is not None: parameters["shift_factor"] = shift_factor if do_scaling and scaling_factor is None: parameters["do_scaling"] = do_scaling elif do_scaling and scaling_factor is None and shift_factor is None: parameters["do_scaling"] = do_scaling if height is not None and width is not None: parameters["height"] = height parameters["width"] = width headers["Content-Type"] = "tensor/binary" headers["Accept"] = "tensor/binary" if output_type == "pil" and image_format == "jpg" and processor is None: headers["Accept"] = "image/jpeg" elif output_type == "pil" and image_format == "png" and processor is None: headers["Accept"] = "image/png" elif output_type == "mp4": headers["Accept"] = "text/plain" tensor_data = safetensors.torch._tobytes(tensor, "tensor") return {"data": tensor_data, "params": parameters, "headers": headers} def remote_decode( endpoint: str, tensor: "torch.Tensor", processor: Optional[Union["VaeImageProcessor", "VideoProcessor"]] = None, do_scaling: bool = True, scaling_factor: Optional[float] = None, shift_factor: Optional[float] = None, output_type: Literal["mp4", "pil", "pt"] = "pil", return_type: Literal["mp4", "pil", "pt"] = "pil", image_format: Literal["png", "jpg"] = "jpg", partial_postprocess: bool = False, input_tensor_type: Literal["binary"] = "binary", output_tensor_type: Literal["binary"] = "binary", height: Optional[int] = None, width: Optional[int] = None, ) -> Union[Image.Image, List[Image.Image], bytes, "torch.Tensor"]: """ Hugging Face Hybrid Inference that allow running VAE decode remotely. Args: endpoint (`str`): Endpoint for Remote Decode. tensor (`torch.Tensor`): Tensor to be decoded. processor (`VaeImageProcessor` or `VideoProcessor`, *optional*): Used with `return_type="pt"`, and `return_type="pil"` for Video models. do_scaling (`bool`, default `True`, *optional*): **DEPRECATED**. **pass `scaling_factor`/`shift_factor` instead.** **still set do_scaling=None/do_scaling=False for no scaling until option is removed** When `True` scaling e.g. `latents / self.vae.config.scaling_factor` is applied remotely. If `False`, input must be passed with scaling applied. 
scaling_factor (`float`, *optional*): Scaling is applied when passed e.g. [`latents / self.vae.config.scaling_factor`](https://github.com/huggingface/diffusers/blob/7007febae5cff000d4df9059d9cf35133e8b2ca9/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L1083C37-L1083C77). - SD v1: 0.18215 - SD XL: 0.13025 - Flux: 0.3611 If `None`, input must be passed with scaling applied. shift_factor (`float`, *optional*): Shift is applied when passed e.g. `latents + self.vae.config.shift_factor`. - Flux: 0.1159 If `None`, input must be passed with scaling applied. output_type (`"mp4"` or `"pil"` or `"pt", default `"pil"): **Endpoint** output type. Subject to change. Report feedback on preferred type. `"mp4": Supported by video models. Endpoint returns `bytes` of video. `"pil"`: Supported by image and video models. Image models: Endpoint returns `bytes` of an image in `image_format`. Video models: Endpoint returns `torch.Tensor` with partial `postprocessing` applied. Requires `processor` as a flag (any `None` value will work). `"pt"`: Support by image and video models. Endpoint returns `torch.Tensor`. With `partial_postprocess=True` the tensor is postprocessed `uint8` image tensor. Recommendations: `"pt"` with `partial_postprocess=True` is the smallest transfer for full quality. `"pt"` with `partial_postprocess=False` is the most compatible with third party code. `"pil"` with `image_format="jpg"` is the smallest transfer overall. return_type (`"mp4"` or `"pil"` or `"pt", default `"pil"): **Function** return type. `"mp4": Function returns `bytes` of video. `"pil"`: Function returns `PIL.Image.Image`. With `output_type="pil" no further processing is applied. With `output_type="pt" a `PIL.Image.Image` is created. `partial_postprocess=False` `processor` is required. `partial_postprocess=True` `processor` is **not** required. `"pt"`: Function returns `torch.Tensor`. `processor` is **not** required. `partial_postprocess=False` tensor is `float16` or `bfloat16`, without denormalization. `partial_postprocess=True` tensor is `uint8`, denormalized. image_format (`"png"` or `"jpg"`, default `jpg`): Used with `output_type="pil"`. Endpoint returns `jpg` or `png`. partial_postprocess (`bool`, default `False`): Used with `output_type="pt"`. `partial_postprocess=False` tensor is `float16` or `bfloat16`, without denormalization. `partial_postprocess=True` tensor is `uint8`, denormalized. input_tensor_type (`"binary"`, default `"binary"`): Tensor transfer type. output_tensor_type (`"binary"`, default `"binary"`): Tensor transfer type. height (`int`, **optional**): Required for `"packed"` latents. width (`int`, **optional**): Required for `"packed"` latents. Returns: output (`Image.Image` or `List[Image.Image]` or `bytes` or `torch.Tensor`). """ if input_tensor_type == "base64": deprecate( "input_tensor_type='base64'", "1.0.0", "input_tensor_type='base64' is deprecated. Using `binary`.", standard_warn=False, ) input_tensor_type = "binary" if output_tensor_type == "base64": deprecate( "output_tensor_type='base64'", "1.0.0", "output_tensor_type='base64' is deprecated. 
Using `binary`.", standard_warn=False, ) output_tensor_type = "binary" check_inputs_decode( endpoint, tensor, processor, do_scaling, scaling_factor, shift_factor, output_type, return_type, image_format, partial_postprocess, input_tensor_type, output_tensor_type, height, width, ) kwargs = prepare_decode( tensor=tensor, processor=processor, do_scaling=do_scaling, scaling_factor=scaling_factor, shift_factor=shift_factor, output_type=output_type, image_format=image_format, partial_postprocess=partial_postprocess, height=height, width=width, ) response = requests.post(endpoint, **kwargs) if not response.ok: raise RuntimeError(response.json()) output = postprocess_decode( response=response, processor=processor, output_type=output_type, return_type=return_type, partial_postprocess=partial_postprocess, ) return output def check_inputs_encode( endpoint: str, image: Union["torch.Tensor", Image.Image], scaling_factor: Optional[float] = None, shift_factor: Optional[float] = None, ): pass def postprocess_encode( response: requests.Response, ): output_tensor = response.content parameters = response.headers shape = json.loads(parameters["shape"]) dtype = parameters["dtype"] torch_dtype = DTYPE_MAP[dtype] output_tensor = torch.frombuffer(bytearray(output_tensor), dtype=torch_dtype).reshape(shape) return output_tensor def prepare_encode( image: Union["torch.Tensor", Image.Image], scaling_factor: Optional[float] = None, shift_factor: Optional[float] = None, ): headers = {} parameters = {} if scaling_factor is not None: parameters["scaling_factor"] = scaling_factor if shift_factor is not None: parameters["shift_factor"] = shift_factor if isinstance(image, torch.Tensor): data = safetensors.torch._tobytes(image.contiguous(), "tensor") parameters["shape"] = list(image.shape) parameters["dtype"] = str(image.dtype).split(".")[-1] else: buffer = io.BytesIO() image.save(buffer, format="PNG") data = buffer.getvalue() return {"data": data, "params": parameters, "headers": headers} def remote_encode( endpoint: str, image: Union["torch.Tensor", Image.Image], scaling_factor: Optional[float] = None, shift_factor: Optional[float] = None, ) -> "torch.Tensor": """ Hugging Face Hybrid Inference that allow running VAE encode remotely. Args: endpoint (`str`): Endpoint for Remote Decode. image (`torch.Tensor` or `PIL.Image.Image`): Image to be encoded. scaling_factor (`float`, *optional*): Scaling is applied when passed e.g. [`latents * self.vae.config.scaling_factor`]. - SD v1: 0.18215 - SD XL: 0.13025 - Flux: 0.3611 If `None`, input must be passed with scaling applied. shift_factor (`float`, *optional*): Shift is applied when passed e.g. `latents - self.vae.config.shift_factor`. - Flux: 0.1159 If `None`, input must be passed with scaling applied. Returns: output (`torch.Tensor`). """ check_inputs_encode( endpoint, image, scaling_factor, shift_factor, ) kwargs = prepare_encode( image=image, scaling_factor=scaling_factor, shift_factor=shift_factor, ) response = requests.post(endpoint, **kwargs) if not response.ok: raise RuntimeError(response.json()) output = postprocess_encode( response=response, ) return output
diffusers/src/diffusers/utils/remote_utils.py/0
{ "file_path": "diffusers/src/diffusers/utils/remote_utils.py", "repo_id": "diffusers", "token_count": 6925 }
185
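The `remote_utils.py` file above exposes two public helpers, `remote_decode` and `remote_encode`. Here is a minimal call sketch; the endpoint URL is a placeholder (you need a real Hybrid Inference VAE endpoint), and the scaling factor is the SD v1 value quoted in the docstring, so treat this as one illustrative combination of options rather than the canonical one.

```python
# Minimal sketch of the remote VAE helpers defined in remote_utils.py.
# ENDPOINT is a placeholder, not a real URL.
import torch
from PIL import Image

from diffusers.utils.remote_utils import remote_decode, remote_encode

ENDPOINT = "https://<your-remote-vae-endpoint>/"

# Decode SD v1-style latents to a PIL image; scaling is applied remotely
# because `scaling_factor` is passed (0.18215 for SD v1, per the docstring).
latents = torch.randn(1, 4, 64, 64, dtype=torch.float16)
image = remote_decode(
    endpoint=ENDPOINT,
    tensor=latents,
    scaling_factor=0.18215,
    output_type="pil",
    return_type="pil",
    image_format="jpg",  # smallest transfer overall, per the docstring
)

# Encode a PIL image back to latents; the endpoint applies the scaling.
encoded = remote_encode(
    endpoint=ENDPOINT,
    image=Image.new("RGB", (512, 512)),
    scaling_factor=0.18215,
)
print(image.size, encoded.shape)
```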
# Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys import unittest import torch from parameterized import parameterized from transformers import AutoTokenizer, T5EncoderModel from diffusers import ( AutoencoderKLCogVideoX, CogVideoXDDIMScheduler, CogVideoXDPMScheduler, CogVideoXPipeline, CogVideoXTransformer3DModel, ) from diffusers.utils.testing_utils import ( floats_tensor, require_peft_backend, require_torch_accelerator, ) sys.path.append(".") from utils import PeftLoraLoaderMixinTests # noqa: E402 @require_peft_backend class CogVideoXLoRATests(unittest.TestCase, PeftLoraLoaderMixinTests): pipeline_class = CogVideoXPipeline scheduler_cls = CogVideoXDPMScheduler scheduler_kwargs = {"timestep_spacing": "trailing"} scheduler_classes = [CogVideoXDDIMScheduler, CogVideoXDPMScheduler] transformer_kwargs = { "num_attention_heads": 4, "attention_head_dim": 8, "in_channels": 4, "out_channels": 4, "time_embed_dim": 2, "text_embed_dim": 32, "num_layers": 1, "sample_width": 16, "sample_height": 16, "sample_frames": 9, "patch_size": 2, "temporal_compression_ratio": 4, "max_text_seq_length": 16, } transformer_cls = CogVideoXTransformer3DModel vae_kwargs = { "in_channels": 3, "out_channels": 3, "down_block_types": ( "CogVideoXDownBlock3D", "CogVideoXDownBlock3D", "CogVideoXDownBlock3D", "CogVideoXDownBlock3D", ), "up_block_types": ( "CogVideoXUpBlock3D", "CogVideoXUpBlock3D", "CogVideoXUpBlock3D", "CogVideoXUpBlock3D", ), "block_out_channels": (8, 8, 8, 8), "latent_channels": 4, "layers_per_block": 1, "norm_num_groups": 2, "temporal_compression_ratio": 4, } vae_cls = AutoencoderKLCogVideoX tokenizer_cls, tokenizer_id = AutoTokenizer, "hf-internal-testing/tiny-random-t5" text_encoder_cls, text_encoder_id = T5EncoderModel, "hf-internal-testing/tiny-random-t5" text_encoder_target_modules = ["q", "k", "v", "o"] @property def output_shape(self): return (1, 9, 16, 16, 3) def get_dummy_inputs(self, with_generator=True): batch_size = 1 sequence_length = 16 num_channels = 4 num_frames = 9 num_latent_frames = 3 # (num_frames - 1) // temporal_compression_ratio + 1 sizes = (2, 2) generator = torch.manual_seed(0) noise = floats_tensor((batch_size, num_latent_frames, num_channels) + sizes) input_ids = torch.randint(1, sequence_length, size=(batch_size, sequence_length), generator=generator) pipeline_inputs = { "prompt": "dance monkey", "num_frames": num_frames, "num_inference_steps": 4, "guidance_scale": 6.0, # Cannot reduce because convolution kernel becomes bigger than sample "height": 16, "width": 16, "max_sequence_length": sequence_length, "output_type": "np", } if with_generator: pipeline_inputs.update({"generator": generator}) return noise, input_ids, pipeline_inputs def test_simple_inference_with_text_lora_denoiser_fused_multi(self): super().test_simple_inference_with_text_lora_denoiser_fused_multi(expected_atol=9e-3) def test_simple_inference_with_text_denoiser_lora_unfused(self): super().test_simple_inference_with_text_denoiser_lora_unfused(expected_atol=9e-3) def 
test_lora_scale_kwargs_match_fusion(self): super().test_lora_scale_kwargs_match_fusion(expected_atol=9e-3, expected_rtol=9e-3) @parameterized.expand([("block_level", True), ("leaf_level", False)]) @require_torch_accelerator def test_group_offloading_inference_denoiser(self, offload_type, use_stream): # TODO: We don't run the (leaf_level, True) test here that is enabled for other models. # The reason for this can be found here: https://github.com/huggingface/diffusers/pull/11804#issuecomment-3013325338 super()._test_group_offloading_inference_denoiser(offload_type, use_stream) @unittest.skip("Not supported in CogVideoX.") def test_simple_inference_with_text_denoiser_block_scale(self): pass @unittest.skip("Not supported in CogVideoX.") def test_simple_inference_with_text_denoiser_block_scale_for_all_dict_options(self): pass @unittest.skip("Not supported in CogVideoX.") def test_modify_padding_mode(self): pass @unittest.skip("Text encoder LoRA is not supported in CogVideoX.") def test_simple_inference_with_partial_text_lora(self): pass @unittest.skip("Text encoder LoRA is not supported in CogVideoX.") def test_simple_inference_with_text_lora(self): pass @unittest.skip("Text encoder LoRA is not supported in CogVideoX.") def test_simple_inference_with_text_lora_and_scale(self): pass @unittest.skip("Text encoder LoRA is not supported in CogVideoX.") def test_simple_inference_with_text_lora_fused(self): pass @unittest.skip("Text encoder LoRA is not supported in CogVideoX.") def test_simple_inference_with_text_lora_save_load(self): pass @unittest.skip("Not supported in CogVideoX.") def test_simple_inference_with_text_denoiser_multi_adapter_block_lora(self): pass
diffusers/tests/lora/test_lora_layers_cogvideox.py/0
{ "file_path": "diffusers/tests/lora/test_lora_layers_cogvideox.py", "repo_id": "diffusers", "token_count": 2620 }
186
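The tests above exercise PEFT LoRA support on `CogVideoXPipeline` with tiny dummy components. For orientation, a hedged sketch of the user-facing flow being tested, loading a LoRA adapter into the real pipeline; both repo ids are placeholders/assumptions, not values taken from the test file.

```python
# Hedged sketch of what the LoRA tests above exercise at user level.
# Both repo ids below are assumptions/placeholders.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# `load_lora_weights` is provided by the pipeline's LoRA loader mixin,
# which is what the PeftLoraLoaderMixinTests subclass above covers.
pipe.load_lora_weights("your-user/your-cogvideox-lora")  # placeholder adapter

video = pipe(
    prompt="dance monkey",  # same prompt as the dummy inputs above
    guidance_scale=6.0,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "cogvideox_lora.mp4", fps=8)
```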
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import torch from diffusers import VQModel from diffusers.utils.testing_utils import ( backend_manual_seed, enable_full_determinism, floats_tensor, torch_device, ) from ..test_modeling_common import ModelTesterMixin, UNetTesterMixin enable_full_determinism() class VQModelTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase): model_class = VQModel main_input_name = "sample" @property def dummy_input(self, sizes=(32, 32)): batch_size = 4 num_channels = 3 image = floats_tensor((batch_size, num_channels) + sizes).to(torch_device) return {"sample": image} @property def input_shape(self): return (3, 32, 32) @property def output_shape(self): return (3, 32, 32) def prepare_init_args_and_inputs_for_common(self): init_dict = { "block_out_channels": [8, 16], "norm_num_groups": 8, "in_channels": 3, "out_channels": 3, "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"], "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"], "latent_channels": 3, } inputs_dict = self.dummy_input return init_dict, inputs_dict @unittest.skip("Test not supported.") def test_forward_signature(self): pass @unittest.skip("Test not supported.") def test_training(self): pass def test_from_pretrained_hub(self): model, loading_info = VQModel.from_pretrained("fusing/vqgan-dummy", output_loading_info=True) self.assertIsNotNone(model) self.assertEqual(len(loading_info["missing_keys"]), 0) model.to(torch_device) image = model(**self.dummy_input) assert image is not None, "Make sure output is not None" def test_output_pretrained(self): model = VQModel.from_pretrained("fusing/vqgan-dummy") model.to(torch_device).eval() torch.manual_seed(0) backend_manual_seed(torch_device, 0) image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) image = image.to(torch_device) with torch.no_grad(): output = model(image).sample output_slice = output[0, -1, -3:, -3:].flatten().cpu() # fmt: off expected_output_slice = torch.tensor([-0.0153, -0.4044, -0.1880, -0.5161, -0.2418, -0.4072, -0.1612, -0.0633, -0.0143]) # fmt: on self.assertTrue(torch.allclose(output_slice, expected_output_slice, atol=1e-3)) def test_loss_pretrained(self): model = VQModel.from_pretrained("fusing/vqgan-dummy") model.to(torch_device).eval() torch.manual_seed(0) backend_manual_seed(torch_device, 0) image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) image = image.to(torch_device) with torch.no_grad(): output = model(image).commit_loss.cpu() # fmt: off expected_output = torch.tensor([0.1936]) # fmt: on self.assertTrue(torch.allclose(output, expected_output, atol=1e-3))
diffusers/tests/models/autoencoders/test_models_vq.py/0
{ "file_path": "diffusers/tests/models/autoencoders/test_models_vq.py", "repo_id": "diffusers", "token_count": 1611 }
187
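A standalone sketch of the configuration these tests use: build the tiny `VQModel` from `prepare_init_args_and_inputs_for_common` and run one forward pass with the same shapes as `dummy_input`.

```python
# Tiny VQModel mirroring the init args and input shape used in the tests above.
import torch
from diffusers import VQModel

model = VQModel(
    block_out_channels=[8, 16],
    norm_num_groups=8,
    in_channels=3,
    out_channels=3,
    down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
    up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
    latent_channels=3,
).eval()

sample = torch.randn(4, 3, 32, 32)  # (batch, channels, height, width), as in dummy_input
with torch.no_grad():
    out = model(sample)

print(out.sample.shape)  # reconstruction with the same shape as the input
print(out.commit_loss)   # VQ commitment loss, the quantity checked in test_loss_pretrained
```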
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import torch from diffusers import CogVideoXTransformer3DModel from diffusers.utils.testing_utils import ( enable_full_determinism, torch_device, ) from ..test_modeling_common import ModelTesterMixin enable_full_determinism() class CogVideoXTransformerTests(ModelTesterMixin, unittest.TestCase): model_class = CogVideoXTransformer3DModel main_input_name = "hidden_states" uses_custom_attn_processor = True model_split_percents = [0.7, 0.7, 0.8] @property def dummy_input(self): batch_size = 2 num_channels = 4 num_frames = 1 height = 8 width = 8 embedding_dim = 8 sequence_length = 8 hidden_states = torch.randn((batch_size, num_frames, num_channels, height, width)).to(torch_device) encoder_hidden_states = torch.randn((batch_size, sequence_length, embedding_dim)).to(torch_device) timestep = torch.randint(0, 1000, size=(batch_size,)).to(torch_device) return { "hidden_states": hidden_states, "encoder_hidden_states": encoder_hidden_states, "timestep": timestep, } @property def input_shape(self): return (1, 4, 8, 8) @property def output_shape(self): return (1, 4, 8, 8) def prepare_init_args_and_inputs_for_common(self): init_dict = { # Product of num_attention_heads * attention_head_dim must be divisible by 16 for 3D positional embeddings. "num_attention_heads": 2, "attention_head_dim": 8, "in_channels": 4, "out_channels": 4, "time_embed_dim": 2, "text_embed_dim": 8, "num_layers": 2, "sample_width": 8, "sample_height": 8, "sample_frames": 8, "patch_size": 2, "patch_size_t": None, "temporal_compression_ratio": 4, "max_text_seq_length": 8, } inputs_dict = self.dummy_input return init_dict, inputs_dict def test_gradient_checkpointing_is_applied(self): expected_set = {"CogVideoXTransformer3DModel"} super().test_gradient_checkpointing_is_applied(expected_set=expected_set) class CogVideoX1_5TransformerTests(ModelTesterMixin, unittest.TestCase): model_class = CogVideoXTransformer3DModel main_input_name = "hidden_states" uses_custom_attn_processor = True @property def dummy_input(self): batch_size = 2 num_channels = 4 num_frames = 2 height = 8 width = 8 embedding_dim = 8 sequence_length = 8 hidden_states = torch.randn((batch_size, num_frames, num_channels, height, width)).to(torch_device) encoder_hidden_states = torch.randn((batch_size, sequence_length, embedding_dim)).to(torch_device) timestep = torch.randint(0, 1000, size=(batch_size,)).to(torch_device) return { "hidden_states": hidden_states, "encoder_hidden_states": encoder_hidden_states, "timestep": timestep, } @property def input_shape(self): return (1, 4, 8, 8) @property def output_shape(self): return (1, 4, 8, 8) def prepare_init_args_and_inputs_for_common(self): init_dict = { # Product of num_attention_heads * attention_head_dim must be divisible by 16 for 3D positional embeddings. 
"num_attention_heads": 2, "attention_head_dim": 8, "in_channels": 4, "out_channels": 4, "time_embed_dim": 2, "text_embed_dim": 8, "num_layers": 2, "sample_width": 8, "sample_height": 8, "sample_frames": 8, "patch_size": 2, "patch_size_t": 2, "temporal_compression_ratio": 4, "max_text_seq_length": 8, "use_rotary_positional_embeddings": True, } inputs_dict = self.dummy_input return init_dict, inputs_dict def test_gradient_checkpointing_is_applied(self): expected_set = {"CogVideoXTransformer3DModel"} super().test_gradient_checkpointing_is_applied(expected_set=expected_set)
diffusers/tests/models/transformers/test_models_transformer_cogvideox.py/0
{ "file_path": "diffusers/tests/models/transformers/test_models_transformer_cogvideox.py", "repo_id": "diffusers", "token_count": 2166 }
188
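A minimal sketch built from the first test class's init dict and `dummy_input` shapes: instantiate the tiny `CogVideoXTransformer3DModel` and run one forward pass. The `.sample` attribute on the output is an assumption based on the usual diffusers transformer output type.

```python
# Tiny CogVideoXTransformer3DModel forward pass mirroring the dummy config above.
import torch
from diffusers import CogVideoXTransformer3DModel

model = CogVideoXTransformer3DModel(
    num_attention_heads=2,
    attention_head_dim=8,
    in_channels=4,
    out_channels=4,
    time_embed_dim=2,
    text_embed_dim=8,
    num_layers=2,
    sample_width=8,
    sample_height=8,
    sample_frames=8,
    patch_size=2,
    temporal_compression_ratio=4,
    max_text_seq_length=8,
)

hidden_states = torch.randn(2, 1, 4, 8, 8)    # (batch, frames, channels, height, width)
encoder_hidden_states = torch.randn(2, 8, 8)  # (batch, seq_len, text_embed_dim)
timestep = torch.randint(0, 1000, (2,))

with torch.no_grad():
    out = model(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        timestep=timestep,
    )

# `.sample` is assumed to be the predicted latent video, same shape as hidden_states.
print(out.sample.shape)
```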
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import torch from diffusers import OmniGenTransformer2DModel from diffusers.utils.testing_utils import enable_full_determinism, torch_device from ..test_modeling_common import ModelTesterMixin enable_full_determinism() class OmniGenTransformerTests(ModelTesterMixin, unittest.TestCase): model_class = OmniGenTransformer2DModel main_input_name = "hidden_states" uses_custom_attn_processor = True model_split_percents = [0.1, 0.1, 0.1] @property def dummy_input(self): batch_size = 2 num_channels = 4 height = 8 width = 8 sequence_length = 24 hidden_states = torch.randn((batch_size, num_channels, height, width)).to(torch_device) timestep = torch.rand(size=(batch_size,), dtype=hidden_states.dtype).to(torch_device) input_ids = torch.randint(0, 10, (batch_size, sequence_length)).to(torch_device) input_img_latents = [torch.randn((1, num_channels, height, width)).to(torch_device)] input_image_sizes = {0: [[0, 0 + height * width // 2 // 2]]} attn_seq_length = sequence_length + 1 + height * width // 2 // 2 attention_mask = torch.ones((batch_size, attn_seq_length, attn_seq_length)).to(torch_device) position_ids = torch.LongTensor([list(range(attn_seq_length))] * batch_size).to(torch_device) return { "hidden_states": hidden_states, "timestep": timestep, "input_ids": input_ids, "input_img_latents": input_img_latents, "input_image_sizes": input_image_sizes, "attention_mask": attention_mask, "position_ids": position_ids, } @property def input_shape(self): return (4, 8, 8) @property def output_shape(self): return (4, 8, 8) def prepare_init_args_and_inputs_for_common(self): init_dict = { "hidden_size": 16, "num_attention_heads": 4, "num_key_value_heads": 4, "intermediate_size": 32, "num_layers": 20, "pad_token_id": 0, "vocab_size": 1000, "in_channels": 4, "time_step_dim": 4, "rope_scaling": {"long_factor": list(range(1, 3)), "short_factor": list(range(1, 3))}, } inputs_dict = self.dummy_input return init_dict, inputs_dict def test_gradient_checkpointing_is_applied(self): expected_set = {"OmniGenTransformer2DModel"} super().test_gradient_checkpointing_is_applied(expected_set=expected_set)
diffusers/tests/models/transformers/test_models_transformer_omnigen.py/0
{ "file_path": "diffusers/tests/models/transformers/test_models_transformer_omnigen.py", "repo_id": "diffusers", "token_count": 1326 }
189
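The dummy inputs above are more involved than usual (text tokens, per-image latents, image-size bookkeeping, an explicit attention mask and position ids). Here is a sketch that wires them together the same way; the `.sample` output attribute is an assumption, everything else mirrors `dummy_input` and the init dict.

```python
# Tiny OmniGenTransformer2DModel forward pass mirroring dummy_input above.
import torch
from diffusers import OmniGenTransformer2DModel

model = OmniGenTransformer2DModel(
    hidden_size=16,
    num_attention_heads=4,
    num_key_value_heads=4,
    intermediate_size=32,
    num_layers=20,
    pad_token_id=0,
    vocab_size=1000,
    in_channels=4,
    time_step_dim=4,
    rope_scaling={"long_factor": [1, 2], "short_factor": [1, 2]},
)

batch, channels, height, width, seq_len = 2, 4, 8, 8, 24
attn_seq_len = seq_len + 1 + height * width // 4  # 41, as computed in the test

hidden_states = torch.randn(batch, channels, height, width)
timestep = torch.rand(batch)
input_ids = torch.randint(0, 10, (batch, seq_len))
input_img_latents = [torch.randn(1, channels, height, width)]
input_image_sizes = {0: [[0, height * width // 4]]}
attention_mask = torch.ones(batch, attn_seq_len, attn_seq_len)
position_ids = torch.arange(attn_seq_len).unsqueeze(0).repeat(batch, 1)

with torch.no_grad():
    out = model(
        hidden_states=hidden_states,
        timestep=timestep,
        input_ids=input_ids,
        input_img_latents=input_img_latents,
        input_image_sizes=input_image_sizes,
        attention_mask=attention_mask,
        position_ids=position_ids,
    )

print(out.sample.shape)  # assumed output attribute; expected (2, 4, 8, 8)
```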
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from diffusers.models.unets.unet_2d_blocks import * # noqa F403 from diffusers.utils.testing_utils import torch_device from .test_unet_blocks_common import UNetBlockTesterMixin class DownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = DownBlock2D # noqa F405 block_type = "down" def test_output(self): expected_slice = [-0.0232, -0.9869, 0.8054, -0.0637, -0.1688, -1.4264, 0.4470, -1.3394, 0.0904] super().test_output(expected_slice) class ResnetDownsampleBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = ResnetDownsampleBlock2D # noqa F405 block_type = "down" def test_output(self): expected_slice = [0.0710, 0.2410, -0.7320, -1.0757, -1.1343, 0.3540, -0.0133, -0.2576, 0.0948] super().test_output(expected_slice) class AttnDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = AttnDownBlock2D # noqa F405 block_type = "down" def test_output(self): expected_slice = [0.0636, 0.8964, -0.6234, -1.0131, 0.0844, 0.4935, 0.3437, 0.0911, -0.2957] super().test_output(expected_slice) class CrossAttnDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = CrossAttnDownBlock2D # noqa F405 block_type = "down" def prepare_init_args_and_inputs_for_common(self): init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() init_dict["cross_attention_dim"] = 32 return init_dict, inputs_dict def test_output(self): expected_slice = [0.2238, -0.7396, -0.2255, -0.3829, 0.1925, 1.1665, 0.0603, -0.7295, 0.1983] super().test_output(expected_slice) class SimpleCrossAttnDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = SimpleCrossAttnDownBlock2D # noqa F405 block_type = "down" @property def dummy_input(self): return super().get_dummy_input(include_encoder_hidden_states=True) def prepare_init_args_and_inputs_for_common(self): init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() init_dict["cross_attention_dim"] = 32 return init_dict, inputs_dict @unittest.skipIf(torch_device == "mps", "MPS result is not consistent") def test_output(self): expected_slice = [0.7921, -0.0992, -0.1962, -0.7695, -0.4242, 0.7804, 0.4737, 0.2765, 0.3338] super().test_output(expected_slice) class SkipDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = SkipDownBlock2D # noqa F405 block_type = "down" @property def dummy_input(self): return super().get_dummy_input(include_skip_sample=True) def test_output(self): expected_slice = [-0.0845, -0.2087, -0.2465, 0.0971, 0.1900, -0.0484, 0.2664, 0.4179, 0.5069] super().test_output(expected_slice) class AttnSkipDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = AttnSkipDownBlock2D # noqa F405 block_type = "down" @property def dummy_input(self): return super().get_dummy_input(include_skip_sample=True) def test_output(self): expected_slice = [0.5539, 0.1609, 0.4924, 0.0537, -0.1995, 0.4050, 0.0979, -0.2721, -0.0642] 
super().test_output(expected_slice) class DownEncoderBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = DownEncoderBlock2D # noqa F405 block_type = "down" @property def dummy_input(self): return super().get_dummy_input(include_temb=False) def prepare_init_args_and_inputs_for_common(self): init_dict = { "in_channels": 32, "out_channels": 32, } inputs_dict = self.dummy_input return init_dict, inputs_dict def test_output(self): expected_slice = [1.1102, 0.5302, 0.4872, -0.0023, -0.8042, 0.0483, -0.3489, -0.5632, 0.7626] super().test_output(expected_slice) class AttnDownEncoderBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = AttnDownEncoderBlock2D # noqa F405 block_type = "down" @property def dummy_input(self): return super().get_dummy_input(include_temb=False) def prepare_init_args_and_inputs_for_common(self): init_dict = { "in_channels": 32, "out_channels": 32, } inputs_dict = self.dummy_input return init_dict, inputs_dict def test_output(self): expected_slice = [0.8966, -0.1486, 0.8568, 0.8141, -0.9046, -0.1342, -0.0972, -0.7417, 0.1538] super().test_output(expected_slice) class UNetMidBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = UNetMidBlock2D # noqa F405 block_type = "mid" def prepare_init_args_and_inputs_for_common(self): init_dict = { "in_channels": 32, "temb_channels": 128, } inputs_dict = self.dummy_input return init_dict, inputs_dict def test_output(self): expected_slice = [-0.1062, 1.7248, 0.3494, 1.4569, -0.0910, -1.2421, -0.9984, 0.6736, 1.0028] super().test_output(expected_slice) class UNetMidBlock2DCrossAttnTests(UNetBlockTesterMixin, unittest.TestCase): block_class = UNetMidBlock2DCrossAttn # noqa F405 block_type = "mid" def prepare_init_args_and_inputs_for_common(self): init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() init_dict["cross_attention_dim"] = 32 return init_dict, inputs_dict def test_output(self): expected_slice = [0.0187, 2.4220, 0.4484, 1.1203, -0.6121, -1.5122, -0.8270, 0.7851, 1.8335] super().test_output(expected_slice) class UNetMidBlock2DSimpleCrossAttnTests(UNetBlockTesterMixin, unittest.TestCase): block_class = UNetMidBlock2DSimpleCrossAttn # noqa F405 block_type = "mid" @property def dummy_input(self): return super().get_dummy_input(include_encoder_hidden_states=True) def prepare_init_args_and_inputs_for_common(self): init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() init_dict["cross_attention_dim"] = 32 return init_dict, inputs_dict def test_output(self): expected_slice = [0.7143, 1.9974, 0.5448, 1.3977, 0.1282, -1.1237, -1.4238, 0.5530, 0.8880] super().test_output(expected_slice) class UpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = UpBlock2D # noqa F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_res_hidden_states_tuple=True) def test_output(self): expected_slice = [-0.2041, -0.4165, -0.3022, 0.0041, -0.6628, -0.7053, 0.1928, -0.0325, 0.0523] super().test_output(expected_slice) class ResnetUpsampleBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = ResnetUpsampleBlock2D # noqa F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_res_hidden_states_tuple=True) def test_output(self): expected_slice = [0.2287, 0.3549, -0.1346, 0.4797, -0.1715, -0.9649, 0.7305, -0.5864, -0.6244] super().test_output(expected_slice) class CrossAttnUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = CrossAttnUpBlock2D # noqa 
F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_res_hidden_states_tuple=True) def prepare_init_args_and_inputs_for_common(self): init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() init_dict["cross_attention_dim"] = 32 return init_dict, inputs_dict def test_output(self): expected_slice = [-0.1403, -0.3515, -0.0420, -0.1425, 0.3167, 0.5094, -0.2181, 0.5931, 0.5582] super().test_output(expected_slice) class SimpleCrossAttnUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = SimpleCrossAttnUpBlock2D # noqa F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_res_hidden_states_tuple=True, include_encoder_hidden_states=True) def prepare_init_args_and_inputs_for_common(self): init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() init_dict["cross_attention_dim"] = 32 return init_dict, inputs_dict def test_output(self): expected_slice = [0.2645, 0.1480, 0.0909, 0.8044, -0.9758, -0.9083, 0.0994, -1.1453, -0.7402] super().test_output(expected_slice) class AttnUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = AttnUpBlock2D # noqa F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_res_hidden_states_tuple=True) @unittest.skipIf(torch_device == "mps", "MPS result is not consistent") def test_output(self): expected_slice = [0.0979, 0.1326, 0.0021, 0.0659, 0.2249, 0.0059, 0.1132, 0.5952, 0.1033] super().test_output(expected_slice) class SkipUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = SkipUpBlock2D # noqa F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_res_hidden_states_tuple=True) def test_output(self): expected_slice = [-0.0893, -0.1234, -0.1506, -0.0332, 0.0123, -0.0211, 0.0566, 0.0143, 0.0362] super().test_output(expected_slice) class AttnSkipUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = AttnSkipUpBlock2D # noqa F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_res_hidden_states_tuple=True) def test_output(self): expected_slice = [0.0361, 0.0617, 0.2787, -0.0350, 0.0342, 0.3421, -0.0843, 0.0913, 0.3015] super().test_output(expected_slice) class UpDecoderBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = UpDecoderBlock2D # noqa F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_temb=False) def prepare_init_args_and_inputs_for_common(self): init_dict = {"in_channels": 32, "out_channels": 32} inputs_dict = self.dummy_input return init_dict, inputs_dict def test_output(self): expected_slice = [0.4404, 0.1998, -0.9886, -0.3320, -0.3128, -0.7034, -0.6955, -0.2338, -0.3137] super().test_output(expected_slice) class AttnUpDecoderBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): block_class = AttnUpDecoderBlock2D # noqa F405 block_type = "up" @property def dummy_input(self): return super().get_dummy_input(include_temb=False) def prepare_init_args_and_inputs_for_common(self): init_dict = {"in_channels": 32, "out_channels": 32} inputs_dict = self.dummy_input return init_dict, inputs_dict def test_output(self): expected_slice = [0.6738, 0.4491, 0.1055, 1.0710, 0.7316, 0.3339, 0.3352, 0.1023, 0.3568] super().test_output(expected_slice)
diffusers/tests/models/unets/test_unet_2d_blocks.py/0
{ "file_path": "diffusers/tests/models/unets/test_unet_2d_blocks.py", "repo_id": "diffusers", "token_count": 5186 }
190
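Every class above follows the same pattern from `UNetBlockTesterMixin`: build the block with a small config, run a fixed random input through it, and compare a slice of the output against `expected_slice`. Below is a standalone sketch of that pattern for `DownBlock2D`; the channel and spatial sizes are illustrative assumptions, since the real dummy config lives in `test_unet_blocks_common.py`.

```python
# Illustrative DownBlock2D forward pass; sizes are assumptions, not the
# exact values used by UNetBlockTesterMixin.
import torch
from diffusers.models.unets.unet_2d_blocks import DownBlock2D

block = DownBlock2D(in_channels=32, out_channels=32, temb_channels=128, num_layers=1)

hidden_states = torch.randn(1, 32, 32, 32)  # (batch, channels, height, width)
temb = torch.randn(1, 128)                  # time embedding

sample, res_samples = block(hidden_states, temb)
print(sample.shape)      # spatial dims halve because add_downsample defaults to True
print(len(res_samples))  # residual states handed to the matching up block
```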
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import unittest import pytest from diffusers import __version__ from diffusers.utils import deprecate from diffusers.utils.testing_utils import Expectations, str_to_bool # Used to test the hub USER = "__DUMMY_TRANSFORMERS_USER__" ENDPOINT_STAGING = "https://hub-ci.huggingface.co" # Not critical, only usable on the sandboxed CI instance. TOKEN = "hf_94wBhPGp6KrrTH3KDchhKpRxZwd6dmHWLL" class DeprecateTester(unittest.TestCase): higher_version = ".".join([str(int(__version__.split(".")[0]) + 1)] + __version__.split(".")[1:]) lower_version = "0.0.1" def test_deprecate_function_arg(self): kwargs = {"deprecated_arg": 4} with self.assertWarns(FutureWarning) as warning: output = deprecate("deprecated_arg", self.higher_version, "message", take_from=kwargs) assert output == 4 assert ( str(warning.warning) == f"The `deprecated_arg` argument is deprecated and will be removed in version {self.higher_version}." " message" ) def test_deprecate_function_arg_tuple(self): kwargs = {"deprecated_arg": 4} with self.assertWarns(FutureWarning) as warning: output = deprecate(("deprecated_arg", self.higher_version, "message"), take_from=kwargs) assert output == 4 assert ( str(warning.warning) == f"The `deprecated_arg` argument is deprecated and will be removed in version {self.higher_version}." " message" ) def test_deprecate_function_args(self): kwargs = {"deprecated_arg_1": 4, "deprecated_arg_2": 8} with self.assertWarns(FutureWarning) as warning: output_1, output_2 = deprecate( ("deprecated_arg_1", self.higher_version, "Hey"), ("deprecated_arg_2", self.higher_version, "Hey"), take_from=kwargs, ) assert output_1 == 4 assert output_2 == 8 assert ( str(warning.warnings[0].message) == "The `deprecated_arg_1` argument is deprecated and will be removed in version" f" {self.higher_version}. Hey" ) assert ( str(warning.warnings[1].message) == "The `deprecated_arg_2` argument is deprecated and will be removed in version" f" {self.higher_version}. Hey" ) def test_deprecate_function_incorrect_arg(self): kwargs = {"deprecated_arg": 4} with self.assertRaises(TypeError) as error: deprecate(("wrong_arg", self.higher_version, "message"), take_from=kwargs) assert "test_deprecate_function_incorrect_arg in" in str(error.exception) assert "line" in str(error.exception) assert "got an unexpected keyword argument `deprecated_arg`" in str(error.exception) def test_deprecate_arg_no_kwarg(self): with self.assertWarns(FutureWarning) as warning: deprecate(("deprecated_arg", self.higher_version, "message")) assert ( str(warning.warning) == f"`deprecated_arg` is deprecated and will be removed in version {self.higher_version}. 
message" ) def test_deprecate_args_no_kwarg(self): with self.assertWarns(FutureWarning) as warning: deprecate( ("deprecated_arg_1", self.higher_version, "Hey"), ("deprecated_arg_2", self.higher_version, "Hey"), ) assert ( str(warning.warnings[0].message) == f"`deprecated_arg_1` is deprecated and will be removed in version {self.higher_version}. Hey" ) assert ( str(warning.warnings[1].message) == f"`deprecated_arg_2` is deprecated and will be removed in version {self.higher_version}. Hey" ) def test_deprecate_class_obj(self): class Args: arg = 5 with self.assertWarns(FutureWarning) as warning: arg = deprecate(("arg", self.higher_version, "message"), take_from=Args()) assert arg == 5 assert ( str(warning.warning) == f"The `arg` attribute is deprecated and will be removed in version {self.higher_version}. message" ) def test_deprecate_class_objs(self): class Args: arg = 5 foo = 7 with self.assertWarns(FutureWarning) as warning: arg_1, arg_2 = deprecate( ("arg", self.higher_version, "message"), ("foo", self.higher_version, "message"), ("does not exist", self.higher_version, "message"), take_from=Args(), ) assert arg_1 == 5 assert arg_2 == 7 assert ( str(warning.warning) == f"The `arg` attribute is deprecated and will be removed in version {self.higher_version}. message" ) assert ( str(warning.warnings[0].message) == f"The `arg` attribute is deprecated and will be removed in version {self.higher_version}. message" ) assert ( str(warning.warnings[1].message) == f"The `foo` attribute is deprecated and will be removed in version {self.higher_version}. message" ) def test_deprecate_incorrect_version(self): kwargs = {"deprecated_arg": 4} with self.assertRaises(ValueError) as error: deprecate(("wrong_arg", self.lower_version, "message"), take_from=kwargs) assert ( str(error.exception) == "The deprecation tuple ('wrong_arg', '0.0.1', 'message') should be removed since diffusers' version" f" {__version__} is >= {self.lower_version}" ) def test_deprecate_incorrect_no_standard_warn(self): with self.assertWarns(FutureWarning) as warning: deprecate(("deprecated_arg", self.higher_version, "This message is better!!!"), standard_warn=False) assert str(warning.warning) == "This message is better!!!" def test_deprecate_stacklevel(self): with self.assertWarns(FutureWarning) as warning: deprecate(("deprecated_arg", self.higher_version, "This message is better!!!"), standard_warn=False) assert str(warning.warning) == "This message is better!!!" assert "diffusers/tests/others/test_utils.py" in warning.filename # Copied from https://github.com/huggingface/transformers/blob/main/tests/utils/test_expectations.py class ExpectationsTester(unittest.TestCase): def test_expectations(self): expectations = Expectations( { (None, None): 1, ("cuda", 8): 2, ("cuda", 7): 3, ("rocm", 8): 4, ("rocm", None): 5, ("cpu", None): 6, ("xpu", 3): 7, } ) def check(value, key): assert expectations.find_expectation(key) == value # npu has no matches so should find default expectation check(1, ("npu", None)) check(7, ("xpu", 3)) check(2, ("cuda", 8)) check(3, ("cuda", 7)) check(4, ("rocm", 9)) check(4, ("rocm", None)) check(2, ("cuda", 2)) expectations = Expectations({("cuda", 8): 1}) with self.assertRaises(ValueError): expectations.find_expectation(("xpu", None)) def parse_flag_from_env(key, default=False): try: value = os.environ[key] except KeyError: # KEY isn't set, default to `default`. _value = default else: # KEY is set, convert it to True or False. 
try: _value = str_to_bool(value) except ValueError: # More values are supported, but let's keep the message simple. raise ValueError(f"If set, {key} must be yes or no.") return _value _run_staging = parse_flag_from_env("HUGGINGFACE_CO_STAGING", default=False) def is_staging_test(test_case): """ Decorator marking a test as a staging test. Those tests will run using the staging environment of huggingface.co instead of the real model hub. """ if not _run_staging: return unittest.skip("test is staging test")(test_case) else: return pytest.mark.is_staging_test()(test_case)
diffusers/tests/others/test_utils.py/0
{ "file_path": "diffusers/tests/others/test_utils.py", "repo_id": "diffusers", "token_count": 3862 }
191
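`DeprecateTester` above covers the `deprecate` helper from `diffusers.utils`. A minimal sketch of how it is typically used inside library code to phase out a keyword argument follows; the function and argument names are invented for illustration, only the `deprecate` call itself mirrors the tested API.

```python
# Minimal `deprecate` usage sketch; `resize`, `scale`, and `old_scale` are
# invented names for illustration.
from diffusers.utils import deprecate


def resize(image, scale=None, **kwargs):
    # Pops `old_scale` from kwargs (if present), emits a FutureWarning with
    # the removal version and message, and returns the popped value.
    old_scale = deprecate("old_scale", "1.0.0", "Use `scale` instead.", take_from=kwargs)
    if old_scale is not None:
        scale = old_scale
    return scale


print(resize("img", old_scale=2))  # warns once, prints 2
print(resize("img", scale=3))      # no warning, prints 3
```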
import unittest import numpy as np import PIL.Image import torch from transformers import AutoTokenizer, CLIPTextConfig, CLIPTextModel, CLIPTokenizer, T5EncoderModel from diffusers import ( AutoencoderKL, FasterCacheConfig, FlowMatchEulerDiscreteScheduler, FluxKontextPipeline, FluxTransformer2DModel, ) from diffusers.utils.testing_utils import torch_device from ..test_pipelines_common import ( FasterCacheTesterMixin, FluxIPAdapterTesterMixin, PipelineTesterMixin, PyramidAttentionBroadcastTesterMixin, ) class FluxKontextPipelineFastTests( unittest.TestCase, PipelineTesterMixin, FluxIPAdapterTesterMixin, PyramidAttentionBroadcastTesterMixin, FasterCacheTesterMixin, ): pipeline_class = FluxKontextPipeline params = frozenset( ["image", "prompt", "height", "width", "guidance_scale", "prompt_embeds", "pooled_prompt_embeds"] ) batch_params = frozenset(["image", "prompt"]) # there is no xformers processor for Flux test_xformers_attention = False test_layerwise_casting = True test_group_offloading = True faster_cache_config = FasterCacheConfig( spatial_attention_block_skip_range=2, spatial_attention_timestep_skip_range=(-1, 901), unconditional_batch_skip_range=2, attention_weight_callback=lambda _: 0.5, is_guidance_distilled=True, ) def get_dummy_components(self, num_layers: int = 1, num_single_layers: int = 1): torch.manual_seed(0) transformer = FluxTransformer2DModel( patch_size=1, in_channels=4, num_layers=num_layers, num_single_layers=num_single_layers, attention_head_dim=16, num_attention_heads=2, joint_attention_dim=32, pooled_projection_dim=32, axes_dims_rope=[4, 4, 8], ) clip_text_encoder_config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, hidden_act="gelu", projection_dim=32, ) torch.manual_seed(0) text_encoder = CLIPTextModel(clip_text_encoder_config) torch.manual_seed(0) text_encoder_2 = T5EncoderModel.from_pretrained("hf-internal-testing/tiny-random-t5") tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") tokenizer_2 = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-t5") torch.manual_seed(0) vae = AutoencoderKL( sample_size=32, in_channels=3, out_channels=3, block_out_channels=(4,), layers_per_block=1, latent_channels=1, norm_num_groups=1, use_quant_conv=False, use_post_quant_conv=False, shift_factor=0.0609, scaling_factor=1.5035, ) scheduler = FlowMatchEulerDiscreteScheduler() return { "scheduler": scheduler, "text_encoder": text_encoder, "text_encoder_2": text_encoder_2, "tokenizer": tokenizer, "tokenizer_2": tokenizer_2, "transformer": transformer, "vae": vae, "image_encoder": None, "feature_extractor": None, } def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device="cpu").manual_seed(seed) image = PIL.Image.new("RGB", (32, 32), 0) inputs = { "image": image, "prompt": "A painting of a squirrel eating a burger", "generator": generator, "num_inference_steps": 2, "guidance_scale": 5.0, "height": 8, "width": 8, "max_area": 8 * 8, "max_sequence_length": 48, "output_type": "np", "_auto_resize": False, } return inputs def test_flux_different_prompts(self): pipe = self.pipeline_class(**self.get_dummy_components()).to(torch_device) inputs = self.get_dummy_inputs(torch_device) output_same_prompt = pipe(**inputs).images[0] inputs = self.get_dummy_inputs(torch_device) inputs["prompt_2"] = "a different prompt" 
output_different_prompts = pipe(**inputs).images[0] max_diff = np.abs(output_same_prompt - output_different_prompts).max() # Outputs should be different here # For some reasons, they don't show large differences assert max_diff > 1e-6 def test_flux_image_output_shape(self): pipe = self.pipeline_class(**self.get_dummy_components()).to(torch_device) inputs = self.get_dummy_inputs(torch_device) height_width_pairs = [(32, 32), (72, 57)] for height, width in height_width_pairs: expected_height = height - height % (pipe.vae_scale_factor * 2) expected_width = width - width % (pipe.vae_scale_factor * 2) inputs.update({"height": height, "width": width, "max_area": height * width}) image = pipe(**inputs).images[0] output_height, output_width, _ = image.shape assert (output_height, output_width) == (expected_height, expected_width) def test_flux_true_cfg(self): pipe = self.pipeline_class(**self.get_dummy_components()).to(torch_device) inputs = self.get_dummy_inputs(torch_device) inputs.pop("generator") no_true_cfg_out = pipe(**inputs, generator=torch.manual_seed(0)).images[0] inputs["negative_prompt"] = "bad quality" inputs["true_cfg_scale"] = 2.0 true_cfg_out = pipe(**inputs, generator=torch.manual_seed(0)).images[0] assert not np.allclose(no_true_cfg_out, true_cfg_out)
diffusers/tests/pipelines/flux/test_pipeline_flux_kontext.py/0
{ "file_path": "diffusers/tests/pipelines/flux/test_pipeline_flux_kontext.py", "repo_id": "diffusers", "token_count": 2913 }
192
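The fast tests above run `FluxKontextPipeline` on tiny dummy components. For orientation, here is a hedged sketch of the full-size flow; the checkpoint id and the input image path are assumptions/placeholders.

```python
# Hedged FluxKontextPipeline usage sketch; checkpoint id and image path are placeholders.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = load_image("path/to/input.png")  # placeholder input image

result = pipe(
    image=image,
    prompt="Make the subject wear a tiny wizard hat",
    guidance_scale=2.5,
    num_inference_steps=28,
).images[0]
result.save("kontext_edit.png")
```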
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import random import unittest import numpy as np import torch from PIL import Image from transformers import XLMRobertaTokenizerFast from diffusers import ( DDIMScheduler, DDPMScheduler, KandinskyImg2ImgPipeline, KandinskyPriorPipeline, UNet2DConditionModel, VQModel, ) from diffusers.pipelines.kandinsky.text_encoder import MCLIPConfig, MultilingualCLIP from diffusers.utils.testing_utils import ( backend_empty_cache, enable_full_determinism, floats_tensor, load_image, load_numpy, nightly, require_torch_accelerator, slow, torch_device, ) from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference enable_full_determinism() class Dummies: @property def text_embedder_hidden_size(self): return 32 @property def time_input_dim(self): return 32 @property def block_out_channels_0(self): return self.time_input_dim @property def time_embed_dim(self): return self.time_input_dim * 4 @property def cross_attention_dim(self): return 32 @property def dummy_tokenizer(self): tokenizer = XLMRobertaTokenizerFast.from_pretrained("YiYiXu/tiny-random-mclip-base") return tokenizer @property def dummy_text_encoder(self): torch.manual_seed(0) config = MCLIPConfig( numDims=self.cross_attention_dim, transformerDimensions=self.text_embedder_hidden_size, hidden_size=self.text_embedder_hidden_size, intermediate_size=37, num_attention_heads=4, num_hidden_layers=5, vocab_size=1005, ) text_encoder = MultilingualCLIP(config) text_encoder = text_encoder.eval() return text_encoder @property def dummy_unet(self): torch.manual_seed(0) model_kwargs = { "in_channels": 4, # Out channels is double in channels because predicts mean and variance "out_channels": 8, "addition_embed_type": "text_image", "down_block_types": ("ResnetDownsampleBlock2D", "SimpleCrossAttnDownBlock2D"), "up_block_types": ("SimpleCrossAttnUpBlock2D", "ResnetUpsampleBlock2D"), "mid_block_type": "UNetMidBlock2DSimpleCrossAttn", "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2), "layers_per_block": 1, "encoder_hid_dim": self.text_embedder_hidden_size, "encoder_hid_dim_type": "text_image_proj", "cross_attention_dim": self.cross_attention_dim, "attention_head_dim": 4, "resnet_time_scale_shift": "scale_shift", "class_embed_type": None, } model = UNet2DConditionModel(**model_kwargs) return model @property def dummy_movq_kwargs(self): return { "block_out_channels": [32, 64], "down_block_types": ["DownEncoderBlock2D", "AttnDownEncoderBlock2D"], "in_channels": 3, "latent_channels": 4, "layers_per_block": 1, "norm_num_groups": 8, "norm_type": "spatial", "num_vq_embeddings": 12, "out_channels": 3, "up_block_types": [ "AttnUpDecoderBlock2D", "UpDecoderBlock2D", ], "vq_embed_dim": 4, } @property def dummy_movq(self): torch.manual_seed(0) model = VQModel(**self.dummy_movq_kwargs) return model def get_dummy_components(self): text_encoder = self.dummy_text_encoder tokenizer = self.dummy_tokenizer unet = self.dummy_unet movq = 
self.dummy_movq ddim_config = { "num_train_timesteps": 1000, "beta_schedule": "linear", "beta_start": 0.00085, "beta_end": 0.012, "clip_sample": False, "set_alpha_to_one": False, "steps_offset": 0, "prediction_type": "epsilon", "thresholding": False, } scheduler = DDIMScheduler(**ddim_config) components = { "text_encoder": text_encoder, "tokenizer": tokenizer, "unet": unet, "scheduler": scheduler, "movq": movq, } return components def get_dummy_inputs(self, device, seed=0): image_embeds = floats_tensor((1, self.cross_attention_dim), rng=random.Random(seed)).to(device) negative_image_embeds = floats_tensor((1, self.cross_attention_dim), rng=random.Random(seed + 1)).to(device) # create init_image image = floats_tensor((1, 3, 64, 64), rng=random.Random(seed)).to(device) image = image.cpu().permute(0, 2, 3, 1)[0] init_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((256, 256)) if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "horse", "image": init_image, "image_embeds": image_embeds, "negative_image_embeds": negative_image_embeds, "generator": generator, "height": 64, "width": 64, "num_inference_steps": 10, "guidance_scale": 7.0, "strength": 0.2, "output_type": "np", } return inputs class KandinskyImg2ImgPipelineFastTests(PipelineTesterMixin, unittest.TestCase): pipeline_class = KandinskyImg2ImgPipeline params = ["prompt", "image_embeds", "negative_image_embeds", "image"] batch_params = [ "prompt", "negative_prompt", "image_embeds", "negative_image_embeds", "image", ] required_optional_params = [ "generator", "height", "width", "strength", "guidance_scale", "negative_prompt", "num_inference_steps", "return_dict", "guidance_scale", "num_images_per_prompt", "output_type", "return_dict", ] test_xformers_attention = False supports_dduf = False def get_dummy_components(self): dummies = Dummies() return dummies.get_dummy_components() def get_dummy_inputs(self, device, seed=0): dummies = Dummies() return dummies.get_dummy_inputs(device=device, seed=seed) def test_kandinsky_img2img(self): device = "cpu" components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe = pipe.to(device) pipe.set_progress_bar_config(disable=None) output = pipe(**self.get_dummy_inputs(device)) image = output.images image_from_tuple = pipe( **self.get_dummy_inputs(device), return_dict=False, )[0] image_slice = image[0, -3:, -3:, -1] image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] assert image.shape == (1, 64, 64, 3) expected_slice = np.array([0.5816, 0.5872, 0.4634, 0.5982, 0.4767, 0.4710, 0.4669, 0.4717, 0.4966]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2, ( f" expected_slice {expected_slice}, but got {image_slice.flatten()}" ) assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2, ( f" expected_slice {expected_slice}, but got {image_from_tuple_slice.flatten()}" ) @require_torch_accelerator def test_offloads(self): pipes = [] components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components).to(torch_device) pipes.append(sd_pipe) components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe.enable_model_cpu_offload() pipes.append(sd_pipe) components = self.get_dummy_components() sd_pipe = self.pipeline_class(**components) sd_pipe.enable_sequential_cpu_offload() pipes.append(sd_pipe) image_slices = [] for pipe in pipes: inputs = self.get_dummy_inputs(torch_device) image = 
pipe(**inputs).images image_slices.append(image[0, -3:, -3:, -1].flatten()) assert np.abs(image_slices[0] - image_slices[1]).max() < 1e-3 assert np.abs(image_slices[0] - image_slices[2]).max() < 1e-3 def test_dict_tuple_outputs_equivalent(self): super().test_dict_tuple_outputs_equivalent(expected_max_difference=5e-4) @slow @require_torch_accelerator class KandinskyImg2ImgPipelineIntegrationTests(unittest.TestCase): def setUp(self): # clean up the VRAM before each test super().setUp() gc.collect() backend_empty_cache(torch_device) def tearDown(self): # clean up the VRAM after each test super().tearDown() gc.collect() backend_empty_cache(torch_device) def test_kandinsky_img2img(self): expected_image = load_numpy( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/kandinsky_img2img_frog.npy" ) init_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png" ) prompt = "A red cartoon frog, 4k" pipe_prior = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 ) pipe_prior.to(torch_device) pipeline = KandinskyImg2ImgPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 ) pipeline = pipeline.to(torch_device) pipeline.set_progress_bar_config(disable=None) generator = torch.Generator(device="cpu").manual_seed(0) image_emb, zero_image_emb = pipe_prior( prompt, generator=generator, num_inference_steps=5, negative_prompt="", ).to_tuple() output = pipeline( prompt, image=init_image, image_embeds=image_emb, negative_image_embeds=zero_image_emb, generator=generator, num_inference_steps=100, height=768, width=768, strength=0.2, output_type="np", ) image = output.images[0] assert image.shape == (768, 768, 3) assert_mean_pixel_difference(image, expected_image) @nightly @require_torch_accelerator class KandinskyImg2ImgPipelineNightlyTests(unittest.TestCase): def setUp(self): # clean up the VRAM before each test super().setUp() gc.collect() backend_empty_cache(torch_device) def tearDown(self): # clean up the VRAM after each test super().tearDown() gc.collect() backend_empty_cache(torch_device) def test_kandinsky_img2img_ddpm(self): expected_image = load_numpy( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/kandinsky_img2img_ddpm_frog.npy" ) init_image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/frog.png" ) prompt = "A red cartoon frog, 4k" pipe_prior = KandinskyPriorPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 ) pipe_prior.to(torch_device) scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") pipeline = KandinskyImg2ImgPipeline.from_pretrained( "kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16 ) pipeline = pipeline.to(torch_device) pipeline.set_progress_bar_config(disable=None) generator = torch.Generator(device="cpu").manual_seed(0) image_emb, zero_image_emb = pipe_prior( prompt, generator=generator, num_inference_steps=5, negative_prompt="", ).to_tuple() output = pipeline( prompt, image=init_image, image_embeds=image_emb, negative_image_embeds=zero_image_emb, generator=generator, num_inference_steps=100, height=768, width=768, strength=0.2, output_type="np", ) image = output.images[0] assert image.shape == (768, 768, 3) assert_mean_pixel_difference(image, 
expected_image)
diffusers/tests/pipelines/kandinsky/test_kandinsky_img2img.py/0
{ "file_path": "diffusers/tests/pipelines/kandinsky/test_kandinsky_img2img.py", "repo_id": "diffusers", "token_count": 6474 }
193
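The integration test above already spells out the two-stage Kandinsky flow; condensed into a plain usage sketch, with the same checkpoints, prompt, and call arguments as the test:

```python
# Condensed from KandinskyImg2ImgPipelineIntegrationTests.test_kandinsky_img2img above.
import torch
from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline
from diffusers.utils import load_image

pipe_prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)
prompt = "A red cartoon frog, 4k"
generator = torch.Generator(device="cpu").manual_seed(0)

# Stage 1: the prior turns the prompt into image embeddings.
image_emb, zero_image_emb = pipe_prior(
    prompt, generator=generator, num_inference_steps=5, negative_prompt=""
).to_tuple()

# Stage 2: img2img conditioned on the init image and the embeddings.
image = pipe(
    prompt,
    image=init_image,
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    generator=generator,
    num_inference_steps=100,
    height=768,
    width=768,
    strength=0.2,
).images[0]
image.save("kandinsky_img2img_frog.png")
```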
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import numpy as np import torch from diffusers import ( AutoencoderKL, EulerDiscreteScheduler, KolorsPipeline, UNet2DConditionModel, ) from diffusers.pipelines.kolors import ChatGLMModel, ChatGLMTokenizer from diffusers.utils.testing_utils import enable_full_determinism from ..pipeline_params import ( TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_CALLBACK_CFG_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS, ) from ..test_pipelines_common import PipelineTesterMixin enable_full_determinism() class KolorsPipelineFastTests(PipelineTesterMixin, unittest.TestCase): pipeline_class = KolorsPipeline params = TEXT_TO_IMAGE_PARAMS batch_params = TEXT_TO_IMAGE_BATCH_PARAMS image_params = TEXT_TO_IMAGE_IMAGE_PARAMS image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS callback_cfg_params = TEXT_TO_IMAGE_CALLBACK_CFG_PARAMS.union({"add_text_embeds", "add_time_ids"}) supports_dduf = False test_layerwise_casting = True def get_dummy_components(self, time_cond_proj_dim=None): torch.manual_seed(0) unet = UNet2DConditionModel( block_out_channels=(2, 4), layers_per_block=2, time_cond_proj_dim=time_cond_proj_dim, sample_size=32, in_channels=4, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), # specific config below attention_head_dim=(2, 4), use_linear_projection=True, addition_embed_type="text_time", addition_time_embed_dim=8, transformer_layers_per_block=(1, 2), projection_class_embeddings_input_dim=56, cross_attention_dim=8, norm_num_groups=1, ) scheduler = EulerDiscreteScheduler( beta_start=0.00085, beta_end=0.012, steps_offset=1, beta_schedule="scaled_linear", timestep_spacing="leading", ) torch.manual_seed(0) vae = AutoencoderKL( block_out_channels=[32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, sample_size=128, ) torch.manual_seed(0) text_encoder = ChatGLMModel.from_pretrained( "hf-internal-testing/tiny-random-chatglm3-6b", torch_dtype=torch.float32 ) tokenizer = ChatGLMTokenizer.from_pretrained("hf-internal-testing/tiny-random-chatglm3-6b") components = { "unet": unet, "scheduler": scheduler, "vae": vae, "text_encoder": text_encoder, "tokenizer": tokenizer, "image_encoder": None, "feature_extractor": None, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "A painting of a squirrel eating a burger", "generator": generator, "num_inference_steps": 2, "guidance_scale": 5.0, "output_type": "np", } return inputs def test_inference(self): device = "cpu" components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) image = 
pipe(**inputs).images image_slice = image[0, -3:, -3:, -1] self.assertEqual(image.shape, (1, 64, 64, 3)) expected_slice = np.array( [0.26413745, 0.4425478, 0.4102801, 0.42693347, 0.52529025, 0.3867405, 0.47512037, 0.41538602, 0.43855375] ) max_diff = np.abs(image_slice.flatten() - expected_slice).max() self.assertLessEqual(max_diff, 1e-3) def test_save_load_optional_components(self): super().test_save_load_optional_components(expected_max_difference=2e-4) def test_save_load_float16(self): super().test_save_load_float16(expected_max_diff=2e-1) def test_inference_batch_single_identical(self): self._test_inference_batch_single_identical(expected_max_diff=5e-3)
diffusers/tests/pipelines/kolors/test_kolors.py/0
{ "file_path": "diffusers/tests/pipelines/kolors/test_kolors.py", "repo_id": "diffusers", "token_count": 2399 }
194
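For context beyond the dummy-component fast test above, a typical end-to-end use of `KolorsPipeline` looks roughly like the sketch below. The `Kwai-Kolors/Kolors-diffusers` checkpoint name, the fp16 variant, and the sampling settings are assumptions rather than anything the test pins down.

```python
import torch

from diffusers import KolorsPipeline

# Assumption: the public Kolors checkpoint in diffusers format and a CUDA device.
pipe = KolorsPipeline.from_pretrained(
    "Kwai-Kolors/Kolors-diffusers", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

image = pipe(
    prompt="A painting of a squirrel eating a burger",
    negative_prompt="",
    guidance_scale=5.0,  # matches the default used in the fast test above
    num_inference_steps=50,
).images[0]
image.save("kolors.png")
```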
# These are canonical sets of parameters for different types of pipelines. # They are set on subclasses of `PipelineTesterMixin` as `params` and # `batch_params`. # # If a pipeline's set of arguments has minor changes from one of the common sets # of arguments, do not make modifications to the existing common sets of arguments. # I.e. a text to image pipeline with non-configurable height and width arguments # should set its attribute as `params = TEXT_TO_IMAGE_PARAMS - {'height', 'width'}`. TEXT_TO_IMAGE_PARAMS = frozenset( [ "prompt", "height", "width", "guidance_scale", "negative_prompt", "prompt_embeds", "negative_prompt_embeds", "cross_attention_kwargs", ] ) IMAGE_VARIATION_PARAMS = frozenset( [ "image", "height", "width", "guidance_scale", ] ) TEXT_GUIDED_IMAGE_VARIATION_PARAMS = frozenset( [ "prompt", "image", "height", "width", "guidance_scale", "negative_prompt", "prompt_embeds", "negative_prompt_embeds", ] ) TEXT_GUIDED_IMAGE_INPAINTING_PARAMS = frozenset( [ # Text guided image variation with an image mask "prompt", "image", "mask_image", "height", "width", "guidance_scale", "negative_prompt", "prompt_embeds", "negative_prompt_embeds", ] ) IMAGE_INPAINTING_PARAMS = frozenset( [ # image variation with an image mask "image", "mask_image", "height", "width", "guidance_scale", ] ) IMAGE_GUIDED_IMAGE_INPAINTING_PARAMS = frozenset( [ "example_image", "image", "mask_image", "height", "width", "guidance_scale", ] ) UNCONDITIONAL_IMAGE_GENERATION_PARAMS = frozenset(["batch_size"]) CLASS_CONDITIONED_IMAGE_GENERATION_PARAMS = frozenset(["class_labels"]) CLASS_CONDITIONED_IMAGE_GENERATION_BATCH_PARAMS = frozenset(["class_labels"]) TEXT_TO_AUDIO_PARAMS = frozenset( [ "prompt", "audio_length_in_s", "guidance_scale", "negative_prompt", "prompt_embeds", "negative_prompt_embeds", "cross_attention_kwargs", ] ) TOKENS_TO_AUDIO_GENERATION_PARAMS = frozenset(["input_tokens"]) UNCONDITIONAL_AUDIO_GENERATION_PARAMS = frozenset(["batch_size"]) # image params TEXT_TO_IMAGE_IMAGE_PARAMS = frozenset([]) IMAGE_TO_IMAGE_IMAGE_PARAMS = frozenset(["image"]) # batch params TEXT_TO_IMAGE_BATCH_PARAMS = frozenset(["prompt", "negative_prompt"]) IMAGE_VARIATION_BATCH_PARAMS = frozenset(["image"]) TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS = frozenset(["prompt", "image", "negative_prompt"]) TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["prompt", "image", "mask_image", "negative_prompt"]) IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["image", "mask_image"]) IMAGE_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS = frozenset(["example_image", "image", "mask_image"]) UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS = frozenset([]) UNCONDITIONAL_AUDIO_GENERATION_BATCH_PARAMS = frozenset([]) TEXT_TO_AUDIO_BATCH_PARAMS = frozenset(["prompt", "negative_prompt"]) TOKENS_TO_AUDIO_GENERATION_BATCH_PARAMS = frozenset(["input_tokens"]) VIDEO_TO_VIDEO_BATCH_PARAMS = frozenset(["prompt", "negative_prompt", "video"]) # callback params TEXT_TO_IMAGE_CALLBACK_CFG_PARAMS = frozenset(["prompt_embeds"])
diffusers/tests/pipelines/pipeline_params.py/0
{ "file_path": "diffusers/tests/pipelines/pipeline_params.py", "repo_id": "diffusers", "token_count": 1597 }
195
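The module comment above describes how test classes derive their parameter sets. The sketch below (class names hypothetical) shows the two common patterns, including the frozenset subtraction recommended when a pipeline drops an argument; inside the test suite a relative import (`from ..pipeline_params import ...`) is used instead of the absolute path assumed here.

```python
from tests.pipelines.pipeline_params import (  # assumption: run from the repo root
    TEXT_TO_IMAGE_BATCH_PARAMS,
    TEXT_TO_IMAGE_PARAMS,
)


class MyTextToImagePipelineFastTests:  # hypothetical; real classes also inherit PipelineTesterMixin
    # Pipeline accepts the full canonical argument set.
    params = TEXT_TO_IMAGE_PARAMS
    batch_params = TEXT_TO_IMAGE_BATCH_PARAMS


class MyFixedResolutionPipelineFastTests:  # hypothetical: pipeline with non-configurable height/width
    # Per the module comment: subtract arguments rather than editing the shared frozensets.
    params = TEXT_TO_IMAGE_PARAMS - {"height", "width"}
    batch_params = TEXT_TO_IMAGE_BATCH_PARAMS
```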
# coding=utf-8 # Copyright 2025 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import random import tempfile import unittest import numpy as np import torch from PIL import Image from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer from diffusers import AutoencoderKL, DDIMScheduler, DDPMScheduler, StableDiffusionUpscalePipeline, UNet2DConditionModel from diffusers.utils.testing_utils import ( backend_empty_cache, backend_max_memory_allocated, backend_reset_max_memory_allocated, backend_reset_peak_memory_stats, enable_full_determinism, floats_tensor, load_image, load_numpy, require_accelerator, require_torch_accelerator, slow, torch_device, ) enable_full_determinism() class StableDiffusionUpscalePipelineFastTests(unittest.TestCase): def setUp(self): # clean up the VRAM before each test super().setUp() gc.collect() backend_empty_cache(torch_device) def tearDown(self): # clean up the VRAM after each test super().tearDown() gc.collect() backend_empty_cache(torch_device) @property def dummy_image(self): batch_size = 1 num_channels = 3 sizes = (32, 32) image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device) return image @property def dummy_cond_unet_upscale(self): torch.manual_seed(0) model = UNet2DConditionModel( block_out_channels=(32, 32, 64), layers_per_block=2, sample_size=32, in_channels=7, out_channels=4, down_block_types=("DownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D"), up_block_types=("CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "UpBlock2D"), cross_attention_dim=32, # SD2-specific config below attention_head_dim=8, use_linear_projection=True, only_cross_attention=(True, True, False), num_class_embeds=100, ) return model @property def dummy_vae(self): torch.manual_seed(0) model = AutoencoderKL( block_out_channels=[32, 32, 64], in_channels=3, out_channels=3, down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D"], up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"], latent_channels=4, ) return model @property def dummy_text_encoder(self): torch.manual_seed(0) config = CLIPTextConfig( bos_token_id=0, eos_token_id=2, hidden_size=32, intermediate_size=37, layer_norm_eps=1e-05, num_attention_heads=4, num_hidden_layers=5, pad_token_id=1, vocab_size=1000, # SD2-specific config below hidden_act="gelu", projection_dim=512, ) return CLIPTextModel(config) def test_stable_diffusion_upscale(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator unet = self.dummy_cond_unet_upscale low_res_scheduler = DDPMScheduler() scheduler = DDIMScheduler(prediction_type="v_prediction") vae = self.dummy_vae text_encoder = self.dummy_text_encoder tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64)) # make sure here that pndm scheduler skips prk sd_pipe = StableDiffusionUpscalePipeline( 
unet=unet, low_res_scheduler=low_res_scheduler, scheduler=scheduler, vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, max_noise_level=350, ) sd_pipe = sd_pipe.to(device) sd_pipe.set_progress_bar_config(disable=None) prompt = "A painting of a squirrel eating a burger" generator = torch.Generator(device=device).manual_seed(0) output = sd_pipe( [prompt], image=low_res_image, generator=generator, guidance_scale=6.0, noise_level=20, num_inference_steps=2, output_type="np", ) image = output.images generator = torch.Generator(device=device).manual_seed(0) image_from_tuple = sd_pipe( [prompt], image=low_res_image, generator=generator, guidance_scale=6.0, noise_level=20, num_inference_steps=2, output_type="np", return_dict=False, )[0] image_slice = image[0, -3:, -3:, -1] image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] expected_height_width = low_res_image.size[0] * 4 assert image.shape == (1, expected_height_width, expected_height_width, 3) expected_slice = np.array([0.3113, 0.3910, 0.4272, 0.4859, 0.5061, 0.4652, 0.5362, 0.5715, 0.5661]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2 def test_stable_diffusion_upscale_batch(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator unet = self.dummy_cond_unet_upscale low_res_scheduler = DDPMScheduler() scheduler = DDIMScheduler(prediction_type="v_prediction") vae = self.dummy_vae text_encoder = self.dummy_text_encoder tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64)) # make sure here that pndm scheduler skips prk sd_pipe = StableDiffusionUpscalePipeline( unet=unet, low_res_scheduler=low_res_scheduler, scheduler=scheduler, vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, max_noise_level=350, ) sd_pipe = sd_pipe.to(device) sd_pipe.set_progress_bar_config(disable=None) prompt = "A painting of a squirrel eating a burger" output = sd_pipe( 2 * [prompt], image=2 * [low_res_image], guidance_scale=6.0, noise_level=20, num_inference_steps=2, output_type="np", ) image = output.images assert image.shape[0] == 2 generator = torch.Generator(device=device).manual_seed(0) output = sd_pipe( [prompt], image=low_res_image, generator=generator, num_images_per_prompt=2, guidance_scale=6.0, noise_level=20, num_inference_steps=2, output_type="np", ) image = output.images assert image.shape[0] == 2 def test_stable_diffusion_upscale_prompt_embeds(self): device = "cpu" # ensure determinism for the device-dependent torch.Generator unet = self.dummy_cond_unet_upscale low_res_scheduler = DDPMScheduler() scheduler = DDIMScheduler(prediction_type="v_prediction") vae = self.dummy_vae text_encoder = self.dummy_text_encoder tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64)) # make sure here that pndm scheduler skips prk sd_pipe = StableDiffusionUpscalePipeline( unet=unet, low_res_scheduler=low_res_scheduler, scheduler=scheduler, vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, max_noise_level=350, ) sd_pipe = sd_pipe.to(device) sd_pipe.set_progress_bar_config(disable=None) prompt = "A painting of a squirrel eating a burger" generator = torch.Generator(device=device).manual_seed(0) output = sd_pipe( 
[prompt], image=low_res_image, generator=generator, guidance_scale=6.0, noise_level=20, num_inference_steps=2, output_type="np", ) image = output.images generator = torch.Generator(device=device).manual_seed(0) prompt_embeds, negative_prompt_embeds = sd_pipe.encode_prompt(prompt, device, 1, False) if negative_prompt_embeds is not None: prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) image_from_prompt_embeds = sd_pipe( prompt_embeds=prompt_embeds, image=[low_res_image], generator=generator, guidance_scale=6.0, noise_level=20, num_inference_steps=2, output_type="np", return_dict=False, )[0] image_slice = image[0, -3:, -3:, -1] image_from_prompt_embeds_slice = image_from_prompt_embeds[0, -3:, -3:, -1] expected_height_width = low_res_image.size[0] * 4 assert image.shape == (1, expected_height_width, expected_height_width, 3) expected_slice = np.array([0.3113, 0.3910, 0.4272, 0.4859, 0.5061, 0.4652, 0.5362, 0.5715, 0.5661]) assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 assert np.abs(image_from_prompt_embeds_slice.flatten() - expected_slice).max() < 1e-2 @require_accelerator def test_stable_diffusion_upscale_fp16(self): """Test that stable diffusion upscale works with fp16""" unet = self.dummy_cond_unet_upscale low_res_scheduler = DDPMScheduler() scheduler = DDIMScheduler(prediction_type="v_prediction") vae = self.dummy_vae text_encoder = self.dummy_text_encoder tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64)) # put models in fp16, except vae as it overflows in fp16 unet = unet.half() text_encoder = text_encoder.half() # make sure here that pndm scheduler skips prk sd_pipe = StableDiffusionUpscalePipeline( unet=unet, low_res_scheduler=low_res_scheduler, scheduler=scheduler, vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, max_noise_level=350, ) sd_pipe = sd_pipe.to(torch_device) sd_pipe.set_progress_bar_config(disable=None) prompt = "A painting of a squirrel eating a burger" generator = torch.manual_seed(0) image = sd_pipe( [prompt], image=low_res_image, generator=generator, num_inference_steps=2, output_type="np", ).images expected_height_width = low_res_image.size[0] * 4 assert image.shape == (1, expected_height_width, expected_height_width, 3) def test_stable_diffusion_upscale_from_save_pretrained(self): pipes = [] device = "cpu" # ensure determinism for the device-dependent torch.Generator low_res_scheduler = DDPMScheduler() scheduler = DDIMScheduler(prediction_type="v_prediction") tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") # make sure here that pndm scheduler skips prk sd_pipe = StableDiffusionUpscalePipeline( unet=self.dummy_cond_unet_upscale, low_res_scheduler=low_res_scheduler, scheduler=scheduler, vae=self.dummy_vae, text_encoder=self.dummy_text_encoder, tokenizer=tokenizer, max_noise_level=350, ) sd_pipe = sd_pipe.to(device) pipes.append(sd_pipe) with tempfile.TemporaryDirectory() as tmpdirname: sd_pipe.save_pretrained(tmpdirname) sd_pipe = StableDiffusionUpscalePipeline.from_pretrained(tmpdirname).to(device) pipes.append(sd_pipe) prompt = "A painting of a squirrel eating a burger" image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0] low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64)) image_slices = [] for pipe in pipes: generator = torch.Generator(device=device).manual_seed(0) image = pipe( [prompt], 
image=low_res_image, generator=generator, guidance_scale=6.0, noise_level=20, num_inference_steps=2, output_type="np", ).images image_slices.append(image[0, -3:, -3:, -1].flatten()) assert np.abs(image_slices[0] - image_slices[1]).max() < 1e-3 @slow @require_torch_accelerator class StableDiffusionUpscalePipelineIntegrationTests(unittest.TestCase): def setUp(self): # clean up the VRAM before each test super().setUp() gc.collect() backend_empty_cache(torch_device) def tearDown(self): # clean up the VRAM after each test super().tearDown() gc.collect() backend_empty_cache(torch_device) def test_stable_diffusion_upscale_pipeline(self): image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/sd2-upscale/low_res_cat.png" ) expected_image = load_numpy( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale" "/upsampled_cat.npy" ) model_id = "stabilityai/stable-diffusion-x4-upscaler" pipe = StableDiffusionUpscalePipeline.from_pretrained(model_id) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) pipe.enable_attention_slicing() prompt = "a cat sitting on a park bench" generator = torch.manual_seed(0) output = pipe( prompt=prompt, image=image, generator=generator, output_type="np", ) image = output.images[0] assert image.shape == (512, 512, 3) assert np.abs(expected_image - image).max() < 1e-3 def test_stable_diffusion_upscale_pipeline_fp16(self): image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/sd2-upscale/low_res_cat.png" ) expected_image = load_numpy( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale" "/upsampled_cat_fp16.npy" ) model_id = "stabilityai/stable-diffusion-x4-upscaler" pipe = StableDiffusionUpscalePipeline.from_pretrained( model_id, torch_dtype=torch.float16, ) pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) pipe.enable_attention_slicing() prompt = "a cat sitting on a park bench" generator = torch.manual_seed(0) output = pipe( prompt=prompt, image=image, generator=generator, output_type="np", ) image = output.images[0] assert image.shape == (512, 512, 3) assert np.abs(expected_image - image).max() < 5e-1 def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self): backend_empty_cache(torch_device) backend_reset_max_memory_allocated(torch_device) backend_reset_peak_memory_stats(torch_device) image = load_image( "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/sd2-upscale/low_res_cat.png" ) model_id = "stabilityai/stable-diffusion-x4-upscaler" pipe = StableDiffusionUpscalePipeline.from_pretrained( model_id, torch_dtype=torch.float16, ) pipe.set_progress_bar_config(disable=None) pipe.enable_attention_slicing(1) pipe.enable_sequential_cpu_offload(device=torch_device) prompt = "a cat sitting on a park bench" generator = torch.manual_seed(0) _ = pipe( prompt=prompt, image=image, generator=generator, num_inference_steps=5, output_type="np", ) mem_bytes = backend_max_memory_allocated(torch_device) # make sure that less than 2.9 GB is allocated assert mem_bytes < 2.9 * 10**9
diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_upscale.py/0
{ "file_path": "diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_upscale.py", "repo_id": "diffusers", "token_count": 8527 }
196
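Outside the test harness, the same upscaling pipeline is driven roughly as follows. The model id and example image URL are taken from the integration tests above; the device and dtype choices are assumptions.

```python
import torch

from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
)
pipe.to("cuda")

low_res = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
)
image = pipe(prompt="a cat sitting on a park bench", image=low_res, noise_level=20).images[0]
# The output is 4x the input resolution (e.g. 128x128 -> 512x512), as the tests assert.
image.save("upscaled_cat.png")
```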
# Copyright 2025 The HuggingFace Team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import tempfile import unittest import numpy as np import torch from transformers import AutoTokenizer, T5EncoderModel from diffusers import AutoencoderKLWan, FlowMatchEulerDiscreteScheduler, WanPipeline, WanTransformer3DModel from diffusers.utils.testing_utils import ( backend_empty_cache, enable_full_determinism, require_torch_accelerator, slow, torch_device, ) from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS from ..test_pipelines_common import PipelineTesterMixin enable_full_determinism() class WanPipelineFastTests(PipelineTesterMixin, unittest.TestCase): pipeline_class = WanPipeline params = TEXT_TO_IMAGE_PARAMS - {"cross_attention_kwargs"} batch_params = TEXT_TO_IMAGE_BATCH_PARAMS image_params = TEXT_TO_IMAGE_IMAGE_PARAMS image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS required_optional_params = frozenset( [ "num_inference_steps", "generator", "latents", "return_dict", "callback_on_step_end", "callback_on_step_end_tensor_inputs", ] ) test_xformers_attention = False supports_dduf = False def get_dummy_components(self): torch.manual_seed(0) vae = AutoencoderKLWan( base_dim=3, z_dim=16, dim_mult=[1, 1, 1, 1], num_res_blocks=1, temperal_downsample=[False, True, True], ) torch.manual_seed(0) # TODO: impl FlowDPMSolverMultistepScheduler scheduler = FlowMatchEulerDiscreteScheduler(shift=7.0) text_encoder = T5EncoderModel.from_pretrained("hf-internal-testing/tiny-random-t5") tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-t5") torch.manual_seed(0) transformer = WanTransformer3DModel( patch_size=(1, 2, 2), num_attention_heads=2, attention_head_dim=12, in_channels=16, out_channels=16, text_dim=32, freq_dim=256, ffn_dim=32, num_layers=2, cross_attn_norm=True, qk_norm="rms_norm_across_heads", rope_max_seq_len=32, ) components = { "transformer": transformer, "vae": vae, "scheduler": scheduler, "text_encoder": text_encoder, "tokenizer": tokenizer, "transformer_2": None, } return components def get_dummy_inputs(self, device, seed=0): if str(device).startswith("mps"): generator = torch.manual_seed(seed) else: generator = torch.Generator(device=device).manual_seed(seed) inputs = { "prompt": "dance monkey", "negative_prompt": "negative", # TODO "generator": generator, "num_inference_steps": 2, "guidance_scale": 6.0, "height": 16, "width": 16, "num_frames": 9, "max_sequence_length": 16, "output_type": "pt", } return inputs def test_inference(self): device = "cpu" components = self.get_dummy_components() pipe = self.pipeline_class(**components) pipe.to(device) pipe.set_progress_bar_config(disable=None) inputs = self.get_dummy_inputs(device) video = pipe(**inputs).frames generated_video = video[0] self.assertEqual(generated_video.shape, (9, 3, 16, 16)) # fmt: off expected_slice = torch.tensor([0.4525, 0.452, 0.4485, 0.4534, 0.4524, 0.4529, 0.454, 0.453, 0.5127, 0.5326, 0.5204, 0.5253, 0.5439, 0.5424, 0.5133, 0.5078]) # fmt: on 
generated_slice = generated_video.flatten() generated_slice = torch.cat([generated_slice[:8], generated_slice[-8:]]) self.assertTrue(torch.allclose(generated_slice, expected_slice, atol=1e-3)) @unittest.skip("Test not supported") def test_attention_slicing_forward_pass(self): pass # _optional_components include transformer, transformer_2, but only transformer_2 is optional for this wan2.1 t2v pipeline def test_save_load_optional_components(self, expected_max_difference=1e-4): optional_component = "transformer_2" components = self.get_dummy_components() components[optional_component] = None pipe = self.pipeline_class(**components) for component in pipe.components.values(): if hasattr(component, "set_default_attn_processor"): component.set_default_attn_processor() pipe.to(torch_device) pipe.set_progress_bar_config(disable=None) generator_device = "cpu" inputs = self.get_dummy_inputs(generator_device) torch.manual_seed(0) output = pipe(**inputs)[0] with tempfile.TemporaryDirectory() as tmpdir: pipe.save_pretrained(tmpdir, safe_serialization=False) pipe_loaded = self.pipeline_class.from_pretrained(tmpdir) for component in pipe_loaded.components.values(): if hasattr(component, "set_default_attn_processor"): component.set_default_attn_processor() pipe_loaded.to(torch_device) pipe_loaded.set_progress_bar_config(disable=None) self.assertTrue( getattr(pipe_loaded, optional_component) is None, f"`{optional_component}` did not stay set to None after loading.", ) inputs = self.get_dummy_inputs(generator_device) torch.manual_seed(0) output_loaded = pipe_loaded(**inputs)[0] max_diff = np.abs(output.detach().cpu().numpy() - output_loaded.detach().cpu().numpy()).max() self.assertLess(max_diff, expected_max_difference) @slow @require_torch_accelerator class WanPipelineIntegrationTests(unittest.TestCase): prompt = "A painting of a squirrel eating a burger." def setUp(self): super().setUp() gc.collect() backend_empty_cache(torch_device) def tearDown(self): super().tearDown() gc.collect() backend_empty_cache(torch_device) @unittest.skip("TODO: test needs to be implemented") def test_Wanx(self): pass
diffusers/tests/pipelines/wan/test_wan.py/0
{ "file_path": "diffusers/tests/pipelines/wan/test_wan.py", "repo_id": "diffusers", "token_count": 3135 }
197
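Beyond the tiny-model fast test, `WanPipeline` takes the same arguments at realistic sizes. The checkpoint id, resolution, and frame count in the sketch below are assumptions; `export_to_video` is diffusers' standard helper for writing frame sequences.

```python
import torch

from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Assumption: a public Wan 2.1 text-to-video checkpoint in diffusers format.
pipe = WanPipeline.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")

frames = pipe(
    prompt="dance monkey",
    negative_prompt="blurry, low quality",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=6.0,
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "wan_output.mp4", fps=16)
```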
# coding=utf-8 # Copyright 2025 The HuggingFace Team Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a clone of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import inspect import torch from diffusers import DiffusionPipeline from diffusers.utils.testing_utils import backend_empty_cache, require_torch_accelerator, slow, torch_device @require_torch_accelerator @slow class QuantCompileTests: @property def quantization_config(self): raise NotImplementedError( "This property should be implemented in the subclass to return the appropriate quantization config." ) def setUp(self): super().setUp() gc.collect() backend_empty_cache(torch_device) torch.compiler.reset() def tearDown(self): super().tearDown() gc.collect() backend_empty_cache(torch_device) torch.compiler.reset() def _init_pipeline(self, quantization_config, torch_dtype): pipe = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-3-medium-diffusers", quantization_config=quantization_config, torch_dtype=torch_dtype, ) return pipe def _test_torch_compile(self, torch_dtype=torch.bfloat16): pipe = self._init_pipeline(self.quantization_config, torch_dtype).to(torch_device) # `fullgraph=True` ensures no graph breaks pipe.transformer.compile(fullgraph=True) # small resolutions to ensure speedy execution. with torch._dynamo.config.patch(error_on_recompile=True): pipe("a dog", num_inference_steps=2, max_sequence_length=16, height=256, width=256) def _test_torch_compile_with_cpu_offload(self, torch_dtype=torch.bfloat16): pipe = self._init_pipeline(self.quantization_config, torch_dtype) pipe.enable_model_cpu_offload() # regional compilation is better for offloading. # see: https://pytorch.org/blog/torch-compile-and-diffusers-a-hands-on-guide-to-peak-performance/ if getattr(pipe.transformer, "_repeated_blocks"): pipe.transformer.compile_repeated_blocks(fullgraph=True) else: pipe.transformer.compile() # small resolutions to ensure speedy execution. pipe("a dog", num_inference_steps=2, max_sequence_length=16, height=256, width=256) def _test_torch_compile_with_group_offload_leaf(self, torch_dtype=torch.bfloat16, *, use_stream: bool = False): torch._dynamo.config.cache_size_limit = 1000 pipe = self._init_pipeline(self.quantization_config, torch_dtype) group_offload_kwargs = { "onload_device": torch.device(torch_device), "offload_device": torch.device("cpu"), "offload_type": "leaf_level", "use_stream": use_stream, } pipe.transformer.enable_group_offload(**group_offload_kwargs) pipe.transformer.compile() for name, component in pipe.components.items(): if name != "transformer" and isinstance(component, torch.nn.Module): if torch.device(component.device).type == "cpu": component.to(torch_device) # small resolutions to ensure speedy execution. 
pipe("a dog", num_inference_steps=2, max_sequence_length=16, height=256, width=256) def test_torch_compile(self): self._test_torch_compile() def test_torch_compile_with_cpu_offload(self): self._test_torch_compile_with_cpu_offload() def test_torch_compile_with_group_offload_leaf(self, use_stream=False): for cls in inspect.getmro(self.__class__): if "test_torch_compile_with_group_offload_leaf" in cls.__dict__ and cls is not QuantCompileTests: return self._test_torch_compile_with_group_offload_leaf(use_stream=use_stream)
diffusers/tests/quantization/test_torch_compile_utils.py/0
{ "file_path": "diffusers/tests/quantization/test_torch_compile_utils.py", "repo_id": "diffusers", "token_count": 1727 }
198
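A concrete test class only needs to mix `QuantCompileTests` with `unittest.TestCase` and supply `quantization_config`. The sketch below assumes diffusers' pipeline-level `PipelineQuantizationConfig` with a bitsandbytes backend; the import path and constructor arguments are assumptions and may differ between diffusers versions.

```python
import unittest

from diffusers.quantizers import PipelineQuantizationConfig  # assumption: import path may vary by version

from .test_torch_compile_utils import QuantCompileTests


class Bnb4BitCompileTests(QuantCompileTests, unittest.TestCase):
    @property
    def quantization_config(self):
        # Assumption: quantize only the transformer to 4-bit NF4 via bitsandbytes.
        return PipelineQuantizationConfig(
            quant_backend="bitsandbytes_4bit",
            quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4"},
            components_to_quantize=["transformer"],
        )
```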
import torch from diffusers import SASolverScheduler from diffusers.utils.testing_utils import require_torchsde, torch_device from .test_schedulers import SchedulerCommonTest @require_torchsde class SASolverSchedulerTest(SchedulerCommonTest): scheduler_classes = (SASolverScheduler,) forward_default_kwargs = (("num_inference_steps", 10),) num_inference_steps = 10 def get_scheduler_config(self, **kwargs): config = { "num_train_timesteps": 1100, "beta_start": 0.0001, "beta_end": 0.02, "beta_schedule": "linear", } config.update(**kwargs) return config def test_step_shape(self): kwargs = dict(self.forward_default_kwargs) num_inference_steps = kwargs.pop("num_inference_steps", None) for scheduler_class in self.scheduler_classes: scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) sample = self.dummy_sample residual = 0.1 * sample if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): scheduler.set_timesteps(num_inference_steps) elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): kwargs["num_inference_steps"] = num_inference_steps # copy over dummy past residuals (must be done after set_timesteps) dummy_past_residuals = [residual + 0.2, residual + 0.15, residual + 0.10] scheduler.model_outputs = dummy_past_residuals[ : max( scheduler.config.predictor_order, scheduler.config.corrector_order - 1, ) ] time_step_0 = scheduler.timesteps[5] time_step_1 = scheduler.timesteps[6] output_0 = scheduler.step(residual, time_step_0, sample, **kwargs).prev_sample output_1 = scheduler.step(residual, time_step_1, sample, **kwargs).prev_sample self.assertEqual(output_0.shape, sample.shape) self.assertEqual(output_0.shape, output_1.shape) def test_timesteps(self): for timesteps in [10, 50, 100, 1000]: self.check_over_configs(num_train_timesteps=timesteps) def test_betas(self): for beta_start, beta_end in zip([0.00001, 0.0001, 0.001], [0.0002, 0.002, 0.02]): self.check_over_configs(beta_start=beta_start, beta_end=beta_end) def test_schedules(self): for schedule in ["linear", "scaled_linear"]: self.check_over_configs(beta_schedule=schedule) def test_prediction_type(self): for prediction_type in ["epsilon", "v_prediction"]: self.check_over_configs(prediction_type=prediction_type) def test_full_loop_no_noise(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) scheduler.set_timesteps(self.num_inference_steps) model = self.dummy_model() sample = self.dummy_sample_deter * scheduler.init_noise_sigma sample = sample.to(torch_device) generator = torch.manual_seed(0) for i, t in enumerate(scheduler.timesteps): sample = scheduler.scale_model_input(sample, t, generator=generator) model_output = model(sample, t) output = scheduler.step(model_output, t, sample) sample = output.prev_sample result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) if torch_device in ["cpu"]: assert abs(result_sum.item() - 337.394287109375) < 1e-2 assert abs(result_mean.item() - 0.43931546807289124) < 1e-3 elif torch_device in ["cuda"]: assert abs(result_sum.item() - 329.1999816894531) < 1e-2 assert abs(result_mean.item() - 0.4286458194255829) < 1e-3 def test_full_loop_with_v_prediction(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config(prediction_type="v_prediction") scheduler = scheduler_class(**scheduler_config) scheduler.set_timesteps(self.num_inference_steps) model = self.dummy_model() 
sample = self.dummy_sample_deter * scheduler.init_noise_sigma sample = sample.to(torch_device) generator = torch.manual_seed(0) for i, t in enumerate(scheduler.timesteps): sample = scheduler.scale_model_input(sample, t, generator=generator) model_output = model(sample, t) output = scheduler.step(model_output, t, sample) sample = output.prev_sample result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) if torch_device in ["cpu"]: assert abs(result_sum.item() - 193.1467742919922) < 1e-2 assert abs(result_mean.item() - 0.2514931857585907) < 1e-3 elif torch_device in ["cuda"]: assert abs(result_sum.item() - 193.4154052734375) < 1e-2 assert abs(result_mean.item() - 0.2518429756164551) < 1e-3 def test_full_loop_device(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config) scheduler.set_timesteps(self.num_inference_steps, device=torch_device) model = self.dummy_model() sample = self.dummy_sample_deter.to(torch_device) * scheduler.init_noise_sigma generator = torch.manual_seed(0) for t in scheduler.timesteps: sample = scheduler.scale_model_input(sample, t) model_output = model(sample, t) output = scheduler.step(model_output, t, sample, generator=generator) sample = output.prev_sample result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) if torch_device in ["cpu"]: assert abs(result_sum.item() - 337.394287109375) < 1e-2 assert abs(result_mean.item() - 0.43931546807289124) < 1e-3 elif torch_device in ["cuda"]: assert abs(result_sum.item() - 337.394287109375) < 1e-2 assert abs(result_mean.item() - 0.4393154978752136) < 1e-3 def test_full_loop_device_karras_sigmas(self): scheduler_class = self.scheduler_classes[0] scheduler_config = self.get_scheduler_config() scheduler = scheduler_class(**scheduler_config, use_karras_sigmas=True) scheduler.set_timesteps(self.num_inference_steps, device=torch_device) model = self.dummy_model() sample = self.dummy_sample_deter.to(torch_device) * scheduler.init_noise_sigma sample = sample.to(torch_device) generator = torch.manual_seed(0) for t in scheduler.timesteps: sample = scheduler.scale_model_input(sample, t) model_output = model(sample, t) output = scheduler.step(model_output, t, sample, generator=generator) sample = output.prev_sample result_sum = torch.sum(torch.abs(sample)) result_mean = torch.mean(torch.abs(sample)) if torch_device in ["cpu"]: assert abs(result_sum.item() - 837.2554931640625) < 1e-2 assert abs(result_mean.item() - 1.0901764631271362) < 1e-2 elif torch_device in ["cuda"]: assert abs(result_sum.item() - 837.25537109375) < 1e-2 assert abs(result_mean.item() - 1.0901763439178467) < 1e-2 def test_beta_sigmas(self): self.check_over_configs(use_beta_sigmas=True) def test_exponential_sigmas(self): self.check_over_configs(use_exponential_sigmas=True)
diffusers/tests/schedulers/test_scheduler_sasolver.py/0
{ "file_path": "diffusers/tests/schedulers/test_scheduler_sasolver.py", "repo_id": "diffusers", "token_count": 3611 }
199
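Distilled from the full-loop tests above, this is the generic denoising loop the scheduler expects (it requires `torchsde` to be installed). The `denoiser` below is a dummy stand-in for a trained model, not a real API.

```python
import torch

from diffusers import SASolverScheduler

scheduler = SASolverScheduler(
    num_train_timesteps=1100, beta_start=0.0001, beta_end=0.02, beta_schedule="linear"
)
scheduler.set_timesteps(num_inference_steps=10)


def denoiser(x, t):
    # Dummy stand-in for a trained UNet/transformer; returns random "noise predictions".
    return torch.randn_like(x)


sample = torch.randn(1, 3, 32, 32) * scheduler.init_noise_sigma  # start from pure noise
generator = torch.manual_seed(0)

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = denoiser(model_input, t)
    sample = scheduler.step(noise_pred, t, sample, generator=generator).prev_sample
```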
# coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import glob import os import re import subprocess # All paths are set with the intent you should run this script from the root of the repo with the command # python utils/check_copies.py DIFFUSERS_PATH = "src/diffusers" REPO_PATH = "." def _should_continue(line, indent): return line.startswith(indent) or len(line) <= 1 or re.search(r"^\s*\)(\s*->.*:|:)\s*$", line) is not None def find_code_in_diffusers(object_name): """Find and return the code source code of `object_name`.""" parts = object_name.split(".") i = 0 # First let's find the module where our object lives. module = parts[i] while i < len(parts) and not os.path.isfile(os.path.join(DIFFUSERS_PATH, f"{module}.py")): i += 1 if i < len(parts): module = os.path.join(module, parts[i]) if i >= len(parts): raise ValueError(f"`object_name` should begin with the name of a module of diffusers but got {object_name}.") with open( os.path.join(DIFFUSERS_PATH, f"{module}.py"), "r", encoding="utf-8", newline="\n", ) as f: lines = f.readlines() # Now let's find the class / func in the code! indent = "" line_index = 0 for name in parts[i + 1 :]: while ( line_index < len(lines) and re.search(rf"^{indent}(class|def)\s+{name}(\(|\:)", lines[line_index]) is None ): line_index += 1 indent += " " line_index += 1 if line_index >= len(lines): raise ValueError(f" {object_name} does not match any function or class in {module}.") # We found the beginning of the class / func, now let's find the end (when the indent diminishes). start_index = line_index while line_index < len(lines) and _should_continue(lines[line_index], indent): line_index += 1 # Clean up empty lines at the end (if any). while len(lines[line_index - 1]) <= 1: line_index -= 1 code_lines = lines[start_index:line_index] return "".join(code_lines) _re_copy_warning = re.compile(r"^(\s*)#\s*Copied from\s+diffusers\.(\S+\.\S+)\s*($|\S.*$)") _re_replace_pattern = re.compile(r"^\s*(\S+)->(\S+)(\s+.*|$)") _re_fill_pattern = re.compile(r"<FILL\s+[^>]*>") def get_indent(code): lines = code.split("\n") idx = 0 while idx < len(lines) and len(lines[idx]) == 0: idx += 1 if idx < len(lines): return re.search(r"^(\s*)\S", lines[idx]).groups()[0] return "" def run_ruff(code): command = ["ruff", "format", "-", "--config", "pyproject.toml", "--silent"] process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) stdout, _ = process.communicate(input=code.encode()) return stdout.decode() def stylify(code: str) -> str: """ Applies the ruff part of our `make style` command to some code. This formats the code using `ruff format`. As `ruff` does not provide a python api this cannot be done on the fly. Args: code (`str`): The code to format. Returns: `str`: The formatted code. 
""" has_indent = len(get_indent(code)) > 0 if has_indent: code = f"class Bla:\n{code}" formatted_code = run_ruff(code) return formatted_code[len("class Bla:\n") :] if has_indent else formatted_code def is_copy_consistent(filename, overwrite=False): """ Check if the code commented as a copy in `filename` matches the original. Return the differences or overwrites the content depending on `overwrite`. """ with open(filename, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() diffs = [] line_index = 0 # Not a for loop cause `lines` is going to change (if `overwrite=True`). while line_index < len(lines): search = _re_copy_warning.search(lines[line_index]) if search is None: line_index += 1 continue # There is some copied code here, let's retrieve the original. indent, object_name, replace_pattern = search.groups() theoretical_code = find_code_in_diffusers(object_name) theoretical_indent = get_indent(theoretical_code) start_index = line_index + 1 if indent == theoretical_indent else line_index + 2 indent = theoretical_indent line_index = start_index # Loop to check the observed code, stop when indentation diminishes or if we see a End copy comment. should_continue = True while line_index < len(lines) and should_continue: line_index += 1 if line_index >= len(lines): break line = lines[line_index] should_continue = _should_continue(line, indent) and re.search(f"^{indent}# End copy", line) is None # Clean up empty lines at the end (if any). while len(lines[line_index - 1]) <= 1: line_index -= 1 observed_code_lines = lines[start_index:line_index] observed_code = "".join(observed_code_lines) # Remove any nested `Copied from` comments to avoid circular copies theoretical_code = [line for line in theoretical_code.split("\n") if _re_copy_warning.search(line) is None] theoretical_code = "\n".join(theoretical_code) # Before comparing, use the `replace_pattern` on the original code. if len(replace_pattern) > 0: patterns = replace_pattern.replace("with", "").split(",") patterns = [_re_replace_pattern.search(p) for p in patterns] for pattern in patterns: if pattern is None: continue obj1, obj2, option = pattern.groups() theoretical_code = re.sub(obj1, obj2, theoretical_code) if option.strip() == "all-casing": theoretical_code = re.sub(obj1.lower(), obj2.lower(), theoretical_code) theoretical_code = re.sub(obj1.upper(), obj2.upper(), theoretical_code) # stylify after replacement. To be able to do that, we need the header (class or function definition) # from the previous line theoretical_code = stylify(lines[start_index - 1] + theoretical_code) theoretical_code = theoretical_code[len(lines[start_index - 1]) :] # Test for a diff and act accordingly. if observed_code != theoretical_code: diffs.append([object_name, start_index]) if overwrite: lines = lines[:start_index] + [theoretical_code] + lines[line_index:] line_index = start_index + 1 if overwrite and len(diffs) > 0: # Warn the user a file has been modified. 
print(f"Detected changes, rewriting {filename}.") with open(filename, "w", encoding="utf-8", newline="\n") as f: f.writelines(lines) return diffs def check_copies(overwrite: bool = False): all_files = glob.glob(os.path.join(DIFFUSERS_PATH, "**/*.py"), recursive=True) diffs = [] for filename in all_files: new_diffs = is_copy_consistent(filename, overwrite) diffs += [f"- {filename}: copy does not match {d[0]} at line {d[1]}" for d in new_diffs] if not overwrite and len(diffs) > 0: diff = "\n".join(diffs) raise Exception( "Found the following copy inconsistencies:\n" + diff + "\nRun `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them." ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.", ) args = parser.parse_args() check_copies(args.fix_and_overwrite)
diffusers/utils/check_copies.py/0
{ "file_path": "diffusers/utils/check_copies.py", "repo_id": "diffusers", "token_count": 3397 }
200
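For reference, the comment format this script enforces looks like the sketch below; the referenced objects are illustrative only. An optional `with old->new` suffix applies renames to the original code before comparison, and adding `all-casing` extends the rename to lower- and upper-case variants. `make fix-copies` (i.e. `python utils/check_copies.py --fix_and_overwrite`) rewrites the copied body whenever it drifts from the original.

```python
class MyNewScheduler:  # illustrative class; the copied bodies must stay identical to the originals
    # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.scale_model_input
    def scale_model_input(self, sample, timestep=None):
        return sample

    # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.set_timesteps with DDPMScheduler->MyNewScheduler
    def set_timesteps(self, num_inference_steps, device=None):
        ...
```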
# coding=utf-8 # Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import requests # Configuration LIBRARY_NAME = "diffusers" GITHUB_REPO = "huggingface/diffusers" SLACK_WEBHOOK_URL = os.getenv("SLACK_WEBHOOK_URL") def check_pypi_for_latest_release(library_name): """Check PyPI for the latest release of the library.""" response = requests.get(f"https://pypi.org/pypi/{library_name}/json", timeout=60) if response.status_code == 200: data = response.json() return data["info"]["version"] else: print("Failed to fetch library details from PyPI.") return None def get_github_release_info(github_repo): """Fetch the latest release info from GitHub.""" url = f"https://api.github.com/repos/{github_repo}/releases/latest" response = requests.get(url, timeout=60) if response.status_code == 200: data = response.json() return {"tag_name": data["tag_name"], "url": data["html_url"], "release_time": data["published_at"]} else: print("Failed to fetch release info from GitHub.") return None def notify_slack(webhook_url, library_name, version, release_info): """Send a notification to a Slack channel.""" message = ( f"🚀 New release for {library_name} available: version **{version}** 🎉\n" f"📜 Release Notes: {release_info['url']}\n" f"⏱️ Release time: {release_info['release_time']}" ) payload = {"text": message} response = requests.post(webhook_url, json=payload) if response.status_code == 200: print("Notification sent to Slack successfully.") else: print("Failed to send notification to Slack.") def main(): latest_version = check_pypi_for_latest_release(LIBRARY_NAME) release_info = get_github_release_info(GITHUB_REPO) parsed_version = release_info["tag_name"].replace("v", "") if latest_version and release_info and latest_version == parsed_version: notify_slack(SLACK_WEBHOOK_URL, LIBRARY_NAME, latest_version, release_info) else: print(f"{latest_version=}, {release_info=}, {parsed_version=}") raise ValueError("There were some problems.") if __name__ == "__main__": main()
diffusers/utils/notify_slack_about_release.py/0
{ "file_path": "diffusers/utils/notify_slack_about_release.py", "repo_id": "diffusers", "token_count": 993 }
201
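A minimal way to exercise the release-notification script locally, assuming it is invoked from the diffusers repository root; the webhook URL below is a placeholder.

```python
import os
import subprocess

# Placeholder webhook URL; the script reads it from the SLACK_WEBHOOK_URL environment variable.
env = dict(os.environ, SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ")

# The script raises unless the latest PyPI version matches the latest GitHub release tag.
subprocess.run(["python", "utils/notify_slack_about_release.py"], env=env, check=True)
```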
# How to contribute to 🤗 LeRobot?

Everyone is welcome to contribute, and we value everybody's contribution. Code is thus not the only way to help the community. Answering questions, helping others, reaching out and improving the documentation are immensely valuable to the community.

It also helps us if you spread the word: reference the library from blog posts on the awesome projects it made possible, shout out on Twitter when it has helped you, or simply ⭐️ the repo to say "thank you".

Whichever way you choose to contribute, please be mindful to respect our [code of conduct](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md).

## You can contribute in so many ways!

Some of the ways you can contribute to 🤗 LeRobot:
- Fixing outstanding issues with the existing code.
- Implementing new models, datasets or simulation environments.
- Contributing to the examples or to the documentation.
- Submitting issues related to bugs or desired new features.

Following the guides below, feel free to open issues and PRs and to coordinate your efforts with the community on our [Discord Channel](https://discord.gg/VjFz58wn3R). For specific inquiries, reach out to [Remi Cadene](mailto:remi.cadene@huggingface.co).

If you are not sure how to contribute or want to know the next features we are working on, look on this project page: [LeRobot TODO](https://github.com/orgs/huggingface/projects/46)

## Submitting a new issue or feature request

Do your best to follow these guidelines when submitting an issue or a feature request. It will make it easier for us to come back to you quickly and with good feedback.

### Did you find a bug?

The 🤗 LeRobot library is robust and reliable thanks to the users who notify us of the problems they encounter. So thank you for reporting an issue.

First, we would really appreciate it if you could **make sure the bug was not already reported** (use the search bar on GitHub under Issues).

Did not find it? :( So we can act quickly on it, please follow these steps:
- Include your **OS type and version**, the versions of **Python** and **PyTorch**.
- A short, self-contained code snippet that allows us to reproduce the bug in less than 30s.
- The full traceback if an exception is raised.
- Attach any other additional information, like screenshots, you think may help.

### Do you want a new feature?

A good feature request addresses the following points:

1. Motivation first:
   - Is it related to a problem/frustration with the library? If so, please explain why. Providing a code snippet that demonstrates the problem is best.
   - Is it related to something you would need for a project? We'd love to hear about it!
   - Is it something you worked on and think could benefit the community? Awesome! Tell us what problem it solved for you.
2. Write a _paragraph_ describing the feature.
3. Provide a **code snippet** that demonstrates its future use.
4. In case this is related to a paper, please attach a link.
5. Attach any additional information (drawings, screenshots, etc.) you think may help.

If your issue is well written we're already 80% of the way there by the time you post it.

## Adding new policies, datasets or environments

Look at our implementations for [datasets](./src/lerobot/datasets/), [policies](./src/lerobot/policies/), environments ([aloha](https://github.com/huggingface/gym-aloha), [xarm](https://github.com/huggingface/gym-xarm), [pusht](https://github.com/huggingface/gym-pusht)) and follow the same API design.
When implementing a new dataset loadable with LeRobotDataset follow these steps: - Update `available_datasets_per_env` in `lerobot/__init__.py` When implementing a new environment (e.g. `gym_aloha`), follow these steps: - Update `available_tasks_per_env` and `available_datasets_per_env` in `lerobot/__init__.py` When implementing a new policy class (e.g. `DiffusionPolicy`) follow these steps: - Update `available_policies` and `available_policies_per_env`, in `lerobot/__init__.py` - Set the required `name` class attribute. - Update variables in `tests/test_available.py` by importing your new Policy class ## Submitting a pull request (PR) Before writing code, we strongly advise you to search through the existing PRs or issues to make sure that nobody is already working on the same thing. If you are unsure, it is always a good idea to open an issue to get some feedback. You will need basic `git` proficiency to be able to contribute to 🤗 LeRobot. `git` is not the easiest tool to use but it has the greatest manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro Git](https://git-scm.com/book/en/v2) is a very good reference. Follow these steps to start contributing: 1. Fork the [repository](https://github.com/huggingface/lerobot) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account. 2. Clone your fork to your local disk, and add the base repository as a remote. The following command assumes you have your public SSH key uploaded to GitHub. See the following guide for more [information](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository). ```bash git clone git@github.com:<your Github handle>/lerobot.git cd lerobot git remote add upstream https://github.com/huggingface/lerobot.git ``` 3. Create a new branch to hold your development changes, and do this for every new PR you work on. Start by synchronizing your `main` branch with the `upstream/main` branch (more details in the [GitHub Docs](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork)): ```bash git checkout main git fetch upstream git rebase upstream/main ``` Once your `main` branch is synchronized, create a new branch from it: ```bash git checkout -b a-descriptive-name-for-my-changes ``` 🚨 **Do not** work on the `main` branch. 4. for development, we advise to use a tool like `poetry` or `uv` instead of just `pip` to easily track our dependencies. Follow the instructions to [install poetry](https://python-poetry.org/docs/#installation) (use a version >=2.1.0) or to [install uv](https://docs.astral.sh/uv/getting-started/installation/#installation-methods) if you don't have one of them already. 
Set up a development environment with conda or miniconda: ```bash conda create -y -n lerobot-dev python=3.10 && conda activate lerobot-dev ``` If you're using `uv`, it can manage python versions so you can instead do: ```bash uv venv --python 3.10 && source .venv/bin/activate ``` To develop on 🤗 LeRobot, you will at least need to install the `dev` and `test` extras dependencies along with the core library: using `poetry` ```bash poetry sync --extras "dev test" ``` using `uv` ```bash uv sync --extra dev --extra test ``` You can also install the project with all its dependencies (including environments): using `poetry` ```bash poetry sync --all-extras ``` using `uv` ```bash uv sync --all-extras ``` > **Note:** If you don't install simulation environments with `--all-extras`, the tests that require them will be skipped when running the pytest suite locally. However, they _will_ be tested in the CI. In general, we advise you to install everything and test locally before pushing. Whichever command you chose to install the project (e.g. `poetry sync --all-extras`), you should run it again when pulling code with an updated version of `pyproject.toml` and `poetry.lock` in order to synchronize your virtual environment with the new dependencies. The equivalent of `pip install some-package`, would just be: using `poetry` ```bash poetry add some-package ``` using `uv` ```bash uv add some-package ``` When making changes to the poetry sections of the `pyproject.toml`, you should run the following command to lock dependencies. using `poetry` ```bash poetry lock ``` using `uv` ```bash uv lock ``` 5. Develop the features on your branch. As you work on the features, you should make sure that the test suite passes. You should run the tests impacted by your changes like this (see below an explanation regarding the environment variable): ```bash pytest tests/<TEST_TO_RUN>.py ``` 6. Follow our style. `lerobot` relies on `ruff` to format its source code consistently. Set up [`pre-commit`](https://pre-commit.com/) to run these checks automatically as Git commit hooks. Install `pre-commit` hooks: ```bash pre-commit install ``` You can run these hooks whenever you need on staged files with: ```bash pre-commit ``` Once you're happy with your changes, add changed files using `git add` and make a commit with `git commit` to record your changes locally: ```bash git add modified_file.py git commit ``` Note, if you already committed some changes that have a wrong formatting, you can use: ```bash pre-commit run --all-files ``` Please write [good commit messages](https://chris.beams.io/posts/git-commit/). It is a good idea to sync your copy of the code with the original repository regularly. This way you can quickly account for changes: ```bash git fetch upstream git rebase upstream/main ``` Push the changes to your account using: ```bash git push -u origin a-descriptive-name-for-my-changes ``` 7. Once you are satisfied (**and the checklist below is happy too**), go to the webpage of your fork on GitHub. Click on 'Pull request' to send your changes to the project maintainers for review. 8. It's ok if maintainers ask you for changes. It happens to core contributors too! So everyone can see the changes in the Pull request, work in your local branch and push the changes to your fork. They will automatically appear in the pull request. ### Checklist 1. The title of your pull request should be a summary of its contribution; 2. 
If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people consulting the issue know you are working on it); 3. To indicate a work in progress please prefix the title with `[WIP]`, or preferably mark the PR as a draft PR. These are useful to avoid duplicated work, and to differentiate it from PRs ready to be merged; 4. Make sure existing tests pass; ### Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the [tests folder](https://github.com/huggingface/lerobot/tree/main/tests). Install [git lfs](https://git-lfs.com/) to retrieve test artifacts (if you don't have it already). On Mac: ```bash brew install git-lfs git lfs install ``` On Ubuntu: ```bash sudo apt-get install git-lfs git lfs install ``` Pull artifacts if they're not in [tests/artifacts](tests/artifacts) ```bash git lfs pull ``` We use `pytest` in order to run the tests. From the root of the repository, here's how to run tests with `pytest` for the library: ```bash python -m pytest -sv ./tests ``` You can specify a smaller set of tests in order to test only the feature you're working on.
lerobot/CONTRIBUTING.md/0
{ "file_path": "lerobot/CONTRIBUTING.md", "repo_id": "lerobot", "token_count": 3330 }
202
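A sketch of the registration step described under "Adding new policies, datasets or environments" above; the entries shown for `lerobot/__init__.py` are hypothetical placeholders, not the file's actual contents.

```python
# Hypothetical excerpt of src/lerobot/__init__.py after adding a new policy called "my_policy".
available_policies = [
    "act",
    "diffusion",
    "my_policy",  # new entry; must match the policy class' required `name` attribute
]

available_policies_per_env = {
    "aloha": ["act", "my_policy"],
    "pusht": ["diffusion", "my_policy"],
}
```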
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import abc from typing import Any import numpy as np from .configs import CameraConfig, ColorMode class Camera(abc.ABC): """Base class for camera implementations. Defines a standard interface for camera operations across different backends. Subclasses must implement all abstract methods. Manages basic camera properties (FPS, resolution) and core operations: - Connection/disconnection - Frame capture (sync/async) Attributes: fps (int | None): Configured frames per second width (int | None): Frame width in pixels height (int | None): Frame height in pixels Example: class MyCamera(Camera): def __init__(self, config): ... @property def is_connected(self) -> bool: ... def connect(self, warmup=True): ... # Plus other required methods """ def __init__(self, config: CameraConfig): """Initialize the camera with the given configuration. Args: config: Camera configuration containing FPS and resolution. """ self.fps: int | None = config.fps self.width: int | None = config.width self.height: int | None = config.height @property @abc.abstractmethod def is_connected(self) -> bool: """Check if the camera is currently connected. Returns: bool: True if the camera is connected and ready to capture frames, False otherwise. """ pass @staticmethod @abc.abstractmethod def find_cameras() -> list[dict[str, Any]]: """Detects available cameras connected to the system. Returns: List[Dict[str, Any]]: A list of dictionaries, where each dictionary contains information about a detected camera. """ pass @abc.abstractmethod def connect(self, warmup: bool = True) -> None: """Establish connection to the camera. Args: warmup: If True (default), captures a warmup frame before returning. Useful for cameras that require time to adjust capture settings. If False, skips the warmup frame. """ pass @abc.abstractmethod def read(self, color_mode: ColorMode | None = None) -> np.ndarray: """Capture and return a single frame from the camera. Args: color_mode: Desired color mode for the output frame. If None, uses the camera's default color mode. Returns: np.ndarray: Captured frame as a numpy array. """ pass @abc.abstractmethod def async_read(self, timeout_ms: float = ...) -> np.ndarray: """Asynchronously capture and return a single frame from the camera. Args: timeout_ms: Maximum time to wait for a frame in milliseconds. Defaults to implementation-specific timeout. Returns: np.ndarray: Captured frame as a numpy array. """ pass @abc.abstractmethod def disconnect(self) -> None: """Disconnect from the camera and release resources.""" pass
lerobot/src/lerobot/cameras/camera.py/0
{ "file_path": "lerobot/src/lerobot/cameras/camera.py", "repo_id": "lerobot", "token_count": 1431 }
203
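Since the `Camera` class above is abstract, a concrete backend must implement every abstract method. Below is a minimal sketch of such a subclass that simply returns black frames, which can be useful as a test double. The import paths and the `CameraConfig` fields it relies on are assumptions inferred from the abstract class shown above, not code from the repository.

```python
# Minimal sketch of a concrete Camera subclass; import paths are assumptions.
from typing import Any

import numpy as np

from lerobot.cameras.camera import Camera
from lerobot.cameras.configs import CameraConfig, ColorMode


class DummyCamera(Camera):
    """Fake camera for tests: every read() returns a black frame."""

    def __init__(self, config: CameraConfig):
        super().__init__(config)  # picks up fps, width, height from the config
        self._connected = False

    @property
    def is_connected(self) -> bool:
        return self._connected

    @staticmethod
    def find_cameras() -> list[dict[str, Any]]:
        # A real backend would probe the system; here we report one fake device.
        return [{"name": "dummy", "index": 0}]

    def connect(self, warmup: bool = True) -> None:
        self._connected = True
        if warmup:
            self.read()  # mimic the warmup frame described in the base class

    def read(self, color_mode: ColorMode | None = None) -> np.ndarray:
        # Fall back to VGA if the config leaves the resolution unset.
        h, w = self.height or 480, self.width or 640
        return np.zeros((h, w, 3), dtype=np.uint8)

    def async_read(self, timeout_ms: float = 200.0) -> np.ndarray:
        # No background thread in this sketch; just delegate to the blocking read.
        return self.read()

    def disconnect(self) -> None:
        self._connected = False
```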
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import packaging.version V2_MESSAGE = """ The dataset you requested ({repo_id}) is in {version} format. We introduced a new format since v2.0 which is not backward compatible with v1.x. Please, use our conversion script. Modify the following command with your own task description: ``` python -m lerobot.datasets.v2.convert_dataset_v1_to_v2 \\ --repo-id {repo_id} \\ --single-task "TASK DESCRIPTION." # <---- /!\\ Replace TASK DESCRIPTION /!\\ ``` A few examples to replace TASK DESCRIPTION: "Pick up the blue cube and place it into the bin.", "Insert the peg into the socket.", "Slide open the ziploc bag.", "Take the elevator to the 1st floor.", "Open the top cabinet, store the pot inside it then close the cabinet.", "Push the T-shaped block onto the T-shaped target.", "Grab the spray paint on the shelf and place it in the bin on top of the robot dog.", "Fold the sweatshirt.", ... If you encounter a problem, contact LeRobot maintainers on [Discord](https://discord.com/invite/s3KuuzsPFb) or open an [issue on GitHub](https://github.com/huggingface/lerobot/issues/new/choose). """ V21_MESSAGE = """ The dataset you requested ({repo_id}) is in {version} format. While current version of LeRobot is backward-compatible with it, the version of your dataset still uses global stats instead of per-episode stats. Update your dataset stats to the new format using this command: ``` python -m lerobot.datasets.v21.convert_dataset_v20_to_v21 --repo-id={repo_id} ``` If you encounter a problem, contact LeRobot maintainers on [Discord](https://discord.com/invite/s3KuuzsPFb) or open an [issue on GitHub](https://github.com/huggingface/lerobot/issues/new/choose). """ FUTURE_MESSAGE = """ The dataset you requested ({repo_id}) is only available in {version} format. As we cannot ensure forward compatibility with it, please update your current version of lerobot. """ class CompatibilityError(Exception): ... class BackwardCompatibilityError(CompatibilityError): def __init__(self, repo_id: str, version: packaging.version.Version): message = V2_MESSAGE.format(repo_id=repo_id, version=version) super().__init__(message) class ForwardCompatibilityError(CompatibilityError): def __init__(self, repo_id: str, version: packaging.version.Version): message = FUTURE_MESSAGE.format(repo_id=repo_id, version=version) super().__init__(message)
lerobot/src/lerobot/datasets/backward_compatibility.py/0
{ "file_path": "lerobot/src/lerobot/datasets/backward_compatibility.py", "repo_id": "lerobot", "token_count": 939 }
204
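As a usage sketch, the error classes above are meant to be raised by dataset-loading code when the on-disk codebase version does not match what the installed library expects. The check below is an illustrative assumption about that policy (compare major versions, raise a backward or forward error), not the exact logic LeRobot uses, and the import path is inferred from the file location.

```python
# Illustrative version gate; the threshold and comparison policy are assumptions.
import packaging.version

from lerobot.datasets.backward_compatibility import (
    BackwardCompatibilityError,
    ForwardCompatibilityError,
)

SUPPORTED_MAJOR = 2  # assumed major version the installed library understands


def check_dataset_version(repo_id: str, codebase_version: str) -> None:
    version = packaging.version.parse(codebase_version)
    if version.major < SUPPORTED_MAJOR:
        # Points the user at the v1.x -> v2.0 conversion script via V2_MESSAGE.
        raise BackwardCompatibilityError(repo_id, version)
    if version.major > SUPPORTED_MAJOR:
        # Asks the user to upgrade lerobot via FUTURE_MESSAGE.
        raise ForwardCompatibilityError(repo_id, version)


# check_dataset_version("lerobot/pusht", "v1.6")  # would raise BackwardCompatibilityError
```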
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from concurrent.futures import ThreadPoolExecutor, as_completed import numpy as np from tqdm import tqdm from lerobot.datasets.compute_stats import aggregate_stats, get_feature_stats, sample_indices from lerobot.datasets.lerobot_dataset import LeRobotDataset from lerobot.datasets.utils import write_episode_stats def sample_episode_video_frames(dataset: LeRobotDataset, episode_index: int, ft_key: str) -> np.ndarray: ep_len = dataset.meta.episodes[episode_index]["length"] sampled_indices = sample_indices(ep_len) query_timestamps = dataset._get_query_timestamps(0.0, {ft_key: sampled_indices}) video_frames = dataset._query_videos(query_timestamps, episode_index) return video_frames[ft_key].numpy() def convert_episode_stats(dataset: LeRobotDataset, ep_idx: int): ep_start_idx = dataset.episode_data_index["from"][ep_idx] ep_end_idx = dataset.episode_data_index["to"][ep_idx] ep_data = dataset.hf_dataset.select(range(ep_start_idx, ep_end_idx)) ep_stats = {} for key, ft in dataset.features.items(): if ft["dtype"] == "video": # We sample only for videos ep_ft_data = sample_episode_video_frames(dataset, ep_idx, key) else: ep_ft_data = np.array(ep_data[key]) axes_to_reduce = (0, 2, 3) if ft["dtype"] in ["image", "video"] else 0 keepdims = True if ft["dtype"] in ["image", "video"] else ep_ft_data.ndim == 1 ep_stats[key] = get_feature_stats(ep_ft_data, axis=axes_to_reduce, keepdims=keepdims) if ft["dtype"] in ["image", "video"]: # remove batch dim ep_stats[key] = { k: v if k == "count" else np.squeeze(v, axis=0) for k, v in ep_stats[key].items() } dataset.meta.episodes_stats[ep_idx] = ep_stats def convert_stats(dataset: LeRobotDataset, num_workers: int = 0): assert dataset.episodes is None print("Computing episodes stats") total_episodes = dataset.meta.total_episodes if num_workers > 0: with ThreadPoolExecutor(max_workers=num_workers) as executor: futures = { executor.submit(convert_episode_stats, dataset, ep_idx): ep_idx for ep_idx in range(total_episodes) } for future in tqdm(as_completed(futures), total=total_episodes): future.result() else: for ep_idx in tqdm(range(total_episodes)): convert_episode_stats(dataset, ep_idx) for ep_idx in tqdm(range(total_episodes)): write_episode_stats(ep_idx, dataset.meta.episodes_stats[ep_idx], dataset.root) def check_aggregate_stats( dataset: LeRobotDataset, reference_stats: dict[str, dict[str, np.ndarray]], video_rtol_atol: tuple[float] = (1e-2, 1e-2), default_rtol_atol: tuple[float] = (5e-6, 6e-5), ): """Verifies that the aggregated stats from episodes_stats are close to reference stats.""" agg_stats = aggregate_stats(list(dataset.meta.episodes_stats.values())) for key, ft in dataset.features.items(): # These values might need some fine-tuning if ft["dtype"] == "video": # to account for image sub-sampling rtol, atol = video_rtol_atol else: rtol, atol = default_rtol_atol for stat, val in agg_stats[key].items(): if key in reference_stats and stat in reference_stats[key]: 
err_msg = f"feature='{key}' stats='{stat}'" np.testing.assert_allclose( val, reference_stats[key][stat], rtol=rtol, atol=atol, err_msg=err_msg )
lerobot/src/lerobot/datasets/v21/convert_stats.py/0
{ "file_path": "lerobot/src/lerobot/datasets/v21/convert_stats.py", "repo_id": "lerobot", "token_count": 1756 }
205
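The two functions above are typically driven by a conversion script. Here is a hedged sketch of what such a driver could look like; the repo id and worker count are placeholders, and treating `dataset.meta.stats` as the legacy global stats to compare against is an assumption rather than something taken from the source.

```python
# Hypothetical driver for the per-episode stats conversion; values are placeholders.
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.datasets.v21.convert_stats import check_aggregate_stats, convert_stats

dataset = LeRobotDataset("your-name/your-dataset")  # placeholder repo id

# Assumed to expose the old global stats that the aggregated episode stats are checked against.
reference_stats = dataset.meta.stats

convert_stats(dataset, num_workers=4)
check_aggregate_stats(dataset, reference_stats)
```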
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from copy import deepcopy from enum import Enum from pprint import pformat from lerobot.utils.encoding_utils import decode_sign_magnitude, encode_sign_magnitude from ..motors_bus import Motor, MotorCalibration, MotorsBus, NameOrID, Value, get_address from .tables import ( FIRMWARE_MAJOR_VERSION, FIRMWARE_MINOR_VERSION, MODEL_BAUDRATE_TABLE, MODEL_CONTROL_TABLE, MODEL_ENCODING_TABLE, MODEL_NUMBER, MODEL_NUMBER_TABLE, MODEL_PROTOCOL, MODEL_RESOLUTION, SCAN_BAUDRATES, ) DEFAULT_PROTOCOL_VERSION = 0 DEFAULT_BAUDRATE = 1_000_000 DEFAULT_TIMEOUT_MS = 1000 NORMALIZED_DATA = ["Goal_Position", "Present_Position"] logger = logging.getLogger(__name__) class OperatingMode(Enum): # position servo mode POSITION = 0 # The motor is in constant speed mode, which is controlled by parameter 0x2e, and the highest bit 15 is # the direction bit VELOCITY = 1 # PWM open-loop speed regulation mode, with parameter 0x2c running time parameter control, bit11 as # direction bit PWM = 2 # In step servo mode, the number of step progress is represented by parameter 0x2a, and the highest bit 15 # is the direction bit STEP = 3 class DriveMode(Enum): NON_INVERTED = 0 INVERTED = 1 class TorqueMode(Enum): ENABLED = 1 DISABLED = 0 def _split_into_byte_chunks(value: int, length: int) -> list[int]: import scservo_sdk as scs if length == 1: data = [value] elif length == 2: data = [scs.SCS_LOBYTE(value), scs.SCS_HIBYTE(value)] elif length == 4: data = [ scs.SCS_LOBYTE(scs.SCS_LOWORD(value)), scs.SCS_HIBYTE(scs.SCS_LOWORD(value)), scs.SCS_LOBYTE(scs.SCS_HIWORD(value)), scs.SCS_HIBYTE(scs.SCS_HIWORD(value)), ] return data def patch_setPacketTimeout(self, packet_length): # noqa: N802 """ HACK: This patches the PortHandler behavior to set the correct packet timeouts. It fixes https://gitee.com/ftservo/SCServoSDK/issues/IBY2S6 The bug is fixed on the official Feetech SDK repo (https://gitee.com/ftservo/FTServo_Python) but because that version is not published on PyPI, we rely on the (unofficial) on that is, which needs patching. """ self.packet_start_time = self.getCurrentTime() self.packet_timeout = (self.tx_time_per_byte * packet_length) + (self.tx_time_per_byte * 3.0) + 50 class FeetechMotorsBus(MotorsBus): """ The FeetechMotorsBus class allows to efficiently read and write to the attached motors. It relies on the python feetech sdk to communicate with the motors, which is itself based on the dynamixel sdk. 
""" apply_drive_mode = True available_baudrates = deepcopy(SCAN_BAUDRATES) default_baudrate = DEFAULT_BAUDRATE default_timeout = DEFAULT_TIMEOUT_MS model_baudrate_table = deepcopy(MODEL_BAUDRATE_TABLE) model_ctrl_table = deepcopy(MODEL_CONTROL_TABLE) model_encoding_table = deepcopy(MODEL_ENCODING_TABLE) model_number_table = deepcopy(MODEL_NUMBER_TABLE) model_resolution_table = deepcopy(MODEL_RESOLUTION) normalized_data = deepcopy(NORMALIZED_DATA) def __init__( self, port: str, motors: dict[str, Motor], calibration: dict[str, MotorCalibration] | None = None, protocol_version: int = DEFAULT_PROTOCOL_VERSION, ): super().__init__(port, motors, calibration) self.protocol_version = protocol_version self._assert_same_protocol() import scservo_sdk as scs self.port_handler = scs.PortHandler(self.port) # HACK: monkeypatch self.port_handler.setPacketTimeout = patch_setPacketTimeout.__get__( self.port_handler, scs.PortHandler ) self.packet_handler = scs.PacketHandler(protocol_version) self.sync_reader = scs.GroupSyncRead(self.port_handler, self.packet_handler, 0, 0) self.sync_writer = scs.GroupSyncWrite(self.port_handler, self.packet_handler, 0, 0) self._comm_success = scs.COMM_SUCCESS self._no_error = 0x00 if any(MODEL_PROTOCOL[model] != self.protocol_version for model in self.models): raise ValueError(f"Some motors are incompatible with protocol_version={self.protocol_version}") def _assert_same_protocol(self) -> None: if any(MODEL_PROTOCOL[model] != self.protocol_version for model in self.models): raise RuntimeError("Some motors use an incompatible protocol.") def _assert_protocol_is_compatible(self, instruction_name: str) -> None: if instruction_name == "sync_read" and self.protocol_version == 1: raise NotImplementedError( "'Sync Read' is not available with Feetech motors using Protocol 1. Use 'Read' sequentially instead." ) if instruction_name == "broadcast_ping" and self.protocol_version == 1: raise NotImplementedError( "'Broadcast Ping' is not available with Feetech motors using Protocol 1. Use 'Ping' sequentially instead." ) def _assert_same_firmware(self) -> None: firmware_versions = self._read_firmware_version(self.ids, raise_on_error=True) if len(set(firmware_versions.values())) != 1: raise RuntimeError( "Some Motors use different firmware versions:" f"\n{pformat(firmware_versions)}\n" "Update their firmware first using Feetech's software. " "Visit https://www.feetechrc.com/software." ) def _handshake(self) -> None: self._assert_motors_exist() self._assert_same_firmware() def _find_single_motor(self, motor: str, initial_baudrate: int | None = None) -> tuple[int, int]: if self.protocol_version == 0: return self._find_single_motor_p0(motor, initial_baudrate) else: return self._find_single_motor_p1(motor, initial_baudrate) def _find_single_motor_p0(self, motor: str, initial_baudrate: int | None = None) -> tuple[int, int]: model = self.motors[motor].model search_baudrates = ( [initial_baudrate] if initial_baudrate is not None else self.model_baudrate_table[model] ) expected_model_nb = self.model_number_table[model] for baudrate in search_baudrates: self.set_baudrate(baudrate) id_model = self.broadcast_ping() if id_model: found_id, found_model = next(iter(id_model.items())) if found_model != expected_model_nb: raise RuntimeError( f"Found one motor on {baudrate=} with id={found_id} but it has a " f"model number '{found_model}' different than the one expected: '{expected_model_nb}'. " f"Make sure you are connected only connected to the '{motor}' motor (model '{model}')." 
) return baudrate, found_id raise RuntimeError(f"Motor '{motor}' (model '{model}') was not found. Make sure it is connected.") def _find_single_motor_p1(self, motor: str, initial_baudrate: int | None = None) -> tuple[int, int]: import scservo_sdk as scs model = self.motors[motor].model search_baudrates = ( [initial_baudrate] if initial_baudrate is not None else self.model_baudrate_table[model] ) expected_model_nb = self.model_number_table[model] for baudrate in search_baudrates: self.set_baudrate(baudrate) for id_ in range(scs.MAX_ID + 1): found_model = self.ping(id_) if found_model is not None: if found_model != expected_model_nb: raise RuntimeError( f"Found one motor on {baudrate=} with id={id_} but it has a " f"model number '{found_model}' different than the one expected: '{expected_model_nb}'. " f"Make sure you are connected only connected to the '{motor}' motor (model '{model}')." ) return baudrate, id_ raise RuntimeError(f"Motor '{motor}' (model '{model}') was not found. Make sure it is connected.") def configure_motors(self, return_delay_time=0, maximum_acceleration=254, acceleration=254) -> None: for motor in self.motors: # By default, Feetech motors have a 500µs delay response time (corresponding to a value of 250 on # the 'Return_Delay_Time' address). We ensure this is reduced to the minimum of 2µs (value of 0). self.write("Return_Delay_Time", motor, return_delay_time) # Set 'Maximum_Acceleration' to 254 to speedup acceleration and deceleration of the motors. if self.protocol_version == 0: self.write("Maximum_Acceleration", motor, maximum_acceleration) self.write("Acceleration", motor, acceleration) @property def is_calibrated(self) -> bool: motors_calibration = self.read_calibration() if set(motors_calibration) != set(self.calibration): return False same_ranges = all( self.calibration[motor].range_min == cal.range_min and self.calibration[motor].range_max == cal.range_max for motor, cal in motors_calibration.items() ) if self.protocol_version == 1: return same_ranges same_offsets = all( self.calibration[motor].homing_offset == cal.homing_offset for motor, cal in motors_calibration.items() ) return same_ranges and same_offsets def read_calibration(self) -> dict[str, MotorCalibration]: offsets, mins, maxes = {}, {}, {} for motor in self.motors: mins[motor] = self.read("Min_Position_Limit", motor, normalize=False) maxes[motor] = self.read("Max_Position_Limit", motor, normalize=False) offsets[motor] = ( self.read("Homing_Offset", motor, normalize=False) if self.protocol_version == 0 else 0 ) calibration = {} for motor, m in self.motors.items(): calibration[motor] = MotorCalibration( id=m.id, drive_mode=0, homing_offset=offsets[motor], range_min=mins[motor], range_max=maxes[motor], ) return calibration def write_calibration(self, calibration_dict: dict[str, MotorCalibration], cache: bool = True) -> None: for motor, calibration in calibration_dict.items(): if self.protocol_version == 0: self.write("Homing_Offset", motor, calibration.homing_offset) self.write("Min_Position_Limit", motor, calibration.range_min) self.write("Max_Position_Limit", motor, calibration.range_max) if cache: self.calibration = calibration_dict def _get_half_turn_homings(self, positions: dict[NameOrID, Value]) -> dict[NameOrID, Value]: """ On Feetech Motors: Present_Position = Actual_Position - Homing_Offset """ half_turn_homings = {} for motor, pos in positions.items(): model = self._get_motor_model(motor) max_res = self.model_resolution_table[model] - 1 half_turn_homings[motor] = pos - int(max_res / 2) return 
half_turn_homings def disable_torque(self, motors: str | list[str] | None = None, num_retry: int = 0) -> None: for motor in self._get_motors_list(motors): self.write("Torque_Enable", motor, TorqueMode.DISABLED.value, num_retry=num_retry) self.write("Lock", motor, 0, num_retry=num_retry) def _disable_torque(self, motor_id: int, model: str, num_retry: int = 0) -> None: addr, length = get_address(self.model_ctrl_table, model, "Torque_Enable") self._write(addr, length, motor_id, TorqueMode.DISABLED.value, num_retry=num_retry) addr, length = get_address(self.model_ctrl_table, model, "Lock") self._write(addr, length, motor_id, 0, num_retry=num_retry) def enable_torque(self, motors: str | list[str] | None = None, num_retry: int = 0) -> None: for motor in self._get_motors_list(motors): self.write("Torque_Enable", motor, TorqueMode.ENABLED.value, num_retry=num_retry) self.write("Lock", motor, 1, num_retry=num_retry) def _encode_sign(self, data_name: str, ids_values: dict[int, int]) -> dict[int, int]: for id_ in ids_values: model = self._id_to_model(id_) encoding_table = self.model_encoding_table.get(model) if encoding_table and data_name in encoding_table: sign_bit = encoding_table[data_name] ids_values[id_] = encode_sign_magnitude(ids_values[id_], sign_bit) return ids_values def _decode_sign(self, data_name: str, ids_values: dict[int, int]) -> dict[int, int]: for id_ in ids_values: model = self._id_to_model(id_) encoding_table = self.model_encoding_table.get(model) if encoding_table and data_name in encoding_table: sign_bit = encoding_table[data_name] ids_values[id_] = decode_sign_magnitude(ids_values[id_], sign_bit) return ids_values def _split_into_byte_chunks(self, value: int, length: int) -> list[int]: return _split_into_byte_chunks(value, length) def _broadcast_ping(self) -> tuple[dict[int, int], int]: import scservo_sdk as scs data_list = {} status_length = 6 rx_length = 0 wait_length = status_length * scs.MAX_ID txpacket = [0] * 6 tx_time_per_byte = (1000.0 / self.port_handler.getBaudRate()) * 10.0 txpacket[scs.PKT_ID] = scs.BROADCAST_ID txpacket[scs.PKT_LENGTH] = 2 txpacket[scs.PKT_INSTRUCTION] = scs.INST_PING result = self.packet_handler.txPacket(self.port_handler, txpacket) if result != scs.COMM_SUCCESS: self.port_handler.is_using = False return data_list, result # set rx timeout self.port_handler.setPacketTimeoutMillis((wait_length * tx_time_per_byte) + (3.0 * scs.MAX_ID) + 16.0) rxpacket = [] while not self.port_handler.isPacketTimeout() and rx_length < wait_length: rxpacket += self.port_handler.readPort(wait_length - rx_length) rx_length = len(rxpacket) self.port_handler.is_using = False if rx_length == 0: return data_list, scs.COMM_RX_TIMEOUT while True: if rx_length < status_length: return data_list, scs.COMM_RX_CORRUPT # find packet header for idx in range(0, (rx_length - 1)): if (rxpacket[idx] == 0xFF) and (rxpacket[idx + 1] == 0xFF): break if idx == 0: # found at the beginning of the packet # calculate checksum checksum = 0 for idx in range(2, status_length - 1): # except header & checksum checksum += rxpacket[idx] checksum = ~checksum & 0xFF if rxpacket[status_length - 1] == checksum: result = scs.COMM_SUCCESS data_list[rxpacket[scs.PKT_ID]] = rxpacket[scs.PKT_ERROR] del rxpacket[0:status_length] rx_length = rx_length - status_length if rx_length == 0: return data_list, result else: result = scs.COMM_RX_CORRUPT # remove header (0xFF 0xFF) del rxpacket[0:2] rx_length = rx_length - 2 else: # remove unnecessary packets del rxpacket[0:idx] rx_length = rx_length - idx def 
broadcast_ping(self, num_retry: int = 0, raise_on_error: bool = False) -> dict[int, int] | None: self._assert_protocol_is_compatible("broadcast_ping") for n_try in range(1 + num_retry): ids_status, comm = self._broadcast_ping() if self._is_comm_success(comm): break logger.debug(f"Broadcast ping failed on port '{self.port}' ({n_try=})") logger.debug(self.packet_handler.getTxRxResult(comm)) if not self._is_comm_success(comm): if raise_on_error: raise ConnectionError(self.packet_handler.getTxRxResult(comm)) return ids_errors = {id_: status for id_, status in ids_status.items() if self._is_error(status)} if ids_errors: display_dict = {id_: self.packet_handler.getRxPacketError(err) for id_, err in ids_errors.items()} logger.error(f"Some motors found returned an error status:\n{pformat(display_dict, indent=4)}") return self._read_model_number(list(ids_status), raise_on_error) def _read_firmware_version(self, motor_ids: list[int], raise_on_error: bool = False) -> dict[int, str]: firmware_versions = {} for id_ in motor_ids: firm_ver_major, comm, error = self._read( *FIRMWARE_MAJOR_VERSION, id_, raise_on_error=raise_on_error ) if not self._is_comm_success(comm) or self._is_error(error): continue firm_ver_minor, comm, error = self._read( *FIRMWARE_MINOR_VERSION, id_, raise_on_error=raise_on_error ) if not self._is_comm_success(comm) or self._is_error(error): continue firmware_versions[id_] = f"{firm_ver_major}.{firm_ver_minor}" return firmware_versions def _read_model_number(self, motor_ids: list[int], raise_on_error: bool = False) -> dict[int, int]: model_numbers = {} for id_ in motor_ids: model_nb, comm, error = self._read(*MODEL_NUMBER, id_, raise_on_error=raise_on_error) if not self._is_comm_success(comm) or self._is_error(error): continue model_numbers[id_] = model_nb return model_numbers
lerobot/src/lerobot/motors/feetech/feetech.py/0
{ "file_path": "lerobot/src/lerobot/motors/feetech/feetech.py", "repo_id": "lerobot", "token_count": 8570 }
206
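For orientation, here is a small usage sketch of the bus defined above: open a port with one motor, read its raw position, and disconnect. The serial port and the motor model string (`"sts3215"`) are placeholders, and the `Motor`/`MotorNormMode` import mirrors how other modules in this dump import them; this is a sketch, not code from the repository.

```python
# Usage sketch only; port name and motor model are placeholders.
from lerobot.motors import Motor, MotorNormMode
from lerobot.motors.feetech import FeetechMotorsBus

bus = FeetechMotorsBus(
    port="/dev/ttyACM0",  # placeholder serial port
    motors={"gripper": Motor(1, "sts3215", MotorNormMode.RANGE_0_100)},
)
bus.connect()

# Reads use the control-table names seen above (e.g. "Present_Position").
# normalize=False returns raw encoder ticks and avoids needing a calibration file.
position = bus.read("Present_Position", "gripper", normalize=False)
print(f"gripper position (raw ticks): {position}")

bus.disconnect()
```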
# Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass, field from lerobot.configs.policies import PreTrainedConfig from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature from lerobot.optim.optimizers import AdamWConfig from lerobot.optim.schedulers import ( CosineDecayWithWarmupSchedulerConfig, ) @PreTrainedConfig.register_subclass("smolvla") @dataclass class SmolVLAConfig(PreTrainedConfig): # Input / output structure. n_obs_steps: int = 1 chunk_size: int = 50 n_action_steps: int = 50 normalization_mapping: dict[str, NormalizationMode] = field( default_factory=lambda: { "VISUAL": NormalizationMode.IDENTITY, "STATE": NormalizationMode.MEAN_STD, "ACTION": NormalizationMode.MEAN_STD, } ) # Shorter state and action vectors will be padded max_state_dim: int = 32 max_action_dim: int = 32 # Image preprocessing resize_imgs_with_padding: tuple[int, int] = (512, 512) # Add empty images. Used by smolvla_aloha_sim which adds the empty # left and right wrist cameras in addition to the top camera. empty_cameras: int = 0 # Converts the joint and gripper values from the standard Aloha space to # the space used by the pi internal runtime which was used to train the base model. adapt_to_pi_aloha: bool = False # Converts joint dimensions to deltas with respect to the current state before passing to the model. # Gripper dimensions will remain in absolute values. use_delta_joint_actions_aloha: bool = False # Tokenizer tokenizer_max_length: int = 48 # Decoding num_steps: int = 10 # Attention utils use_cache: bool = True # Finetuning settings freeze_vision_encoder: bool = True train_expert_only: bool = True train_state_proj: bool = True # Training presets optimizer_lr: float = 1e-4 optimizer_betas: tuple[float, float] = (0.9, 0.95) optimizer_eps: float = 1e-8 optimizer_weight_decay: float = 1e-10 optimizer_grad_clip_norm: float = 10 scheduler_warmup_steps: int = 1_000 scheduler_decay_steps: int = 30_000 scheduler_decay_lr: float = 2.5e-6 vlm_model_name: str = "HuggingFaceTB/SmolVLM2-500M-Video-Instruct" # Select the VLM backbone. load_vlm_weights: bool = False # Set to True in case of training the expert from scratch. True when init from pretrained SmolVLA weights add_image_special_tokens: bool = False # Whether to use special image tokens around image features. attention_mode: str = "cross_attn" prefix_length: int = -1 pad_language_to: str = "longest" # "max_length" num_expert_layers: int = -1 # Less or equal to 0 is the default where the action expert has the same number of layers of VLM. Otherwise the expert have less layers. 
num_vlm_layers: int = 16 # Number of layers used in the VLM (first num_vlm_layers layers) self_attn_every_n_layers: int = 2 # Interleave SA layers each self_attn_every_n_layers expert_width_multiplier: float = 0.75 # The action expert hidden size (wrt to the VLM) min_period: float = 4e-3 # sensitivity range for the timestep used in sine-cosine positional encoding max_period: float = 4.0 def __post_init__(self): super().__post_init__() """Input validation (not exhaustive).""" if self.n_action_steps > self.chunk_size: raise ValueError( f"The chunk size is the upper bound for the number of action steps per model invocation. Got " f"{self.n_action_steps} for `n_action_steps` and {self.chunk_size} for `chunk_size`." ) if self.use_delta_joint_actions_aloha: raise NotImplementedError( "`use_delta_joint_actions_aloha` is used by smolvla for aloha real models. It is not ported yet in LeRobot." ) def validate_features(self) -> None: for i in range(self.empty_cameras): key = f"observation.images.empty_camera_{i}" empty_camera = PolicyFeature( type=FeatureType.VISUAL, shape=(3, 480, 640), ) self.input_features[key] = empty_camera def get_optimizer_preset(self) -> AdamWConfig: return AdamWConfig( lr=self.optimizer_lr, betas=self.optimizer_betas, eps=self.optimizer_eps, weight_decay=self.optimizer_weight_decay, grad_clip_norm=self.optimizer_grad_clip_norm, ) def get_scheduler_preset(self): return CosineDecayWithWarmupSchedulerConfig( peak_lr=self.optimizer_lr, decay_lr=self.scheduler_decay_lr, num_warmup_steps=self.scheduler_warmup_steps, num_decay_steps=self.scheduler_decay_steps, ) @property def observation_delta_indices(self) -> list: return [0] @property def action_delta_indices(self) -> list: return list(range(self.chunk_size)) @property def reward_delta_indices(self) -> None: return None
lerobot/src/lerobot/policies/smolvla/configuration_smolvla.py/0
{ "file_path": "lerobot/src/lerobot/policies/smolvla/configuration_smolvla.py", "repo_id": "lerobot", "token_count": 2225 }
207
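As a quick illustration of the validation performed in `__post_init__` and of the delta-index properties, the snippet below instantiates the config above with a couple of overrides. The values are illustrative only, not recommended hyperparameters, and the import path is assumed from the file location.

```python
# Illustrative instantiation; values are not tuned hyperparameters.
from lerobot.policies.smolvla.configuration_smolvla import SmolVLAConfig

config = SmolVLAConfig(
    chunk_size=50,
    n_action_steps=25,  # must be <= chunk_size, otherwise __post_init__ raises ValueError
    freeze_vision_encoder=True,
)

print(config.action_delta_indices)       # [0, 1, ..., 49] -- one index per chunk step
print(config.observation_delta_indices)  # [0]
```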
#!/usr/bin/env python # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass, field from typing import Any from lerobot.configs.types import PolicyFeature from lerobot.processor.pipeline import ( ObservationProcessor, ProcessorStepRegistry, ) @dataclass @ProcessorStepRegistry.register(name="rename_processor") class RenameProcessor(ObservationProcessor): """Rename processor that renames keys in the observation.""" rename_map: dict[str, str] = field(default_factory=dict) def observation(self, observation): processed_obs = {} for key, value in observation.items(): if key in self.rename_map: processed_obs[self.rename_map[key]] = value else: processed_obs[key] = value return processed_obs def get_config(self) -> dict[str, Any]: return {"rename_map": self.rename_map} def feature_contract(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]: """Transforms: - Each key in the observation that appears in `rename_map` is renamed to its value. - Keys not in `rename_map` remain unchanged. """ return {self.rename_map.get(k, k): v for k, v in features.items()}
lerobot/src/lerobot/processor/rename_processor.py/0
{ "file_path": "lerobot/src/lerobot/processor/rename_processor.py", "repo_id": "lerobot", "token_count": 626 }
208
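A small usage sketch of the step above: keys found in `rename_map` are renamed, and everything else passes through unchanged. The observation values here are dummies and the import path is assumed from the file location.

```python
# Dummy observation values; only the key renaming matters here.
from lerobot.processor.rename_processor import RenameProcessor

step = RenameProcessor(rename_map={"observation.images.cam0": "observation.images.top"})

obs = {
    "observation.images.cam0": "frame",    # renamed
    "observation.state": [0.0, 1.0, 2.0],  # untouched
}
print(step.observation(obs))
# {'observation.images.top': 'frame', 'observation.state': [0.0, 1.0, 2.0]}
```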
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import time from functools import cached_property from typing import Any from lerobot.cameras.utils import make_cameras_from_configs from lerobot.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError from lerobot.motors import Motor, MotorCalibration, MotorNormMode from lerobot.motors.dynamixel import ( DynamixelMotorsBus, OperatingMode, ) from ..robot import Robot from ..utils import ensure_safe_goal_position from .config_koch_follower import KochFollowerConfig logger = logging.getLogger(__name__) class KochFollower(Robot): """ - [Koch v1.0](https://github.com/AlexanderKoch-Koch/low_cost_robot), with and without the wrist-to-elbow expansion, developed by Alexander Koch from [Tau Robotics](https://tau-robotics.com) - [Koch v1.1](https://github.com/jess-moss/koch-v1-1) developed by Jess Moss """ config_class = KochFollowerConfig name = "koch_follower" def __init__(self, config: KochFollowerConfig): super().__init__(config) self.config = config norm_mode_body = MotorNormMode.DEGREES if config.use_degrees else MotorNormMode.RANGE_M100_100 self.bus = DynamixelMotorsBus( port=self.config.port, motors={ "shoulder_pan": Motor(1, "xl430-w250", norm_mode_body), "shoulder_lift": Motor(2, "xl430-w250", norm_mode_body), "elbow_flex": Motor(3, "xl330-m288", norm_mode_body), "wrist_flex": Motor(4, "xl330-m288", norm_mode_body), "wrist_roll": Motor(5, "xl330-m288", norm_mode_body), "gripper": Motor(6, "xl330-m288", MotorNormMode.RANGE_0_100), }, calibration=self.calibration, ) self.cameras = make_cameras_from_configs(config.cameras) @property def _motors_ft(self) -> dict[str, type]: return {f"{motor}.pos": float for motor in self.bus.motors} @property def _cameras_ft(self) -> dict[str, tuple]: return { cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras } @cached_property def observation_features(self) -> dict[str, type | tuple]: return {**self._motors_ft, **self._cameras_ft} @cached_property def action_features(self) -> dict[str, type]: return self._motors_ft @property def is_connected(self) -> bool: return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values()) def connect(self, calibrate: bool = True) -> None: """ We assume that at connection time, arm is in a rest position, and torque can be safely disabled to run calibration. 
""" if self.is_connected: raise DeviceAlreadyConnectedError(f"{self} already connected") self.bus.connect() if not self.is_calibrated and calibrate: logger.info( "Mismatch between calibration values in the motor and the calibration file or no calibration file found" ) self.calibrate() for cam in self.cameras.values(): cam.connect() self.configure() logger.info(f"{self} connected.") @property def is_calibrated(self) -> bool: return self.bus.is_calibrated def calibrate(self) -> None: if self.calibration: # Calibration file exists, ask user whether to use it or run new calibration user_input = input( f"Press ENTER to use provided calibration file associated with the id {self.id}, or type 'c' and press ENTER to run calibration: " ) if user_input.strip().lower() != "c": logger.info(f"Writing calibration file associated with the id {self.id} to the motors") self.bus.write_calibration(self.calibration) return logger.info(f"\nRunning calibration of {self}") self.bus.disable_torque() for motor in self.bus.motors: self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value) input(f"Move {self} to the middle of its range of motion and press ENTER....") homing_offsets = self.bus.set_half_turn_homings() full_turn_motors = ["shoulder_pan", "wrist_roll"] unknown_range_motors = [motor for motor in self.bus.motors if motor not in full_turn_motors] print( f"Move all joints except {full_turn_motors} sequentially through their entire " "ranges of motion.\nRecording positions. Press ENTER to stop..." ) range_mins, range_maxes = self.bus.record_ranges_of_motion(unknown_range_motors) for motor in full_turn_motors: range_mins[motor] = 0 range_maxes[motor] = 4095 self.calibration = {} for motor, m in self.bus.motors.items(): self.calibration[motor] = MotorCalibration( id=m.id, drive_mode=0, homing_offset=homing_offsets[motor], range_min=range_mins[motor], range_max=range_maxes[motor], ) self.bus.write_calibration(self.calibration) self._save_calibration() logger.info(f"Calibration saved to {self.calibration_fpath}") def configure(self) -> None: with self.bus.torque_disabled(): self.bus.configure_motors() # Use 'extended position mode' for all motors except gripper, because in joint mode the servos # can't rotate more than 360 degrees (from 0 to 4095) And some mistake can happen while assembling # the arm, you could end up with a servo with a position 0 or 4095 at a crucial point for motor in self.bus.motors: if motor != "gripper": self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value) # Use 'position control current based' for gripper to be limited by the limit of the current. For # the follower gripper, it means it can grasp an object without forcing too much even tho, its # goal position is a complete grasp (both gripper fingers are ordered to join and reach a touch). # For the leader gripper, it means we can use it as a physical trigger, since we can force with # our finger to make it move, and it will move back to its original target position when we # release the force. 
self.bus.write("Operating_Mode", "gripper", OperatingMode.CURRENT_POSITION.value) # Set better PID values to close the gap between recorded states and actions # TODO(rcadene): Implement an automatic procedure to set optimal PID values for each motor self.bus.write("Position_P_Gain", "elbow_flex", 1500) self.bus.write("Position_I_Gain", "elbow_flex", 0) self.bus.write("Position_D_Gain", "elbow_flex", 600) def setup_motors(self) -> None: for motor in reversed(self.bus.motors): input(f"Connect the controller board to the '{motor}' motor only and press enter.") self.bus.setup_motor(motor) print(f"'{motor}' motor id set to {self.bus.motors[motor].id}") def get_observation(self) -> dict[str, Any]: if not self.is_connected: raise DeviceNotConnectedError(f"{self} is not connected.") # Read arm position start = time.perf_counter() obs_dict = self.bus.sync_read("Present_Position") obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()} dt_ms = (time.perf_counter() - start) * 1e3 logger.debug(f"{self} read state: {dt_ms:.1f}ms") # Capture images from cameras for cam_key, cam in self.cameras.items(): start = time.perf_counter() obs_dict[cam_key] = cam.async_read() dt_ms = (time.perf_counter() - start) * 1e3 logger.debug(f"{self} read {cam_key}: {dt_ms:.1f}ms") return obs_dict def send_action(self, action: dict[str, float]) -> dict[str, float]: """Command arm to move to a target joint configuration. The relative action magnitude may be clipped depending on the configuration parameter `max_relative_target`. In this case, the action sent differs from original action. Thus, this function always returns the action actually sent. Args: action (dict[str, float]): The goal positions for the motors. Returns: dict[str, float]: The action sent to the motors, potentially clipped. """ if not self.is_connected: raise DeviceNotConnectedError(f"{self} is not connected.") goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")} # Cap goal position when too far away from present position. # /!\ Slower fps expected due to reading from the follower. if self.config.max_relative_target is not None: present_pos = self.bus.sync_read("Present_Position") goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()} goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target) # Send goal position to the arm self.bus.sync_write("Goal_Position", goal_pos) return {f"{motor}.pos": val for motor, val in goal_pos.items()} def disconnect(self): if not self.is_connected: raise DeviceNotConnectedError(f"{self} is not connected.") self.bus.disconnect(self.config.disable_torque_on_disconnect) for cam in self.cameras.values(): cam.disconnect() logger.info(f"{self} disconnected.")
lerobot/src/lerobot/robots/koch_follower/koch_follower.py/0
{ "file_path": "lerobot/src/lerobot/robots/koch_follower/koch_follower.py", "repo_id": "lerobot", "token_count": 4320 }
209
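To show how the pieces above fit together, here is a hedged sketch of driving the follower arm directly: connect (calibrating if needed), read one observation, send a small absolute joint target, and disconnect. The import path, serial port, robot id, and the `+2.0` offset are placeholders; on real hardware, `max_relative_target` in the config is what keeps such steps safe.

```python
# Hardware-facing sketch; port, id and joint offset are placeholders.
from lerobot.robots.koch_follower import KochFollower, KochFollowerConfig

config = KochFollowerConfig(port="/dev/ttyACM0", id="my_koch_follower")
robot = KochFollower(config)
robot.connect(calibrate=True)  # may prompt for a manual calibration on first use

obs = robot.get_observation()  # {"shoulder_pan.pos": ..., "gripper.pos": ..., camera frames, ...}
action = {"wrist_roll.pos": obs["wrist_roll.pos"] + 2.0}
sent = robot.send_action(action)  # returns the action actually sent (possibly clipped)
print(sent)

robot.disconnect()
```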
# !/usr/bin/env python # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Robot Environment for LeRobot Manipulation Tasks This module provides a comprehensive gym-compatible environment for robot manipulation with support for: - Multiple robot types (SO100, SO101, Koch and Moss) - Human intervention via leader-follower control or gamepad - End-effector and joint space control - Image processing (cropping and resizing) The environment is built using a composable wrapper pattern where each wrapper adds specific functionality to the base RobotEnv. Example: env = make_robot_env(cfg) obs, info = env.reset() action = policy.select_action(obs) obs, reward, terminated, truncated, info = env.step(action) """ import logging import time from collections import deque from collections.abc import Sequence from threading import Lock from typing import Annotated, Any import gymnasium as gym import numpy as np import torch import torchvision.transforms.functional as F # noqa: N812 from lerobot.cameras import opencv # noqa: F401 from lerobot.configs import parser from lerobot.envs.configs import EnvConfig from lerobot.envs.utils import preprocess_observation from lerobot.model.kinematics import RobotKinematics from lerobot.robots import ( # noqa: F401 RobotConfig, make_robot_from_config, so100_follower, ) from lerobot.teleoperators import ( gamepad, # noqa: F401 keyboard, # noqa: F401 make_teleoperator_from_config, so101_leader, # noqa: F401 ) from lerobot.teleoperators.gamepad.teleop_gamepad import GamepadTeleop from lerobot.teleoperators.keyboard.teleop_keyboard import KeyboardEndEffectorTeleop from lerobot.utils.robot_utils import busy_wait from lerobot.utils.utils import log_say logging.basicConfig(level=logging.INFO) def reset_follower_position(robot_arm, target_position): current_position_dict = robot_arm.bus.sync_read("Present_Position") current_position = np.array( [current_position_dict[name] for name in current_position_dict], dtype=np.float32 ) trajectory = torch.from_numpy( np.linspace(current_position, target_position, 50) ) # NOTE: 30 is just an arbitrary number for pose in trajectory: action_dict = dict(zip(current_position_dict, pose, strict=False)) robot_arm.bus.sync_write("Goal_Position", action_dict) busy_wait(0.015) class TorchBox(gym.spaces.Box): """ A version of gym.spaces.Box that handles PyTorch tensors. This class extends gym.spaces.Box to work with PyTorch tensors, providing compatibility between NumPy arrays and PyTorch tensors. """ def __init__( self, low: float | Sequence[float] | np.ndarray, high: float | Sequence[float] | np.ndarray, shape: Sequence[int] | None = None, np_dtype: np.dtype | type = np.float32, torch_dtype: torch.dtype = torch.float32, device: str = "cpu", seed: int | np.random.Generator | None = None, ) -> None: """ Initialize the PyTorch-compatible Box space. Args: low: Lower bounds of the space. high: Upper bounds of the space. shape: Shape of the space. If None, inferred from low and high. 
np_dtype: NumPy data type for internal storage. torch_dtype: PyTorch data type for tensor conversion. device: PyTorch device for returned tensors. seed: Random seed for sampling. """ super().__init__(low, high, shape=shape, dtype=np_dtype, seed=seed) self.torch_dtype = torch_dtype self.device = device def sample(self) -> torch.Tensor: """ Sample a random point from the space. Returns: A PyTorch tensor within the space bounds. """ arr = super().sample() return torch.as_tensor(arr, dtype=self.torch_dtype, device=self.device) def contains(self, x: torch.Tensor) -> bool: """ Check if a tensor is within the space bounds. Args: x: The PyTorch tensor to check. Returns: Boolean indicating whether the tensor is within bounds. """ # Move to CPU/numpy and cast to the internal dtype arr = x.detach().cpu().numpy().astype(self.dtype, copy=False) return super().contains(arr) def seed(self, seed: int | np.random.Generator | None = None): """ Set the random seed for sampling. Args: seed: The random seed to use. Returns: List containing the seed. """ super().seed(seed) return [seed] def __repr__(self) -> str: """ Return a string representation of the space. Returns: Formatted string with space details. """ return ( f"TorchBox({self.low_repr}, {self.high_repr}, {self.shape}, " f"np={self.dtype.name}, torch={self.torch_dtype}, device={self.device})" ) class TorchActionWrapper(gym.Wrapper): """ Wrapper that changes the action space to use PyTorch tensors. This wrapper modifies the action space to return PyTorch tensors when sampled and handles converting PyTorch actions to NumPy when stepping the environment. """ def __init__(self, env: gym.Env, device: str): """ Initialize the PyTorch action space wrapper. Args: env: The environment to wrap. device: The PyTorch device to use for tensor operations. """ super().__init__(env) self.action_space = TorchBox( low=env.action_space.low, high=env.action_space.high, shape=env.action_space.shape, torch_dtype=torch.float32, device=torch.device("cpu"), ) def step(self, action: torch.Tensor): """ Step the environment with a PyTorch tensor action. This method handles conversion from PyTorch tensors to NumPy arrays for compatibility with the underlying environment. Args: action: PyTorch tensor action to take. Returns: Tuple of (observation, reward, terminated, truncated, info). """ if action.dim() == 2: action = action.squeeze(0) action = action.detach().cpu().numpy() return self.env.step(action) class RobotEnv(gym.Env): """ Gym-compatible environment for evaluating robotic control policies with integrated human intervention. This environment wraps a robot interface to provide a consistent API for policy evaluation. It supports both relative (delta) and absolute joint position commands and automatically configures its observation and action spaces based on the robot's sensors and configuration. """ def __init__( self, robot, use_gripper: bool = False, display_cameras: bool = False, ): """ Initialize the RobotEnv environment. The environment is set up with a robot interface, which is used to capture observations and send joint commands. The setup supports both relative (delta) adjustments and absolute joint positions for controlling the robot. Args: robot: The robot interface object used to connect and interact with the physical robot. display_cameras: If True, the robot's camera feeds will be displayed during execution. """ super().__init__() self.robot = robot self.display_cameras = display_cameras # Connect to the robot if not already connected. 
if not self.robot.is_connected: self.robot.connect() # Episode tracking. self.current_step = 0 self.episode_data = None self._joint_names = [f"{key}.pos" for key in self.robot.bus.motors] self._image_keys = self.robot.cameras.keys() self.current_observation = None self.use_gripper = use_gripper self._setup_spaces() def _get_observation(self) -> dict[str, np.ndarray]: """Helper to convert a dictionary from bus.sync_read to an ordered numpy array.""" obs_dict = self.robot.get_observation() joint_positions = np.array([obs_dict[name] for name in self._joint_names]) images = {key: obs_dict[key] for key in self._image_keys} self.current_observation = {"agent_pos": joint_positions, "pixels": images} def _setup_spaces(self): """ Dynamically configure the observation and action spaces based on the robot's capabilities. Observation Space: - For keys with "image": A Box space with pixel values ranging from 0 to 255. - For non-image keys: A nested Dict space is created under 'observation.state' with a suitable range. Action Space: - The action space is defined as a Box space representing joint position commands. It is defined as relative (delta) or absolute, based on the configuration. """ self._get_observation() observation_spaces = {} # Define observation spaces for images and other states. if "pixels" in self.current_observation: prefix = "observation.images" observation_spaces = { f"{prefix}.{key}": gym.spaces.Box( low=0, high=255, shape=self.current_observation["pixels"][key].shape, dtype=np.uint8 ) for key in self.current_observation["pixels"] } observation_spaces["observation.state"] = gym.spaces.Box( low=0, high=10, shape=self.current_observation["agent_pos"].shape, dtype=np.float32, ) self.observation_space = gym.spaces.Dict(observation_spaces) # Define the action space for joint positions along with setting an intervention flag. action_dim = 3 bounds = {} bounds["min"] = -np.ones(action_dim) bounds["max"] = np.ones(action_dim) if self.use_gripper: action_dim += 1 bounds["min"] = np.concatenate([bounds["min"], [0]]) bounds["max"] = np.concatenate([bounds["max"], [2]]) self.action_space = gym.spaces.Box( low=bounds["min"], high=bounds["max"], shape=(action_dim,), dtype=np.float32, ) def reset(self, seed=None, options=None) -> tuple[dict[str, np.ndarray], dict[str, Any]]: """ Reset the environment to its initial state. This method resets the step counter and clears any episodic data. Args: seed: A seed for random number generation to ensure reproducibility. options: Additional options to influence the reset behavior. Returns: A tuple containing: - observation (dict): The initial sensor observation. - info (dict): A dictionary with supplementary information, including the key "is_intervention". """ super().reset(seed=seed, options=options) self.robot.reset() # Reset episode tracking variables. self.current_step = 0 self.episode_data = None self.current_observation = None self._get_observation() return self.current_observation, {"is_intervention": False} def step(self, action) -> tuple[dict[str, np.ndarray], float, bool, bool, dict[str, Any]]: """ Execute a single step within the environment using the specified action. The provided action is processed and sent to the robot as joint position commands that may be either absolute values or deltas based on the environment configuration. Args: action: The commanded joint positions as a numpy array or torch tensor. Returns: A tuple containing: - observation (dict): The new sensor observation after taking the step. 
- reward (float): The step reward (default is 0.0 within this wrapper). - terminated (bool): True if the episode has reached a terminal state. - truncated (bool): True if the episode was truncated (e.g., time constraints). - info (dict): Additional debugging information including intervention status. """ action_dict = {"delta_x": action[0], "delta_y": action[1], "delta_z": action[2]} # 1.0 action corresponds to no-op action action_dict["gripper"] = action[3] if self.use_gripper else 1.0 self.robot.send_action(action_dict) self._get_observation() if self.display_cameras: self.render() self.current_step += 1 reward = 0.0 terminated = False truncated = False return ( self.current_observation, reward, terminated, truncated, {"is_intervention": False}, ) def render(self): """ Render the current state of the environment by displaying the robot's camera feeds. """ import cv2 image_keys = [key for key in self.current_observation if "image" in key] for key in image_keys: cv2.imshow(key, cv2.cvtColor(self.current_observation[key].numpy(), cv2.COLOR_RGB2BGR)) cv2.waitKey(1) def close(self): """ Close the environment and clean up resources by disconnecting the robot. If the robot is currently connected, this method properly terminates the connection to ensure that all associated resources are released. """ if self.robot.is_connected: self.robot.disconnect() class AddJointVelocityToObservation(gym.ObservationWrapper): """ Wrapper that adds joint velocity information to the observation. This wrapper computes joint velocities by tracking changes in joint positions over time, and extends the observation space to include these velocities. """ def __init__(self, env, joint_velocity_limits=100.0, fps=30, num_dof=6): """ Initialize the joint velocity wrapper. Args: env: The environment to wrap. joint_velocity_limits: Maximum expected joint velocity for space bounds. fps: Frames per second used to calculate velocity (position delta / time). num_dof: Number of degrees of freedom (joints) in the robot. """ super().__init__(env) # Extend observation space to include joint velocities old_low = self.observation_space["observation.state"].low old_high = self.observation_space["observation.state"].high old_shape = self.observation_space["observation.state"].shape self.last_joint_positions = np.zeros(num_dof) new_low = np.concatenate([old_low, np.ones(num_dof) * -joint_velocity_limits]) new_high = np.concatenate([old_high, np.ones(num_dof) * joint_velocity_limits]) new_shape = (old_shape[0] + num_dof,) self.observation_space["observation.state"] = gym.spaces.Box( low=new_low, high=new_high, shape=new_shape, dtype=np.float32, ) self.dt = 1.0 / fps def observation(self, observation): """ Add joint velocity information to the observation. Args: observation: The original observation from the environment. Returns: The modified observation with joint velocities. """ joint_velocities = (observation["agent_pos"] - self.last_joint_positions) / self.dt self.last_joint_positions = observation["agent_pos"] observation["agent_pos"] = np.concatenate([observation["agent_pos"], joint_velocities], axis=-1) return observation class AddCurrentToObservation(gym.ObservationWrapper): """ Wrapper that adds motor current information to the observation. This wrapper extends the observation space to include the current values from each motor, providing information about the forces being applied. """ def __init__(self, env, max_current=500, num_dof=6): """ Initialize the current observation wrapper. Args: env: The environment to wrap. 
max_current: Maximum expected current for space bounds. num_dof: Number of degrees of freedom (joints) in the robot. """ super().__init__(env) # Extend observation space to include joint velocities old_low = self.observation_space["observation.state"].low old_high = self.observation_space["observation.state"].high old_shape = self.observation_space["observation.state"].shape new_low = np.concatenate([old_low, np.zeros(num_dof)]) new_high = np.concatenate([old_high, np.ones(num_dof) * max_current]) new_shape = (old_shape[0] + num_dof,) self.observation_space["observation.state"] = gym.spaces.Box( low=new_low, high=new_high, shape=new_shape, dtype=np.float32, ) def observation(self, observation): """ Add current information to the observation. Args: observation: The original observation from the environment. Returns: The modified observation with current values. """ present_current_dict = self.env.unwrapped.robot.bus.sync_read("Present_Current") present_current_observation = np.array( [present_current_dict[name] for name in self.env.unwrapped.robot.bus.motors] ) observation["agent_pos"] = np.concatenate( [observation["agent_pos"], present_current_observation], axis=-1 ) return observation class RewardWrapper(gym.Wrapper): def __init__(self, env, reward_classifier, device="cuda"): """ Wrapper to add reward prediction to the environment using a trained classifier. Args: env: The environment to wrap. reward_classifier: The reward classifier model. device: The device to run the model on. """ self.env = env self.device = device self.reward_classifier = torch.compile(reward_classifier) self.reward_classifier.to(self.device) def step(self, action): """ Execute a step and compute the reward using the classifier. Args: action: The action to take in the environment. Returns: Tuple of (observation, reward, terminated, truncated, info). """ observation, _, terminated, truncated, info = self.env.step(action) images = {} for key in observation: if "image" in key: images[key] = observation[key].to(self.device, non_blocking=(self.device == "cuda")) if images[key].dim() == 3: images[key] = images[key].unsqueeze(0) start_time = time.perf_counter() with torch.inference_mode(): success = ( self.reward_classifier.predict_reward(images, threshold=0.7) if self.reward_classifier is not None else 0.0 ) info["Reward classifier frequency"] = 1 / (time.perf_counter() - start_time) reward = 0.0 if success == 1.0: terminated = True reward = 1.0 return observation, reward, terminated, truncated, info def reset(self, seed=None, options=None): """ Reset the environment. Args: seed: Random seed for reproducibility. options: Additional reset options. Returns: The initial observation and info from the wrapped environment. """ return self.env.reset(seed=seed, options=options) class TimeLimitWrapper(gym.Wrapper): """ Wrapper that adds a time limit to episodes and tracks execution time. This wrapper terminates episodes after a specified time has elapsed, providing better control over episode length. """ def __init__(self, env, control_time_s, fps): """ Initialize the time limit wrapper. Args: env: The environment to wrap. control_time_s: Maximum episode duration in seconds. fps: Frames per second for calculating the maximum number of steps. """ self.env = env self.control_time_s = control_time_s self.fps = fps self.last_timestamp = 0.0 self.episode_time_in_s = 0.0 self.max_episode_steps = int(self.control_time_s * self.fps) self.current_step = 0 def step(self, action): """ Step the environment and track time elapsed. 
Args: action: The action to take in the environment. Returns: Tuple of (observation, reward, terminated, truncated, info). """ obs, reward, terminated, truncated, info = self.env.step(action) time_since_last_step = time.perf_counter() - self.last_timestamp self.episode_time_in_s += time_since_last_step self.last_timestamp = time.perf_counter() self.current_step += 1 # check if last timestep took more time than the expected fps if 1.0 / time_since_last_step < self.fps: logging.debug(f"Current timestep exceeded expected fps {self.fps}") if self.current_step >= self.max_episode_steps: terminated = True return obs, reward, terminated, truncated, info def reset(self, seed=None, options=None): """ Reset the environment and time tracking. Args: seed: Random seed for reproducibility. options: Additional reset options. Returns: The initial observation and info from the wrapped environment. """ self.episode_time_in_s = 0.0 self.last_timestamp = time.perf_counter() self.current_step = 0 return self.env.reset(seed=seed, options=options) class ImageCropResizeWrapper(gym.Wrapper): """ Wrapper that crops and resizes image observations. This wrapper processes image observations to focus on relevant regions by cropping and then resizing to a standard size. """ def __init__( self, env, crop_params_dict: dict[str, Annotated[tuple[int], 4]], resize_size=None, ): """ Initialize the image crop and resize wrapper. Args: env: The environment to wrap. crop_params_dict: Dictionary mapping image observation keys to crop parameters (top, left, height, width). resize_size: Target size for resized images (height, width). Defaults to (128, 128). """ super().__init__(env) self.env = env self.crop_params_dict = crop_params_dict print(f"obs_keys , {self.env.observation_space}") print(f"crop params dict {crop_params_dict.keys()}") for key_crop in crop_params_dict: if key_crop not in self.env.observation_space.keys(): # noqa: SIM118 raise ValueError(f"Key {key_crop} not in observation space") for key in crop_params_dict: new_shape = (3, resize_size[0], resize_size[1]) self.observation_space[key] = gym.spaces.Box(low=0, high=255, shape=new_shape) self.resize_size = resize_size if self.resize_size is None: self.resize_size = (128, 128) def step(self, action): """ Step the environment and process image observations. Args: action: The action to take in the environment. Returns: Tuple of (observation, reward, terminated, truncated, info) with processed images. 
""" obs, reward, terminated, truncated, info = self.env.step(action) for k in self.crop_params_dict: device = obs[k].device if obs[k].dim() >= 3: # Reshape to combine height and width dimensions for easier calculation batch_size = obs[k].size(0) channels = obs[k].size(1) flattened_spatial_dims = obs[k].view(batch_size, channels, -1) # Calculate standard deviation across spatial dimensions (H, W) # If any channel has std=0, all pixels in that channel have the same value # This is helpful if one camera mistakenly covered or the image is black std_per_channel = torch.std(flattened_spatial_dims, dim=2) if (std_per_channel <= 0.02).any(): logging.warning( f"Potential hardware issue detected: All pixels have the same value in observation {k}" ) if device == torch.device("mps:0"): obs[k] = obs[k].cpu() obs[k] = F.crop(obs[k], *self.crop_params_dict[k]) obs[k] = F.resize(obs[k], self.resize_size) # TODO (michel-aractingi): Bug in resize, it returns values outside [0, 1] obs[k] = obs[k].clamp(0.0, 1.0) obs[k] = obs[k].to(device) return obs, reward, terminated, truncated, info def reset(self, seed=None, options=None): """ Reset the environment and process image observations. Args: seed: Random seed for reproducibility. options: Additional reset options. Returns: Tuple of (observation, info) with processed images. """ obs, info = self.env.reset(seed=seed, options=options) for k in self.crop_params_dict: device = obs[k].device if device == torch.device("mps:0"): obs[k] = obs[k].cpu() obs[k] = F.crop(obs[k], *self.crop_params_dict[k]) obs[k] = F.resize(obs[k], self.resize_size) obs[k] = obs[k].clamp(0.0, 1.0) obs[k] = obs[k].to(device) return obs, info class ConvertToLeRobotObservation(gym.ObservationWrapper): """ Wrapper that converts standard observations to LeRobot format. This wrapper processes observations to match the expected format for LeRobot, including normalizing image values and moving tensors to the specified device. """ def __init__(self, env, device: str = "cpu"): """ Initialize the LeRobot observation converter. Args: env: The environment to wrap. device: Target device for the observation tensors. """ super().__init__(env) self.device = torch.device(device) def observation(self, observation): """ Convert observations to LeRobot format. Args: observation: The original observation from the environment. Returns: The processed observation with normalized images and proper tensor formats. """ observation = preprocess_observation(observation) observation = { key: observation[key].to(self.device, non_blocking=self.device.type == "cuda") for key in observation } return observation class ResetWrapper(gym.Wrapper): """ Wrapper that handles environment reset procedures. This wrapper provides additional functionality during environment reset, including the option to reset to a fixed pose or allow manual reset. """ def __init__( self, env: RobotEnv, reset_pose: np.ndarray | None = None, reset_time_s: float = 5, ): """ Initialize the reset wrapper. Args: env: The environment to wrap. reset_pose: Fixed joint positions to reset to. If None, manual reset is used. reset_time_s: Time in seconds to wait after reset or allowed for manual reset. """ super().__init__(env) self.reset_time_s = reset_time_s self.reset_pose = reset_pose self.robot = self.unwrapped.robot def reset(self, *, seed=None, options=None): """ Reset the environment with either fixed or manual reset procedure. If reset_pose is provided, the robot will move to that position. 
Otherwise, manual teleoperation control is allowed for reset_time_s seconds. Args: seed: Random seed for reproducibility. options: Additional reset options. Returns: The initial observation and info from the wrapped environment. """ start_time = time.perf_counter() if self.reset_pose is not None: log_say("Reset the environment.", play_sounds=True) reset_follower_position(self.unwrapped.robot, self.reset_pose) log_say("Reset the environment done.", play_sounds=True) if hasattr(self.env, "robot_leader"): self.env.robot_leader.bus.sync_write("Torque_Enable", 1) log_say("Reset the leader robot.", play_sounds=True) reset_follower_position(self.env.robot_leader, self.reset_pose) log_say("Reset the leader robot done.", play_sounds=True) else: log_say( f"Manually reset the environment for {self.reset_time_s} seconds.", play_sounds=True, ) start_time = time.perf_counter() while time.perf_counter() - start_time < self.reset_time_s: action = self.env.robot_leader.get_action() self.unwrapped.robot.send_action(action) log_say("Manual reset of the environment done.", play_sounds=True) busy_wait(self.reset_time_s - (time.perf_counter() - start_time)) return super().reset(seed=seed, options=options) class BatchCompatibleWrapper(gym.ObservationWrapper): """ Wrapper that ensures observations are compatible with batch processing. This wrapper adds a batch dimension to observations that don't already have one, making them compatible with models that expect batched inputs. """ def __init__(self, env): """ Initialize the batch compatibility wrapper. Args: env: The environment to wrap. """ super().__init__(env) def observation(self, observation: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]: """ Add batch dimensions to observations if needed. Args: observation: Dictionary of observation tensors. Returns: Dictionary of observation tensors with batch dimensions. """ for key in observation: if "image" in key and observation[key].dim() == 3: observation[key] = observation[key].unsqueeze(0) if "state" in key and observation[key].dim() == 1: observation[key] = observation[key].unsqueeze(0) if "velocity" in key and observation[key].dim() == 1: observation[key] = observation[key].unsqueeze(0) return observation class GripperPenaltyWrapper(gym.RewardWrapper): """ Wrapper that adds penalties for inefficient gripper commands. This wrapper modifies rewards to discourage excessive gripper movement or commands that attempt to move the gripper beyond its physical limits. """ def __init__(self, env, penalty: float = -0.1): """ Initialize the gripper penalty wrapper. Args: env: The environment to wrap. penalty: Negative reward value to apply for inefficient gripper actions. """ super().__init__(env) self.penalty = penalty self.last_gripper_state = None def reward(self, reward, action): """ Apply penalties to reward based on gripper actions. Args: reward: The original reward from the environment. action: The action that was taken. Returns: Modified reward with penalty applied if necessary. """ gripper_state_normalized = self.last_gripper_state / self.unwrapped.robot.config.max_gripper_pos action_normalized = action - 1.0 # action / MAX_GRIPPER_COMMAND gripper_penalty_bool = (gripper_state_normalized < 0.5 and action_normalized > 0.5) or ( gripper_state_normalized > 0.75 and action_normalized < -0.5 ) return reward + self.penalty * int(gripper_penalty_bool) def step(self, action): """ Step the environment and apply gripper penalties. Args: action: The action to take in the environment. 
Returns: Tuple of (observation, reward, terminated, truncated, info) with penalty applied. """ self.last_gripper_state = self.unwrapped.robot.bus.sync_read("Present_Position")["gripper"] gripper_action = action[-1] obs, reward, terminated, truncated, info = self.env.step(action) gripper_penalty = self.reward(reward, gripper_action) info["discrete_penalty"] = gripper_penalty return obs, reward, terminated, truncated, info def reset(self, **kwargs): """ Reset the environment and penalty tracking. Args: **kwargs: Keyword arguments passed to the wrapped environment's reset. Returns: The initial observation and info with gripper penalty initialized. """ self.last_gripper_state = None obs, info = super().reset(**kwargs) info["gripper_penalty"] = 0.0 return obs, info class GripperActionWrapper(gym.ActionWrapper): """ Wrapper that processes gripper control commands. This wrapper quantizes and processes gripper commands, adding a sleep time between consecutive gripper actions to prevent rapid toggling. """ def __init__(self, env, quantization_threshold: float = 0.2, gripper_sleep: float = 0.0): """ Initialize the gripper action wrapper. Args: env: The environment to wrap. quantization_threshold: Threshold below which gripper commands are quantized to zero. gripper_sleep: Minimum time in seconds between consecutive gripper commands. """ super().__init__(env) self.quantization_threshold = quantization_threshold self.gripper_sleep = gripper_sleep self.last_gripper_action_time = 0.0 self.last_gripper_action = None def action(self, action): """ Process gripper commands in the action. Args: action: The original action from the agent. Returns: Modified action with processed gripper command. """ if self.gripper_sleep > 0.0: if ( self.last_gripper_action is not None and time.perf_counter() - self.last_gripper_action_time < self.gripper_sleep ): action[-1] = self.last_gripper_action else: self.last_gripper_action_time = time.perf_counter() self.last_gripper_action = action[-1] gripper_command = action[-1] # Gripper actions are between 0, 2 # we want to quantize them to -1, 0 or 1 gripper_command = gripper_command - 1.0 if self.quantization_threshold is not None: # Quantize gripper command to -1, 0 or 1 gripper_command = ( np.sign(gripper_command) if abs(gripper_command) > self.quantization_threshold else 0.0 ) gripper_command = gripper_command * self.unwrapped.robot.config.max_gripper_pos gripper_state = self.unwrapped.robot.bus.sync_read("Present_Position")["gripper"] gripper_action_value = np.clip( gripper_state + gripper_command, 0, self.unwrapped.robot.config.max_gripper_pos ) action[-1] = gripper_action_value.item() return action def reset(self, **kwargs): """ Reset the gripper action tracking. Args: **kwargs: Keyword arguments passed to the wrapped environment's reset. Returns: The initial observation and info. """ obs, info = super().reset(**kwargs) self.last_gripper_action_time = 0.0 self.last_gripper_action = None return obs, info class EEObservationWrapper(gym.ObservationWrapper): """ Wrapper that adds end-effector pose information to observations. This wrapper computes the end-effector pose using forward kinematics and adds it to the observation space. """ def __init__(self, env, ee_pose_limits): """ Initialize the end-effector observation wrapper. Args: env: The environment to wrap. ee_pose_limits: Dictionary with 'min' and 'max' keys containing limits for EE pose. 
""" super().__init__(env) # Extend observation space to include end effector pose prev_space = self.observation_space["observation.state"] self.observation_space["observation.state"] = gym.spaces.Box( low=np.concatenate([prev_space.low, ee_pose_limits["min"]]), high=np.concatenate([prev_space.high, ee_pose_limits["max"]]), shape=(prev_space.shape[0] + 3,), dtype=np.float32, ) self.kinematics = RobotKinematics( urdf_path=env.unwrapped.robot.config.urdf_path, target_frame_name=env.unwrapped.robot.config.target_frame_name, ) def observation(self, observation): """ Add end-effector pose to the observation. Args: observation: Original observation from the environment. Returns: Enhanced observation with end-effector pose information. """ current_joint_pos = self.unwrapped.current_observation["agent_pos"] current_ee_pos = self.kinematics.forward_kinematics(current_joint_pos)[:3, 3] observation["agent_pos"] = np.concatenate([observation["agent_pos"], current_ee_pos], -1) return observation ########################################################### # Wrappers related to human intervention and input devices ########################################################### class BaseLeaderControlWrapper(gym.Wrapper): """ Base class for leader-follower robot control wrappers. This wrapper enables human intervention through a leader-follower robot setup, where the human can control a leader robot to guide the follower robot's movements. """ def __init__( self, env, teleop_device, end_effector_step_sizes, use_geared_leader_arm: bool = False, use_gripper=False, ): """ Initialize the base leader control wrapper. Args: env: The environment to wrap. teleop_device: The teleoperation device. use_geared_leader_arm: Whether to use a geared leader arm setup. use_gripper: Whether to include gripper control. """ super().__init__(env) self.robot_leader = teleop_device self.robot_follower = env.unwrapped.robot self.use_geared_leader_arm = use_geared_leader_arm self.use_gripper: bool = use_gripper self.end_effector_step_sizes = np.array(list(end_effector_step_sizes.values())) # Set up keyboard event tracking self._init_keyboard_events() self.event_lock = Lock() # Thread-safe access to events # Initialize robot control self.kinematics = RobotKinematics( urdf_path=env.unwrapped.robot.config.urdf_path, target_frame_name=env.unwrapped.robot.config.target_frame_name, ) self.leader_torque_enabled = True self.prev_leader_gripper = None # Configure leader arm # NOTE: Lower the gains of leader arm for automatic take-over # With lower gains we can manually move the leader arm without risk of injury to ourselves or the robot # With higher gains, it would be dangerous and difficult to modify the leader's pose while torque is enabled # Default value for P_coeff is 32 self.robot_leader.bus.sync_write("Torque_Enable", 1) for motor in self.robot_leader.bus.motors: self.robot_leader.bus.write("P_Coefficient", motor, 16) self.robot_leader.bus.write("I_Coefficient", motor, 0) self.robot_leader.bus.write("D_Coefficient", motor, 16) self.leader_tracking_error_queue = deque(maxlen=4) self._init_keyboard_listener() def _init_keyboard_events(self): """ Initialize the keyboard events dictionary. This method sets up tracking for keyboard events used for intervention control. It should be overridden in subclasses to add additional events. """ self.keyboard_events = { "episode_success": False, "episode_end": False, "rerecord_episode": False, } def _handle_key_press(self, key, keyboard_device): """ Handle key press events. 
Args: key: The key that was pressed. keyboard: The keyboard module with key definitions. This method should be overridden in subclasses for additional key handling. """ try: if key == keyboard_device.Key.esc: self.keyboard_events["episode_end"] = True return if key == keyboard_device.Key.left: self.keyboard_events["rerecord_episode"] = True return if hasattr(key, "char") and key.char == "s": logging.info("Key 's' pressed. Episode success triggered.") self.keyboard_events["episode_success"] = True return except Exception as e: logging.error(f"Error handling key press: {e}") def _init_keyboard_listener(self): """ Initialize the keyboard listener for intervention control. This method sets up keyboard event handling if not in headless mode. """ from pynput import keyboard as keyboard_device def on_press(key): with self.event_lock: self._handle_key_press(key, keyboard_device) self.listener = keyboard_device.Listener(on_press=on_press) self.listener.start() def _check_intervention(self): """ Check if human intervention is needed. Returns: Boolean indicating whether intervention is needed. This method should be overridden in subclasses with specific intervention logic. """ return False def _handle_intervention(self, action): """ Process actions during intervention mode. Args: action: The original action from the agent. Returns: Tuple of (modified_action, intervention_action). """ if self.leader_torque_enabled: self.robot_leader.bus.sync_write("Torque_Enable", 0) self.leader_torque_enabled = False leader_pos_dict = self.robot_leader.bus.sync_read("Present_Position") follower_pos_dict = self.robot_follower.bus.sync_read("Present_Position") leader_pos = np.array([leader_pos_dict[name] for name in leader_pos_dict]) follower_pos = np.array([follower_pos_dict[name] for name in follower_pos_dict]) self.leader_tracking_error_queue.append(np.linalg.norm(follower_pos[:-1] - leader_pos[:-1])) # [:3, 3] Last column of the transformation matrix corresponds to the xyz translation leader_ee = self.kinematics.forward_kinematics(leader_pos)[:3, 3] follower_ee = self.kinematics.forward_kinematics(follower_pos)[:3, 3] action = np.clip(leader_ee - follower_ee, -self.end_effector_step_sizes, self.end_effector_step_sizes) # Normalize the action to the range [-1, 1] action = action / self.end_effector_step_sizes if self.use_gripper: if self.prev_leader_gripper is None: self.prev_leader_gripper = np.clip( leader_pos[-1], 0, self.robot_follower.config.max_gripper_pos ) # Get gripper action delta based on leader pose leader_gripper = leader_pos[-1] gripper_delta = leader_gripper - self.prev_leader_gripper # Normalize by max angle and quantize to {0,1,2} normalized_delta = gripper_delta / self.robot_follower.config.max_gripper_pos if normalized_delta >= 0.3: gripper_action = 2 elif normalized_delta <= 0.1: gripper_action = 0 else: gripper_action = 1 action = np.append(action, gripper_action) return action def _handle_leader_teleoperation(self): """ Handle leader teleoperation in non-intervention mode. This method synchronizes the leader robot position with the follower. 
""" prev_leader_pos_dict = self.robot_leader.bus.sync_read("Present_Position") prev_leader_pos = np.array( [prev_leader_pos_dict[name] for name in prev_leader_pos_dict], dtype=np.float32 ) if not self.leader_torque_enabled: self.robot_leader.bus.sync_write("Torque_Enable", 1) self.leader_torque_enabled = True follower_pos_dict = self.robot_follower.bus.sync_read("Present_Position") follower_pos = np.array([follower_pos_dict[name] for name in follower_pos_dict], dtype=np.float32) goal_pos = {f"{motor}": follower_pos[i] for i, motor in enumerate(self.robot_leader.bus.motors)} self.robot_leader.bus.sync_write("Goal_Position", goal_pos) self.leader_tracking_error_queue.append(np.linalg.norm(follower_pos[:-1] - prev_leader_pos[:-1])) def step(self, action): """ Execute a step with possible human intervention. Args: action: The action to take in the environment. Returns: Tuple of (observation, reward, terminated, truncated, info). """ is_intervention = self._check_intervention() # NOTE: if is_intervention: action = self._handle_intervention(action) else: self._handle_leader_teleoperation() # NOTE: obs, reward, terminated, truncated, info = self.env.step(action) if isinstance(action, np.ndarray): action = torch.from_numpy(action) # Add intervention info info["is_intervention"] = is_intervention info["action_intervention"] = action self.prev_leader_gripper = np.clip( self.robot_leader.bus.sync_read("Present_Position")["gripper"], 0, self.robot_follower.config.max_gripper_pos, ) # Check for success or manual termination success = self.keyboard_events["episode_success"] terminated = terminated or self.keyboard_events["episode_end"] or success if success: reward = 1.0 logging.info("Episode ended successfully with reward 1.0") return obs, reward, terminated, truncated, info def reset(self, **kwargs): """ Reset the environment and intervention state. Args: **kwargs: Keyword arguments passed to the wrapped environment's reset. Returns: The initial observation and info. """ self.keyboard_events = dict.fromkeys(self.keyboard_events, False) self.leader_tracking_error_queue.clear() return super().reset(**kwargs) def close(self): """ Clean up resources, including stopping keyboard listener. Returns: Result of closing the wrapped environment. """ if hasattr(self, "listener") and self.listener is not None: self.listener.stop() return self.env.close() class GearedLeaderControlWrapper(BaseLeaderControlWrapper): """ Wrapper that enables manual intervention via keyboard. This wrapper extends the BaseLeaderControlWrapper to allow explicit toggling of human intervention mode with keyboard controls. """ def _init_keyboard_events(self): """ Initialize keyboard events including human intervention flag. Extends the base class dictionary with an additional flag for tracking intervention state toggled by keyboard. """ super()._init_keyboard_events() self.keyboard_events["human_intervention_step"] = False def _handle_key_press(self, key, keyboard_device): """ Handle key presses including space for intervention toggle. Args: key: The key that was pressed. keyboard: The keyboard module with key definitions. Extends the base handler to respond to space key for toggling intervention. """ super()._handle_key_press(key, keyboard_device) if key == keyboard_device.Key.space: if not self.keyboard_events["human_intervention_step"]: logging.info( "Space key pressed. Human intervention required.\n" "Place the leader in similar pose to the follower and press space again." 
) self.keyboard_events["human_intervention_step"] = True log_say("Human intervention step.", play_sounds=True) else: self.keyboard_events["human_intervention_step"] = False logging.info("Space key pressed for a second time.\nContinuing with policy actions.") log_say("Continuing with policy actions.", play_sounds=True) def _check_intervention(self): """ Check if human intervention is active based on keyboard toggle. Returns: Boolean indicating whether intervention mode is active. """ return self.keyboard_events["human_intervention_step"] class GearedLeaderAutomaticControlWrapper(BaseLeaderControlWrapper): """ Wrapper with automatic intervention based on error thresholds. This wrapper monitors the error between leader and follower positions and automatically triggers intervention when error exceeds thresholds. """ def __init__( self, env, teleop_device, end_effector_step_sizes, use_gripper=False, intervention_threshold=10.0, release_threshold=1e-2, ): """ Initialize the automatic intervention wrapper. Args: env: The environment to wrap. teleop_device: The teleoperation device. use_gripper: Whether to include gripper control. intervention_threshold: Error threshold to trigger intervention. release_threshold: Error threshold to release intervention. queue_size: Number of error measurements to track for smoothing. """ super().__init__(env, teleop_device, end_effector_step_sizes, use_gripper=use_gripper) # Error tracking parameters self.intervention_threshold = intervention_threshold # Threshold to trigger intervention self.release_threshold = release_threshold # Threshold to release intervention self.is_intervention_active = False self.start_time = time.perf_counter() def _check_intervention(self): """ Determine if intervention should occur based on the rate of change of leader-follower error in end_effector space. This method monitors the rate of change of leader-follower error in end_effector space and automatically triggers intervention when the rate of change exceeds the intervention threshold, releasing when it falls below the release threshold. Returns: Boolean indicating whether intervention should be active. """ # Condition for starting the intervention # If the error in teleoperation is too high, that means the a user has grasped the leader robot and he wants to take over if ( not self.is_intervention_active and len(self.leader_tracking_error_queue) == self.leader_tracking_error_queue.maxlen and np.var(list(self.leader_tracking_error_queue)[-2:]) > self.intervention_threshold ): self.is_intervention_active = True self.leader_tracking_error_queue.clear() log_say("Intervention started", play_sounds=True) return True # Track the error over time in leader_tracking_error_queue # If the variance of the tracking error is too low, that means the user has let go of the leader robot and the intervention is over if ( self.is_intervention_active and len(self.leader_tracking_error_queue) == self.leader_tracking_error_queue.maxlen and np.var(self.leader_tracking_error_queue) < self.release_threshold ): self.is_intervention_active = False self.leader_tracking_error_queue.clear() log_say("Intervention ended", play_sounds=True) return False # If not change has happened that merits a change in the intervention state, return the current state return self.is_intervention_active def reset(self, **kwargs): """ Reset error tracking on environment reset. Args: **kwargs: Keyword arguments passed to the wrapped environment's reset. Returns: The initial observation and info. 
""" self.is_intervention_active = False return super().reset(**kwargs) class GamepadControlWrapper(gym.Wrapper): """ Wrapper that allows controlling a gym environment with a gamepad. This wrapper intercepts the step method and allows human input via gamepad to override the agent's actions when desired. """ def __init__( self, env, teleop_device, # Accepts an instantiated teleoperator use_gripper=False, # This should align with teleop_device's config auto_reset=False, ): """ Initialize the gamepad controller wrapper. Args: env: The environment to wrap. teleop_device: The instantiated teleoperation device (e.g., GamepadTeleop). use_gripper: Whether to include gripper control (should match teleop_device.config.use_gripper). auto_reset: Whether to auto reset the environment when episode ends. """ super().__init__(env) self.teleop_device = teleop_device # Ensure the teleop_device is connected if it has a connect method if hasattr(self.teleop_device, "connect") and not self.teleop_device.is_connected: self.teleop_device.connect() # self.controller attribute is removed self.auto_reset = auto_reset # use_gripper from args should ideally match teleop_device.config.use_gripper # For now, we use the one passed, but it can lead to inconsistency if not set correctly from config self.use_gripper = use_gripper logging.info("Gamepad control wrapper initialized with provided teleop_device.") print( "Gamepad controls (managed by the provided teleop_device - specific button mappings might vary):" ) print(" Left analog stick: Move in X-Y plane") print(" Right analog stick: Move in Z axis (up/down)") print(" X/Square button: End episode (FAILURE)") print(" Y/Triangle button: End episode (SUCCESS)") print(" B/Circle button: Exit program") def get_teleop_commands( self, ) -> tuple[bool, np.ndarray, bool, bool, bool]: """ Get the current action from the gamepad if any input is active. Returns: Tuple containing: - is_active: Whether gamepad input is active (from teleop_device.gamepad.should_intervene()) - action: The action derived from gamepad input (from teleop_device.get_action()) - terminate_episode: Whether episode termination was requested - success: Whether episode success was signaled - rerecord_episode: Whether episode rerecording was requested """ if not hasattr(self.teleop_device, "gamepad") or self.teleop_device.gamepad is None: raise AttributeError( "teleop_device does not have a 'gamepad' attribute or it is None. Expected for GamepadControlWrapper." 
) # Get status flags from the underlying gamepad controller within the teleop_device self.teleop_device.gamepad.update() # Ensure gamepad state is fresh intervention_is_active = self.teleop_device.gamepad.should_intervene() episode_end_status = self.teleop_device.gamepad.get_episode_end_status() terminate_episode = episode_end_status is not None success = episode_end_status == "success" rerecord_episode = episode_end_status == "rerecord_episode" # Get the action dictionary from the teleop_device action_dict = self.teleop_device.get_action() # Convert action_dict to numpy array based on expected structure # Order: delta_x, delta_y, delta_z, gripper (if use_gripper) action_list = [action_dict["delta_x"], action_dict["delta_y"], action_dict["delta_z"]] if self.use_gripper: # GamepadTeleop returns gripper action as 0 (close), 1 (stay), 2 (open) # This needs to be consistent with what EEActionWrapper expects if it's used downstream # EEActionWrapper for gripper typically expects 0.0 (closed) to 2.0 (open) # For now, we pass the direct value from GamepadTeleop, ensure downstream compatibility. gripper_val = action_dict.get("gripper", 1.0) # Default to 1.0 (stay) if not present action_list.append(float(gripper_val)) gamepad_action_np = np.array(action_list, dtype=np.float32) return ( intervention_is_active, gamepad_action_np, terminate_episode, success, rerecord_episode, ) def step(self, action): """ Step the environment, using gamepad input to override actions when active. Args: action: Original action from agent. Returns: Tuple of (observation, reward, terminated, truncated, info). """ # Get gamepad state and action ( is_intervention, gamepad_action, terminate_episode, success, rerecord_episode, ) = self.get_teleop_commands() # Update episode ending state if requested if terminate_episode: logging.info(f"Episode manually ended: {'SUCCESS' if success else 'FAILURE'}") # Only override the action if gamepad is active action = gamepad_action if is_intervention else action # Step the environment obs, reward, terminated, truncated, info = self.env.step(action) # Add episode ending if requested via gamepad terminated = terminated or truncated or terminate_episode if success: reward = 1.0 logging.info("Episode ended successfully with reward 1.0") if isinstance(action, np.ndarray): action = torch.from_numpy(action) info["is_intervention"] = is_intervention # The original `BaseLeaderControlWrapper` puts `action_intervention` in info. # For Gamepad, if intervention, `gamepad_action` is the intervention. # If not intervention, policy's action is `action`. # For consistency, let's store the *human's* action if intervention occurred. info["action_intervention"] = action info["rerecord_episode"] = rerecord_episode # If episode ended, reset the state if terminated or truncated: # Add success/failure information to info dict info["next.success"] = success # Auto reset if configured if self.auto_reset: obs, reset_info = self.reset() info.update(reset_info) return obs, reward, terminated, truncated, info def close(self): """ Clean up resources when environment closes. Returns: Result of closing the wrapped environment. """ if hasattr(self.teleop_device, "disconnect"): self.teleop_device.disconnect() # Call the parent close method return self.env.close() class KeyboardControlWrapper(GamepadControlWrapper): """ Wrapper that allows controlling a gym environment with a keyboard. This wrapper intercepts the step method and allows human input via keyboard to override the agent's actions when desired. 
Inherits from GamepadControlWrapper to avoid code duplication. """ def __init__( self, env, teleop_device, # Accepts an instantiated teleoperator use_gripper=False, # This should align with teleop_device's config auto_reset=False, ): """ Initialize the gamepad controller wrapper. Args: env: The environment to wrap. teleop_device: The instantiated teleoperation device (e.g., GamepadTeleop). use_gripper: Whether to include gripper control (should match teleop_device.config.use_gripper). auto_reset: Whether to auto reset the environment when episode ends. """ super().__init__(env, teleop_device, use_gripper, auto_reset) self.is_intervention_active = False logging.info("Keyboard control wrapper initialized with provided teleop_device.") print("Keyboard controls:") print(" Arrow keys: Move in X-Y plane") print(" Shift and Shift_R: Move in Z axis") print(" Right Ctrl and Left Ctrl: Open and close gripper") print(" f: End episode with FAILURE") print(" s: End episode with SUCCESS") print(" r: End episode with RERECORD") print(" i: Start/Stop Intervention") def get_teleop_commands( self, ) -> tuple[bool, np.ndarray, bool, bool, bool]: action_dict = self.teleop_device.get_action() episode_end_status = None # Unroll the misc_keys_queue to check for events related to intervention, episode success, etc. while not self.teleop_device.misc_keys_queue.empty(): key = self.teleop_device.misc_keys_queue.get() if key == "i": self.is_intervention_active = not self.is_intervention_active elif key == "f": episode_end_status = "failure" elif key == "s": episode_end_status = "success" elif key == "r": episode_end_status = "rerecord_episode" terminate_episode = episode_end_status is not None success = episode_end_status == "success" rerecord_episode = episode_end_status == "rerecord_episode" # Convert action_dict to numpy array based on expected structure # Order: delta_x, delta_y, delta_z, gripper (if use_gripper) action_list = [action_dict["delta_x"], action_dict["delta_y"], action_dict["delta_z"]] if self.use_gripper: # GamepadTeleop returns gripper action as 0 (close), 1 (stay), 2 (open) # This needs to be consistent with what EEActionWrapper expects if it's used downstream # EEActionWrapper for gripper typically expects 0.0 (closed) to 2.0 (open) # For now, we pass the direct value from GamepadTeleop, ensure downstream compatibility. 
gripper_val = action_dict.get("gripper", 1.0) # Default to 1.0 (stay) if not present action_list.append(float(gripper_val)) gamepad_action_np = np.array(action_list, dtype=np.float32) return ( self.is_intervention_active, gamepad_action_np, terminate_episode, success, rerecord_episode, ) class GymHilDeviceWrapper(gym.Wrapper): def __init__(self, env, device="cpu"): super().__init__(env) self.device = device def step(self, action): obs, reward, terminated, truncated, info = self.env.step(action) for k in obs: obs[k] = obs[k].to(self.device) if "action_intervention" in info: # NOTE: This is a hack to ensure the action intervention is a float32 tensor and supported on MPS device info["action_intervention"] = info["action_intervention"].astype(np.float32) info["action_intervention"] = torch.from_numpy(info["action_intervention"]).to(self.device) return obs, reward, terminated, truncated, info def reset(self, *, seed: int | None = None, options: dict[str, Any] | None = None): obs, info = self.env.reset(seed=seed, options=options) for k in obs: obs[k] = obs[k].to(self.device) if "action_intervention" in info: # NOTE: This is a hack to ensure the action intervention is a float32 tensor and supported on MPS device info["action_intervention"] = info["action_intervention"].astype(np.float32) info["action_intervention"] = torch.from_numpy(info["action_intervention"]).to(self.device) return obs, info class GymHilObservationProcessorWrapper(gym.ObservationWrapper): def __init__(self, env: gym.Env): super().__init__(env) prev_space = self.observation_space new_space = {} for key in prev_space: if "pixels" in key: for k in prev_space["pixels"]: new_space[f"observation.images.{k}"] = gym.spaces.Box( 0.0, 255.0, shape=(3, 128, 128), dtype=np.uint8 ) if key == "agent_pos": new_space["observation.state"] = prev_space["agent_pos"] self.observation_space = gym.spaces.Dict(new_space) def observation(self, observation: dict[str, Any]) -> dict[str, Any]: return preprocess_observation(observation) ########################################################### # Factory functions ########################################################### def make_robot_env(cfg: EnvConfig) -> gym.Env: """ Factory function to create a robot environment. This function builds a robot environment with all necessary wrappers based on the provided configuration. Args: cfg: Configuration object containing environment parameters. Returns: A gym environment with all necessary wrappers applied. """ if cfg.type == "hil": import gym_hil # noqa: F401 # TODO (azouitine) env = gym.make( f"gym_hil/{cfg.task}", image_obs=True, render_mode="human", use_gripper=cfg.wrapper.use_gripper, gripper_penalty=cfg.wrapper.gripper_penalty, ) env = GymHilObservationProcessorWrapper(env=env) env = GymHilDeviceWrapper(env=env, device=cfg.device) env = BatchCompatibleWrapper(env=env) env = TorchActionWrapper(env=env, device=cfg.device) return env if not hasattr(cfg, "robot") or not hasattr(cfg, "teleop"): raise ValueError( "Configuration for 'gym_manipulator' must be HILSerlRobotEnvConfig with robot and teleop." 
) if cfg.robot is None: raise ValueError("RobotConfig (cfg.robot) must be provided for gym_manipulator environment.") robot = make_robot_from_config(cfg.robot) teleop_device = make_teleoperator_from_config(cfg.teleop) teleop_device.connect() # Create base environment env = RobotEnv( robot=robot, use_gripper=cfg.wrapper.use_gripper, display_cameras=cfg.wrapper.display_cameras if cfg.wrapper else False, ) # Add observation and image processing if cfg.wrapper: if cfg.wrapper.add_joint_velocity_to_observation: env = AddJointVelocityToObservation(env=env, fps=cfg.fps) if cfg.wrapper.add_current_to_observation: env = AddCurrentToObservation(env=env) if cfg.wrapper.add_ee_pose_to_observation: env = EEObservationWrapper(env=env, ee_pose_limits=robot.end_effector_bounds) env = ConvertToLeRobotObservation(env=env, device=cfg.device) if cfg.wrapper and cfg.wrapper.crop_params_dict is not None: env = ImageCropResizeWrapper( env=env, crop_params_dict=cfg.wrapper.crop_params_dict, resize_size=cfg.wrapper.resize_size, ) # Add reward computation and control wrappers reward_classifier = init_reward_classifier(cfg) if reward_classifier is not None: env = RewardWrapper(env=env, reward_classifier=reward_classifier, device=cfg.device) env = TimeLimitWrapper(env=env, control_time_s=cfg.wrapper.control_time_s, fps=cfg.fps) if cfg.wrapper.use_gripper and cfg.wrapper.gripper_penalty is not None: env = GripperPenaltyWrapper( env=env, penalty=cfg.wrapper.gripper_penalty, ) # Control mode specific wrappers control_mode = cfg.wrapper.control_mode if control_mode == "gamepad": assert isinstance(teleop_device, GamepadTeleop), ( "teleop_device must be an instance of GamepadTeleop for gamepad control mode" ) env = GamepadControlWrapper( env=env, teleop_device=teleop_device, use_gripper=cfg.wrapper.use_gripper, ) elif control_mode == "keyboard_ee": assert isinstance(teleop_device, KeyboardEndEffectorTeleop), ( "teleop_device must be an instance of KeyboardEndEffectorTeleop for keyboard control mode" ) env = KeyboardControlWrapper( env=env, teleop_device=teleop_device, use_gripper=cfg.wrapper.use_gripper, ) elif control_mode == "leader": env = GearedLeaderControlWrapper( env=env, teleop_device=teleop_device, end_effector_step_sizes=cfg.robot.end_effector_step_sizes, use_gripper=cfg.wrapper.use_gripper, ) elif control_mode == "leader_automatic": env = GearedLeaderAutomaticControlWrapper( env=env, teleop_device=teleop_device, end_effector_step_sizes=cfg.robot.end_effector_step_sizes, use_gripper=cfg.wrapper.use_gripper, ) else: raise ValueError(f"Invalid control mode: {control_mode}") env = ResetWrapper( env=env, reset_pose=cfg.wrapper.fixed_reset_joint_positions, reset_time_s=cfg.wrapper.reset_time_s, ) env = BatchCompatibleWrapper(env=env) env = TorchActionWrapper(env=env, device=cfg.device) return env def init_reward_classifier(cfg): """ Load a reward classifier policy from a pretrained path if configured. Args: cfg: The environment configuration containing classifier paths. Returns: The loaded classifier model or None if not configured. 
""" if cfg.reward_classifier_pretrained_path is None: return None from lerobot.policies.sac.reward_model.modeling_classifier import Classifier # Get device from config or default to CUDA device = getattr(cfg, "device", "cpu") # Load the classifier directly using from_pretrained classifier = Classifier.from_pretrained( pretrained_name_or_path=cfg.reward_classifier_pretrained_path, ) # Ensure model is on the correct device classifier.to(device) classifier.eval() # Set to evaluation mode return classifier ########################################################### # Record and replay functions ########################################################### def record_dataset(env, policy, cfg): """ Record a dataset of robot interactions using either a policy or teleop. This function runs episodes in the environment and records the observations, actions, and results for dataset creation. Args: env: The environment to record from. policy: Optional policy to generate actions (if None, uses teleop). cfg: Configuration object containing recording parameters like: - repo_id: Repository ID for dataset storage - dataset_root: Local root directory for dataset - num_episodes: Number of episodes to record - fps: Frames per second for recording - push_to_hub: Whether to push dataset to Hugging Face Hub - task: Name/description of the task being recorded - number_of_steps_after_success: Number of additional steps to continue recording after a success (reward=1) is detected. This helps collect more positive examples for reward classifier training. """ from lerobot.datasets.lerobot_dataset import LeRobotDataset # Setup initial action (zero action if using teleop) action = env.action_space.sample() * 0.0 action_names = ["delta_x_ee", "delta_y_ee", "delta_z_ee"] if cfg.wrapper.use_gripper: action_names.append("gripper_delta") # Configure dataset features based on environment spaces features = { "observation.state": { "dtype": "float32", "shape": env.observation_space["observation.state"].shape, "names": None, }, "action": { "dtype": "float32", "shape": (len(action_names),), "names": action_names, }, "next.reward": {"dtype": "float32", "shape": (1,), "names": None}, "next.done": {"dtype": "bool", "shape": (1,), "names": None}, "complementary_info.discrete_penalty": { "dtype": "float32", "shape": (1,), "names": ["discrete_penalty"], }, } # Add image features for key in env.observation_space: if "image" in key: features[key] = { "dtype": "video", "shape": env.observation_space[key].shape, "names": ["channels", "height", "width"], } # Create dataset dataset = LeRobotDataset.create( cfg.repo_id, cfg.fps, root=cfg.dataset_root, use_videos=True, image_writer_threads=4, image_writer_processes=0, features=features, ) # Record episodes episode_index = 0 recorded_action = None while episode_index < cfg.num_episodes: obs, _ = env.reset() start_episode_t = time.perf_counter() log_say(f"Recording episode {episode_index}", play_sounds=True) # Track success state collection success_detected = False success_steps_collected = 0 # Run episode steps while time.perf_counter() - start_episode_t < cfg.wrapper.control_time_s: start_loop_t = time.perf_counter() # Get action from policy if available if cfg.pretrained_policy_name_or_path is not None: action = policy.select_action(obs) # Step environment obs, reward, terminated, truncated, info = env.step(action) # Check if episode needs to be rerecorded if info.get("rerecord_episode", False): break # For teleop, get action from intervention recorded_action = { "action": 
info["action_intervention"].cpu().squeeze(0).float() if policy is None else action } # Process observation for dataset obs_processed = {k: v.cpu().squeeze(0).float() for k, v in obs.items()} # Check if we've just detected success if reward == 1.0 and not success_detected: success_detected = True logging.info("Success detected! Collecting additional success states.") # Add frame to dataset - continue marking as success even during extra collection steps frame = {**obs_processed, **recorded_action} # If we're in the success collection phase, keep marking rewards as 1.0 if success_detected: frame["next.reward"] = np.array([1.0], dtype=np.float32) else: frame["next.reward"] = np.array([reward], dtype=np.float32) # Only mark as done if we're truly done (reached end or collected enough success states) really_done = terminated or truncated if success_detected: success_steps_collected += 1 really_done = success_steps_collected >= cfg.number_of_steps_after_success frame["next.done"] = np.array([really_done], dtype=bool) frame["complementary_info.discrete_penalty"] = torch.tensor( [info.get("discrete_penalty", 0.0)], dtype=torch.float32 ) dataset.add_frame(frame, task=cfg.task) # Maintain consistent timing if cfg.fps: dt_s = time.perf_counter() - start_loop_t busy_wait(1 / cfg.fps - dt_s) # Check if we should end the episode if (terminated or truncated) and not success_detected: # Regular termination without success break elif success_detected and success_steps_collected >= cfg.number_of_steps_after_success: # We've collected enough success states logging.info(f"Collected {success_steps_collected} additional success states") break # Handle episode recording if info.get("rerecord_episode", False): dataset.clear_episode_buffer() logging.info(f"Re-recording episode {episode_index}") continue dataset.save_episode() episode_index += 1 # Finalize dataset # dataset.consolidate(run_compute_stats=True) if cfg.push_to_hub: dataset.push_to_hub() def replay_episode(env, cfg): """ Replay a recorded episode in the environment. This function loads actions from a previously recorded episode and executes them in the environment. Args: env: The environment to replay in. cfg: Configuration object containing replay parameters: - repo_id: Repository ID for dataset - dataset_root: Local root directory for dataset - episode: Episode ID to replay """ from lerobot.datasets.lerobot_dataset import LeRobotDataset dataset = LeRobotDataset(cfg.repo_id, root=cfg.dataset_root, episodes=[cfg.episode]) env.reset() actions = dataset.hf_dataset.select_columns("action") for idx in range(dataset.num_frames): start_episode_t = time.perf_counter() action = actions[idx]["action"] env.step(action) dt_s = time.perf_counter() - start_episode_t busy_wait(1 / 10 - dt_s) @parser.wrap() def main(cfg: EnvConfig): """Main entry point for the robot environment script. This function runs the robot environment in one of several modes based on the provided configuration. Args: cfg: Configuration object defining the run parameters, including mode (record, replay, random) and other settings. """ env = make_robot_env(cfg) if cfg.mode == "record": policy = None if cfg.pretrained_policy_name_or_path is not None: from lerobot.policies.sac.modeling_sac import SACPolicy policy = SACPolicy.from_pretrained(cfg.pretrained_policy_name_or_path) policy.to(cfg.device) policy.eval() record_dataset( env, policy=policy, cfg=cfg, ) exit() if cfg.mode == "replay": replay_episode( env, cfg=cfg, ) exit() env.reset() # Initialize the smoothed action as a random sample. 
smoothed_action = env.action_space.sample() * 0.0 # Smoothing coefficient (alpha) defines how much of the new random sample to mix in. # A value close to 0 makes the trajectory very smooth (slow to change), while a value close to 1 is less smooth. alpha = 1.0 num_episode = 0 successes = [] while num_episode < 10: start_loop_s = time.perf_counter() # Sample a new random action from the robot's action space. new_random_action = env.action_space.sample() # Update the smoothed action using an exponential moving average. smoothed_action = alpha * new_random_action + (1 - alpha) * smoothed_action # Execute the step: wrap the NumPy action in a torch tensor. obs, reward, terminated, truncated, info = env.step(smoothed_action) if terminated or truncated: successes.append(reward) env.reset() num_episode += 1 dt_s = time.perf_counter() - start_loop_s busy_wait(1 / cfg.fps - dt_s) logging.info(f"Success after 20 steps {successes}") logging.info(f"success rate {sum(successes) / len(successes)}") if __name__ == "__main__": main()
lerobot/src/lerobot/scripts/rl/gym_manipulator.py/0
{ "file_path": "lerobot/src/lerobot/scripts/rl/gym_manipulator.py", "repo_id": "lerobot", "token_count": 34160 }
210
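The wrappers in `gym_manipulator.py` above all follow the same gymnasium pattern: extend the relevant space in `__init__`, then transform data in `observation()`, `action()`, `reward()`, or `step()`. Below is a minimal, self-contained sketch of that `ObservationWrapper` pattern. It is illustrative only and not part of the file; the toy wrapper and the `Pendulum-v1` env are stand-ins for `AddCurrentToObservation`-style wrappers and the real `RobotEnv`.

```python
# Illustrative sketch only: mirrors how AddCurrentToObservation / EEObservationWrapper
# extend `observation_space` and append extra values to the state vector.
import gymnasium as gym
import numpy as np


class AppendZerosToObservation(gym.ObservationWrapper):
    """Toy wrapper that appends `extra_dim` placeholder sensor values to a Box observation."""

    def __init__(self, env, extra_dim: int = 2):
        super().__init__(env)
        old_space = env.observation_space
        self.extra_dim = extra_dim
        # Extend the Box space by concatenating the old bounds with bounds for the new values.
        self.observation_space = gym.spaces.Box(
            low=np.concatenate([old_space.low, np.zeros(extra_dim, dtype=np.float32)]),
            high=np.concatenate([old_space.high, np.ones(extra_dim, dtype=np.float32)]),
            dtype=np.float32,
        )

    def observation(self, observation):
        # Append the extra values, just like the real wrappers append motor currents or EE pose.
        return np.concatenate([observation, np.zeros(self.extra_dim, dtype=np.float32)])


env = AppendZerosToObservation(gym.make("Pendulum-v1"))
obs, _ = env.reset(seed=0)
print(obs.shape)  # (3 + 2,) = (5,)
```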
#!/usr/bin/env python # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging from functools import cached_property from lerobot.teleoperators.so100_leader.config_so100_leader import SO100LeaderConfig from lerobot.teleoperators.so100_leader.so100_leader import SO100Leader from ..teleoperator import Teleoperator from .config_bi_so100_leader import BiSO100LeaderConfig logger = logging.getLogger(__name__) class BiSO100Leader(Teleoperator): """ [Bimanual SO-100 Leader Arms](https://github.com/TheRobotStudio/SO-ARM100) designed by TheRobotStudio This bimanual leader arm can also be easily adapted to use SO-101 leader arms, just replace the SO100Leader class with SO101Leader and SO100LeaderConfig with SO101LeaderConfig. """ config_class = BiSO100LeaderConfig name = "bi_so100_leader" def __init__(self, config: BiSO100LeaderConfig): super().__init__(config) self.config = config left_arm_config = SO100LeaderConfig( id=f"{config.id}_left" if config.id else None, calibration_dir=config.calibration_dir, port=config.left_arm_port, ) right_arm_config = SO100LeaderConfig( id=f"{config.id}_right" if config.id else None, calibration_dir=config.calibration_dir, port=config.right_arm_port, ) self.left_arm = SO100Leader(left_arm_config) self.right_arm = SO100Leader(right_arm_config) @cached_property def action_features(self) -> dict[str, type]: return {f"left_{motor}.pos": float for motor in self.left_arm.bus.motors} | { f"right_{motor}.pos": float for motor in self.right_arm.bus.motors } @cached_property def feedback_features(self) -> dict[str, type]: return {} @property def is_connected(self) -> bool: return self.left_arm.is_connected and self.right_arm.is_connected def connect(self, calibrate: bool = True) -> None: self.left_arm.connect(calibrate) self.right_arm.connect(calibrate) @property def is_calibrated(self) -> bool: return self.left_arm.is_calibrated and self.right_arm.is_calibrated def calibrate(self) -> None: self.left_arm.calibrate() self.right_arm.calibrate() def configure(self) -> None: self.left_arm.configure() self.right_arm.configure() def setup_motors(self) -> None: self.left_arm.setup_motors() self.right_arm.setup_motors() def get_action(self) -> dict[str, float]: action_dict = {} # Add "left_" prefix left_action = self.left_arm.get_action() action_dict.update({f"left_{key}": value for key, value in left_action.items()}) # Add "right_" prefix right_action = self.right_arm.get_action() action_dict.update({f"right_{key}": value for key, value in right_action.items()}) return action_dict def send_feedback(self, feedback: dict[str, float]) -> None: # Remove "left_" prefix left_feedback = { key.removeprefix("left_"): value for key, value in feedback.items() if key.startswith("left_") } # Remove "right_" prefix right_feedback = { key.removeprefix("right_"): value for key, value in feedback.items() if key.startswith("right_") } if left_feedback: self.left_arm.send_feedback(left_feedback) if right_feedback: 
self.right_arm.send_feedback(right_feedback) def disconnect(self) -> None: self.left_arm.disconnect() self.right_arm.disconnect()
lerobot/src/lerobot/teleoperators/bi_so100_leader/bi_so100_leader.py/0
{ "file_path": "lerobot/src/lerobot/teleoperators/bi_so100_leader/bi_so100_leader.py", "repo_id": "lerobot", "token_count": 1658 }
211
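A hedged usage sketch for `BiSO100Leader` above: the serial ports and `id` are placeholders, and the `BiSO100LeaderConfig` keyword arguments are assumed to mirror the attributes read in `__init__` (`left_arm_port`, `right_arm_port`, `id`, `calibration_dir`).

```python
# Assumed usage sketch; the serial ports and id below are placeholders.
from lerobot.teleoperators.bi_so100_leader.bi_so100_leader import BiSO100Leader
from lerobot.teleoperators.bi_so100_leader.config_bi_so100_leader import BiSO100LeaderConfig

config = BiSO100LeaderConfig(
    id="bimanual_leader",           # placeholder id
    left_arm_port="/dev/ttyACM0",   # placeholder port
    right_arm_port="/dev/ttyACM1",  # placeholder port
)

leader = BiSO100Leader(config)
leader.connect(calibrate=True)

# Keys are prefixed per arm, e.g. "left_<motor>.pos" and "right_<motor>.pos".
action = leader.get_action()
print(list(action)[:3])

leader.disconnect()
```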
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards # prettier-ignore {{card_data}} --- # Model Card for {{ model_name | default("Model ID", true) }} <!-- Provide a quick summary of what the model is/does. --> {% if model_name == "smolvla" %} [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. {% elif model_name == "act" %} [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates. {% elif model_name == "tdmpc" %} [TD-MPC](https://huggingface.co/papers/2203.04955) combines model-free and model-based approaches to improve sample efficiency and performance in continuous control tasks by using a learned latent dynamics model and terminal value function. {% elif model_name == "diffusion" %} [Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation. {% elif model_name == "vqbet" %} [VQ-BET](https://huggingface.co/papers/2403.03181) combines vector-quantised action tokens with Behaviour Transformers to discretise control and achieve data-efficient imitation across diverse skills. {% elif model_name == "pi0" %} [Pi0](https://huggingface.co/papers/2410.24164) is a generalist vision-language-action transformer that converts multimodal observations and text instructions into robot actions for zero-shot task transfer. {% elif model_name == "pi0fast" %} [Pi0-Fast](https://huggingface.co/papers/2501.09747) is a variant of Pi0 that uses a new tokenization method called FAST, which enables training of an autoregressive vision-language-action policy for high-frequency robotic tasks with improved performance and reduced training time. {% elif model_name == "sac" %} [Soft Actor-Critic (SAC)](https://huggingface.co/papers/1801.01290) is an entropy-regularised actor-critic algorithm offering stable, sample-efficient learning in continuous-control environments. {% elif model_name == "reward_classifier" %} A reward classifier is a lightweight neural network that scores observations or trajectories for task success, providing a learned reward signal or offline evaluation when explicit rewards are unavailable. {% else %} _Model type not recognized — please update this template._ {% endif %} This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). 
Below is the short version of how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** {{ license | default("\[More Information Needed]", true) }}
lerobot/src/lerobot/templates/lerobot_modelcard_template.md/0
{ "file_path": "lerobot/src/lerobot/templates/lerobot_modelcard_template.md", "repo_id": "lerobot", "token_count": 1217 }
212
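The model card above is a Jinja template. Here is a minimal rendering sketch using `jinja2` directly; the file path and field values are placeholders, and LeRobot's actual pipeline may render it through `huggingface_hub` model-card utilities instead.

```python
# Sketch only: render the template with placeholder values.
from jinja2 import Template

# Placeholder path; in the repo the template lives under src/lerobot/templates/.
with open("lerobot_modelcard_template.md") as f:
    template = Template(f.read())

rendered = template.render(
    card_data="license: apache-2.0",  # assumed YAML front-matter payload
    model_name="act",
    license="apache-2.0",
)
print("\n".join(rendered.splitlines()[:5]))
```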
#!/usr/bin/env python

# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import platform
from contextlib import suppress
from queue import Empty
from typing import Any

from torch.multiprocessing import Queue


def get_last_item_from_queue(queue: Queue, block=True, timeout: float = 0.1) -> Any:
    """Return the most recent item in `queue`, discarding any older entries.

    If `block` is True, wait up to `timeout` seconds for a first item and return None if none
    arrives. The queue is then drained so that only the last (freshest) value is returned.
    """
    if block:
        try:
            item = queue.get(timeout=timeout)
        except Empty:
            return None
    else:
        item = None

    # Drain the queue and keep only the most recent item (e.g. the latest parameters)
    if platform.system() == "Darwin":
        # On Mac, avoid using `qsize` due to unreliable implementation.
        # There is a comment on `qsize` code in the Python source:
        # Raises NotImplementedError on Mac OSX because of broken sem_getvalue()
        try:
            while True:
                item = queue.get_nowait()
        except Empty:
            pass
        return item

    # Details about using qsize in https://github.com/huggingface/lerobot/issues/1523
    while queue.qsize() > 0:
        with suppress(Empty):
            item = queue.get_nowait()

    return item
lerobot/src/lerobot/utils/queue.py/0
{ "file_path": "lerobot/src/lerobot/utils/queue.py", "repo_id": "lerobot", "token_count": 585 }
213
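A small usage sketch for `get_last_item_from_queue`: it assumes the consumer only cares about the freshest entry (e.g. the latest policy parameters), and the short sleep is only there to let the multiprocessing feeder thread flush items before draining.

```python
# Sketch: drain a torch.multiprocessing Queue and keep only the newest entry.
import time

from torch.multiprocessing import Queue

from lerobot.utils.queue import get_last_item_from_queue

q = Queue()
for step in range(5):
    q.put({"step": step})
time.sleep(0.1)  # give the feeder thread time to flush items into the pipe

latest = get_last_item_from_queue(q, block=True, timeout=1.0)
print(latest)  # typically {"step": 4}; older entries are discarded
```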
#!/usr/bin/env python

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script provides a utility for saving a dataset as safetensors files for the purpose of testing
backward compatibility when updating the data format.

It loads episodes with `LeRobotDataset` and saves selected frames from the dataset into corresponding
safetensors files in a specified output directory.

If you know that your change will break backward compatibility, you should write a short-lived test by
modifying `tests/test_datasets.py::test_backward_compatibility` accordingly, and make sure this custom
test passes. Your custom test doesn't need to be merged into the `main` branch.

Then you need to run this script and update the tests artifacts.

Example usage:
    `python tests/artifacts/datasets/save_dataset_to_safetensors.py`
"""

import shutil
from pathlib import Path

from safetensors.torch import save_file

from lerobot.datasets.lerobot_dataset import LeRobotDataset


def save_dataset_to_safetensors(output_dir, repo_id="lerobot/pusht"):
    repo_dir = Path(output_dir) / repo_id

    if repo_dir.exists():
        shutil.rmtree(repo_dir)

    repo_dir.mkdir(parents=True, exist_ok=True)
    dataset = LeRobotDataset(
        repo_id=repo_id,
        episodes=[0],
    )

    # save 2 first frames of first episode
    i = dataset.episode_data_index["from"][0].item()
    save_file(dataset[i], repo_dir / f"frame_{i}.safetensors")
    save_file(dataset[i + 1], repo_dir / f"frame_{i + 1}.safetensors")

    # save 2 frames at the middle of first episode
    i = int((dataset.episode_data_index["to"][0].item() - dataset.episode_data_index["from"][0].item()) / 2)
    save_file(dataset[i], repo_dir / f"frame_{i}.safetensors")
    save_file(dataset[i + 1], repo_dir / f"frame_{i + 1}.safetensors")

    # save 2 last frames of first episode
    i = dataset.episode_data_index["to"][0].item()
    save_file(dataset[i - 2], repo_dir / f"frame_{i - 2}.safetensors")
    save_file(dataset[i - 1], repo_dir / f"frame_{i - 1}.safetensors")

    # TODO(rcadene): Enable testing on second and last episode
    # We currently can't because our test dataset only contains the first episode

    # # save 2 first frames of second episode
    # i = dataset.episode_data_index["from"][1].item()
    # save_file(dataset[i], repo_dir / f"frame_{i}.safetensors")
    # save_file(dataset[i + 1], repo_dir / f"frame_{i+1}.safetensors")

    # # save 2 last frames of second episode
    # i = dataset.episode_data_index["to"][1].item()
    # save_file(dataset[i - 2], repo_dir / f"frame_{i-2}.safetensors")
    # save_file(dataset[i - 1], repo_dir / f"frame_{i-1}.safetensors")

    # # save 2 last frames of last episode
    # i = dataset.episode_data_index["to"][-1].item()
    # save_file(dataset[i - 2], repo_dir / f"frame_{i-2}.safetensors")
    # save_file(dataset[i - 1], repo_dir / f"frame_{i-1}.safetensors")


if __name__ == "__main__":
    for dataset in [
        "lerobot/pusht",
        "lerobot/aloha_sim_insertion_human",
        "lerobot/xarm_lift_medium",
        "lerobot/nyu_franka_play_dataset",
        "lerobot/cmu_stretch",
    ]:
        save_dataset_to_safetensors("tests/artifacts/datasets", repo_id=dataset)
lerobot/tests/artifacts/datasets/save_dataset_to_safetensors.py/0
{ "file_path": "lerobot/tests/artifacts/datasets/save_dataset_to_safetensors.py", "repo_id": "lerobot", "token_count": 1380 }
214
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import shutil from pathlib import Path import torch from safetensors.torch import save_file from lerobot.configs.default import DatasetConfig from lerobot.configs.train import TrainPipelineConfig from lerobot.datasets.factory import make_dataset from lerobot.optim.factory import make_optimizer_and_scheduler from lerobot.policies.factory import make_policy, make_policy_config from lerobot.utils.random_utils import set_seed def get_policy_stats(ds_repo_id: str, policy_name: str, policy_kwargs: dict): set_seed(1337) train_cfg = TrainPipelineConfig( # TODO(rcadene, aliberts): remove dataset download dataset=DatasetConfig(repo_id=ds_repo_id, episodes=[0]), policy=make_policy_config(policy_name, push_to_hub=False, **policy_kwargs), ) train_cfg.validate() # Needed for auto-setting some parameters dataset = make_dataset(train_cfg) policy = make_policy(train_cfg.policy, ds_meta=dataset.meta) policy.train() optimizer, _ = make_optimizer_and_scheduler(train_cfg, policy) dataloader = torch.utils.data.DataLoader( dataset, num_workers=0, batch_size=train_cfg.batch_size, shuffle=False, ) batch = next(iter(dataloader)) loss, output_dict = policy.forward(batch) if output_dict is not None: output_dict = {k: v for k, v in output_dict.items() if isinstance(v, torch.Tensor)} output_dict["loss"] = loss else: output_dict = {"loss": loss} loss.backward() grad_stats = {} for key, param in policy.named_parameters(): if param.requires_grad: grad_stats[f"{key}_mean"] = param.grad.mean() grad_stats[f"{key}_std"] = ( param.grad.std() if param.grad.numel() > 1 else torch.tensor(float(0.0)) ) optimizer.step() param_stats = {} for key, param in policy.named_parameters(): param_stats[f"{key}_mean"] = param.mean() param_stats[f"{key}_std"] = param.std() if param.numel() > 1 else torch.tensor(float(0.0)) optimizer.zero_grad() policy.reset() # HACK: We reload a batch with no delta_indices as `select_action` won't expect a timestamps dimension # We simulate having an environment using a dataset by setting delta_indices to None and dropping tensors # indicating padding (those ending with "_is_pad") dataset.delta_indices = None batch = next(iter(dataloader)) obs = {} for k in batch: # TODO: regenerate the safetensors # for backward compatibility if k.endswith("_is_pad"): continue # for backward compatibility if k == "task": continue if k.startswith("observation"): obs[k] = batch[k] if hasattr(train_cfg.policy, "n_action_steps"): actions_queue = train_cfg.policy.n_action_steps else: actions_queue = train_cfg.policy.n_action_repeats actions = {str(i): policy.select_action(obs).contiguous() for i in range(actions_queue)} return output_dict, grad_stats, param_stats, actions def save_policy_to_safetensors(output_dir: Path, ds_repo_id: str, policy_name: str, policy_kwargs: dict): if output_dir.exists(): print(f"Overwrite existing safetensors in '{output_dir}':") print(f" - Validate with: `git add {output_dir}`") 
print(f" - Revert with: `git checkout -- {output_dir}`") shutil.rmtree(output_dir) output_dir.mkdir(parents=True, exist_ok=True) output_dict, grad_stats, param_stats, actions = get_policy_stats(ds_repo_id, policy_name, policy_kwargs) save_file(output_dict, output_dir / "output_dict.safetensors") save_file(grad_stats, output_dir / "grad_stats.safetensors") save_file(param_stats, output_dir / "param_stats.safetensors") save_file(actions, output_dir / "actions.safetensors") if __name__ == "__main__": artifacts_cfg = [ ("lerobot/xarm_lift_medium", "tdmpc", {"use_mpc": False}, "use_policy"), ("lerobot/xarm_lift_medium", "tdmpc", {"use_mpc": True}, "use_mpc"), ( "lerobot/pusht", "diffusion", { "n_action_steps": 8, "num_inference_steps": 10, "down_dims": [128, 256, 512], }, "", ), ("lerobot/aloha_sim_insertion_human", "act", {"n_action_steps": 10}, ""), ( "lerobot/aloha_sim_insertion_human", "act", {"n_action_steps": 1000, "chunk_size": 1000}, "1000_steps", ), ] if len(artifacts_cfg) == 0: raise RuntimeError("No policies were provided!") for ds_repo_id, policy, policy_kwargs, file_name_extra in artifacts_cfg: ds_name = ds_repo_id.split("/")[-1] output_dir = Path("tests/artifacts/policies") / f"{ds_name}_{policy}_{file_name_extra}" save_policy_to_safetensors(output_dir, ds_repo_id, policy, policy_kwargs)
lerobot/tests/artifacts/policies/save_policy_to_safetensors.py/0
{ "file_path": "lerobot/tests/artifacts/policies/save_policy_to_safetensors.py", "repo_id": "lerobot", "token_count": 2333 }
215
#!/usr/bin/env python # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import traceback import pytest from serial import SerialException from lerobot.configs.types import FeatureType, PolicyFeature from tests.utils import DEVICE # Import fixture modules as plugins pytest_plugins = [ "tests.fixtures.dataset_factories", "tests.fixtures.files", "tests.fixtures.hub", "tests.fixtures.optimizers", ] def pytest_collection_finish(): print(f"\nTesting with {DEVICE=}") def _check_component_availability(component_type, available_components, make_component): """Generic helper to check if a hardware component is available""" if component_type not in available_components: raise ValueError( f"The {component_type} type is not valid. Expected one of these '{available_components}'" ) try: component = make_component(component_type) component.connect() del component return True except Exception as e: print(f"\nA {component_type} is not available.") if isinstance(e, ModuleNotFoundError): print(f"\nInstall module '{e.name}'") elif isinstance(e, SerialException): print("\nNo physical device detected.") elif isinstance(e, ValueError) and "camera_index" in str(e): print("\nNo physical camera detected.") else: traceback.print_exc() return False @pytest.fixture def patch_builtins_input(monkeypatch): def print_text(text=None): if text is not None: print(text) monkeypatch.setattr("builtins.input", print_text) @pytest.fixture def policy_feature_factory(): """PolicyFeature factory""" def _pf(ft: FeatureType, shape: tuple[int, ...]) -> PolicyFeature: return PolicyFeature(type=ft, shape=shape) return _pf def assert_contract_is_typed(features: dict[str, PolicyFeature]) -> None: assert isinstance(features, dict) assert all(isinstance(k, str) for k in features.keys()) assert all(isinstance(v, PolicyFeature) for v in features.values())
lerobot/tests/conftest.py/0
{ "file_path": "lerobot/tests/conftest.py", "repo_id": "lerobot", "token_count": 928 }
216
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import pytest import torch from lerobot.optim.optimizers import AdamConfig from lerobot.optim.schedulers import VQBeTSchedulerConfig @pytest.fixture def model_params(): return [torch.nn.Parameter(torch.randn(10, 10))] @pytest.fixture def optimizer(model_params): optimizer = AdamConfig().build(model_params) # Dummy step to populate state loss = sum(param.sum() for param in model_params) loss.backward() optimizer.step() return optimizer @pytest.fixture def scheduler(optimizer): config = VQBeTSchedulerConfig(num_warmup_steps=10, num_vqvae_training_steps=20, num_cycles=0.5) return config.build(optimizer, num_training_steps=100)
lerobot/tests/fixtures/optimizers.py/0
{ "file_path": "lerobot/tests/fixtures/optimizers.py", "repo_id": "lerobot", "token_count": 406 }
217
import torch from lerobot.processor.pipeline import ( RobotProcessor, TransitionKey, _default_batch_to_transition, _default_transition_to_batch, ) def _dummy_batch(): """Create a dummy batch using the new format with observation.* and next.* keys.""" return { "observation.image.left": torch.randn(1, 3, 128, 128), "observation.image.right": torch.randn(1, 3, 128, 128), "observation.state": torch.tensor([[0.1, 0.2, 0.3, 0.4]]), "action": torch.tensor([[0.5]]), "next.reward": 1.0, "next.done": False, "next.truncated": False, "info": {"key": "value"}, } def test_observation_grouping_roundtrip(): """Test that observation.* keys are properly grouped and ungrouped.""" proc = RobotProcessor([]) batch_in = _dummy_batch() batch_out = proc(batch_in) # Check that all observation.* keys are preserved original_obs_keys = {k: v for k, v in batch_in.items() if k.startswith("observation.")} reconstructed_obs_keys = {k: v for k, v in batch_out.items() if k.startswith("observation.")} assert set(original_obs_keys.keys()) == set(reconstructed_obs_keys.keys()) # Check tensor values assert torch.allclose(batch_out["observation.image.left"], batch_in["observation.image.left"]) assert torch.allclose(batch_out["observation.image.right"], batch_in["observation.image.right"]) assert torch.allclose(batch_out["observation.state"], batch_in["observation.state"]) # Check other fields assert torch.allclose(batch_out["action"], batch_in["action"]) assert batch_out["next.reward"] == batch_in["next.reward"] assert batch_out["next.done"] == batch_in["next.done"] assert batch_out["next.truncated"] == batch_in["next.truncated"] assert batch_out["info"] == batch_in["info"] def test_batch_to_transition_observation_grouping(): """Test that _default_batch_to_transition correctly groups observation.* keys.""" batch = { "observation.image.top": torch.randn(1, 3, 128, 128), "observation.image.left": torch.randn(1, 3, 128, 128), "observation.state": [1, 2, 3, 4], "action": "action_data", "next.reward": 1.5, "next.done": True, "next.truncated": False, "info": {"episode": 42}, } transition = _default_batch_to_transition(batch) # Check observation is a dict with all observation.* keys assert isinstance(transition[TransitionKey.OBSERVATION], dict) assert "observation.image.top" in transition[TransitionKey.OBSERVATION] assert "observation.image.left" in transition[TransitionKey.OBSERVATION] assert "observation.state" in transition[TransitionKey.OBSERVATION] # Check values are preserved assert torch.allclose( transition[TransitionKey.OBSERVATION]["observation.image.top"], batch["observation.image.top"] ) assert torch.allclose( transition[TransitionKey.OBSERVATION]["observation.image.left"], batch["observation.image.left"] ) assert transition[TransitionKey.OBSERVATION]["observation.state"] == [1, 2, 3, 4] # Check other fields assert transition[TransitionKey.ACTION] == "action_data" assert transition[TransitionKey.REWARD] == 1.5 assert transition[TransitionKey.DONE] assert not transition[TransitionKey.TRUNCATED] assert transition[TransitionKey.INFO] == {"episode": 42} assert transition[TransitionKey.COMPLEMENTARY_DATA] == {} def test_transition_to_batch_observation_flattening(): """Test that _default_transition_to_batch correctly flattens observation dict.""" observation_dict = { "observation.image.top": torch.randn(1, 3, 128, 128), "observation.image.left": torch.randn(1, 3, 128, 128), "observation.state": [1, 2, 3, 4], } transition = { TransitionKey.OBSERVATION: observation_dict, TransitionKey.ACTION: "action_data", TransitionKey.REWARD: 
1.5, TransitionKey.DONE: True, TransitionKey.TRUNCATED: False, TransitionKey.INFO: {"episode": 42}, TransitionKey.COMPLEMENTARY_DATA: {}, } batch = _default_transition_to_batch(transition) # Check that observation.* keys are flattened back to batch assert "observation.image.top" in batch assert "observation.image.left" in batch assert "observation.state" in batch # Check values are preserved assert torch.allclose(batch["observation.image.top"], observation_dict["observation.image.top"]) assert torch.allclose(batch["observation.image.left"], observation_dict["observation.image.left"]) assert batch["observation.state"] == [1, 2, 3, 4] # Check other fields are mapped to next.* format assert batch["action"] == "action_data" assert batch["next.reward"] == 1.5 assert batch["next.done"] assert not batch["next.truncated"] assert batch["info"] == {"episode": 42} def test_no_observation_keys(): """Test behavior when there are no observation.* keys.""" batch = { "action": "action_data", "next.reward": 2.0, "next.done": False, "next.truncated": True, "info": {"test": "no_obs"}, } transition = _default_batch_to_transition(batch) # Observation should be None when no observation.* keys assert transition[TransitionKey.OBSERVATION] is None # Check other fields assert transition[TransitionKey.ACTION] == "action_data" assert transition[TransitionKey.REWARD] == 2.0 assert not transition[TransitionKey.DONE] assert transition[TransitionKey.TRUNCATED] assert transition[TransitionKey.INFO] == {"test": "no_obs"} # Round trip should work reconstructed_batch = _default_transition_to_batch(transition) assert reconstructed_batch["action"] == "action_data" assert reconstructed_batch["next.reward"] == 2.0 assert not reconstructed_batch["next.done"] assert reconstructed_batch["next.truncated"] assert reconstructed_batch["info"] == {"test": "no_obs"} def test_minimal_batch(): """Test with minimal batch containing only observation.* and action.""" batch = {"observation.state": "minimal_state", "action": "minimal_action"} transition = _default_batch_to_transition(batch) # Check observation assert transition[TransitionKey.OBSERVATION] == {"observation.state": "minimal_state"} assert transition[TransitionKey.ACTION] == "minimal_action" # Check defaults assert transition[TransitionKey.REWARD] == 0.0 assert not transition[TransitionKey.DONE] assert not transition[TransitionKey.TRUNCATED] assert transition[TransitionKey.INFO] == {} assert transition[TransitionKey.COMPLEMENTARY_DATA] == {} # Round trip reconstructed_batch = _default_transition_to_batch(transition) assert reconstructed_batch["observation.state"] == "minimal_state" assert reconstructed_batch["action"] == "minimal_action" assert reconstructed_batch["next.reward"] == 0.0 assert not reconstructed_batch["next.done"] assert not reconstructed_batch["next.truncated"] assert reconstructed_batch["info"] == {} def test_empty_batch(): """Test behavior with empty batch.""" batch = {} transition = _default_batch_to_transition(batch) # All fields should have defaults assert transition[TransitionKey.OBSERVATION] is None assert transition[TransitionKey.ACTION] is None assert transition[TransitionKey.REWARD] == 0.0 assert not transition[TransitionKey.DONE] assert not transition[TransitionKey.TRUNCATED] assert transition[TransitionKey.INFO] == {} assert transition[TransitionKey.COMPLEMENTARY_DATA] == {} # Round trip reconstructed_batch = _default_transition_to_batch(transition) assert reconstructed_batch["action"] is None assert reconstructed_batch["next.reward"] == 0.0 assert not 
reconstructed_batch["next.done"] assert not reconstructed_batch["next.truncated"] assert reconstructed_batch["info"] == {} def test_complex_nested_observation(): """Test with complex nested observation data.""" batch = { "observation.image.top": {"image": torch.randn(1, 3, 128, 128), "timestamp": 1234567890}, "observation.image.left": {"image": torch.randn(1, 3, 128, 128), "timestamp": 1234567891}, "observation.state": torch.randn(7), "action": torch.randn(8), "next.reward": 3.14, "next.done": False, "next.truncated": True, "info": {"episode_length": 200, "success": True}, } transition = _default_batch_to_transition(batch) reconstructed_batch = _default_transition_to_batch(transition) # Check that all observation keys are preserved original_obs_keys = {k for k in batch if k.startswith("observation.")} reconstructed_obs_keys = {k for k in reconstructed_batch if k.startswith("observation.")} assert original_obs_keys == reconstructed_obs_keys # Check tensor values assert torch.allclose(batch["observation.state"], reconstructed_batch["observation.state"]) # Check nested dict with tensors assert torch.allclose( batch["observation.image.top"]["image"], reconstructed_batch["observation.image.top"]["image"] ) assert torch.allclose( batch["observation.image.left"]["image"], reconstructed_batch["observation.image.left"]["image"] ) # Check action tensor assert torch.allclose(batch["action"], reconstructed_batch["action"]) # Check other fields assert batch["next.reward"] == reconstructed_batch["next.reward"] assert batch["next.done"] == reconstructed_batch["next.done"] assert batch["next.truncated"] == reconstructed_batch["next.truncated"] assert batch["info"] == reconstructed_batch["info"] def test_custom_converter(): """Test that custom converters can still be used.""" def to_tr(batch): # Custom converter that modifies the reward tr = _default_batch_to_transition(batch) # Double the reward reward = tr.get(TransitionKey.REWARD, 0.0) new_tr = tr.copy() new_tr[TransitionKey.REWARD] = reward * 2 if reward is not None else 0.0 return new_tr def to_batch(tr): batch = _default_transition_to_batch(tr) return batch processor = RobotProcessor(steps=[], to_transition=to_tr, to_output=to_batch) batch = { "observation.state": torch.randn(1, 4), "action": torch.randn(1, 2), "next.reward": 1.0, "next.done": False, } result = processor(batch) # Check the reward was doubled by our custom converter assert result["next.reward"] == 2.0 assert torch.allclose(result["observation.state"], batch["observation.state"]) assert torch.allclose(result["action"], batch["action"])
lerobot/tests/processor/test_batch_conversion.py/0
{ "file_path": "lerobot/tests/processor/test_batch_conversion.py", "repo_id": "lerobot", "token_count": 3950 }
218
#!/usr/bin/env python # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import multiprocessing import os import signal import threading from unittest.mock import patch import pytest from lerobot.utils.process import ProcessSignalHandler # Fixture to reset shutdown_event_counter and original signal handlers before and after each test @pytest.fixture(autouse=True) def reset_globals_and_handlers(): # Store original signal handlers original_handlers = { sig: signal.getsignal(sig) for sig in [signal.SIGINT, signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT] if hasattr(signal, sig.name) } yield # Restore original signal handlers for sig, handler in original_handlers.items(): signal.signal(sig, handler) def test_setup_process_handlers_event_with_threads(): """Test that setup_process_handlers returns the correct event type.""" handler = ProcessSignalHandler(use_threads=True) shutdown_event = handler.shutdown_event assert isinstance(shutdown_event, threading.Event), "Should be a threading.Event" assert not shutdown_event.is_set(), "Event should initially be unset" def test_setup_process_handlers_event_with_processes(): """Test that setup_process_handlers returns the correct event type.""" handler = ProcessSignalHandler(use_threads=False) shutdown_event = handler.shutdown_event assert isinstance(shutdown_event, type(multiprocessing.Event())), "Should be a multiprocessing.Event" assert not shutdown_event.is_set(), "Event should initially be unset" @pytest.mark.parametrize("use_threads", [True, False]) @pytest.mark.parametrize( "sig", [ signal.SIGINT, signal.SIGTERM, # SIGHUP and SIGQUIT are not reliably available on all platforms (e.g. Windows) pytest.param( signal.SIGHUP, marks=pytest.mark.skipif(not hasattr(signal, "SIGHUP"), reason="SIGHUP not available"), ), pytest.param( signal.SIGQUIT, marks=pytest.mark.skipif(not hasattr(signal, "SIGQUIT"), reason="SIGQUIT not available"), ), ], ) def test_signal_handler_sets_event(use_threads, sig): """Test that the signal handler sets the event on receiving a signal.""" handler = ProcessSignalHandler(use_threads=use_threads) shutdown_event = handler.shutdown_event assert handler.counter == 0 os.kill(os.getpid(), sig) # In some environments, the signal might take a moment to be handled. shutdown_event.wait(timeout=1.0) assert shutdown_event.is_set(), f"Event should be set after receiving signal {sig}" # Ensure the internal counter was incremented assert handler.counter == 1 @pytest.mark.parametrize("use_threads", [True, False]) @patch("sys.exit") def test_force_shutdown_on_second_signal(mock_sys_exit, use_threads): """Test that a second signal triggers a force shutdown.""" handler = ProcessSignalHandler(use_threads=use_threads) os.kill(os.getpid(), signal.SIGINT) # Give a moment for the first signal to be processed import time time.sleep(0.1) os.kill(os.getpid(), signal.SIGINT) time.sleep(0.1) assert handler.counter == 2 mock_sys_exit.assert_called_once_with(1)
lerobot/tests/utils/test_process.py/0
{ "file_path": "lerobot/tests/utils/test_process.py", "repo_id": "lerobot", "token_count": 1332 }
219
# Model arguments model_name_or_path: Qwen/Qwen2.5-Coder-7B-Instruct model_revision: main torch_dtype: bfloat16 attn_implementation: flash_attention_2 # Data training arguments dataset_name: open-r1/codeforces dataset_prompt_column: prompt dataset_config: verifiable-prompts dataset_test_split: test dataset_train_split: train system_prompt: "You are a helpful AI Assistant that provides well-reasoned and detailed responses. You first think about the reasoning process as an internal monologue and then provide the user with the answer. Respond in the following format: <think>\n...\n</think>\n<answer>\n...\n</answer>" # GRPO trainer config callbacks: - push_to_hub_revision benchmarks: - lcb_v4 beta: 0.0 loss_type: dr_grpo scale_rewards: false bf16: true do_eval: false eval_strategy: "no" use_vllm: true vllm_device: auto vllm_gpu_memory_utilization: 0.7 gradient_accumulation_steps: 32 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false hub_model_id: open-r1/Qwen2.5-Coder-7B-Instruct-Codeforces-GRPO hub_model_revision: v01.00 hub_strategy: every_save learning_rate: 1.0e-06 log_completions: true log_level: info logging_first_step: true logging_steps: 1 logging_strategy: steps lr_scheduler_type: constant_with_warmup max_grad_norm: 0.2 max_prompt_length: 2000 max_completion_length: 8192 max_steps: -1 num_generations: 16 # aiming for 1k optimization steps # total_samples_per_batch = num_gpus * grad_accumulation_steps * per_device_batch_size = 8 * 32 * 4 = 1024 # unique_prompts_per_batch = total_samples_per_batch / num_generations = 1024 / 16 = 64 # #dataset ~= 16k (8k * 2, for python and cpp) # global_steps_per_epoch = #dataset / unique_prompts_per_batch = 16k / 64 ~= 250 # epochs_for_1k_steps = 1000/250 = 4 epochs num_train_epochs: 4 output_dir: data/Qwen2.5-Coder-7B-Instruct-Codeforces-GRPO_v01.00 overwrite_output_dir: true per_device_train_batch_size: 4 push_to_hub: true report_to: - wandb reward_funcs: - cf_code - code_format reward_weights: - 1.0 - 0.1 save_strategy: "steps" save_steps: 0.05 save_total_limit: 1 seed: 42 temperature: 0.7 wandb_entity: huggingface wandb_project: open-r1 warmup_ratio: 0.1 mask_truncated_completions: true # for each generation, evaluate these many test cases in parallel, then check if any of them failed (0 score): if so stop evaluating # otherwise continue with the next batch of test cases. Useful to avoid overloading the eval server + save time on wrong solutions code_eval_test_batch_size: -1 code_eval_scoring_mode: weighted_sum
open-r1/recipes/Qwen2.5-Coder-7B-Instruct/grpo/config_codeforces.yaml/0
{ "file_path": "open-r1/recipes/Qwen2.5-Coder-7B-Instruct/grpo/config_codeforces.yaml", "repo_id": "open-r1", "token_count": 926 }
220
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # example usage python scripts/filter_dataset.py --config recipes/dataset_filtering/config_demo.yaml import logging from dataclasses import dataclass from typing import Optional import torch import sys import datasets import transformers from datasets import load_dataset from transformers import set_seed from open_r1.configs import GRPOConfig, GRPOScriptArguments from open_r1.rewards import get_reward_funcs from open_r1.utils import get_tokenizer from trl import ModelConfig, TrlParser from trl.data_utils import apply_chat_template from vllm import LLM, SamplingParams logger = logging.getLogger(__name__) @dataclass class PassRateScriptArguments(GRPOScriptArguments): # we can be lazy and just use the same script args as GRPO output_dataset_name: Optional[str] = None pass_rate_min: float = 0.1 pass_rate_max: float = 0.9 dataset_start_index: Optional[int] = None dataset_end_index: Optional[int] = None dataset_split: str = "train" def main(script_args, training_args, model_args): # Set seed for reproducibility set_seed(training_args.seed) ############### # Setup logging ############### logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() logger.info(f"Model parameters {model_args}") logger.info(f"Script parameters {script_args}") logger.info(f"Training parameters {training_args}") # Load the dataset dataset = load_dataset(script_args.dataset_name, name=script_args.dataset_config, split=script_args.dataset_split) if script_args.dataset_start_index is not None and script_args.dataset_end_index is not None: dataset = dataset.select(range(script_args.dataset_start_index, script_args.dataset_end_index)) # Get reward functions from the registry reward_funcs = get_reward_funcs(script_args) # Format into conversation def make_conversation(example, prompt_column: str = script_args.dataset_prompt_column): example["prompt_backup"] = example[prompt_column] prompt = [] if training_args.system_prompt is not None: prompt.append({"role": "system", "content": training_args.system_prompt}) if prompt_column not in example: raise ValueError(f"Dataset Question Field Error: {prompt_column} is not supported.") prompt.append({"role": "user", "content": example[prompt_column]}) return {"prompt": prompt} dataset = dataset.map(make_conversation) tokenizer = get_tokenizer(model_args, training_args) if "messages" in dataset.column_names: dataset = dataset.remove_columns("messages") dataset = dataset.map(apply_chat_template, fn_kwargs={"tokenizer": tokenizer}) llm = LLM( model=model_args.model_name_or_path, revision=model_args.model_revision,
trust_remote_code=model_args.trust_remote_code, ) sampling_params=SamplingParams( temperature=training_args.temperature, top_p=training_args.top_p, top_k=training_args.top_k, n=training_args.num_generations, max_tokens=training_args.max_completion_length, ) def batch_score(examples): prompts = examples["prompt"] outputs = llm.generate( prompts, sampling_params=sampling_params, use_tqdm=False, ) repeated_prompts = [] reward_completions = [] grouped_completions = [] for output in outputs: prompt = output.prompt group = [] for completion in output.outputs: text = completion.text group.append(text) message = [{"role": "assistant", "content": text}] repeated_prompts.append(prompt) reward_completions.append(message) grouped_completions.append(group) def repeat_each_element_k_times(list_to_repeat: list, k: int) -> list: return [element for item in list_to_repeat for element in [item] * k] rewards_per_func = torch.zeros(len(repeated_prompts), len(reward_funcs)) for i, reward_func in enumerate(reward_funcs): keys = [key for key in examples.data.keys() if key not in ["prompt", "completion"]] reward_kwargs = {key: repeat_each_element_k_times(examples[key], training_args.num_generations) for key in keys} output_reward_func = reward_func(prompts=repeated_prompts, completions=reward_completions, **reward_kwargs) # Convert None values to NaN output_reward_func = [reward if reward is not None else torch.nan for reward in output_reward_func] rewards_per_func[:, i] = torch.tensor(output_reward_func, dtype=torch.float32) reshaped_rewards = rewards_per_func.view(-1, training_args.num_generations) examples["pass_rate_generations"] = grouped_completions examples["pass_rate_rewards"] = reshaped_rewards.tolist() return examples dataset = dataset.map(batch_score, batched=True, batch_size=64) # we need to restore the prompt for the final dataset def restore_prompt(example): example["prompt"] = example["prompt_backup"] return example dataset = dataset.map(restore_prompt) dataset = dataset.remove_columns("prompt_backup") if script_args.output_dataset_name is not None: output_dataset_name = script_args.output_dataset_name else: model_name = model_args.model_name_or_path if "/" in model_name: model_name = model_name.split("/")[-1] model_revision = model_args.model_revision output_dataset_name = f"{script_args.dataset_name}-{model_name}-{model_revision}-gen" config_name="default" filtered_config_name = f"filt-{script_args.pass_rate_min}-{script_args.pass_rate_max}" if script_args.dataset_start_index is not None and script_args.dataset_end_index is not None: config_name = f"gen-{script_args.dataset_start_index}-{script_args.dataset_end_index}" filtered_config_name = f"{filtered_config_name}-{script_args.dataset_start_index}-{script_args.dataset_end_index}" dataset.push_to_hub(output_dataset_name, config_name=config_name, revision="gen") def filter_func(example): rewards = example["pass_rate_rewards"] # get the mean of the rewards that are not None mean_reward = torch.nanmean(torch.tensor(rewards, dtype=torch.float32)) return script_args.pass_rate_min < mean_reward < script_args.pass_rate_max logger.info(f"Filtering dataset with low reward threshold {script_args.pass_rate_min} and high reward threshold {script_args.pass_rate_max}") logger.info(f"Dataset size before filtering: {dataset}") dataset = dataset.filter(filter_func) logger.info(f"Dataset size after filtering: {dataset}") dataset.push_to_hub(output_dataset_name, config_name=filtered_config_name, revision="pass_rate") if __name__ == "__main__": parser = 
TrlParser((PassRateScriptArguments, GRPOConfig, ModelConfig)) script_args, training_args, model_args = parser.parse_args_and_config() main(script_args, training_args, model_args)
open-r1/scripts/pass_rate_filtering/compute_pass_rate.py/0
{ "file_path": "open-r1/scripts/pass_rate_filtering/compute_pass_rate.py", "repo_id": "open-r1", "token_count": 3331 }
221
#!/bin/bash #SBATCH --job-name=r1-server #SBATCH --partition=hopper-prod #SBATCH --qos=normal #SBATCH --nodes=2 #SBATCH --gpus-per-node=8 #SBATCH --exclusive #SBATCH --output=./logs/%x_%j_%n.out #SBATCH --error=./logs/%x_%j_%n.err #SBATCH --time=7-00:00:00 #SBATCH --ntasks-per-node=1 set -exuo pipefail MODEL_PATH="deepseek-ai/DeepSeek-R1" CONDA_ENV="sglang124" ROUTER_ADDRESS="" SERVER_PORT=39877 DIST_PORT=45000 # TODO: Adjust these variables to your cluster configuration export OUTLINES_CACHE_DIR=/scratch/serve_r1/ocache/ export TRITON_HOME=/scratch/serve_r1/triton/ export GLOO_SOCKET_IFNAME="enp71s0" export NCCL_SOCKET_IFNAME="enp71s0" while getopts "m:e:r:h" opt; do case $opt in m) MODEL_PATH="$OPTARG" ;; e) CONDA_ENV="$OPTARG" ;; r) ROUTER_ADDRESS="$OPTARG" ;; h|?) echo "Usage: sbatch $0 [-m MODEL_PATH] [-e CONDA_ENV] [-r ROUTER_ADDRESS]"; exit 1 ;; esac done # TODO: Environment setup, adjust to your cluster configuration module load cuda/12.4 source ~/.bashrc source "$CONDA_PREFIX/etc/profile.d/conda.sh" conda activate "$CONDA_ENV" || { echo "Failed to activate conda env $CONDA_ENV"; exit 1; } FIRST_NODE=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n1) FIRST_NODE_IP=$(srun --nodes=1 --ntasks=1 -w "$FIRST_NODE" hostname --ip-address) # Launch servers synchronously across all nodes # (--max-running-requests=56 is rough estimate to avoid too many evicted/preempted 16k-long requests) srun --nodes=2 --ntasks=2 --ntasks-per-node=1 \ bash -c "python -m sglang.launch_server \ --model-path '$MODEL_PATH' \ --tp 16 \ --dist-init-addr '$FIRST_NODE_IP:$DIST_PORT' \ --nnodes 2 \ --node-rank \$SLURM_PROCID \ --port '$SERVER_PORT' \ --host 0.0.0.0 \ --trust-remote-code \ --max-running-requests 56 \ --context-length 32768" & # Wait for server with timeout TIMEOUT=3600 # 1h, but model loading should take ~30min START_TIME=$(date +%s) echo "Waiting for SGLang server (http://$FIRST_NODE_IP:$SERVER_PORT)..." while true; do if curl -s -o /dev/null -w "%{http_code}" "http://$FIRST_NODE_IP:$SERVER_PORT/health" >/dev/null 2>&1; then echo "Server is ready at http://$FIRST_NODE_IP:$SERVER_PORT" break fi CURRENT_TIME=$(date +%s) if [ $((CURRENT_TIME - START_TIME)) -gt $TIMEOUT ]; then echo "Error: Server failed to start within $TIMEOUT seconds" exit 1 fi echo "Still waiting... ($(($CURRENT_TIME - $START_TIME)) seconds elapsed)" sleep 60 done # Register with router only if address was provided if [ -n "$ROUTER_ADDRESS" ]; then echo "Registering with router at $ROUTER_ADDRESS..." curl -X POST "http://$ROUTER_ADDRESS/add_worker?url=http://$FIRST_NODE_IP:$SERVER_PORT" || true sleep 10 fi echo "Checking available models..." curl "http://$FIRST_NODE_IP:$SERVER_PORT/v1/models" sleep 10 echo "Executing sanity check..." curl "http://$FIRST_NODE_IP:$SERVER_PORT/v1/completions" \ -H "Content-Type: application/json" \ -d "{ \"model\": \"default\", \"prompt\": \"<|begin▁of▁sentence|><|User|>hi, how are you?<|Assistant|>\", \"max_tokens\": 2048, \"temperature\": 0.6 }" # Keep the job running with health checks while true; do if ! curl -s -o /dev/null "http://$FIRST_NODE_IP:$SERVER_PORT/health"; then echo "Error: Server health check failed" exit 1 fi sleep 300 done
open-r1/slurm/serve_r1.slurm/0
{ "file_path": "open-r1/slurm/serve_r1.slurm", "repo_id": "open-r1", "token_count": 1544 }
222
from collections import defaultdict from functools import lru_cache from datasets import load_dataset def add_includes(code: str, problem_id: str) -> str: """ Fix common compilation errors for IOI problems. """ if not code: return code # has most of the useful functions code_header = "#include <bits/stdc++.h>\n" # include the problem header problem_header_include = f'#include "{problem_id}.h"' if problem_header_include not in code: code_header += problem_header_include + "\n" # use namespace std since models forget std:: often if "using namespace std;" not in code and "std::" not in code: code_header += "\nusing namespace std;\n\n" return code_header + code @lru_cache def load_ioi_tests_for_year(year: int) -> dict[str, dict[str, tuple[str, str]]]: """ Load IOI tests for a given year. """ tests_dataset = load_dataset("open-r1/ioi-test-cases", name=f"{year}", split="train") test_cases = defaultdict(dict) for test_case in tests_dataset: test_cases[test_case["problem_id"]][test_case["test_name"]] = test_case["test_input"], test_case["test_output"] return test_cases def load_ioi_tests(year: int, problem_id: str) -> dict[str, tuple[str, str]]: """ Load IOI tests for a given year and problem id. """ return load_ioi_tests_for_year(year)[problem_id]
open-r1/src/open_r1/utils/competitive_programming/ioi_utils.py/0
{ "file_path": "open-r1/src/open_r1/utils/competitive_programming/ioi_utils.py", "repo_id": "open-r1", "token_count": 520 }
223
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Orthogonal Finetuning (OFT and BOFT)

This conceptual guide gives a brief overview of [OFT](https://huggingface.co/papers/2306.07280), [OFTv2](https://www.arxiv.org/abs/2506.19847) and [BOFT](https://huggingface.co/papers/2311.06243), a parameter-efficient fine-tuning technique that uses an orthogonal matrix to multiplicatively transform the pretrained weight matrices. To achieve efficient fine-tuning, OFT represents the weight updates with an orthogonal transformation. The orthogonal transformation is parameterized by an orthogonal matrix multiplied with the pretrained weight matrix. These new matrices can be trained to adapt to the new data while keeping the overall number of changes low. The original weight matrix remains frozen and doesn't receive any further adjustments. To produce the final results, both the original and the adapted weights are multiplied together.

Orthogonal Butterfly (BOFT) generalizes OFT with Butterfly factorization and further improves its parameter efficiency and finetuning flexibility. In short, OFT can be viewed as a special case of BOFT. Different from LoRA, which uses additive low-rank weight updates, BOFT uses multiplicative orthogonal weight updates. The comparison is shown below.

<div class="flex justify-center">
    <img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/BOFT_comparison.png"/>
</div>

BOFT has some advantages compared to LoRA:

* BOFT proposes a simple yet generic way to finetune pretrained models to downstream tasks, yielding better preservation of pretraining knowledge and better parameter efficiency.
* Through the orthogonality, BOFT introduces a structural constraint, i.e., keeping the [hyperspherical energy](https://huggingface.co/papers/1805.09298) unchanged during finetuning. This can effectively reduce the forgetting of pretraining knowledge.
* BOFT uses the butterfly factorization to efficiently parameterize the orthogonal matrix, which yields a compact yet expressive learning space (i.e., hypothesis class).
* The sparse matrix decomposition in BOFT brings in additional inductive biases that are beneficial to generalization.

In principle, BOFT can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Given the target layers for injecting BOFT parameters, the number of trainable parameters can be determined based on the size of the weight matrices.

## Merge OFT/BOFT weights into the base model

Similar to LoRA, the weights learned by OFT/BOFT can be integrated into the pretrained weight matrices using the `merge_and_unload()` function. This function merges the adapter weights with the base model, which allows you to effectively use the newly merged model as a standalone model.
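For instance, a minimal sketch of this workflow could look as follows (the base-model and adapter identifiers below are placeholders, not real checkpoints):

```py
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and attach a trained OFT/BOFT adapter
# ("base-model-id" and "oft-adapter-id" are placeholder names).
base_model = AutoModelForCausalLM.from_pretrained("base-model-id")
peft_model = PeftModel.from_pretrained(base_model, "oft-adapter-id")

# Fold the orthogonal factors into the pretrained weights and get back
# a plain transformers model that no longer needs PEFT at inference time.
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("merged-model")
```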
<div class="flex justify-center">
    <img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/boft_merge.png"/>
</div>

This works because during training, the orthogonal weight matrix (R in the diagram above) and the pretrained weight matrices are separate. But once training is complete, these weights can actually be merged (multiplied) into a new weight matrix that is equivalent.

## Utils for OFT / BOFT

### Common OFT / BOFT parameters in PEFT

As with other methods supported by PEFT, to fine-tune a model using OFT or BOFT, you need to:

1. Instantiate a base model.
2. Create a configuration (`OFTConfig` or `BOFTConfig`) where you define OFT/BOFT-specific parameters.
3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
4. Train the `PeftModel` as you normally would train the base model.

### OFT-specific parameters

`OFTConfig` allows you to control how OFT is applied to the base model through the following parameters:

- `r`: OFT rank, the number of OFT blocks per injected layer. A **bigger** `r` results in sparser update matrices with **fewer** trainable parameters. **Note**: You can only specify either `r` or `oft_block_size`, but not both simultaneously, because `r` × `oft_block_size` = layer dimension. For simplicity, we let the user specify either `r` or `oft_block_size` and infer the other one. The default is `r = 0`; the user is advised to set `oft_block_size` instead for better clarity.
- `oft_block_size`: OFT block size across different layers. A **bigger** `oft_block_size` results in denser update matrices with **more** trainable parameters. **Note**: Please choose `oft_block_size` to be divisible by the layer's input dimension (`in_features`), e.g., 4, 8, 16. You can only specify either `r` or `oft_block_size`, but not both simultaneously, because `r` × `oft_block_size` = layer dimension. For simplicity, we let the user specify either `r` or `oft_block_size` and infer the other one. The default is `oft_block_size = 32`.
- `use_cayley_neumann`: Specifies whether to use the Cayley-Neumann parameterization (efficient but approximate) or the vanilla Cayley parameterization (exact but computationally expensive because of the matrix inverse). We recommend setting it to `True` for better efficiency, but performance may be slightly worse because of the approximation error. Please test both settings (`True` and `False`) depending on your needs. Default is `False`.
- `module_dropout`: The multiplicative dropout probability, implemented by setting OFT blocks to identity during training, similar to the dropout layer in LoRA.
- `bias`: Specify if the `bias` parameters should be trained. Can be `"none"`, `"all"` or `"oft_only"`.
- `target_modules`: The modules (for example, attention blocks) to inject the OFT matrices into.
- `modules_to_save`: List of modules apart from OFT matrices to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.

### BOFT-specific parameters

`BOFTConfig` allows you to control how BOFT is applied to the base model through the following parameters:

- `boft_block_size`: The BOFT matrix block size across different layers, expressed as an `int`. A **bigger** `boft_block_size` results in denser update matrices with **more** trainable parameters. **Note**: Please choose `boft_block_size` to be divisible by most layers' input dimension (`in_features`), e.g., 4, 8, 16.
  Also, please only specify either `boft_block_size` or `boft_block_num`, but not both simultaneously, and do not leave both at 0, because `boft_block_size` x `boft_block_num` must equal the layer's input dimension.
- `boft_block_num`: The number of BOFT matrix blocks across different layers, expressed as an `int`. A **bigger** `boft_block_num` results in sparser update matrices with **fewer** trainable parameters. **Note**: Please choose `boft_block_num` to be divisible by most layers' input dimension (`in_features`), e.g., 4, 8, 16. Also, please only specify either `boft_block_size` or `boft_block_num`, but not both simultaneously, and do not leave both at 0, because `boft_block_size` x `boft_block_num` must equal the layer's input dimension.
- `boft_n_butterfly_factor`: The number of butterfly factors. **Note**: For `boft_n_butterfly_factor=1`, BOFT is the same as vanilla OFT; for `boft_n_butterfly_factor=2`, the effective block size of OFT becomes twice as big and the number of blocks becomes half.
- `bias`: Specify if the `bias` parameters should be trained. Can be `"none"`, `"all"` or `"boft_only"`.
- `boft_dropout`: Specify the probability of multiplicative dropout.
- `target_modules`: The modules (for example, attention blocks) to inject the OFT/BOFT matrices into.
- `modules_to_save`: List of modules apart from OFT/BOFT matrices to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.

## OFT Example Usage

For using OFT for quantized finetuning with [TRL](https://github.com/huggingface/trl) for `SFT`, `PPO`, or `DPO` fine-tuning, follow this outline:

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTTrainer
from peft import OFTConfig

# `use_quantization`, `ds`, `training_arguments` and `collator` are assumed
# to be defined elsewhere in your training script.
if use_quantization:
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_storage=torch.bfloat16,
    )

model = AutoModelForCausalLM.from_pretrained(
    "model_name",
    quantization_config=bnb_config
)
tokenizer = AutoTokenizer.from_pretrained("model_name")

# Configure OFT
peft_config = OFTConfig(
    oft_block_size=32,
    use_cayley_neumann=True,
    target_modules="all-linear",
    bias="none",
    task_type="CAUSAL_LM"
)

trainer = SFTTrainer(
    model=model,
    train_dataset=ds['train'],
    peft_config=peft_config,
    processing_class=tokenizer,
    args=training_arguments,
    data_collator=collator,
)
trainer.train()
```

## BOFT Example Usage

For examples of applying BOFT to various downstream tasks, take a look at the following step-by-step guides on how to finetune a model with BOFT:

- [Dreambooth finetuning with BOFT](https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md)
- [Controllable generation finetuning with BOFT (ControlNet)](https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md)

For the task of image classification, one can initialize the BOFT config for a DinoV2 model as follows:

```py
import transformers
from peft import BOFTConfig, get_peft_model

config = BOFTConfig(
    boft_block_size=4,
    boft_n_butterfly_factor=2,
    target_modules=["query", "value", "key", "output.dense", "mlp.fc1", "mlp.fc2"],
    boft_dropout=0.1,
    bias="boft_only",
    modules_to_save=["classifier"],
)

model = transformers.Dinov2ForImageClassification.from_pretrained(
    "facebook/dinov2-large",
    num_labels=100,
)

boft_model = get_peft_model(model, config)
```
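As a quick sanity check before launching a run, you can print how many parameters the BOFT wrapping actually makes trainable (assuming the `boft_model` from the sketch above):

```py
boft_model.print_trainable_parameters()
# prints something like: trainable params: ... || all params: ... || trainable%: ...
```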
peft/docs/source/conceptual_guides/oft.md/0
{ "file_path": "peft/docs/source/conceptual_guides/oft.md", "repo_id": "peft", "token_count": 3135 }
224
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# AutoPeftModels

The `AutoPeftModel` classes load the appropriate PEFT model for the task type by automatically inferring it from the configuration file. They are designed to quickly and easily load a PEFT model in a single line of code without having to worry about which exact model class you need or manually loading a [`PeftConfig`].

## AutoPeftModel

[[autodoc]] auto.AutoPeftModel
    - from_pretrained

## AutoPeftModelForCausalLM

[[autodoc]] auto.AutoPeftModelForCausalLM

## AutoPeftModelForSeq2SeqLM

[[autodoc]] auto.AutoPeftModelForSeq2SeqLM

## AutoPeftModelForSequenceClassification

[[autodoc]] auto.AutoPeftModelForSequenceClassification

## AutoPeftModelForTokenClassification

[[autodoc]] auto.AutoPeftModelForTokenClassification

## AutoPeftModelForQuestionAnswering

[[autodoc]] auto.AutoPeftModelForQuestionAnswering

## AutoPeftModelForFeatureExtraction

[[autodoc]] auto.AutoPeftModelForFeatureExtraction
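## Usage example

As a quick illustration, a minimal sketch of loading a task-specific adapter with an auto class (the repository name below is a placeholder, not a real checkpoint):

```py
from peft import AutoPeftModelForCausalLM

# The auto class reads the adapter's config, loads the matching base model,
# and wraps it with the correct PEFT model class in a single call.
model = AutoPeftModelForCausalLM.from_pretrained("user/adapter-repo-id")
```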
peft/docs/source/package_reference/auto_class.md/0
{ "file_path": "peft/docs/source/package_reference/auto_class.md", "repo_id": "peft", "token_count": 470 }
225
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Model merge

PEFT provides several internal utilities for [merging LoRA adapters](../developer_guides/model_merging) with the TIES and DARE methods.

[[autodoc]] utils.merge_utils.prune

[[autodoc]] utils.merge_utils.calculate_majority_sign_mask

[[autodoc]] utils.merge_utils.disjoint_merge

[[autodoc]] utils.merge_utils.task_arithmetic

[[autodoc]] utils.merge_utils.ties

[[autodoc]] utils.merge_utils.dare_linear

[[autodoc]] utils.merge_utils.dare_ties
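In practice, these low-level helpers are usually reached through the `add_weighted_adapter()` method of a LoRA `PeftModel` rather than called directly. A minimal sketch (the adapter repository names are placeholders, not real checkpoints):

```py
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model-id")  # placeholder id
model = PeftModel.from_pretrained(base, "adapter-a-id", adapter_name="a")
model.load_adapter("adapter-b-id", adapter_name="b")

# Combine both LoRA adapters with the TIES method exposed by the utilities above.
model.add_weighted_adapter(
    adapters=["a", "b"],
    weights=[1.0, 1.0],
    adapter_name="merged",
    combination_type="ties",
    density=0.2,
)
model.set_adapter("merged")
```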
peft/docs/source/package_reference/merge_utils.md/0
{ "file_path": "peft/docs/source/package_reference/merge_utils.md", "repo_id": "peft", "token_count": 361 }
226