repo string | github_id int64 | github_node_id string | number int64 | html_url string | api_url string | title string | body string | state string | state_reason string | locked bool | comments_count int64 | labels list | assignees list | created_at string | updated_at string | closed_at string | author_association string | milestone_title string | snapshot_id string | extracted_at string | author_login string | author_id int64 | author_node_id string | author_type string | author_site_admin bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 2,503,376,413 | I_kwDOHa8MBc6VNn4d | 9,358 | https://github.com/huggingface/diffusers/issues/9358 | https://api.github.com/repos/huggingface/diffusers/issues/9358 | Redundant reinitialization of text encoders in train_dreambooth_lora_flux | ### Describe the bug
In the `train_dreambooth_lora_flux.py` script, during each call to `log_validation`, the text encoders `text_encoder_one` and `text_encoder_two` are reinitialized. https://github.com/huggingface/diffusers/blob/8ba90aa706a733f45d83508a5b221da3c59fe4cd/examples/dreambooth/train_dreambooth_lora_flux.... | open | null | false | 2 | [
"bug",
"stale"
] | [] | 2024-09-03T17:11:09Z | 2024-10-17T15:02:49Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CyberDragon93 | 102,126,402 | U_kgDOBhZTQg | User | false |
huggingface/diffusers | 2,504,381,335 | I_kwDOHa8MBc6VRdOX | 9,359 | https://github.com/huggingface/diffusers/issues/9359 | https://api.github.com/repos/huggingface/diffusers/issues/9359 | DPMSolverSinglestepScheduler Step ValueError in SDXL Pipeline | ### Describe the bug
The DPMSolverSinglestepScheduler throws a ValueError ("step must be greater than zero") in the terminal. No other scheduler had this issue. When I set clipped_idx to a small value like 0.01, the error goes away.
### Reproduction
```
import torch, random, os
from diffusers import (StableDif... | closed | completed | false | 12 | [
"bug",
"stale",
"scheduler"
] | [] | 2024-09-04T05:59:18Z | 2025-07-27T11:33:00Z | 2024-11-13T00:15:40Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zacheryvaughn | 114,719,371 | U_kgDOBtZ6iw | User | false |
huggingface/diffusers | 2,504,754,604 | I_kwDOHa8MBc6VS4Ws | 9,360 | https://github.com/huggingface/diffusers/issues/9360 | https://api.github.com/repos/huggingface/diffusers/issues/9360 | StableDiffusion3ControlNetPipeline Tensor Shape Mismatch | ### Describe the bug
I want to create a class that inherits from StableDiffusion3ControlNetPipeline. When I rewrite the __call__ function, I meet a problem that encoder_hidden_states and context_attn_output have different shapes. The pre-trained model of StableDiffusion3ControlNetPipeline is "stabilityai/stable-diffus... | closed | completed | true | 2 | [
"bug"
] | [] | 2024-09-04T09:13:43Z | 2024-09-04T12:39:35Z | 2024-09-04T12:39:35Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Yuhan291 | 70,860,147 | MDQ6VXNlcjcwODYwMTQ3 | User | false |
huggingface/diffusers | 2,504,837,343 | I_kwDOHa8MBc6VTMjf | 9,361 | https://github.com/huggingface/diffusers/issues/9361 | https://api.github.com/repos/huggingface/diffusers/issues/9361 | LoRa effect is none when inferencing with FluxPipeline.from_pretrained() | Hello, I trained a LoRa with the help of the [ostris/ai-toolkit](https://github.com/ostris/ai-toolkit) repo, I believe it is based mostly on the kohya_ss repo. The LoRa saved in safetensors format when run with the sample inference code below gave me warnings on most of the LoRa keys and even though it ran fine, the ou... | closed | completed | false | 4 | [] | [] | 2024-09-04T09:48:17Z | 2024-09-06T16:51:12Z | 2024-09-06T16:51:12Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DimitriosKakouris | 105,593,676 | U_kgDOBks7TA | User | false |
huggingface/diffusers | 2,505,013,983 | I_kwDOHa8MBc6VT3rf | 9,362 | https://github.com/huggingface/diffusers/issues/9362 | https://api.github.com/repos/huggingface/diffusers/issues/9362 | IndexError: index 29 is out of bounds for dimension 0 with size 29 | ### Describe the bug
I have three problems, all with the same root cause.
1) TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'
# upon completion increase step index by one
self._step_index += 1 <---Error [here](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedu... | open | null | false | 8 | [
"bug",
"stale"
] | [] | 2024-09-04T11:02:49Z | 2024-11-25T15:04:22Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Anvarka | 22,154,737 | MDQ6VXNlcjIyMTU0NzM3 | User | false |
huggingface/diffusers | 2,506,500,461 | I_kwDOHa8MBc6VZilt | 9,366 | https://github.com/huggingface/diffusers/issues/9366 | https://api.github.com/repos/huggingface/diffusers/issues/9366 | DPMSolverMultistepScheduler with AutoPipelineForImage2Image fails at specific combinations of step counts and strength | ### Describe the bug
When using DPMSolverMultistepScheduler and certain combinations of step counts and prompt strength I get a crash. The issue has been reproduced with multiple models.
Our use case is image refining. So the prompt strength is low. If we could figure out the pattern and filter the input that would... | closed | completed | false | 3 | [
"bug"
] | [
"yiyixuxu"
] | 2024-09-04T23:57:17Z | 2024-09-09T16:38:23Z | 2024-09-09T16:38:23Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | frankjoshua | 230,666 | MDQ6VXNlcjIzMDY2Ng== | User | false |
huggingface/diffusers | 1,418,482,642 | I_kwDOHa8MBc5UjFPS | 937 | https://github.com/huggingface/diffusers/issues/937 | https://api.github.com/repos/huggingface/diffusers/issues/937 | [Community] Testing Stable Diffusion is hard 🥵 | It's really difficult to test stable diffusion due to the following:
- 1. **Continuous output**: Diffusion models take float values as input and output float values. This is different from NLP models, which tend to take int64 as inputs and int64 as outputs.
- 2. **Output dimensions are huge**. If an image has an outpu... | closed | completed | false | 15 | [
"stale"
] | [
"patrickvonplaten"
] | 2022-10-21T15:06:57Z | 2023-01-22T15:03:31Z | 2023-01-22T15:03:31Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | patrickvonplaten | 23,423,619 | MDQ6VXNlcjIzNDIzNjE5 | User | false |
huggingface/diffusers | 2,507,570,467 | I_kwDOHa8MBc6Vdn0j | 9,370 | https://github.com/huggingface/diffusers/issues/9370 | https://api.github.com/repos/huggingface/diffusers/issues/9370 | CombinedTimestepGuidanceTextProjEmbeddings.forward() missing 1 required positional argument: 'pooled_projection' | ### Describe the bug
Hello, I am using the latest version of transformer to train Flux's Lora slider. I used 'pooled_projection' in the middle, but the following error is reported at runtime:
```
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
r... | closed | completed | false | 4 | [
"bug"
] | [] | 2024-09-05T11:54:07Z | 2024-09-20T01:49:38Z | 2024-09-20T01:49:38Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | KILOY00 | 179,323,228 | U_kgDOCrBBXA | User | false |
huggingface/diffusers | 2,507,853,443 | I_kwDOHa8MBc6Ves6D | 9,371 | https://github.com/huggingface/diffusers/issues/9371 | https://api.github.com/repos/huggingface/diffusers/issues/9371 | FlaxStableDiffusionImg2ImgPipeline should delete prepare_inputs and provide prepare_text_inputs and prepare_image_inputs | **Is your feature request related to a problem? Please describe.**
in https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_img2img.py#L170
prepare_inputs... | open | null | false | 4 | [
"stale"
] | [] | 2024-09-05T13:43:59Z | 2024-12-20T15:04:58Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ghost | 10,137 | MDQ6VXNlcjEwMTM3 | User | false |
huggingface/diffusers | 2,508,374,466 | I_kwDOHa8MBc6VgsHC | 9,374 | https://github.com/huggingface/diffusers/issues/9374 | https://api.github.com/repos/huggingface/diffusers/issues/9374 | Support for FluxControlNetPipeline, FluxImg2ImgPipeline, FluxInpaintPipeline in AutoPipelineFor... | **Cannot import FluxControlNetPipeline with AutoPipelineForText2Image**
```ValueError: AutoPipeline can't find a pipeline linked to FluxControlNetPipeline for None```
**Describe the solution you'd like.**
The pipeline ... | closed | completed | false | 1 | [] | [] | 2024-09-05T17:36:47Z | 2024-09-06T06:18:11Z | 2024-09-06T06:18:11Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | mantrakp04 | 87,142,457 | MDQ6VXNlcjg3MTQyNDU3 | User | false |
huggingface/diffusers | 2,510,094,392 | I_kwDOHa8MBc6VnQA4 | 9,378 | https://github.com/huggingface/diffusers/issues/9378 | https://api.github.com/repos/huggingface/diffusers/issues/9378 | [Flux ControlNet] Support Xlabs ControlNet in diffusers | It'd be great to have XLabs ControlNets supported in `diffusers`. We already support their LoRAs.
Code: https://github.com/XLabs-AI/x-flux/
Checkpoint: https://huggingface.co/XLabs-AI/flux-controlnet-depth-v3
Related issue: https://github.com/huggingface/diffusers/issues/9301
Pinging @chenbinghui1 in case yo... | closed | completed | false | 8 | [
"Good second issue",
"contributions-welcome"
] | [] | 2024-09-06T10:18:31Z | 2024-10-15T22:15:24Z | 2024-10-15T22:15:10Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 1,418,609,492 | I_kwDOHa8MBc5UjkNU | 938 | https://github.com/huggingface/diffusers/issues/938 | https://api.github.com/repos/huggingface/diffusers/issues/938 | Human Motion Diffusion Model (Text-to-Motion) | ### Model/Pipeline/Scheduler description
This work (https://arxiv.org/abs/2209.14916) presents a method based on diffusion model to synthesize human motions from a text. Their method achieves state-of-the-art results on leading benchmarks for text-to-motion and action-to-motion. I think that It would be great to have ... | closed | completed | false | 5 | [
"stale"
] | [] | 2022-10-21T16:49:31Z | 2022-12-03T15:03:16Z | 2022-12-03T15:03:16Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | clementapa | 45,719,060 | MDQ6VXNlcjQ1NzE5MDYw | User | false |
huggingface/diffusers | 2,511,467,604 | I_kwDOHa8MBc6VsfRU | 9,383 | https://github.com/huggingface/diffusers/issues/9383 | https://api.github.com/repos/huggingface/diffusers/issues/9383 | flux bitsandbytes support lora model? | flux bitsandbytes support lora model? | closed | completed | false | 2 | [] | [] | 2024-09-07T04:50:37Z | 2024-09-08T01:35:30Z | 2024-09-08T01:35:30Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lonngxiang | 40,717,349 | MDQ6VXNlcjQwNzE3MzQ5 | User | false |
huggingface/diffusers | 2,511,602,142 | I_kwDOHa8MBc6VtAHe | 9,387 | https://github.com/huggingface/diffusers/issues/9387 | https://api.github.com/repos/huggingface/diffusers/issues/9387 | [Pipeline] MimicMotion | ### Model/Pipeline/Scheduler description
MimicMotion is a recent paper on Personalised Image2Video generation.
The inputs are a reference image and a driving video; motion from the driving video is used to animate the reference image.
The results are really great!:
<img width="896" alt="image" src="https://github.co... | closed | completed | false | 2 | [] | [] | 2024-09-07T11:19:44Z | 2024-09-10T19:02:44Z | 2024-09-10T19:02:44Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | reallyigor | 31,889,677 | MDQ6VXNlcjMxODg5Njc3 | User | false |
huggingface/diffusers | 1,418,650,991 | I_kwDOHa8MBc5UjuVv | 939 | https://github.com/huggingface/diffusers/issues/939 | https://api.github.com/repos/huggingface/diffusers/issues/939 | use multiple community pipes in a list | **Is your feature request related to a problem? Please describe.**
I found some very great community pipes, such as CLIP Guided Stable Diffusion and Long Prompt Weighting Stable Diffusion.
But I don't know how to use them both.
**Describe the solution you'd like**
some code like:
```python
pipe = DiffusionP... | closed | completed | false | 5 | [
"stale"
] | [] | 2022-10-21T17:29:39Z | 2022-12-04T15:02:52Z | 2022-12-04T15:02:52Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CrazyBoyM | 35,400,185 | MDQ6VXNlcjM1NDAwMTg1 | User | false |
huggingface/diffusers | 2,513,410,159 | I_kwDOHa8MBc6Vz5hv | 9,392 | https://github.com/huggingface/diffusers/issues/9392 | https://api.github.com/repos/huggingface/diffusers/issues/9392 | [Scheduler] Add SNR shift following SD3, would the rest of the code need to be modified? | **What API design would you like to have changed or added to the library? Why?**
With the increasing resolution of image or video generation, we need to introduce more noise at smaller T, such as SNR shift following SD3. I have observed that CogVideoX's schedule has already implemented [this](https://github.com/hugg... | open | null | false | 7 | [
"stale"
] | [] | 2024-09-09T09:19:37Z | 2025-01-05T15:05:04Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | LinB203 | 62,638,829 | MDQ6VXNlcjYyNjM4ODI5 | User | false |
huggingface/diffusers | 2,513,488,688 | I_kwDOHa8MBc6V0Msw | 9,393 | https://github.com/huggingface/diffusers/issues/9393 | https://api.github.com/repos/huggingface/diffusers/issues/9393 | deepspeed train flux1 dreambooth lora can not save model | ### Describe the bug
When I run the script train_dreambooth_lora_flux.py, it raises ValueError: unexpected save model: <class 'deepspeed.runtime.engine.DeepSpeedEngine'>. Is there a bug in save_model_hook?
### Reproduction
accelerate launch train_dreambooth_lora_flux_custom.py \
--pret... | open | reopened | false | 21 | [
"bug"
] | [] | 2024-09-09T09:54:23Z | 2025-08-13T10:23:42Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ldtgodlike | 38,915,590 | MDQ6VXNlcjM4OTE1NTkw | User | false |
huggingface/diffusers | 2,514,522,652 | I_kwDOHa8MBc6V4JIc | 9,395 | https://github.com/huggingface/diffusers/issues/9395 | https://api.github.com/repos/huggingface/diffusers/issues/9395 | [Q] Possibly unused `self.final_alpha_cumprod` | Hello team, quick question to make sure I understand the behavior of the `step` function in LCM Scheduler.
https://github.com/huggingface/diffusers/blob/a7361dccdc581147620bbd74a6d295cd92daf616/src/diffusers/schedulers/scheduling_lcm.py#L534-L543
Here, it seems that the condition `prev_timestep >= 0` is always `T... | open | null | false | 7 | [
"stale"
] | [] | 2024-09-09T17:35:08Z | 2024-11-09T15:03:23Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | fdtomasi | 12,514,317 | MDQ6VXNlcjEyNTE0MzE3 | User | false |
huggingface/diffusers | 2,514,615,525 | I_kwDOHa8MBc6V4fzl | 9,396 | https://github.com/huggingface/diffusers/issues/9396 | https://api.github.com/repos/huggingface/diffusers/issues/9396 | [i18n-KO] Translating docs to Korean | Hi!👋
I'd like to translate the following files into Korean.
1. [docs/source/en/tutorials/autopipeline.md](https://github.com/huggingface/diffusers/blob/main/docs/source/en/tutorials/autopipeline.md)
2. [docs/source/en/tutorials/using_peft_for_inference.md](https://github.com/huggingface/diffusers/blob/main/docs/s... | closed | completed | false | 3 | [
"stale"
] | [] | 2024-09-09T18:15:30Z | 2025-10-06T21:24:07Z | 2025-10-06T21:24:07Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | cowboysj | 108,571,492 | U_kgDOBnirZA | User | false |
huggingface/diffusers | 2,514,658,222 | I_kwDOHa8MBc6V4qOu | 9,397 | https://github.com/huggingface/diffusers/issues/9397 | https://api.github.com/repos/huggingface/diffusers/issues/9397 | SGMUniform scheduler for Hyper loras needs support! | **Is your feature request related to a problem? Please describe.**
Your library is used in a great product like Invoke! And the SGMUniform scheduler is needed to support Hyper loras! Please add support for this scheduler in the diffusers library!
**Describe the solution you'd like.**
Add scheduler - SGMUniform
**Descr... | closed | completed | false | 7 | [
"stale"
] | [] | 2024-09-09T18:39:53Z | 2024-10-14T12:48:22Z | 2024-10-14T12:48:22Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rollingcookies | 40,675,630 | MDQ6VXNlcjQwNjc1NjMw | User | false |
huggingface/diffusers | 2,514,717,771 | I_kwDOHa8MBc6V44xL | 9,398 | https://github.com/huggingface/diffusers/issues/9398 | https://api.github.com/repos/huggingface/diffusers/issues/9398 | CombinedTimestepGuidanceTextProjEmbeddings.forward() missing 1 required positional argument: 'pooled_projection' | ### Describe the bug
There is a bug when using controlnet_union with flux schnell.
The reason is that controlnet union sets guidance_embeds to True while Schnell doesn't.
### Reproduction
The code in the pipeline should be written this way, I believe:
```python
# handle guidance... | closed | completed | false | 8 | [
"bug"
] | [] | 2024-09-09T19:12:35Z | 2024-09-20T01:49:37Z | 2024-09-20T01:49:37Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | christopher5106 | 6,875,375 | MDQ6VXNlcjY4NzUzNzU= | User | false |
huggingface/diffusers | 1,419,097,694 | I_kwDOHa8MBc5UlbZe | 940 | https://github.com/huggingface/diffusers/issues/940 | https://api.github.com/repos/huggingface/diffusers/issues/940 | MPS crash when using LMSDiscreteScheduler | ### Describe the bug
If you run the following code using `LMSDiscreteScheduler` a crash occurs under Apple Silicon/MPS:
```
import torch
from diffusers import AutoencoderKL, UNet2DConditionModel, LMSDiscreteScheduler
from PIL import Image
from torchvision import transforms as tfms
# Set device
torch_device ... | closed | completed | false | 5 | [
"bug"
] | [
"pcuenca"
] | 2022-10-22T05:23:13Z | 2022-10-27T09:10:37Z | 2022-10-27T08:22:15Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | FahimF | 181,110 | MDQ6VXNlcjE4MTExMA== | User | false |
huggingface/diffusers | 2,515,245,299 | I_kwDOHa8MBc6V65jz | 9,402 | https://github.com/huggingface/diffusers/issues/9402 | https://api.github.com/repos/huggingface/diffusers/issues/9402 | [Flux ControlNet] Add img2img and inpaint pipelines | We recently added img2img and inpainting pipelines for Flux thanks to @Gothos contribution.
We also have controlnet support for Flux thanks to @wangqixun.
It'd be nice to have controlnet versions of these pipelines since there have been requests for them.
Basically, we need to create two new pipelines that a... | closed | completed | false | 11 | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | [] | 2024-09-10T02:08:32Z | 2024-10-25T02:22:19Z | 2024-09-17T19:43:55Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | asomoza | 5,442,875 | MDQ6VXNlcjU0NDI4NzU= | User | false |
huggingface/diffusers | 2,515,460,277 | I_kwDOHa8MBc6V7uC1 | 9,403 | https://github.com/huggingface/diffusers/issues/9403 | https://api.github.com/repos/huggingface/diffusers/issues/9403 | [Flux IPadapter] Support Xlabs IPadapter in diffusers | It'd be great to have XLabs IPadapter supported in diffusers.
Code: https://github.com/XLabs-AI/x-flux/
Checkpoint: https://huggingface.co/XLabs-AI/flux-ip-adapter/tree/main
@sayakpaul | closed | completed | false | 11 | [
"contributions-welcome",
"IPAdapter"
] | [] | 2024-09-10T05:47:54Z | 2024-12-21T17:49:59Z | 2024-12-21T17:49:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chuck-ma | 74,402,255 | MDQ6VXNlcjc0NDAyMjU1 | User | false |
huggingface/diffusers | 2,515,791,511 | I_kwDOHa8MBc6V8-6X | 9,405 | https://github.com/huggingface/diffusers/issues/9405 | https://api.github.com/repos/huggingface/diffusers/issues/9405 | Potential misalignment for flux and sd3 in bf16 | Hi!
I noticed that in bf16 mode the timestep is rounded to other values (e.g., 750 -> 752) before being divided by 1000. Therefore, I think it would be better to divide by 1000 first, which avoids the precision error in bf16.
https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src... | closed | completed | false | 5 | [] | [] | 2024-09-10T08:39:16Z | 2024-09-12T13:41:04Z | 2024-09-12T13:41:03Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhuole1025 | 53,815,869 | MDQ6VXNlcjUzODE1ODY5 | User | false |
huggingface/diffusers | 2,516,930,685 | I_kwDOHa8MBc6WBVB9 | 9,407 | https://github.com/huggingface/diffusers/issues/9407 | https://api.github.com/repos/huggingface/diffusers/issues/9407 | callback / cannot yield intermediate images on the fly during inference | Hi,
apologies in advance if this has been asked already, or if I'm just misusing the diffusers API.
Using `diffusers==0.30.2`
**What API design would you like to have changed or added to the library? Why?**
I will illustrate straight away the general issue with my use case: I need to call a (FLUX) diffuser... | closed | completed | false | 8 | [] | [] | 2024-09-10T16:32:04Z | 2024-09-25T12:28:20Z | 2024-09-25T12:27:11Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Clement-Lelievre | 70,368,164 | MDQ6VXNlcjcwMzY4MTY0 | User | false |
huggingface/diffusers | 1,419,136,350 | I_kwDOHa8MBc5Ulk1e | 941 | https://github.com/huggingface/diffusers/issues/941 | https://api.github.com/repos/huggingface/diffusers/issues/941 | For MPS using num_images_per_prompt with StableDiffusionImg2ImgPipeline results in noise | ### Describe the bug
If you try to generate multiple images with StableDiffusionImg2ImgPipeline by using the num_images_per_prompt parameter, under MPS all you get is noise/brown images.
:
from diffusers.pipelines import StableDiffusion3ControlNetInpaintingPipeline
ImportError: cannot import name 'StableDiffusion3ControlNetInpaintingPipeline' from 'diffusers.pipelines' (miniconda3/envs/sd3/lib/python3.11/site-packages/diffusers/pipelines/__... | closed | completed | false | 8 | [
"bug",
"stale"
] | [] | 2024-09-11T01:29:30Z | 2025-02-18T16:10:42Z | 2024-10-21T15:56:30Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Sotatek-DuyHoang | 167,866,823 | U_kgDOCgFxxw | User | false |
huggingface/diffusers | 2,518,974,064 | I_kwDOHa8MBc6WJH5w | 9,413 | https://github.com/huggingface/diffusers/issues/9413 | https://api.github.com/repos/huggingface/diffusers/issues/9413 | CogvideoX-5b adds freeze-frames in start and end of clip | ### Describe the bug
CogvideoX-5b adds 7 freeze-frames to the start and 1–2 freeze-frames to the end of the clip.
Output file:
https://github.com/user-attachments/assets/7a471858-e6ed-423a-a8f4-74125f2a18ba
Looking at the output:
https://github.com/user-attachments/assets/568fba7c-2927-4b2a-8652-d69c27628d6... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-09-11T08:24:30Z | 2024-10-15T21:01:53Z | 2024-10-15T21:01:52Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tin2tin | 1,322,593 | MDQ6VXNlcjEzMjI1OTM= | User | false |
huggingface/diffusers | 2,519,690,525 | I_kwDOHa8MBc6WL20d | 9,415 | https://github.com/huggingface/diffusers/issues/9415 | https://api.github.com/repos/huggingface/diffusers/issues/9415 | add new parameter module of CogVideoXTransformer3DModel, raise error | ### Describe the bug
I added a new nn.Conv2d in the CogVideoXPatchEmbed of CogVideoXTransformer3DModel to get a new CogVideoXTransformer3DPoseModel, but the from_pretrained method raises an error. It works when loading the 2B model, but fails when switching to 5B.
### Reproduction
```python
args.pretrained_model_name_or_path = "/h... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-09-11T13:01:23Z | 2025-02-21T09:11:59Z | 2024-09-12T02:44:48Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | trouble-maker007 | 73,164,596 | MDQ6VXNlcjczMTY0NTk2 | User | false |
huggingface/diffusers | 2,519,839,159 | I_kwDOHa8MBc6WMbG3 | 9,416 | https://github.com/huggingface/diffusers/issues/9416 | https://api.github.com/repos/huggingface/diffusers/issues/9416 | [Schedulers] Add SGMUniform | Thanks to @rollingcookies, we can see in this [issue](https://github.com/huggingface/diffusers/issues/9397) that this schedulers works great with the Hyper and probably also Lighting loras/unets.
It'd be fantastic if someone could contribute this scheduler to diffusers.
Please let me know if someone is willing to ... | closed | completed | false | 12 | [
"help wanted",
"contributions-welcome",
"advanced"
] | [] | 2024-09-11T13:59:27Z | 2024-09-23T23:39:56Z | 2024-09-23T23:39:56Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | asomoza | 5,442,875 | MDQ6VXNlcjU0NDI4NzU= | User | false |
huggingface/diffusers | 2,519,985,796 | I_kwDOHa8MBc6WM-6E | 9,417 | https://github.com/huggingface/diffusers/issues/9417 | https://api.github.com/repos/huggingface/diffusers/issues/9417 | Suggestion for speeding up `index_for_timestep` by removing sequential `nonzero()` calls in samplers | **Is your feature request related to a problem? Please describe.**
First off, thanks for the great codebase and providing so many resources! I just wanted to provide some insight into an improvement I made for myself, in case you'd like to include it for all samplers. I'm using the `FlowMatchEulerDiscreteScheduler` an... | open | reopened | false | 11 | [
"help wanted",
"wip",
"contributions-welcome",
"performance"
] | [] | 2024-09-11T14:54:37Z | 2025-02-08T10:26:47Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ethanweber | 16,965,789 | MDQ6VXNlcjE2OTY1Nzg5 | User | false |
huggingface/diffusers | 2,521,247,434 | I_kwDOHa8MBc6WRy7K | 9,420 | https://github.com/huggingface/diffusers/issues/9420 | https://api.github.com/repos/huggingface/diffusers/issues/9420 | The transformer model saved after FLUX+Hyper-SD lora cannot be loaded | ### Describe the bug
The transformer model saved after FLUX+Hyper-SD lora cannot be loaded. I don't want to merge lora every time, it's time-consuming. I wanted to save the model once and load it, but I failed.
### Reproduction
> import torch
from diffusers import FluxPipeline
from huggingface_hub import hf_hub_... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-09-12T03:24:09Z | 2024-09-13T02:01:36Z | 2024-09-13T02:01:18Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | peki12345 | 92,427,482 | U_kgDOBYJU2g | User | false |
huggingface/diffusers | 2,521,383,078 | I_kwDOHa8MBc6WSUCm | 9,421 | https://github.com/huggingface/diffusers/issues/9421 | https://api.github.com/repos/huggingface/diffusers/issues/9421 | Unable to Access 'runwayml/stable-diffusion-inpainting' Model | ### Describe the bug
I am unable to access the "runwayml/stable-diffusion-inpainting" model as it shows a 404 error. Could you please share a copy of this pretrained model or provide an alternative link? Thank you!
### Reproduction
https://huggingface.co/runwayml/stable-diffusion-inpainting
### Logs
_No response_
... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-09-12T05:27:51Z | 2024-09-19T18:13:12Z | 2024-09-19T18:13:11Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | HaoyueBaiZJU | 28,735,573 | MDQ6VXNlcjI4NzM1NTcz | User | false |
huggingface/diffusers | 2,521,441,371 | I_kwDOHa8MBc6WSiRb | 9,422 | https://github.com/huggingface/diffusers/issues/9422 | https://api.github.com/repos/huggingface/diffusers/issues/9422 | A strange thing happened when I wrote my own code to train Controlnet_sdxl, as soon as I did the first backpropagation, noise_pred became nan. | ### Describe the bug
A strange thing happened when I wrote my own code to train controlnet: as soon as I did the first backpropagation, noise_pred became nan. I did a lot of debugging (gradient decay, mixed precision training, removing ema and other parts), but the result was always nan once backpropagation was applied
... | closed | completed | false | 7 | [
"bug"
] | [] | 2024-09-12T06:11:22Z | 2025-04-29T01:40:01Z | 2024-09-12T08:44:37Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Li-Zn-H | 108,657,375 | U_kgDOBnn63w | User | false |
huggingface/diffusers | 2,521,798,448 | I_kwDOHa8MBc6WT5cw | 9,424 | https://github.com/huggingface/diffusers/issues/9424 | https://api.github.com/repos/huggingface/diffusers/issues/9424 | d3_dreambooth_lora_16gb.ipynb broken for me with latest, ok with released version | ### Describe the bug
d3_dreambooth_lora_16gb.ipynb broken for me with latest, ok with released version installed via pip
ndarray crossing the maximum supported dimension
### Reproduction
just run the embedding code in notebook
### Logs
_No response_
### System Info
windows
### Who can help?
... | closed | completed | false | 7 | [
"bug"
] | [] | 2024-09-12T09:03:47Z | 2024-09-12T10:58:20Z | 2024-09-12T10:56:56Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | 3a1b2c3 | 74,843,139 | MDQ6VXNlcjc0ODQzMTM5 | User | false |
huggingface/diffusers | 2,522,047,368 | I_kwDOHa8MBc6WU2OI | 9,425 | https://github.com/huggingface/diffusers/issues/9425 | https://api.github.com/repos/huggingface/diffusers/issues/9425 | Remove redundant comparison inside the diffusion loop of stable video diffusion pipeline | **Is your feature request related to a problem? Please describe.**
I found that the `__call__` of stable video diffusion keeps doing async memcpy from host to device, as attached.
<img width="1464" alt="Screenshot 2024-09-12 at 6 45 24 PM" src="https://github.com/user-attachments/assets/de0839f5-ca59-419... | open | null | false | 8 | [
"stale"
] | [] | 2024-09-12T10:49:23Z | 2024-11-13T00:35:17Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dianyo | 11,516,370 | MDQ6VXNlcjExNTE2Mzcw | User | false |
huggingface/diffusers | 1,419,174,671 | I_kwDOHa8MBc5UluMP | 943 | https://github.com/huggingface/diffusers/issues/943 | https://api.github.com/repos/huggingface/diffusers/issues/943 | First parameter in downsamplers should be 'out_channels' instead of 'in_channels' | https://github.com/huggingface/diffusers/blob/9bca40296e3f00fb26597a0f4cfe2fdfd2ad2fd2/src/diffusers/models/unet_blocks.py#L619
All the first parameters of Downsample2D-like classes in `self.downsamplers` are `in_channels`.
However, the inputs of these `self.downsamplers` are the outputs of Resnets, whose number... | closed | completed | false | 2 | [] | [] | 2022-10-22T08:16:10Z | 2022-10-25T11:32:46Z | 2022-10-25T11:32:46Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chenguolin | 47,670,737 | MDQ6VXNlcjQ3NjcwNzM3 | User | false |
huggingface/diffusers | 2,523,026,924 | I_kwDOHa8MBc6WYlXs | 9,430 | https://github.com/huggingface/diffusers/issues/9430 | https://api.github.com/repos/huggingface/diffusers/issues/9430 | AttributeError: 'NoneType' object has no attribute 'get' | Name: diffusers
Version: 0.30.2
Name: transformers
Version: 4.44.2
Loading pipeline components...: 20%
1/5 [00:00<00:00, 9.69it/s]
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by d... | closed | completed | false | 3 | [
"stale"
] | [] | 2024-09-12T18:02:17Z | 2024-11-05T17:05:02Z | 2024-11-05T17:05:01Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | molo32 | 55,426,197 | MDQ6VXNlcjU1NDI2MTk3 | User | false |
huggingface/diffusers | 2,526,436,324 | I_kwDOHa8MBc6Wllvk | 9,438 | https://github.com/huggingface/diffusers/issues/9438 | https://api.github.com/repos/huggingface/diffusers/issues/9438 | model_name in quicktour page results in a 404 error | ### Describe the bug
The model_name in the Quicktour page (runwayml/stable-diffusion-v1-5) results in a 404 error, indicating that the path is currently unreachable. This is causing the following error when attempting to use the model:
```
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/mo... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-09-14T13:53:38Z | 2024-09-16T17:19:24Z | 2024-09-16T17:19:24Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vinayakkgarg | 28,915,948 | MDQ6VXNlcjI4OTE1OTQ4 | User | false |
huggingface/diffusers | 2,526,580,954 | I_kwDOHa8MBc6WmJDa | 9,439 | https://github.com/huggingface/diffusers/issues/9439 | https://api.github.com/repos/huggingface/diffusers/issues/9439 | Inconsistent results from Flux Model when loaded differently | ### Describe the bug
I've observed strange behavior when loading the Flux.1-dev model. There are two ways to load the model that produce different results if run with the same seed. One of the options is from the HF diffusers doc, the second one is inspired by the ai-toolkit repo
### Reproduction
First option, use `... | closed | completed | false | 11 | [
"bug",
"stale"
] | [
"yiyixuxu"
] | 2024-09-14T19:36:52Z | 2024-10-15T16:11:37Z | 2024-10-15T16:11:37Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | emil-malina | 179,747,217 | U_kgDOCra5kQ | User | false |
huggingface/diffusers | 2,527,820,785 | I_kwDOHa8MBc6Wq3vx | 9,443 | https://github.com/huggingface/diffusers/issues/9443 | https://api.github.com/repos/huggingface/diffusers/issues/9443 | SD3 Error: Cannot instantiate this tokenizer from a slow version. | ### Describe the bug
I am trying to generate SD3 images using diffusers, but encountered this error: Cannot instantiate this tokenizer from a slow version. If it's based on sentencepiece, make sure you have sentencepiece installed.
### Reproduction
sd3_base_model_path = "stabilityai/stable-diffusion-3-medium-diffuse... | closed | completed | false | 1 | [
"bug"
] | [] | 2024-09-16T08:27:40Z | 2024-09-16T09:18:22Z | 2024-09-16T09:18:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Scorpinaus | 85,672,737 | MDQ6VXNlcjg1NjcyNzM3 | User | false |
huggingface/diffusers | 2,529,701,781 | I_kwDOHa8MBc6WyC-V | 9,448 | https://github.com/huggingface/diffusers/issues/9448 | https://api.github.com/repos/huggingface/diffusers/issues/9448 | AttributeError: 'tuple' object has no attribute 'shape' while using IP-Adapter with StableDiffusionControlNetInpaintPipeline | ### Describe the bug
```
image = pipe(
File "/home/ubuntu/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/env/lib/python3.10/site-packages/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py", line 1... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-09-17T00:05:41Z | 2024-09-18T20:21:51Z | 2024-09-18T20:21:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | satvik-pyxer | 174,385,026 | U_kgDOCmTngg | User | false |
huggingface/diffusers | 1,419,293,566 | I_kwDOHa8MBc5UmLN- | 945 | https://github.com/huggingface/diffusers/issues/945 | https://api.github.com/repos/huggingface/diffusers/issues/945 | Can't add a custom checkpoint model to convert to ONNX | ### Describe the bug
I'm using one of the models used here: https://rentry.co/sdmodels
I get hit with an error **It looks like the config file at 'C:\amd_img2img\model.ckpt' is not a valid JSON file.**
### Full error message
```
(diffusers_venv) PS C:\amd_img2img\diffusers> python .\scripts\convert_stable_d... | closed | completed | false | 2 | [
"bug"
] | [] | 2022-10-22T13:25:06Z | 2022-10-26T12:28:58Z | 2022-10-24T10:55:36Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Yanislar | 116,126,523 | U_kgDOBuvzOw | User | false |
huggingface/diffusers | 2,529,892,993 | I_kwDOHa8MBc6WyxqB | 9,450 | https://github.com/huggingface/diffusers/issues/9450 | https://api.github.com/repos/huggingface/diffusers/issues/9450 | FluxPipeline - Multi-GPU Issue - When you define transformer= you get "Expected all tensors to be on the same device" | ### Describe the bug
When I load the text_encoder like this:
```
model_id = "black-forest-labs/FLUX.1-schnell"
text_encoder = T5EncoderModel.from_pretrained(
model_id,
subfolder="text_encoder_2",
quantization_config=quantization_config,
torch_dtype=torch.bfloat16
)
pipe = FluxPipeline.fr... | closed | completed | false | 13 | [
"bug",
"stale"
] | [] | 2024-09-17T03:19:25Z | 2024-10-26T23:13:15Z | 2024-10-26T23:13:15Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CrackerHax | 6,037,535 | MDQ6VXNlcjYwMzc1MzU= | User | false |
huggingface/diffusers | 2,530,190,113 | I_kwDOHa8MBc6Wz6Mh | 9,451 | https://github.com/huggingface/diffusers/issues/9451 | https://api.github.com/repos/huggingface/diffusers/issues/9451 | [BUG] SNR gamma in v_prediction | ### Describe the bug
I believe the SNR weighting of v_prediction should follow a similar trend to eps; otherwise, for T>600, the model learns almost nothing as the weight approaches zero.
If I am wrong, please correct me. Thank you!
 says:
`pipe = CogVideoXImageToVideoPipeline.from_pretrained("THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16)`
Trying to load ... | closed | completed | false | 2 | [
"bug"
] | [] | 2024-09-17T16:18:12Z | 2024-09-18T17:29:21Z | 2024-09-18T17:29:20Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rolux | 152,646 | MDQ6VXNlcjE1MjY0Ng== | User | false |
huggingface/diffusers | 1,419,306,416 | I_kwDOHa8MBc5UmOWw | 946 | https://github.com/huggingface/diffusers/issues/946 | https://api.github.com/repos/huggingface/diffusers/issues/946 | NameError: name 'str2optimizer8bit_blockwise' is not defined | ### Describe the bug
Trying to migrate the Colab scripts to RunPod; I followed along, then ran into this error (at the Run Training section)
### Reproduction
Running the Colab ipynb scripts on JupyterLab. First it said `num_processes` was undefined; I specified `num_processes=1` in the Run Training section, and then it returned this error
#... | closed | completed | false | 9 | [
"bug"
] | [
"patil-suraj"
] | 2022-10-22T13:49:30Z | 2023-06-27T23:07:12Z | 2022-11-03T08:15:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rexroth0619 | 4,176,960 | MDQ6VXNlcjQxNzY5NjA= | User | false |
huggingface/diffusers | 2,533,185,639 | I_kwDOHa8MBc6W_Vhn | 9,460 | https://github.com/huggingface/diffusers/issues/9460 | https://api.github.com/repos/huggingface/diffusers/issues/9460 | Enable assigning a list of SD3ControlNetModel to StableDiffusion3ControlNetPipeline | ### Describe the bug
StableDiffusion3ControlNetPipeline seems to lack full support for multiple controlnets like in StableDiffusionXLControlNetPipeline.
When feeding it a list of SD3ControlNetModel, it does not convert them to SD3MultiControlNetModel and then crashes with:
AttributeError: 'list' object has no attrib... | closed | completed | false | 4 | [
"bug"
] | [] | 2024-09-18T09:22:40Z | 2024-10-19T15:26:31Z | 2024-10-19T15:26:31Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | avrech | 40,227,814 | MDQ6VXNlcjQwMjI3ODE0 | User | false |
huggingface/diffusers | 2,533,415,645 | I_kwDOHa8MBc6XANrd | 9,461 | https://github.com/huggingface/diffusers/issues/9461 | https://api.github.com/repos/huggingface/diffusers/issues/9461 | Flux inference og | ### Describe the bug
I use the optimization.quanto package to call the quantization function. When the model is quantized to fp8, the speed is much slower than with bf16. I want to know why, thank you.
### Reproduction
```python
`transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/Kijai/flu... | closed | completed | true | 3 | [
"bug"
] | [] | 2024-09-18T11:00:32Z | 2024-09-19T02:28:45Z | 2024-09-19T02:28:45Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | echosyy | 43,342,220 | MDQ6VXNlcjQzMzQyMjIw | User | false |
huggingface/diffusers | 2,534,670,697 | I_kwDOHa8MBc6XFAFp | 9,464 | https://github.com/huggingface/diffusers/issues/9464 | https://api.github.com/repos/huggingface/diffusers/issues/9464 | train_dreambooth_lora_flux with prodigy and train_text_encoder causes IndexError: list index out of range | ### Describe the bug
train_dreambooth_lora_flux.py when running with --train_text_encoder --optimizer="prodigy" causes IndexError: list index out of range because of this:
09/18/2024 20:06:33 - WARNING - __main__ - Learning rates were provided both for the transformer and the text encoder- e.g. text_encoder_lr: 5e-... | closed | completed | false | 6 | [
"bug",
"stale"
] | [] | 2024-09-18T20:32:47Z | 2024-10-28T11:08:36Z | 2024-10-28T11:07:31Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | squewel | 97,603,184 | U_kgDOBdFOcA | User | false |
huggingface/diffusers | 2,535,433,819 | I_kwDOHa8MBc6XH6Zb | 9,467 | https://github.com/huggingface/diffusers/issues/9467 | https://api.github.com/repos/huggingface/diffusers/issues/9467 | CogVideoX I2V: Missing guard-rail on num_frames | ### Describe the bug
CogVideoX I2V:
There is a warning when the negative_prompt is defined, but no warning when num_frames is defined. However, the latter will cause a faulty render, so a guard-rail warning should perhaps be added for a user-defined num_frames value.
The faulty video:
https://github.com/user-attachment... | closed | completed | false | 5 | [
"bug"
] | [] | 2024-09-19T06:42:18Z | 2024-11-20T02:00:59Z | 2024-11-20T02:00:58Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | tin2tin | 1,322,593 | MDQ6VXNlcjEzMjI1OTM= | User | false |
huggingface/diffusers | 2,535,748,354 | I_kwDOHa8MBc6XJHMC | 9,470 | https://github.com/huggingface/diffusers/issues/9470 | https://api.github.com/repos/huggingface/diffusers/issues/9470 | Prompt scheduling in Diffusers like A1111 | Hi everyone, I have a question that how to implement the [prompt scheduling feature](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing) in A1111 by diffusers library.
**Example prompt:** Official portrait of a smiling world war ii general, `[male:female:0.99]`, cheerful, happy, det... | closed | completed | false | 5 | [] | [] | 2024-09-19T09:07:30Z | 2024-10-19T17:22:23Z | 2024-10-19T17:22:23Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | linhbeige | 171,725,048 | U_kgDOCjxQ-A | User | false |
huggingface/diffusers | 2,536,677,047 | I_kwDOHa8MBc6XMp63 | 9,471 | https://github.com/huggingface/diffusers/issues/9471 | https://api.github.com/repos/huggingface/diffusers/issues/9471 | ValueError: Multiple file extensions found at ./models/coreml-stable-diffusion-v1-4_original_packages.Cannot infer resource type from contents. | ### Describe the bug
The first problem is that running https://huggingface.co/docs/diffusers/v0.30.3/en/optimization/coreml#stable-diffusion-core-ml-checkpoints fails.
The second problem is that `For example, if you want to use [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5):` ... | closed | completed | false | 4 | [
"bug"
] | [] | 2024-09-19T15:20:00Z | 2024-09-28T20:19:44Z | 2024-09-28T20:19:44Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | LukeLIN-web | 60,426,396 | MDQ6VXNlcjYwNDI2Mzk2 | User | false |
huggingface/diffusers | 2,537,489,591 | I_kwDOHa8MBc6XPwS3 | 9,474 | https://github.com/huggingface/diffusers/issues/9474 | https://api.github.com/repos/huggingface/diffusers/issues/9474 | Formulation of reverse diffusion process in DDPM ( 1 - alpha_prod_t = beta_prod_t assumption) | ### Describe the bug
I looked into the sampling code of DDPM, and I believe there's a mistake:
#### I believe the code makes the assumption that 1 - alpha_prod_t = beta_prod_t, which simply isn't true.
Original sampling algorithm:
[original paper](https://arxiv.org/pdf/2006.11239)
 and bias type (c10::BFloat16) should be the same | ### Describe the bug
When train_dreambooth_lora_flux attempts to generate images during validation, `RuntimeError: Input type (float) and bias type (c10::BFloat16) should be the same` is thrown
### Reproduction
Just follow the steps from `README_flux.md` for DreamBooth LoRA with text-encoder training:
```export M... | open | null | false | 11 | [
"bug"
] | [] | 2024-09-19T23:57:40Z | 2025-03-18T20:28:34Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | squewel | 97,603,184 | U_kgDOBdFOcA | User | false |
huggingface/diffusers | 2,537,561,068 | I_kwDOHa8MBc6XQBvs | 9,477 | https://github.com/huggingface/diffusers/issues/9477 | https://api.github.com/repos/huggingface/diffusers/issues/9477 | [BUG] 'GatheredParameters' object is not callable | ### Describe the bug
'GatheredParameters' object is not callable when we use zero3 in text-to-image.py
The `context_manager` has already been initialized, so why do we need `with context_manager()`? I think it should be `with context_manager`.
After replacing `with context_manager()` with `with context_manager`, the problem is... | closed | completed | false | 2 | [
"bug"
] | [] | 2024-09-20T00:05:38Z | 2024-10-16T01:14:11Z | 2024-10-16T01:14:11Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | LinB203 | 62,638,829 | MDQ6VXNlcjYyNjM4ODI5 | User | false |
huggingface/diffusers | 2,537,715,677 | I_kwDOHa8MBc6XQnfd | 9,479 | https://github.com/huggingface/diffusers/issues/9479 | https://api.github.com/repos/huggingface/diffusers/issues/9479 | StableDiffusionInpaintPipeline changes areas that I didn't mask out. | ### Describe the bug
StableDiffusionInpaintPipeline changes areas that I didn't mask out.
### Reproduction
pipe = StableDiffusionInpaintPipeline.from_single_file(
"../mymodels/cyberrealistic_v50-inpainting.safetensors", torch_dtype=torch.float16
)
pipe.to("cuda")
pipe.scheduler = DPMSolverMultistepSchedule... | closed | completed | false | 0 | [
"bug"
] | [] | 2024-09-20T02:43:04Z | 2024-09-20T03:01:09Z | 2024-09-20T03:01:09Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | arceus-jia | 5,162,767 | MDQ6VXNlcjUxNjI3Njc= | User | false |
huggingface/diffusers | 2,538,196,676 | I_kwDOHa8MBc6XSc7E | 9,484 | https://github.com/huggingface/diffusers/issues/9484 | https://api.github.com/repos/huggingface/diffusers/issues/9484 | FLUX dreambooth train on multigpu with deepspeed | ### Describe the bug
I'm using train_dreambooth_flux.py to finetune Flux. I get OOM on 4x A100 80GB with DeepSpeed stage 2, gradient checkpointing, bf16 mixed precision, 1024px x 1024px input, the Adafactor optimizer, and batch size 1. It can only run with DeepSpeed stage 3, but that is too slow, about 16 sec/it.
### Reproductio... | open | null | false | 13 | [
"bug",
"stale"
] | [] | 2024-09-20T08:22:00Z | 2024-11-03T15:02:38Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhangvia | 38,352,569 | MDQ6VXNlcjM4MzUyNTY5 | User | false |
huggingface/diffusers | 2,538,719,196 | I_kwDOHa8MBc6XUcfc | 9,485 | https://github.com/huggingface/diffusers/issues/9485 | https://api.github.com/repos/huggingface/diffusers/issues/9485 | Can we allow making everything on gpu/cuda for scheduler? | **What API design would you like to have changed or added to the library? Why?**
Is it possible to allow setting every tensor attribute of the scheduler to the CUDA device?
In https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py
It looks like that attributes like `scheduler.alphas_cu... | open | null | false | 14 | [
"stale",
"scheduler",
"performance"
] | [
"yiyixuxu",
"a-r-r-o-w"
] | 2024-09-20T12:38:16Z | 2024-12-17T15:04:46Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | xiang9156 | 14,140,862 | MDQ6VXNlcjE0MTQwODYy | User | false |
huggingface/diffusers | 2,538,808,001 | I_kwDOHa8MBc6XUyLB | 9,486 | https://github.com/huggingface/diffusers/issues/9486 | https://api.github.com/repos/huggingface/diffusers/issues/9486 | Problem with FluxInpaintPipeline when doing a replace. | ### Describe the bug
Since this is a relatively new feature of Flux, problems are to be expected. The images below tell the whole story.
The issue is that the black mask (the part to preserve) is overlaying the new image with a copy of the masked image.
I'm using an image of a dog sitting on a park bench. I mask th... | closed | completed | false | 8 | [
"bug"
] | [] | 2024-09-20T13:17:30Z | 2024-09-21T18:54:24Z | 2024-09-21T18:54:24Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ukaprch | 107,368,096 | U_kgDOBmZOoA | User | false |
huggingface/diffusers | 2,539,231,560 | I_kwDOHa8MBc6XWZlI | 9,487 | https://github.com/huggingface/diffusers/issues/9487 | https://api.github.com/repos/huggingface/diffusers/issues/9487 | Add GGUF loader for FluxTransformer2DModel | [GGUF](https://huggingface.co/docs/hub/en/gguf) is becoming a preferred means of distribution of FLUX fine-tunes.
Transformers recently added general support for GGUF and are slowly adding support for [additional model types](https://github.com/huggingface/transformers/issues/33260).
(implementation is by adding `g... | closed | completed | false | 20 | [] | [
"DN6"
] | 2024-09-20T16:45:32Z | 2024-12-18T10:52:31Z | 2024-12-18T10:52:30Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,539,849,000 | I_kwDOHa8MBc6XYwUo | 9,488 | https://github.com/huggingface/diffusers/issues/9488 | https://api.github.com/repos/huggingface/diffusers/issues/9488 | Lumina pipeline fails to generate any image | ### Describe the bug
I found that the current version of diffusers fails to generate any image using the Lumina pipeline. However, version 0.30.0 works well. So I guess some related modules were changed during the update, but after some debugging I have no idea which.
### Reproduction
```python
import torch
from diffusers... | closed | completed | false | 4 | [
"bug"
] | [] | 2024-09-21T00:36:09Z | 2024-09-23T10:43:36Z | 2024-09-23T10:43:35Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhuole1025 | 53,815,869 | MDQ6VXNlcjUzODE1ODY5 | User | false |
huggingface/diffusers | 2,540,334,488 | I_kwDOHa8MBc6Xam2Y | 9,490 | https://github.com/huggingface/diffusers/issues/9490 | https://api.github.com/repos/huggingface/diffusers/issues/9490 | [Schedulers] Analysis of `simple`, `exponential`, `polyexponential` and `beta` | I'm creating this issue to present my findings in relation to a discussion in #9416 about supporting additional schedulers used in A1111/Forge/Comfy etc. specifically `simple`, `exponential`, `polyexponential` and `beta` schedulers.
I've tested these schedulers and compared them to Diffusers with step counts `4`, `8... | closed | completed | false | 7 | [
"stale",
"scheduler"
] | [] | 2024-09-21T14:53:37Z | 2024-11-23T20:29:38Z | 2024-11-23T20:29:38Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hlky | 106,811,348 | U_kgDOBl3P1A | User | false |
huggingface/diffusers | 2,541,145,485 | I_kwDOHa8MBc6Xds2N | 9,492 | https://github.com/huggingface/diffusers/issues/9492 | https://api.github.com/repos/huggingface/diffusers/issues/9492 | Add FLUX image-to-image to `AutoPipelineForImage2Image` | **Is your feature request related to a problem? Please describe.**
Currently `FluxImg2ImgPipeline` is not mapped to `AutoPipelineForImage2Image`
**Describe the solution you'd like.**
A clear and concise description of what you want to happen.
**Additional context.**
Here's the error
```shell
Traceback (most ... | closed | completed | false | 1 | [] | [] | 2024-09-22T16:05:47Z | 2024-09-22T16:15:11Z | 2024-09-22T16:15:11Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | apolinario | 788,417 | MDQ6VXNlcjc4ODQxNw== | User | false |
huggingface/diffusers | 2,541,935,913 | I_kwDOHa8MBc6Xgt0p | 9,495 | https://github.com/huggingface/diffusers/issues/9495 | https://api.github.com/repos/huggingface/diffusers/issues/9495 | SDXL PAG with IPAdapter is not working. | ### Describe the bug
I am currently using diffusers, SDXL 1.0 base + PAG + IPAdapter. The StableDiffusionXLControlNetPipeline is unable to handle both PAG and IPAdapter with a ControlNet canny image as the input.
Goal: I want to have a canny input image and generate an image such that it uses both the re... | closed | completed | false | 2 | [
"bug"
] | [] | 2024-09-23T08:32:17Z | 2024-09-23T13:17:08Z | 2024-09-23T13:17:07Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | akshatd007 | 166,140,134 | U_kgDOCecY5g | User | false |
huggingface/diffusers | 2,542,245,713 | I_kwDOHa8MBc6Xh5dR | 9,496 | https://github.com/huggingface/diffusers/issues/9496 | https://api.github.com/repos/huggingface/diffusers/issues/9496 | SD3ControlNetModel forward function error | ### Describe the bug
The following codes are line 326 - line 352 in diffusers/models/controlnet_sd3.py. "hidden_states" returned by "torch.utils.checkpoint.checkpoint" in if branch is a tuple, while "hidden_states" returned by "block" in else branch is a tensor. The following layers require a tensor. So when traini... | closed | completed | false | 3 | [
"bug"
] | [] | 2024-09-23T10:40:12Z | 2024-09-23T22:30:51Z | 2024-09-23T22:30:50Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | pibbo88 | 81,701,354 | MDQ6VXNlcjgxNzAxMzU0 | User | false |
huggingface/diffusers | 2,542,643,226 | I_kwDOHa8MBc6Xjaga | 9,497 | https://github.com/huggingface/diffusers/issues/9497 | https://api.github.com/repos/huggingface/diffusers/issues/9497 | Dreambooth Flux training error: RuntimeError: mat2 must be a matrix, got 1-D tensor | ### Describe the bug
I run the training but get this error
### Reproduction
Run `accelerate config`
```
compute_environment: LOCAL_MACHINE
debug: true
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
fsdp_activation_checkpointing: true
fsdp_auto_wrap_policy: TRANSFORM... | closed | completed | false | 12 | [
"bug",
"stale"
] | [] | 2024-09-23T13:24:15Z | 2024-12-10T01:31:01Z | 2024-12-05T15:15:45Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kopyl | 17,604,849 | MDQ6VXNlcjE3NjA0ODQ5 | User | false |
huggingface/diffusers | 2,542,920,097 | I_kwDOHa8MBc6XkeGh | 9,500 | https://github.com/huggingface/diffusers/issues/9500 | https://api.github.com/repos/huggingface/diffusers/issues/9500 | Dreambooth Flux training failed on saving a checkpoint | ### Describe the bug
I run the training but get this error
### Reproduction
Run accelerate config
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: true
fsdp_config:
fsdp_activation_checkpointing: true
fsdp_auto_wrap_policy: TRANSFORMER... | open | null | false | 26 | [
"bug",
"stale"
] | [] | 2024-09-23T14:56:14Z | 2025-01-16T15:05:08Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kopyl | 17,604,849 | MDQ6VXNlcjE3NjA0ODQ5 | User | false |
huggingface/diffusers | 2,542,974,477 | I_kwDOHa8MBc6XkrYN | 9,501 | https://github.com/huggingface/diffusers/issues/9501 | https://api.github.com/repos/huggingface/diffusers/issues/9501 | Dreambooth Flux training does not save a model for around 10-15 minutes | ### Describe the bug
This time I set the number of steps to 2 to make sure it correctly saves the model after an hour of training. But it does not.
### Reproduction
Run `accelerate config`
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: true
fs... | open | null | false | 16 | [
"bug",
"stale"
] | [] | 2024-09-23T15:15:11Z | 2025-02-15T15:05:37Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kopyl | 17,604,849 | MDQ6VXNlcjE3NjA0ODQ5 | User | false |
huggingface/diffusers | 2,542,980,514 | I_kwDOHa8MBc6Xks2i | 9,503 | https://github.com/huggingface/diffusers/issues/9503 | https://api.github.com/repos/huggingface/diffusers/issues/9503 | Confusion about `FrozenDict` in `configuration_utils.py` | I am confused about the design of `FrozenDict` in `configuration_utils.py` and the usage of it.
### 1. Is `FrozenDict` really frozen?
From the code, `FrozenDict` sets `self.__frozen = True` during initialization. It then checks `if hasattr(self, "__frozen") and self.__frozen` in methods like `__setattr__` or `__se... | open | null | false | 3 | [
"stale"
] | [] | 2024-09-23T15:17:46Z | 2024-11-10T15:02:53Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | townwish4git | 143,256,262 | U_kgDOCInqxg | User | false |
huggingface/diffusers | 2,543,445,463 | I_kwDOHa8MBc6XmeXX | 9,505 | https://github.com/huggingface/diffusers/issues/9505 | https://api.github.com/repos/huggingface/diffusers/issues/9505 | Can't upscale anything. Keep getting "KeyError: 'middle_block_out.0.weight'" error/ | ### Describe the bug
I have a newly installed version of Invoke v5.0.0.rc1. I take a simple image and try to upscale it, but I receive this error message:
```
[2024-09-23 15:05:39,510]::[InvokeAI]::ERROR --> Error while invoking session 9f8db22a-dc2e-4fef-b19b-5e9672a7e259, invocation d3477dbe-f38e-4ea6-b5bb-bc1c3232d... | closed | completed | false | 3 | [
"bug",
"stale"
] | [] | 2024-09-23T19:13:26Z | 2024-10-24T15:06:30Z | 2024-10-24T15:06:30Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | evanerichards | 10,945,957 | MDQ6VXNlcjEwOTQ1OTU3 | User | false |
huggingface/diffusers | 2,543,743,358 | I_kwDOHa8MBc6XnnF- | 9,508 | https://github.com/huggingface/diffusers/issues/9508 | https://api.github.com/repos/huggingface/diffusers/issues/9508 | AnimateDiff SparseCtrl RGB does not work as expected | Relevant comments are [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255416318) and [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255478105).
AnimateDiff SparseCtrl RGB does not work similarly to other implementations and cannot replicate their outputs. This makes me ... | open | null | false | 9 | [
"bug",
"help wanted",
"stale",
"contributions-welcome",
"advanced"
] | [] | 2024-09-23T21:42:54Z | 2025-08-10T16:47:50Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | a-r-r-o-w | 72,266,394 | MDQ6VXNlcjcyMjY2Mzk0 | User | false |
huggingface/diffusers | 2,544,637,781 | I_kwDOHa8MBc6XrBdV | 9,511 | https://github.com/huggingface/diffusers/issues/9511 | https://api.github.com/repos/huggingface/diffusers/issues/9511 | Multi-controlnet batching for `StableDiffusionXLControlNetInpaintPipeline` | **Is your feature request related to a problem? Please describe.**
Currently, batching is not supported when we are conditioning the SDXL pipeline on multiple controlnets: https://github.com/huggingface/diffusers/blob/28f9d84549c0b1d24ef00d69a4c723f3a11cffb6/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpai... | open | null | false | 4 | [
"stale",
"contributions-welcome"
] | [] | 2024-09-24T07:53:14Z | 2025-01-31T15:04:46Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kdubovikov | 832,185 | MDQ6VXNlcjgzMjE4NQ== | User | false |
huggingface/diffusers | 2,545,065,632 | I_kwDOHa8MBc6Xsp6g | 9,514 | https://github.com/huggingface/diffusers/issues/9514 | https://api.github.com/repos/huggingface/diffusers/issues/9514 | Flux-dev - Unable to load LoRA weights after fp8 quantisation | ### Describe the bug
I tried to quantise the Flux.1-dev model and load LoRA weights, but I get an error. The state_dict doesn't match.
### Reproduction
```
import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from transformers import T5EncoderModel, CLIPTextModel
from optimum.quanto import... | closed | completed | false | 4 | [
"bug"
] | [] | 2024-09-24T10:58:33Z | 2024-12-17T11:33:57Z | 2024-09-24T11:04:50Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kkumar5991 | 111,890,010 | U_kgDOBqtOWg | User | false |
huggingface/diffusers | 2,545,174,488 | I_kwDOHa8MBc6XtEfY | 9,515 | https://github.com/huggingface/diffusers/issues/9515 | https://api.github.com/repos/huggingface/diffusers/issues/9515 | The `JointAttnProcessor2_0` in the `SD3Transformer2DModel` does not include `RMSNorm`. | ### Describe the bug
The `SD3Transformer2DModel` utilizes the `JointTransformerBlock`, where the attention is handled by `JointAttnProcessor2_0`. However, `JointAttnProcessor2_0` does not include `RMSNorm`, which is inconsistent with the SD3 paper.
### Reproduction
For detailed code, please see:
- `SD3Trans... | closed | completed | false | 2 | [
"bug",
"stale"
] | [] | 2024-09-24T11:49:01Z | 2024-10-25T22:09:22Z | 2024-10-25T22:09:22Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | HuiZhang0812 | 44,562,591 | MDQ6VXNlcjQ0NTYyNTkx | User | false |
huggingface/diffusers | 2,545,581,951 | I_kwDOHa8MBc6Xun9_ | 9,516 | https://github.com/huggingface/diffusers/issues/9516 | https://api.github.com/repos/huggingface/diffusers/issues/9516 | parameters `joint_attention_kwargs` doesn't be passed to FLUX's transformers model | ### Describe the bug
Like `cross_attention_kwargs` in UNet, I want to modify the attention processor of the FLUX model and pass the extra parameter via `joint_attention_kwargs`, which is written in the FluxPipeline:
```python
noise_pred = self.transformer(
hidden_states=latents,
... | closed | completed | false | 3 | [
"bug",
"stale"
] | [] | 2024-09-24T14:31:41Z | 2024-10-25T22:09:49Z | 2024-10-25T22:09:49Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | HorizonWind2004 | 50,254,737 | MDQ6VXNlcjUwMjU0NzM3 | User | false |
huggingface/diffusers | 2,545,970,154 | I_kwDOHa8MBc6XwGvq | 9,519 | https://github.com/huggingface/diffusers/issues/9519 | https://api.github.com/repos/huggingface/diffusers/issues/9519 | self.scheduler.add_noise did not add any noise for the given latent | null | closed | completed | false | 0 | [
"bug"
] | [] | 2024-09-24T17:30:42Z | 2024-09-24T23:07:23Z | 2024-09-24T23:06:56Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | LonglongaaaGo | 34,880,268 | MDQ6VXNlcjM0ODgwMjY4 | User | false |
huggingface/diffusers | 1,419,578,363 | I_kwDOHa8MBc5UnQv7 | 952 | https://github.com/huggingface/diffusers/issues/952 | https://api.github.com/repos/huggingface/diffusers/issues/952 | Performance Issue with RTX 4090 and all SD/Diffusers versions | ### Describe the bug
Hello!
For 10 days, nearly around the clock, I have been trying to get my brand new and proudly owned GeForce RTX 4090 graphics card to work properly with Stable Diffusion. But now, 10 days later, it is still performing around 50% below its potential.
In those 240 hours I switched from Ubuntu to Man... | closed | completed | false | 47 | [
"bug",
"stale"
] | [
"pcuenca",
"anton-l",
"NouamaneTazi"
] | 2022-10-23T01:04:09Z | 2023-02-07T15:04:08Z | 2023-02-07T15:04:08Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Marcophono2 | 22,599,855 | MDQ6VXNlcjIyNTk5ODU1 | User | false |
huggingface/diffusers | 2,546,027,708 | I_kwDOHa8MBc6XwUy8 | 9,520 | https://github.com/huggingface/diffusers/issues/9520 | https://api.github.com/repos/huggingface/diffusers/issues/9520 | UNetMotionModel.dtype is really expensive to call, is it possible to cache it during inference? | **What API design would you like to have changed or added to the library? Why?**
we are using class UNetMotionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin)
and its `forward()` implementation is calling self.dtype, which is very expensive:
File "/works... | closed | completed | false | 2 | [
"bug"
] | [] | 2024-09-24T21:40:30Z | 2024-09-25T01:34:20Z | 2024-09-25T01:34:19Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Night1099 | 90,132,896 | MDQ6VXNlcjkwMTMyODk2 | User | false |
huggingface/diffusers | 2,546,462,185 | I_kwDOHa8MBc6Xx-3p | 9,525 | https://github.com/huggingface/diffusers/issues/9525 | https://api.github.com/repos/huggingface/diffusers/issues/9525 | `lora_scale` has no effect when loading with Flux | ### Describe the bug
According to [loading loras for inference](https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference) an argument `cross_attention_kwargs={"scale": 0.5}` can be added to a `pipeline()` call to vary the impact of a LORA on image generation. As the `FluxPipeline` class doesn... | closed | completed | false | 15 | [
"bug"
] | [] | 2024-09-24T21:55:08Z | 2024-09-27T02:03:14Z | 2024-09-27T02:03:13Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | cshowley | 12,364,081 | MDQ6VXNlcjEyMzY0MDgx | User | false |
huggingface/diffusers | 2,546,770,500 | I_kwDOHa8MBc6XzKJE | 9,527 | https://github.com/huggingface/diffusers/issues/9527 | https://api.github.com/repos/huggingface/diffusers/issues/9527 | dtype error when using controlnet fp32 and mainpipe bf16 | ### Describe the bug
An error occurs when loading controlnet as fp32 and loading mainpipe as bf16
### Reproduction
```python
import torch
from diffusers.utils import load_image
from diffusers.pipelines.flux.pipeline_flux_controlnet import FluxControlNetPipeline
from diffusers.models.controlnet_flux import ... | closed | completed | false | 12 | [
"bug",
"stale"
] | [] | 2024-09-25T02:43:11Z | 2024-10-26T23:13:58Z | 2024-10-26T23:13:57Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | PromeAIpro | 178,361,217 | U_kgDOCqGTgQ | User | false |
huggingface/diffusers | 2,546,872,666 | I_kwDOHa8MBc6XzjFa | 9,528 | https://github.com/huggingface/diffusers/issues/9528 | https://api.github.com/repos/huggingface/diffusers/issues/9528 | load_ip_adapter for distilled sd models | Is it possible to load IP-Adapter for distilled SD v1 or v2 based models such as nota-ai/bk-sdm-tiny or nota-ai/bk-sdm-v2-tiny?
When I tried to load ip adapter using bk-sdm-tiny
```python
pipe.load_ip_adapter(
"h94/IP-Adapter",
subfolder="models",
weight_name="ip-adapter-plus_sd15.bin",
low_c... | closed | completed | false | 7 | [
"stale"
] | [] | 2024-09-25T04:31:00Z | 2025-01-12T06:01:40Z | 2025-01-12T06:01:40Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kmpartner | 17,810,734 | MDQ6VXNlcjE3ODEwNzM0 | User | false |
huggingface/diffusers | 1,419,587,657 | I_kwDOHa8MBc5UnTBJ | 953 | https://github.com/huggingface/diffusers/issues/953 | https://api.github.com/repos/huggingface/diffusers/issues/953 | Any guidance on creating smaller images? 256x256 or 384x384? | **Is your feature request related to a problem? Please describe.**
I’m looking for ways to speed the process and save memory
**Describe the solution you'd like**
I wondered if it is possible to create smaller images.
**Describe alternatives you've considered**
I tried setting the images size with poor results ... | closed | completed | false | 4 | [
"stale"
] | [] | 2022-10-23T01:33:10Z | 2023-02-02T02:49:24Z | 2022-12-01T15:03:22Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | MatthewWaller | 5,520,521 | MDQ6VXNlcjU1MjA1MjE= | User | false |
huggingface/diffusers | 2,547,337,413 | I_kwDOHa8MBc6X1UjF | 9,530 | https://github.com/huggingface/diffusers/issues/9530 | https://api.github.com/repos/huggingface/diffusers/issues/9530 | Freeing GPU memory after `torch.compile` StableDiffusionXLPipeline UNet | While exploring optimizations listed in the [documentation](https://huggingface.co/docs/diffusers/optimization/torch2.0), I find myself unable to free GPU memory after using `torch.compile` on a StableDiffusionXLPipeline UNet.
```Python
from diffusers import StableDiffusionXLPipeline
pipe = StableDiffusionXLPipe... | open | null | false | 4 | [
"stale",
"torch-compile"
] | [] | 2024-09-25T08:33:02Z | 2024-11-21T15:03:27Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | To-jak | 33,063,955 | MDQ6VXNlcjMzMDYzOTU1 | User | false |
huggingface/diffusers | 2,548,890,888 | I_kwDOHa8MBc6X7P0I | 9,531 | https://github.com/huggingface/diffusers/issues/9531 | https://api.github.com/repos/huggingface/diffusers/issues/9531 | SDXL max sigma value should be doubled for 1024px generations | ### Describe the bug
https://arxiv.org/abs/2409.15997
as outlined by NovelAI for their SDXL-based model, doubling sigma max is required for each doubling in the canvas length.
### Reproduction
N/A
### Logs
_No response_
### System Info
-
### Who can help?
_No response_ | closed | not_planned | false | 6 | [
"bug",
"stale",
"scheduler"
] | [] | 2024-09-25T19:59:32Z | 2024-10-26T15:37:35Z | 2024-10-26T15:37:35Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | bghira | 59,658,056 | MDQ6VXNlcjU5NjU4MDU2 | User | false |
huggingface/diffusers | 2,549,115,236 | I_kwDOHa8MBc6X8Glk | 9,532 | https://github.com/huggingface/diffusers/issues/9532 | https://api.github.com/repos/huggingface/diffusers/issues/9532 | Overflow error when using a pretrained repo locally | ### Describe the bug
Hi.
Whenever I try to load a model repo locally, I get this following error:
```
OverflowError: cannot fit 'int' into an index-sized integer
```
Not exclusive to this because I did get this error before but now I'm getting it for the `RealVisXL_V4.0_inpainting`. However, when I try to pul... | closed | completed | false | 6 | [
"bug",
"stale"
] | [] | 2024-09-25T22:38:51Z | 2025-01-12T06:00:39Z | 2025-01-12T06:00:39Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | levoz92 | 162,665,819 | U_kgDOCbIVWw | User | false |
huggingface/diffusers | 2,549,230,864 | I_kwDOHa8MBc6X8i0Q | 9,534 | https://github.com/huggingface/diffusers/issues/9534 | https://api.github.com/repos/huggingface/diffusers/issues/9534 | [FluxMultiControlNetModel] object has no attribute 'config | ### Describe the bug
This commit on main https://github.com/huggingface/diffusers/commit/14a1b86fc7de53ff1dbf803f616cbb16ad530e45 seems to have broken FluxMultiControlNetModel. Reverting this commit fixes the issue on line `pipeline_flux_controlnet.py:844`
Referencing this issue:
https://huggingface.co/Shakker-L... | closed | completed | false | 8 | [
"bug"
] | [] | 2024-09-26T00:35:09Z | 2025-01-02T18:48:10Z | 2025-01-02T18:48:10Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | darhsu | 35,377,472 | MDQ6VXNlcjM1Mzc3NDcy | User | false |
huggingface/diffusers | 2,551,284,799 | I_kwDOHa8MBc6YEYQ_ | 9,539 | https://github.com/huggingface/diffusers/issues/9539 | https://api.github.com/repos/huggingface/diffusers/issues/9539 | "index_select_cuda" not implemented for 'Float8_e4m3fn' error from CogVideoXImageToVideoPipeline | ### Describe the bug
Hello. I am trying to load CogVideoXImageToVideo in FP8 and I am getting this error
without FP8 no such errors
I am simply following this page : https://huggingface.co/THUDM/CogVideoX-5b
Diffusers commit id is latest : diffusers @ git+https://github.com/huggingface/diffusers.git@665c6b4... | closed | completed | false | 17 | [
"bug"
] | [] | 2024-09-26T18:35:52Z | 2024-10-11T11:36:16Z | 2024-10-11T11:36:16Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | FurkanGozukara | 19,240,467 | MDQ6VXNlcjE5MjQwNDY3 | User | false |
huggingface/diffusers | 1,419,604,731 | I_kwDOHa8MBc5UnXL7 | 954 | https://github.com/huggingface/diffusers/issues/954 | https://api.github.com/repos/huggingface/diffusers/issues/954 | Not able to load after a successful login | ### Describe the bug
I am trying to load diffusers either from a remote. Remote huggingface diffusers is not accessible after a successful login
### Reproduction
```shell
(pytorch)$ huggingface-cli login
_| _| _| _| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _|_|_|_| _|_| ... | closed | completed | false | 4 | [
"bug",
"stale"
] | [] | 2022-10-23T02:34:50Z | 2022-12-01T15:03:21Z | 2022-12-01T15:03:21Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DSLituiev | 8,426,290 | MDQ6VXNlcjg0MjYyOTA= | User | false |
huggingface/diffusers | 2,551,905,116 | I_kwDOHa8MBc6YGvtc | 9,540 | https://github.com/huggingface/diffusers/issues/9540 | https://api.github.com/repos/huggingface/diffusers/issues/9540 | [Flux ControlNet] ControlNet initialization from transformer seems to be broken | Originally caught in https://github.com/huggingface/diffusers/pull/9324.
Reproduction:
```py
from diffusers import FluxTransformer2DModel, FluxControlNetModel
transformer = FluxTransformer2DModel.from_pretrained(
"hf-internal-testing/tiny-flux-pipe", subfolder="transformer"
)
controlnet = FluxControlN... | closed | completed | false | 5 | [] | [
"yiyixuxu"
] | 2024-09-27T03:16:06Z | 2024-09-27T03:42:06Z | 2024-09-27T03:22:46Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,552,473,040 | I_kwDOHa8MBc6YI6XQ | 9,541 | https://github.com/huggingface/diffusers/issues/9541 | https://api.github.com/repos/huggingface/diffusers/issues/9541 | Training a specific Flux Controlnet Model from Controlnet-Union | Hi!
The training controlnet for flux script does not include control_mode implementation right now, and it's not allowing training any specific controlnet models from InstantX/Flux.1-dev-Controlnet-Union model.
Adding the control_mode to the train_controlnet_flux.py script can be a good solution for this.
Tha... | closed | completed | false | 3 | [
"stale"
] | [] | 2024-09-27T09:37:20Z | 2025-01-12T05:59:23Z | 2025-01-12T05:59:23Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | burcukilic | 94,201,593 | U_kgDOBZ1m-Q | User | false |
huggingface/diffusers | 2,553,920,407 | I_kwDOHa8MBc6YObuX | 9,546 | https://github.com/huggingface/diffusers/issues/9546 | https://api.github.com/repos/huggingface/diffusers/issues/9546 | Flux Controlnet Train Example, will run out of memory on validation step | ### Describe the bug
On default settings provided in flux train example readme, with 10 validation images training will error out with out of memory error during validation. on A100 80GB
```
09/28/2024 00:34:14 - INFO - __main__ - Running validation...
model_index.json: 100%|█████████████████████████████████... | closed | completed | false | 16 | [
"bug"
] | [] | 2024-09-28T00:41:29Z | 2024-11-09T15:38:16Z | 2024-11-09T15:38:16Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Night1099 | 90,132,896 | MDQ6VXNlcjkwMTMyODk2 | User | false |
huggingface/diffusers | 2,554,331,206 | I_kwDOHa8MBc6YQABG | 9,548 | https://github.com/huggingface/diffusers/issues/9548 | https://api.github.com/repos/huggingface/diffusers/issues/9548 | Still Issue on flux dreambooth lora training #9237 | ### Describe the bug
I tried running `train_dreambooth_lora_flux.py` again with the merged source code, but I am still encountering an issue similar to #9237 during the `log_validation` stage.
I have resolved this issue with the following modification:
~~~python
autocast_ctx = nullcontext()
~~~
to
~~~pytho... | closed | completed | false | 5 | [
"bug"
] | [] | 2024-09-28T15:25:32Z | 2024-11-01T03:49:48Z | 2024-11-01T03:49:48Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | jeongiin | 48,753,785 | MDQ6VXNlcjQ4NzUzNzg1 | User | false |
huggingface/diffusers | 1,419,876,156 | I_kwDOHa8MBc5UoZc8 | 955 | https://github.com/huggingface/diffusers/issues/955 | https://api.github.com/repos/huggingface/diffusers/issues/955 | [Feature Request][Community] Ability to pass text_embeddings/uncond_embeddings as arguments in pipe call | **Is your feature request related to a problem? Please describe.**
I'm experimenting with aesthetic gradients and need to override the pipe call to pass text_embeddings/uncond_embeddings.
Also it might save a bit of time with making a lot of images with same promt.
**Describe the solution you'd like**
Ability to pass ... | closed | completed | false | 15 | [
"good first issue"
] | [
"pcuenca",
"anton-l",
"patil-suraj"
] | 2022-10-23T18:04:40Z | 2023-03-18T18:46:43Z | 2023-03-18T18:46:43Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hadaev8 | 20,247,085 | MDQ6VXNlcjIwMjQ3MDg1 | User | false |
huggingface/diffusers | 2,554,475,636 | I_kwDOHa8MBc6YQjR0 | 9,551 | https://github.com/huggingface/diffusers/issues/9551 | https://api.github.com/repos/huggingface/diffusers/issues/9551 | How to use x-labs flux controlnet models in diffusers? | ### Model/Pipeline/Scheduler description
The following controlnets are supported in Comfy UI, but was wondering how we can use these in diffusers as well for developers. Afaik, there is no from_single_file method for FluxControlNet to load the safetensors?
### Open source status
- [x] The model implementation ... | closed | completed | false | 2 | [] | [] | 2024-09-28T20:01:15Z | 2024-09-29T06:59:46Z | 2024-09-29T06:59:46Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | neuron-party | 96,799,331 | U_kgDOBcUKYw | User | false |
huggingface/diffusers | 2,554,565,028 | I_kwDOHa8MBc6YQ5Gk | 9,552 | https://github.com/huggingface/diffusers/issues/9552 | https://api.github.com/repos/huggingface/diffusers/issues/9552 | Using a custom pipeline with from_single_file | I'm trying the same thing as https://github.com/huggingface/diffusers/issues/3567 using from_single_file() (assuming this is a renamed from_ckpt().
So far, this is what I have:
pipe = StableDiffusionPipeline.from_single_file(
checkpoint,
torch_dtype=torch.float16,
variant="fp16",
... | closed | completed | false | 3 | [] | [] | 2024-09-29T00:02:16Z | 2024-10-29T15:09:16Z | 2024-10-29T15:09:16Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | agarwalml | 42,957,482 | MDQ6VXNlcjQyOTU3NDgy | User | false |
huggingface/diffusers | 2,554,913,999 | I_kwDOHa8MBc6YSOTP | 9,555 | https://github.com/huggingface/diffusers/issues/9555 | https://api.github.com/repos/huggingface/diffusers/issues/9555 | [Flux Controlnet] Add control_guidance_start and control_guidance_end | It'd be nice to have `control_guidance_start` and `control_guidance_end` parameters added to flux Controlnet and Controlnet Inpainting pipelines.
I'm currently making experiments with Flux Controlnet Inpainting but the results are poor even with a `controlnet_conditioning_scale` set to 0.6.
I have to set `cont... | closed | completed | false | 8 | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | [] | 2024-09-29T12:37:39Z | 2024-10-10T12:29:03Z | 2024-10-10T12:29:03Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | simbrams | 25,414,628 | MDQ6VXNlcjI1NDE0NjI4 | User | false |
huggingface/diffusers | 2,555,283,152 | I_kwDOHa8MBc6YTobQ | 9,556 | https://github.com/huggingface/diffusers/issues/9556 | https://api.github.com/repos/huggingface/diffusers/issues/9556 | Problems with saved lora models when using rslora. | ### Describe the bug
The SDXL model was fine-tuned using the rslora method and the training process went fine.
After training, the LoRA model was saved; when it was reloaded for the image generation test, the generated images were found to be wrong.
It seems that th... | open | null | false | 7 | [
"bug",
"stale"
] | [] | 2024-09-30T00:43:55Z | 2024-12-15T15:03:50Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ydniuyongjie | 42,044,877 | MDQ6VXNlcjQyMDQ0ODc3 | User | false |