| Column | Type | Range / distinct values |
|---|---|---|
| repo | stringclasses | 1 value |
| github_id | int64 | 1.27B – 4.42B |
| github_node_id | stringlengths | 18 – 24 |
| number | int64 | 8 – 13.7k |
| html_url | stringlengths | 49 – 53 |
| api_url | stringlengths | 59 – 63 |
| title | stringlengths | 1 – 402 |
| body | stringlengths | 1 – 62.9k |
| state | stringclasses | 2 values |
| state_reason | stringclasses | 4 values |
| locked | bool | 2 classes |
| comments_count | int64 | 0 – 99 |
| labels | listlengths | 0 – 5 |
| assignees | listlengths | 0 – 5 |
| created_at | stringdate | 2022-06-09 16:28:35 – 2026-05-11 21:29:10 |
| updated_at | stringdate | 2022-06-12 22:18:01 – 2026-05-13 10:44:12 |
| closed_at | stringdate | 2022-06-12 22:18:01 – 2026-05-13 10:44:12 |
| author_association | stringclasses | 3 values |
| milestone_title | stringclasses | 0 values |
| snapshot_id | stringclasses | 42 values |
| extracted_at | stringdate | 2026-04-07 13:34:13 – 2026-05-13 11:35:24 |
| author_login | stringlengths | 3 – 28 |
| author_id | int64 | 1.54k – 282M |
| author_node_id | stringlengths | 12 – 20 |
| author_type | stringclasses | 3 values |
| author_site_admin | bool | 1 class |
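Rows with this schema can be modeled as plain Python dicts keyed by the column names above. A minimal sketch (nothing here is dataset tooling; the two abbreviated rows copy a subset of fields from records appearing later in this dump):

```python
# Each dataset row maps the schema's column names to values.
# Only a subset of columns is shown, copied from rows in this dump.
rows = [
    {"repo": "huggingface/diffusers", "number": 11874, "state": "closed",
     "state_reason": "completed", "comments_count": 6, "labels": []},
    {"repo": "huggingface/diffusers", "number": 11889, "state": "open",
     "state_reason": None, "comments_count": 1, "labels": ["stale"]},
]

# Example query: closed-issue count and the numbers of open issues.
closed = sum(1 for r in rows if r["state"] == "closed")
open_numbers = [r["number"] for r in rows if r["state"] == "open"]
print(closed, open_numbers)  # → 1 [11889]
```

The same pattern scales to the full schema: any column in the table above becomes a dict key with the listed dtype.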
huggingface/diffusers issue #11874 (github_id 3,207,399,600 · node I_kwDOHa8MBc6_LQiw)
url: https://github.com/huggingface/diffusers/issues/11874
api: https://api.github.com/repos/huggingface/diffusers/issues/11874
title: [lora] `exclude_modules` won't consider the modules names in `target_modules`
body: In https://github.com/huggingface/diffusers/pull/11806, we introduced automatic extraction of `exclude_modules` and passing it to the following function to prepare `LoraConfig` kwargs: https://github.com/huggingface/diffusers/blob/425a715e35479338c06b2a68eb3a95790c1db3c5/src/diffusers/utils/peft_utils.py#L153 The pro...
state: closed · state_reason: completed · locked: false · comments: 6
labels: [] · assignees: []
created: 2025-07-07T03:07:54Z · updated: 2025-08-08T03:52:50Z · closed: 2025-08-08T03:52:50Z
author_association: MEMBER · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: sayakpaul (id 22,957,388 · node MDQ6VXNlcjIyOTU3Mzg4 · User · site_admin: false)

huggingface/diffusers issue #11878 (github_id 3,208,615,449 · node I_kwDOHa8MBc6_P5YZ)
url: https://github.com/huggingface/diffusers/issues/11878
api: https://api.github.com/repos/huggingface/diffusers/issues/11878
title: WanVACETransformer3DModel with GGUF not working for 1.3B model
body: ### Describe the bug The support for GGUF in WanVace was added in this PR https://github.com/huggingface/diffusers/pull/11807 This maybe working for 14B model (not tested) but not working for 1.3B. Didn't posted the issue earlier but now confirmed it's not only me who is facing issue. https://github.com/huggingface/d...
state: closed · state_reason: completed · locked: false · comments: 4
labels: [ "bug" ] · assignees: []
created: 2025-07-07T11:22:41Z · updated: 2025-07-12T11:30:38Z · closed: 2025-07-12T11:30:38Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: nitinmukesh (id 2,102,186 · node MDQ6VXNlcjIxMDIxODY= · User · site_admin: false)

huggingface/diffusers issue #1188 (github_id 1,439,917,859 · node I_kwDOHa8MBc5V02cj)
url: https://github.com/huggingface/diffusers/issues/1188
api: https://api.github.com/repos/huggingface/diffusers/issues/1188
title: [Community] Add pipeline for CLIPSeg x Stable Diffusion
body: ### Model/Pipeline/Scheduler description We've just added [CLIPSeg](https://huggingface.co/docs/transformers/main/en/model_doc/clipseg) to the 🤗 Transformers library, making it possible to use CLIPSeg in a few lines of code as shown in [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master...
state: closed · state_reason: completed · locked: false · comments: 14
labels: [ "good first issue", "community-examples" ] · assignees: []
created: 2022-11-08T10:28:49Z · updated: 2022-12-05T07:59:09Z · closed: 2022-11-24T09:54:18Z
author_association: MEMBER · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: NielsRogge (id 48,327,001 · node MDQ6VXNlcjQ4MzI3MDAx · User · site_admin: false)

huggingface/diffusers issue #11885 (github_id 3,211,586,926 · node I_kwDOHa8MBc6_bO1u)
url: https://github.com/huggingface/diffusers/issues/11885
api: https://api.github.com/repos/huggingface/diffusers/issues/11885
title: `load_lora_weights` bug in FLUX.1 `diffusers-0.35.0.dev0`
body: ### Describe the bug I used the latest version of `diffusers-0.35.0.dev0` to fine-tune FLUX.1-Kontext with LoRA. However, when I attempted to load the saved LoRA weights, I noticed it abnormally logged that `Loading adapter weights from state_dict led to unexpected keys found in the model`. Subsequently, this same ve...
state: closed · state_reason: completed · locked: false · comments: 4
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-08T08:37:05Z · updated: 2026-01-10T03:25:56Z · closed: 2026-01-10T03:25:56Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: Chenzzzzzz217 (id 74,236,002 · node MDQ6VXNlcjc0MjM2MDAy · User · site_admin: false)

huggingface/diffusers issue #11886 (github_id 3,212,809,091 · node I_kwDOHa8MBc6_f5OD)
url: https://github.com/huggingface/diffusers/issues/11886
api: https://api.github.com/repos/huggingface/diffusers/issues/11886
title: Issue with FluxKontextPipeline
body: ### Describe the bug If we pass width/height in inference no changes in output image. If we do not pass width/height, it defaults to 1024 x 1024 and crop the image. Source image used (832 x 1216) <img width="832" height="1216" alt="Image" src="https://github.com/user-attachments/assets/3e04a88c-3917-4214-b6d3-b8b1c8...
state: closed · state_reason: completed · locked: false · comments: 4
labels: [ "bug" ] · assignees: []
created: 2025-07-08T14:38:54Z · updated: 2025-07-12T10:33:38Z · closed: 2025-07-12T10:33:38Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: nitinmukesh (id 2,102,186 · node MDQ6VXNlcjIxMDIxODY= · User · site_admin: false)

huggingface/diffusers issue #11889 (github_id 3,213,452,703 · node I_kwDOHa8MBc6_iWWf)
url: https://github.com/huggingface/diffusers/issues/11889
api: https://api.github.com/repos/huggingface/diffusers/issues/11889
title: Error during batch inference using FluxControlNetInpaintingPipeline
body: Hi @sayakpaul I am working on batchinferencing of [flux_controlnet_inpainting_pipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux_controlnet_inpainting.py), but I'm encountering the following error:, Traceback (most recent call last): File "/home/ubuntu/dev_anand/...
state: open · state_reason: null · locked: false · comments: 1
labels: [ "stale" ] · assignees: []
created: 2025-07-08T18:41:33Z · updated: 2026-01-09T15:17:21Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: anandguptacv (id 90,339,262 · node MDQ6VXNlcjkwMzM5MjYy · User · site_admin: false)

huggingface/diffusers issue #11898 (github_id 3,215,285,698 · node I_kwDOHa8MBc6_pV3C)
url: https://github.com/huggingface/diffusers/issues/11898
api: https://api.github.com/repos/huggingface/diffusers/issues/11898
title: stabilityai/stable-diffusion-2 is missing fp16 files
body: null
state: closed · state_reason: completed · locked: false · comments: 0
labels: [] · assignees: []
created: 2025-07-09T09:53:18Z · updated: 2025-07-09T09:53:30Z · closed: 2025-07-09T09:53:30Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: Parimala-15 (id 133,242,784 · node U_kgDOB_EfoA · User · site_admin: false)

huggingface/diffusers issue #11899 (github_id 3,215,305,154 · node I_kwDOHa8MBc6_panC)
url: https://github.com/huggingface/diffusers/issues/11899
api: https://api.github.com/repos/huggingface/diffusers/issues/11899
title: stabilityai/stable-diffusion-2 is missing fp16 files
body: null
state: closed · state_reason: completed · locked: false · comments: 2
labels: [ "stale" ] · assignees: []
created: 2025-07-09T09:58:03Z · updated: 2026-01-09T21:49:25Z · closed: 2026-01-09T21:49:25Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: Parimala-15 (id 133,242,784 · node U_kgDOB_EfoA · User · site_admin: false)

huggingface/diffusers issue #11900 (github_id 3,215,306,480 · node I_kwDOHa8MBc6_pa7w)
url: https://github.com/huggingface/diffusers/issues/11900
api: https://api.github.com/repos/huggingface/diffusers/issues/11900
title: stabilityai/stable-diffusion-2 is missing fp16 files
body: null
state: closed · state_reason: completed · locked: false · comments: 1
labels: [] · assignees: []
created: 2025-07-09T09:58:17Z · updated: 2025-07-09T14:09:07Z · closed: 2025-07-09T14:09:07Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: Parimala-15 (id 133,242,784 · node U_kgDOB_EfoA · User · site_admin: false)

huggingface/diffusers issue #11901 (github_id 3,216,045,276 · node I_kwDOHa8MBc6_sPTc)
url: https://github.com/huggingface/diffusers/issues/11901
api: https://api.github.com/repos/huggingface/diffusers/issues/11901
title: can not load Wan2.1-Fun-14B-InP-MPS_reward_lora_comfy.safetensors to wan2.1-14b-720p-diffusers
body: hello, i want to add Wan2.1-Fun-14B-InP-MPS_reward_lora_comfy.safetensors as a lora module into wan2.1-14b-720p-diffusers, but get errors: ``` import torch import numpy as np from diffusers import AutoencoderKLWan, WanImageToVideoPipeline, DiffusionPipeline from diffusers.schedulers.scheduling_unipc_multistep import U...
state: open · state_reason: null · locked: false · comments: 6
labels: [ "stale" ] · assignees: []
created: 2025-07-09T13:53:57Z · updated: 2026-02-03T15:20:48Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: fzuo1230 (id 84,018,094 · node MDQ6VXNlcjg0MDE4MDk0 · User · site_admin: false)

huggingface/diffusers issue #11902 (github_id 3,216,248,747 · node I_kwDOHa8MBc6_tA-r)
url: https://github.com/huggingface/diffusers/issues/11902
api: https://api.github.com/repos/huggingface/diffusers/issues/11902
title: SD3.5-Controlnet-8b does not use encoder_hidden_states
body: In https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_sd3.py#L1338 the encoder_hidden_states are set to prompt_embeds, which is incorrect if we are finetuning sd3.5-controlnet-8b since it does not use encoder_hidden_states as stated in https://github.com/huggingface/diffusers/blob/...
state: open · state_reason: null · locked: false · comments: 4
labels: [ "stale" ] · assignees: []
created: 2025-07-09T15:00:42Z · updated: 2026-02-03T15:20:45Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: viv92 (id 24,729,584 · node MDQ6VXNlcjI0NzI5NTg0 · User · site_admin: false)

huggingface/diffusers issue #11903 (github_id 3,217,008,166 · node I_kwDOHa8MBc6_v6Ym)
url: https://github.com/huggingface/diffusers/issues/11903
api: https://api.github.com/repos/huggingface/diffusers/issues/11903
title: Fused QKV projections incompatible with training
body: ### Describe the bug I've enabled fused qkv projections in SimpleTuner, but it took quite a bit of investigation and effort. 1. any PEFT LoRAs become fused as well. we have to adjust the lora_target to include `to_qkv` instead of the split target layer names. 2. the `fuse_qkv_projections` method on the Attention clas...
state: open · state_reason: null · locked: false · comments: 4
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-09T20:05:54Z · updated: 2026-01-09T15:17:06Z · closed: null
author_association: CONTRIBUTOR · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: bghira (id 59,658,056 · node MDQ6VXNlcjU5NjU4MDU2 · User · site_admin: false)

huggingface/diffusers issue #11907 (github_id 3,218,275,357 · node I_kwDOHa8MBc6_0vwd)
url: https://github.com/huggingface/diffusers/issues/11907
api: https://api.github.com/repos/huggingface/diffusers/issues/11907
title: flux pipeline not work with DPMSolverMultistepScheduler and UniPCMultistepScheduler
body: ### Describe the bug FluxPipeline is not compatible with two advanced schedulers: `DPMSolverMultistepScheduler` and `UniPCMultistepScheduler` ### Reproduction `python repro.py dpm` or unipc, euler ```python import sys import torch from diffusers.schedulers.scheduling_dpmsolver_multistep import \ DPMSolverMulti...
state: closed · state_reason: completed · locked: false · comments: 0
labels: [ "bug" ] · assignees: []
created: 2025-07-10T07:44:46Z · updated: 2025-07-16T03:49:58Z · closed: 2025-07-16T03:49:58Z
author_association: CONTRIBUTOR · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: gameofdimension (id 32,255,912 · node MDQ6VXNlcjMyMjU1OTEy · User · site_admin: false)

huggingface/diffusers issue #11914 (github_id 3,225,513,446 · node I_kwDOHa8MBc7AQW3m)
url: https://github.com/huggingface/diffusers/issues/11914
api: https://api.github.com/repos/huggingface/diffusers/issues/11914
title: Loading multiple LoRAs to 1 pipeline in parallel, 1 LoRA to 2-pipelines on 2-GPUs
body: Hi everyone, I have the following scenario. I have a machine with 2-GPUs and a running service that keep has two pipelines loaded to their corresponding devices. Also I have a list of LoRAs (say 10). On each request I split the batch into 2 parts (request also has the corresponding information about LoRA), load LoRA...
state: closed · state_reason: completed · locked: true · comments: 5
labels: [] · assignees: []
created: 2025-07-12T15:54:44Z · updated: 2025-07-15T19:40:11Z · closed: 2025-07-15T19:40:11Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: vahe-toffee (id 192,042,540 · node U_kgDOC3JWLA · User · site_admin: false)

huggingface/diffusers issue #11915 (github_id 3,225,527,094 · node I_kwDOHa8MBc7AQaM2)
url: https://github.com/huggingface/diffusers/issues/11915
api: https://api.github.com/repos/huggingface/diffusers/issues/11915
title: Create modular pipeline from existing pipeline
body: new concept of modular pipelines added via #9672 is very flexible way of creating custom pipelines and one of the best early use-cases is new concept of modular guiders added via #11311 however, this would require a complete rewrite of the existing user apps/codebases to use new concepts and would likely signifi...
state: closed · state_reason: completed · locked: false · comments: 6
labels: [] · assignees: []
created: 2025-07-12T16:08:30Z · updated: 2025-08-28T08:18:08Z · closed: 2025-08-28T08:18:08Z
author_association: CONTRIBUTOR · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: vladmandic (id 57,876,960 · node MDQ6VXNlcjU3ODc2OTYw · User · site_admin: false)

huggingface/diffusers issue #11917 (github_id 3,227,270,484 · node I_kwDOHa8MBc7AXD1U)
url: https://github.com/huggingface/diffusers/issues/11917
api: https://api.github.com/repos/huggingface/diffusers/issues/11917
title: SD3ControlNet
body: ### Describe the bug line 1333 of examples/train_conrolnet_sd3.py controlnet_image = controlnet_image * vae.config.scaling_factor missing shift factor should be controlnet_image = (controlnet_image - vae.config.shift_factor) * vae.config.scaling_factor ### Reproduction controlnet_image = (controlnet_image - vae.con...
state: closed · state_reason: completed · locked: false · comments: 1
labels: [ "bug" ] · assignees: []
created: 2025-07-14T03:53:16Z · updated: 2025-07-14T16:59:25Z · closed: 2025-07-14T16:59:25Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: ZonglinL (id 97,364,054 · node U_kgDOBc2oVg · User · site_admin: false)

huggingface/diffusers issue #11922 (github_id 3,228,675,794 · node I_kwDOHa8MBc7Aca7S)
url: https://github.com/huggingface/diffusers/issues/11922
api: https://api.github.com/repos/huggingface/diffusers/issues/11922
title: FLUX.1-Fill-dev running problem
body: ### Describe the bug I don't know why, but the loading has remained stationary at this step. I'm running it on a 4090-24G graphics card. But it seems that it has been stuck since loading the pipe (worldgen) lqz27@rise:~/.../WorldGen/flux$ python flux.py Loading checkpoint shards: 100%|█████████████████| 2/2 [00:00<00...
state: open · state_reason: null · locked: false · comments: 17
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-14T12:52:46Z · updated: 2026-02-03T15:20:39Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: pique2233 (id 168,385,644 · node U_kgDOCglcbA · User · site_admin: false)

huggingface/diffusers issue #11928 (github_id 3,231,501,395 · node I_kwDOHa8MBc7AnMxT)
url: https://github.com/huggingface/diffusers/issues/11928
api: https://api.github.com/repos/huggingface/diffusers/issues/11928
title: Support for Large attn_bias via Sparse Tensors or On‑The‑Fly Construction (seq_len ≈ 12 288)
body: I’m working with a Transformer model that routinely processes sequences up to 12 288 tokens. For attention bias I currently create a dense attn_bias of shape 12 288 × 12 288. Right now, I have problems with memory for my multi-head att, because of that large tensors. I could create smaller blocks on fly from sparse at...
state: closed · state_reason: completed · locked: true · comments: 1
labels: [] · assignees: []
created: 2025-07-15T09:21:56Z · updated: 2025-07-15T19:40:31Z · closed: 2025-07-15T19:40:31Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: maciek-wisniewski (id 152,893,708 · node U_kgDOCRz5DA · User · site_admin: false)

huggingface/diffusers issue #11930 (github_id 3,232,891,653 · node I_kwDOHa8MBc7AsgMF)
url: https://github.com/huggingface/diffusers/issues/11930
api: https://api.github.com/repos/huggingface/diffusers/issues/11930
title: how to run convert_cosmos_to_diffusers.py correctly?
body: ### Describe the bug hi. I have tried to convert the cosmos-transfer1's base model to diffuers using "convert_cosmos_to_diffusers.py" code with options --transformer_type Cosmo s-1.0-Diffusion-7B-Video2World --vae_type CV8x8x8-1.0 --transformer_ckpt_path ../fsdp_edge_v1/iter_000016000_ema_model_only.pt --output_path ....
state: closed · state_reason: completed · locked: false · comments: 2
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-15T16:20:09Z · updated: 2026-01-09T21:53:35Z · closed: 2026-01-09T21:53:35Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: dedoogong (id 12,013,568 · node MDQ6VXNlcjEyMDEzNTY4 · User · site_admin: false)

huggingface/diffusers issue #11938 (github_id 3,234,286,440 · node I_kwDOHa8MBc7Ax0to)
url: https://github.com/huggingface/diffusers/issues/11938
api: https://api.github.com/repos/huggingface/diffusers/issues/11938
title: RMSNorm's weight not registered as submodules when initializing
body: ### Describe the bug RMSNorm() look like this, which is in models/normalization.py ```python class RMSNorm(nn.Module): def __init__(self, dim, eps: float, elementwise_affine: bool = True, bias: bool = False): super().__init__() self.eps = eps self.elementwise_affine = elementwise_affine ...
state: open · state_reason: null · locked: false · comments: 3
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-16T03:06:33Z · updated: 2026-01-17T15:05:23Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: Darkbblue (id 65,117,884 · node MDQ6VXNlcjY1MTE3ODg0 · User · site_admin: false)

huggingface/diffusers issue #11939 (github_id 3,234,290,538 · node I_kwDOHa8MBc7Ax1tq)
url: https://github.com/huggingface/diffusers/issues/11939
api: https://api.github.com/repos/huggingface/diffusers/issues/11939
title: train flux_controlnet code need to update
body: This code is just for instantx controlnet, not suitable xlabs https://github.com/huggingface/diffusers/blob/aa14f090f86c9641abf2da761f39a88133b49f09/examples/controlnet/train_controlnet_flux.py#L1258-L1266 reference FluxControlNetPipeline code: https://github.com/huggingface/diffusers/blob/aa14f090f86c9641abf2da761f...
state: open · state_reason: null · locked: false · comments: 1
labels: [ "stale" ] · assignees: []
created: 2025-07-16T03:08:23Z · updated: 2026-01-09T15:16:52Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: Johnson-yue (id 10,268,274 · node MDQ6VXNlcjEwMjY4Mjc0 · User · site_admin: false)

huggingface/diffusers issue #11942 (github_id 3,236,757,212 · node I_kwDOHa8MBc7A7P7c)
url: https://github.com/huggingface/diffusers/issues/11942
api: https://api.github.com/repos/huggingface/diffusers/issues/11942
title: Kontext pipeline resizing image even if it is within PREFERRED_KONTEXT_RESOLUTIONS
body: ### Describe the bug My assumption is, if resolution of image matches one of PREFERRED_KONTEXT_RESOLUTIONS, pipeline should not resize the input image. Even though I use the image with one of the preferred resolution, the pipeline is resizing the image. PREFERRED_KONTEXT_RESOLUTIONS = [ **(672, 1568),** (688...
state: closed · state_reason: completed · locked: false · comments: 4
labels: [ "bug" ] · assignees: []
created: 2025-07-16T17:26:11Z · updated: 2025-08-02T13:38:46Z · closed: 2025-08-02T13:38:46Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: nitinmukesh (id 2,102,186 · node MDQ6VXNlcjIxMDIxODY= · User · site_admin: false)

huggingface/diffusers issue #11943 (github_id 3,237,556,499 · node I_kwDOHa8MBc7A-TET)
url: https://github.com/huggingface/diffusers/issues/11943
api: https://api.github.com/repos/huggingface/diffusers/issues/11943
title: ValueError when loading Lycoris safetensors LoRA: keys not correctly renamed
body: ### Describe the bug Hello, I am trying to load a Lycoris LoRA safetensors file (e.g. `de-anime-er_v10.safetensors`) into the Diffusers pipeline using the following code: safetensors download url : https://civitai.com/api/download/models/111997?type=Model&format=SafeTensor lora_path = "/content/de-anime-er_v10.safe...
state: closed · state_reason: completed · locked: false · comments: 3
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-16T22:52:54Z · updated: 2026-01-09T21:55:12Z · closed: 2026-01-09T21:55:12Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: sss-okayu-korone (id 115,227,116 · node U_kgDOBt457A · User · site_admin: false)

huggingface/diffusers issue #11945 (github_id 3,237,943,840 · node I_kwDOHa8MBc7A_xog)
url: https://github.com/huggingface/diffusers/issues/11945
api: https://api.github.com/repos/huggingface/diffusers/issues/11945
title: Floating point exception with nightly PyTorch and CUDA
body: ### Describe the bug When running any code snippet using diffusers it fails with floating point exception, and doesn't print any traceback. For example this one would cause the issue (the example of Stable Diffusion 3.5 medium): ``` import torch from diffusers import StableDiffusion3Pipeline pipe = StableDiffusion3...
state: closed · state_reason: completed · locked: false · comments: 3
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-17T03:16:02Z · updated: 2026-01-09T21:56:04Z · closed: 2026-01-09T21:56:04Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: MxtAppz (id 121,626,118 · node U_kgDOBz_eBg · User · site_admin: false)

huggingface/diffusers issue #11946 (github_id 3,238,442,724 · node I_kwDOHa8MBc7BBrbk)
url: https://github.com/huggingface/diffusers/issues/11946
api: https://api.github.com/repos/huggingface/diffusers/issues/11946
title: closed
body: null
state: closed · state_reason: completed · locked: false · comments: 0
labels: [ "bug" ] · assignees: []
created: 2025-07-17T07:21:24Z · updated: 2025-07-23T07:14:16Z · closed: 2025-07-23T07:14:16Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: SS-snap (id 143,506,979 · node U_kgDOCI2-Iw · User · site_admin: false)

huggingface/diffusers issue #11948 (github_id 3,239,350,249 · node I_kwDOHa8MBc7BFI_p)
url: https://github.com/huggingface/diffusers/issues/11948
api: https://api.github.com/repos/huggingface/diffusers/issues/11948
title: Impossible to load WanTransformer3DModel when offline using the 'from_pretrained' function.
body: ### Describe the bug Hi! I am using the Wan2.1-1.3B-Diffusers transformer for a project. When I load the transformer (but not the complete pipeline), I receive a ConnectionError, even when I pass the local_files_only=True input. However, when I load the complete pipeline, I do not receive an error. Please note that th...
state: closed · state_reason: completed · locked: false · comments: 16
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-17T12:11:56Z · updated: 2026-01-09T21:56:52Z · closed: 2026-01-09T21:56:52Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: guillaumejs2403 (id 25,442,525 · node MDQ6VXNlcjI1NDQyNTI1 · User · site_admin: false)

huggingface/diffusers issue #1195 (github_id 1,440,364,833 · node I_kwDOHa8MBc5V2jkh)
url: https://github.com/huggingface/diffusers/issues/1195
api: https://api.github.com/repos/huggingface/diffusers/issues/1195
title: Memory efficient attention not working with fp16 weights
body: ### Describe the bug Following the example code with available in the `0.7.0` release ```py from diffusers import StableDiffusionPipeline import torch pipe = StableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", revision="fp16", torch_dtype=torch.float16, ).to("cuda") p...
state: closed · state_reason: completed · locked: false · comments: 2
labels: [ "bug" ] · assignees: [ "patil-suraj" ]
created: 2022-11-08T15:01:10Z · updated: 2022-11-08T16:15:25Z · closed: 2022-11-08T16:15:25Z
author_association: MEMBER · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: apolinario (id 788,417 · node MDQ6VXNlcjc4ODQxNw== · User · site_admin: false)

huggingface/diffusers issue #11951 (github_id 3,241,705,468 · node I_kwDOHa8MBc7BOH_8)
url: https://github.com/huggingface/diffusers/issues/11951
api: https://api.github.com/repos/huggingface/diffusers/issues/11951
title: Kontext model loading quantization problem
body: Hello, can kontext be loaded quantitatively at present? Because I only have a 4090 with 24g video memory, the current fp16 loading method will cause OOM. Like flux, can it be loaded with torchao or gguf, so that this model can run on 4090?
state: closed · state_reason: completed · locked: false · comments: 2
labels: [] · assignees: []
created: 2025-07-18T03:20:48Z · updated: 2025-07-18T05:39:28Z · closed: 2025-07-18T05:39:27Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: babyta (id 37,787,795 · node MDQ6VXNlcjM3Nzg3Nzk1 · User · site_admin: false)

huggingface/diffusers issue #11956 (github_id 3,244,149,950 · node I_kwDOHa8MBc7BXcy-)
url: https://github.com/huggingface/diffusers/issues/11956
api: https://api.github.com/repos/huggingface/diffusers/issues/11956
title: Frequency-Decoupled Guidance (FDG) for diffusion models
body: FDG is a new method for applying CFG in the frequency domain. It improves generation quality at low CFG scales while inherently avoiding the harmful effects of high CFG values. It could be a nice addition to the guiders part of diffusers. The implementation details for FDG are available on page 19 of the paper. https:...
state: closed · state_reason: completed · locked: false · comments: 5
labels: [ "help wanted", "Good second issue", "contributions-welcome", "advanced", "consider-for-modular-diffusers" ] · assignees: []
created: 2025-07-18T19:12:50Z · updated: 2025-08-07T05:51:03Z · closed: 2025-08-07T05:51:03Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: Msadat97 (id 71,992,433 · node MDQ6VXNlcjcxOTkyNDMz · User · site_admin: false)

huggingface/diffusers issue #11957 (github_id 3,244,453,789 · node I_kwDOHa8MBc7BYm-d)
url: https://github.com/huggingface/diffusers/issues/11957
api: https://api.github.com/repos/huggingface/diffusers/issues/11957
title: Flash/Sage varlen does not work with torch.compile
body: @a-r-r-o-w @DN6 @tolgacangoz I got a bit excited about this PR and wanted to give it a go. I love the syntax, both the setter function and the context, great work! I wanted to also see if it would still compile but got the following error logs: ``` [t+28s648ms] 0%| | 0/30 [00:00<?, ?it/s]/inferencesh/...
state: open · state_reason: null · locked: false · comments: 4
labels: [ "stale" ] · assignees: [ "a-r-r-o-w" ]
created: 2025-07-18T21:57:42Z · updated: 2026-01-09T15:16:34Z · closed: null
author_association: CONTRIBUTOR · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: a-r-r-o-w (id 72,266,394 · node MDQ6VXNlcjcyMjY2Mzk0 · User · site_admin: false)

huggingface/diffusers issue #11959 (github_id 3,245,390,831 · node I_kwDOHa8MBc7BcLvv)
url: https://github.com/huggingface/diffusers/issues/11959
api: https://api.github.com/repos/huggingface/diffusers/issues/11959
title: Passing enable_gqa flag in attention dispatcher should be version guarded
body: Just realized this is potentially breaking: https://github.com/huggingface/diffusers/blob/cde02b061b6f13012dfefe76bc8abf5e6ec6d3f3/src/diffusers/models/attention_dispatch.py#L229 I believe enable_gqa was first released in PT 2.5.0. For versions before that, this flag should not be passed. Additionally, some user code ...
state: closed · state_reason: completed · locked: false · comments: 0
labels: [ "bug" ] · assignees: [ "a-r-r-o-w" ]
created: 2025-07-19T15:36:55Z · updated: 2025-07-22T15:17:46Z · closed: 2025-07-22T15:17:46Z
author_association: CONTRIBUTOR · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: a-r-r-o-w (id 72,266,394 · node MDQ6VXNlcjcyMjY2Mzk0 · User · site_admin: false)

huggingface/diffusers issue #11961 (github_id 3,245,889,780 · node I_kwDOHa8MBc7BeFj0)
url: https://github.com/huggingface/diffusers/issues/11961
api: https://api.github.com/repos/huggingface/diffusers/issues/11961
title: New Adapter/Pipeline Request: IT-Blender for Creative Conceptual Blending
body: ## Model/Pipeline/Scheduler description ### Name of the model/pipeline/scheduler "Image-and-Text Concept Blender" (IT-Blender), a diffusion adapter that blends visual concepts from a real reference image with textual concepts from a prompt in a disentangled manner. The goal is to enhance human creativity in design tas...
state: open · state_reason: null · locked: false · comments: 1
labels: [ "stale" ] · assignees: []
created: 2025-07-20T03:07:38Z · updated: 2026-01-09T15:16:31Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: WonwoongCho (id 20,943,085 · node MDQ6VXNlcjIwOTQzMDg1 · User · site_admin: false)

huggingface/diffusers issue #11962 (github_id 3,246,119,771 · node I_kwDOHa8MBc7Be9tb)
url: https://github.com/huggingface/diffusers/issues/11962
api: https://api.github.com/repos/huggingface/diffusers/issues/11962
title: FLUX.1-Kontext-dev Support for GGUF Quantized Model
body: ### Model/Pipeline/Scheduler description The original model weights and pipeline are available at: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/tree/main Quantized (GGUF) versions of the model can be found here: https://huggingface.co/QuantStack/FLUX.1-Kontext-dev-GGUF/tree/main Using the code below, I...
state: closed · state_reason: completed · locked: false · comments: 1
labels: [] · assignees: []
created: 2025-07-20T07:46:49Z · updated: 2025-08-02T13:45:04Z · closed: 2025-08-02T13:45:04Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: sahandkh1419 (id 52,316,446 · node MDQ6VXNlcjUyMzE2NDQ2 · User · site_admin: false)

huggingface/diffusers issue #11964 (github_id 3,247,231,562 · node I_kwDOHa8MBc7BjNJK)
url: https://github.com/huggingface/diffusers/issues/11964
api: https://api.github.com/repos/huggingface/diffusers/issues/11964
title: KeyError when loading LoRA for Flux model: missing lora_unet_final_layer_adaLN_modulation_1 weights
body: I'm trying to run Overlay-Kontext-Dev-LoRA locally by loading the LoRA weights using the pipe.load_lora_weights() function. However, I encountered the following error during execution: > KeyError: 'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight' ``` import torch from diffusers import DiffusionPipeline fro...
state: open · state_reason: null · locked: false · comments: 4
labels: [ "stale" ] · assignees: []
created: 2025-07-21T05:16:34Z · updated: 2026-02-03T15:20:25Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: NEWbie0709 (id 81,673,708 · node MDQ6VXNlcjgxNjczNzA4 · User · site_admin: false)

huggingface/diffusers issue #11965 (github_id 3,247,234,057 · node I_kwDOHa8MBc7BjNwJ)
url: https://github.com/huggingface/diffusers/issues/11965
api: https://api.github.com/repos/huggingface/diffusers/issues/11965
title: Highdream multiple gpu to offload llama
body: An option to offload clip and llama to a secondary gpu would be great allowing primary GPU to focus on Lora training would be great!
state: closed · state_reason: completed · locked: false · comments: 2
labels: [ "stale" ] · assignees: []
created: 2025-07-21T05:18:01Z · updated: 2026-01-09T22:02:19Z · closed: 2026-01-09T22:02:19Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: grio43 (id 1,619,512 · node MDQ6VXNlcjE2MTk1MTI= · User · site_admin: false)

huggingface/diffusers issue #11966 (github_id 3,247,718,019 · node I_kwDOHa8MBc7BlD6D)
url: https://github.com/huggingface/diffusers/issues/11966
api: https://api.github.com/repos/huggingface/diffusers/issues/11966
title: How about forcing the first and last block on device when groupoffloading is used?
body: **Is your feature request related to a problem? Please describe.** When group offloading is enabled, the offload and onload cannot be streamed between steps and this is really a big time comsuming problem. **Describe the solution you'd like.** Is it possible to add an option that could make the first and last block fo...
state: open · state_reason: null · locked: false · comments: 16
labels: [ "stale", "contributions-welcome", "group-offloading" ] · assignees: [ "a-r-r-o-w" ]
created: 2025-07-21T08:38:30Z · updated: 2026-02-03T15:20:22Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: seed93 (id 4,570,032 · node MDQ6VXNlcjQ1NzAwMzI= · User · site_admin: false)

huggingface/diffusers issue #1197 (github_id 1,440,629,396 · node I_kwDOHa8MBc5V3kKU)
url: https://github.com/huggingface/diffusers/issues/1197
api: https://api.github.com/repos/huggingface/diffusers/issues/1197
title: [Community] OpenAI Diffusion Pipeline
body: **Is your feature request related to a problem? Please describe.** It would be cool to be able to use the many OpenAI Guided Diffusion models within diffusers itself. **Describe the solution you'd like** An OpenAIGuidedDiffusionPipeline class to load and handle OpenAI Guided Diffusion Models **Describe alterna...
state: open · state_reason: null · locked: false · comments: 8
labels: [ "New pipeline/model" ] · assignees: []
created: 2022-11-08T17:36:41Z · updated: 2025-10-06T14:57:45Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: WASasquatch (id 1,151,589 · node MDQ6VXNlcjExNTE1ODk= · User · site_admin: false)

huggingface/diffusers issue #11971 (github_id 3,251,540,760 · node I_kwDOHa8MBc7BzpMY)
url: https://github.com/huggingface/diffusers/issues/11971
api: https://api.github.com/repos/huggingface/diffusers/issues/11971
title: What is the minimum memory requirement for model training?
body: Hello, I would like to try training an SDXL model using my own dataset. What is the minimum memory size required for the model?
state: closed · state_reason: completed · locked: true · comments: 1
labels: [] · assignees: []
created: 2025-07-22T07:52:28Z · updated: 2025-07-22T08:26:27Z · closed: 2025-07-22T08:26:27Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: WWWPPPGGG (id 124,946,162 · node U_kgDOB3KG8g · User · site_admin: false)

huggingface/diffusers issue #11973 (github_id 3,251,695,764 · node I_kwDOHa8MBc7B0PCU)
url: https://github.com/huggingface/diffusers/issues/11973
api: https://api.github.com/repos/huggingface/diffusers/issues/11973
title: train_dreambooth_lora_flux_kontext.py: error: the following arguments are required: --instance_prompt
body: ### Describe the bug train_dreambooth_lora_flux_kontext.py: error: the following arguments are required: --instance_prompt ### Reproduction accelerate launch --main_process_port 29513 train_dreambooth_lora_flux_kontext.py \ --pretrained_model_name_or_path=black-forest-labs/FLUX.1-Kontext-dev \ --output_dir="kon...
state: open · state_reason: null · locked: false · comments: 4
labels: [ "bug", "stale" ] · assignees: []
created: 2025-07-22T08:36:20Z · updated: 2026-02-03T15:20:17Z · closed: null
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: huassaue (id 107,062,689 · node U_kgDOBmGloQ · User · site_admin: false)

huggingface/diffusers issue #11977 (github_id 3,256,073,911 · node I_kwDOHa8MBc7CE763)
url: https://github.com/huggingface/diffusers/issues/11977
api: https://api.github.com/repos/huggingface/diffusers/issues/11977
title: how to load a finetuned model especially during validation phase
body: <img width="1034" height="743" alt="Image" src="https://github.com/user-attachments/assets/c4e9318f-10aa-4b91-9d60-e28a3be38f8a" /> As the above, I have finetuned the model and want to validate it, but the given demo which is train_dreambooth_sd3.py still uses "pipeline = StableDiffusion3Pipeline.from_pretrained( ...
state: closed · state_reason: completed · locked: true · comments: 6
labels: [ "stale" ] · assignees: []
created: 2025-07-23T11:54:16Z · updated: 2026-01-09T22:03:59Z · closed: 2026-01-09T22:03:59Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: micklexqg (id 13,776,012 · node MDQ6VXNlcjEzNzc2MDEy · User · site_admin: false)

huggingface/diffusers issue #11978 (github_id 3,256,249,258 · node I_kwDOHa8MBc7CFmuq)
url: https://github.com/huggingface/diffusers/issues/11978
api: https://api.github.com/repos/huggingface/diffusers/issues/11978
title: Add Bria 3.2 pipeline - next-generation commercial-ready text-to-image model
body: ### Model/Pipeline/Scheduler description **TL;DR** Bria 3.2 is the next-generation commercial-ready text-to-image model. With just 4 billion parameters, it provides exceptional aesthetics and text rendering, evaluated to provide on par results to leading open-source models, and outperforming other licensed models. I...
state: closed · state_reason: completed · locked: false · comments: 3
labels: [] · assignees: []
created: 2025-07-23T12:43:54Z · updated: 2025-08-26T05:30:48Z · closed: 2025-08-26T05:30:48Z
author_association: CONTRIBUTOR · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: galbria (id 158,810,732 · node U_kgDOCXdCbA · User · site_admin: false)

huggingface/diffusers issue #11981 (github_id 3,256,422,691 · node I_kwDOHa8MBc7CGREj)
url: https://github.com/huggingface/diffusers/issues/11981
api: https://api.github.com/repos/huggingface/diffusers/issues/11981
title: Groupoffloading introduce bad results
body: ### Describe the bug I am using groupoffloading for saving gpu memory. I got worse results with a cosine similarity aboud 0.934 on A800, which is unexpected. And I got results with a cosine similarity about 0.78 on 4090, which is worse. Could anyone give me any suggestions to fix the precision? ### Reproduction ```...
state: closed · state_reason: completed · locked: false · comments: 10
labels: [ "bug" ] · assignees: []
created: 2025-07-23T13:31:58Z · updated: 2025-11-18T08:40:51Z · closed: 2025-08-06T15:41:01Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: seed93 (id 4,570,032 · node MDQ6VXNlcjQ1NzAwMzI= · User · site_admin: false)

huggingface/diffusers issue #11984 (github_id 3,259,150,182 · node I_kwDOHa8MBc7CQq9m)
url: https://github.com/huggingface/diffusers/issues/11984
api: https://api.github.com/repos/huggingface/diffusers/issues/11984
title: A compatibility issue when using custom Stable Diffusion with pre-trained ControlNets
body: I have successfully fine-tuned a Stable Diffusion v1.5 model using the Dreambooth script, and the results are excellent. However, I've encountered a compatibility issue when using this custom model with pre-trained ControlNets. Since the Dreambooth process modifies the U-Net weights, the original ControlNet is no longe...
state: closed · state_reason: completed · locked: false · comments: 6
labels: [] · assignees: []
created: 2025-07-24T09:16:55Z · updated: 2025-07-24T15:15:20Z · closed: 2025-07-24T15:15:20Z
author_association: NONE · milestone_title: null · snapshot_id: 20260407T133413Z · extracted_at: 2026-04-07T13:34:13Z
author: ScienceLi1125 (id 99,795,063 · node U_kgDOBfLAdw · User · site_admin: false)

huggingface/diffusers
3,261,485,275
I_kwDOHa8MBc7CZlDb
11,989
https://github.com/huggingface/diffusers/issues/11989
https://api.github.com/repos/huggingface/diffusers/issues/11989
The inference image is pure black. This seems to be related to weight, but I have been unable to pinpoint where the problem lies.
### Describe the bug I occasionally encounter situations where, when I restart my inference script, the resulting image turns completely black. I haven't made any changes. My intuition tells me that this is related to the weights. I am currently using a combination of SD and ControlNet, which I have trained on my own dat...
closed
completed
false
8
[ "bug" ]
[]
2025-07-24T23:08:13Z
2025-07-29T06:42:33Z
2025-07-29T06:42:33Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
ShanZard
87,146,029
MDQ6VXNlcjg3MTQ2MDI5
User
false
huggingface/diffusers
1,440,739,767
I_kwDOHa8MBc5V3_G3
1,199
https://github.com/huggingface/diffusers/issues/1199
https://api.github.com/repos/huggingface/diffusers/issues/1199
Download stable_diffusion on aws
### Describe the bug Hi, I want to run the Stable Diffusion model on AWS, and I am downloading the model on AWS with the following code: `git lfs install` `git clone https://huggingface.co/runwayml/stable-diffusion-v1-5` However, during the download of 'stable-diffusion-v1-5' the download interrupts and gives the following erro...
closed
completed
false
6
[ "bug" ]
[]
2022-11-08T18:59:49Z
2022-11-11T11:12:10Z
2022-11-11T11:12:09Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
hamzafar
6,842,017
MDQ6VXNlcjY4NDIwMTc=
User
false
huggingface/diffusers
3,262,772,203
I_kwDOHa8MBc7CefPr
11,992
https://github.com/huggingface/diffusers/issues/11992
https://api.github.com/repos/huggingface/diffusers/issues/11992
Problems with Kontext quantization
I have some problems with Kontext quantization model. 1. FP8 quantization This is my code: ``` import torch from diffusers import FluxKontextPipeline, FluxTransformer2DModel, AutoencoderKL, FlowMatchEulerDiscreteScheduler, GGUFQuantizationConfig from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T...
open
null
false
12
[ "stale" ]
[]
2025-07-25T10:35:45Z
2026-02-03T15:20:03Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
MinhVD-ZenAI
191,195,078
U_kgDOC2Vnxg
User
false
huggingface/diffusers
3,267,151,659
I_kwDOHa8MBc7CvMcr
11,996
https://github.com/huggingface/diffusers/issues/11996
https://api.github.com/repos/huggingface/diffusers/issues/11996
Using AutoModel.from_pretrained for wan2.1 crash my notebook
Hi, I have been trying to use the diffusers library with the Wan2.1 model, and I was following the official documentation here: https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan But every time I try to load the VAE in my Jupyter notebook, it keeps crashing when the model...
closed
completed
false
2
[ "stale" ]
[]
2025-07-27T14:22:20Z
2026-01-09T22:06:51Z
2026-01-09T22:06:51Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
Javierxx05
42,848,068
MDQ6VXNlcjQyODQ4MDY4
User
false
huggingface/diffusers
3,267,939,180
I_kwDOHa8MBc7CyMts
11,998
https://github.com/huggingface/diffusers/issues/11998
https://api.github.com/repos/huggingface/diffusers/issues/11998
Support training variance learning (Improved DDPM) with VLB loss.
This issue has been previously discussed here, but was closed unresolved: https://github.com/huggingface/diffusers/issues/3287. Diffusers only supports variance learning at inference, which is odd given that training it is not supported in the first place. Improved DDPM has since been used in a lot of other repositories like A...
closed
not_planned
false
1
[]
[]
2025-07-28T03:40:22Z
2025-08-02T13:43:55Z
2025-08-02T13:43:55Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
bhosalems
10,846,405
MDQ6VXNlcjEwODQ2NDA1
User
false
huggingface/diffusers
1,272,010,652
I_kwDOHa8MBc5L0Vec
12
https://github.com/huggingface/diffusers/issues/12
https://api.github.com/repos/huggingface/diffusers/issues/12
Collection of nitpicks
See below for a small collection of nitpicks; I suppose those will be addressed before the first release, but wanted to write them down somewhere: - Documentation seems to be in a different format so will need to be refactored to work with the `doc-builder` - The `logging` class still has mentions of `🤗 Transformers`...
closed
completed
false
5
[]
[]
2022-06-15T10:21:24Z
2022-07-21T18:54:29Z
2022-07-21T18:54:29Z
MEMBER
null
20260407T133413Z
2026-04-07T13:34:13Z
LysandreJik
30,755,778
MDQ6VXNlcjMwNzU1Nzc4
User
false
huggingface/diffusers
3,269,498,493
I_kwDOHa8MBc7C4JZ9
12,003
https://github.com/huggingface/diffusers/issues/12003
https://api.github.com/repos/huggingface/diffusers/issues/12003
Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
I tried to load `Flux_Turbo_Alpha` into FluxKontextPipeline by this code, and save_pretrained to local. ``` import torch from diffusers import FluxKontextPipeline from diffusers.utils import load_image pipe = FluxKontextPipeline.from_pretrained( "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16 )...
open
null
false
9
[ "stale" ]
[ "sayakpaul" ]
2025-07-28T11:27:24Z
2026-03-14T05:20:21Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
MinhVD-ZenAI
191,195,078
U_kgDOC2Vnxg
User
false
huggingface/diffusers
3,271,947,098
I_kwDOHa8MBc7DBfNa
12,007
https://github.com/huggingface/diffusers/issues/12007
https://api.github.com/repos/huggingface/diffusers/issues/12007
The fuse_lora() func in class PeftAdapterMixin did not take effect.
### Describe the bug I use the fuse_lora function, but the weights before and after the fuse_lora stay the same in FluxTransformer2DModel. Codes in TensorRT demo: ```shell def merge_loras(model, lora_loader): import copy model_transformer_blocks_bak = copy.deepcopy(model.transformer_blocks) paths, weight...
closed
completed
false
0
[ "bug" ]
[]
2025-07-29T04:07:00Z
2025-07-29T13:21:17Z
2025-07-29T13:21:17Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
dujifeng
42,289,967
MDQ6VXNlcjQyMjg5OTY3
User
false
huggingface/diffusers
3,272,524,463
I_kwDOHa8MBc7DDsKv
12,009
https://github.com/huggingface/diffusers/issues/12009
https://api.github.com/repos/huggingface/diffusers/issues/12009
WanTransformer3DModel.from_single_file wont load Wan2.2 GGUF (NotImplementedError: Cannot copy out of meta tensor; no data)
### Describe the bug I am trying to load Wan2.2 gguf transformers but unfortunately they yield a criptic error when I try to use them ```python repo_id = "QuantStack/Wan2.2-I2V-A14B-GGUF" filename = "HighNoise/Wan2.2-I2V-A14B-HighNoise-Q2_K.gguf" gguf_path = hf_hub_download(repo_id=repo_id, filename=filename) trans...
closed
completed
false
3
[ "bug" ]
[]
2025-07-29T07:42:58Z
2025-08-07T12:56:10Z
2025-08-07T12:56:10Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
luke14free
166,602
MDQ6VXNlcjE2NjYwMg==
User
false
huggingface/diffusers
3,273,266,009
I_kwDOHa8MBc7DGhNZ
12,011
https://github.com/huggingface/diffusers/issues/12011
https://api.github.com/repos/huggingface/diffusers/issues/12011
Wan 2.2 a14b i2v OOM
### Describe the bug @a-r-r-o-w @DN6 @asomoza @jlonge4 sorry to tag you all, however after @yiyixuxu 's merge of the Wan2.2 PR, one of your commits is causing the model to OOM. I was only able to narrow it down to this. Wan2.2 PR, working commit: #a6d9f6a1a9a9ede2c64972d83ccee192b801c4a0 <img width="902" height="47...
open
null
false
6
[ "bug", "stale" ]
[]
2025-07-29T11:31:12Z
2026-02-03T15:19:56Z
null
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
okaris
1,448,702
MDQ6VXNlcjE0NDg3MDI=
User
false
huggingface/diffusers
3,273,392,175
I_kwDOHa8MBc7DHAAv
12,012
https://github.com/huggingface/diffusers/issues/12012
https://api.github.com/repos/huggingface/diffusers/issues/12012
apply_first_block_cache with Wan 2.2 causes ValueError: No context is set. Please set a context before retrieving the state
### Describe the bug Following @a-r-r-o-w guide here https://huggingface.co/posts/a-r-r-o-w/278025275110164 I tried both `apply_first_block_cache(pipe.transformer, FirstBlockCacheConfig(threshold=0.2))` and `pipe.transformer.enable_cache(FirstBlockCacheConfig(threshold=input_data.cache_threshold))` but both yield...
closed
completed
false
0
[ "bug" ]
[]
2025-07-29T12:11:54Z
2025-07-29T13:29:26Z
2025-07-29T12:55:10Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
luke14free
166,602
MDQ6VXNlcjE2NjYwMg==
User
false
huggingface/diffusers
3,274,218,294
I_kwDOHa8MBc7DKJs2
12,017
https://github.com/huggingface/diffusers/issues/12017
https://api.github.com/repos/huggingface/diffusers/issues/12017
Dream 7B
### Model/Pipeline/Scheduler description https://hkunlp.github.io/blog/2025/dream/ > In short, Dream 7B: > > - consistently outperforms existing diffusion language models by a large margin; > - matches or exceeds top-tier Autoregressive (AR) language models of similar size on the general, math, and coding > abilitie...
open
null
false
2
[ "stale" ]
[]
2025-07-29T16:15:30Z
2026-01-09T15:15:35Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
ntoxeg
823,693
MDQ6VXNlcjgyMzY5Mw==
User
false
huggingface/diffusers
3,274,615,521
I_kwDOHa8MBc7DLqrh
12,019
https://github.com/huggingface/diffusers/issues/12019
https://api.github.com/repos/huggingface/diffusers/issues/12019
Wan 2.2 First vs Second stage
Wan 2.2 consists of two transformer stages. Diffusers currently allows running either fully on the first stage or switching to the second stage at the boundary (a concept that somewhat resembles the old SDXL refiner). However, after some testing of Wan2.2, it's far more useful to be able to skip the first stage and run on the second stage only ...
closed
completed
false
7
[]
[]
2025-07-29T18:42:18Z
2025-08-01T15:29:53Z
2025-08-01T15:29:53Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
vladmandic
57,876,960
MDQ6VXNlcjU3ODc2OTYw
User
false
huggingface/diffusers
1,441,161,894
I_kwDOHa8MBc5V5mKm
1,202
https://github.com/huggingface/diffusers/issues/1202
https://api.github.com/repos/huggingface/diffusers/issues/1202
Obtaining the image iterations before final image has been generated StableDiffusionPipeline.pretrained()
Hey all. I am currently using the StableDiffusionPipeline to generate AI images with a discord bot which I use with my friends. I was wondering if it was possible to get a preview of the image being generated before it is finished? For example, if an image takes 20 seconds to generate, since it is using diffusion it...
closed
completed
false
4
[ "stale" ]
[]
2022-11-09T01:02:43Z
2022-12-18T15:03:15Z
2022-12-18T15:03:15Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
MonkeeMan1
98,274,841
U_kgDOBduOGQ
User
false
huggingface/diffusers
3,276,799,103
I_kwDOHa8MBc7DT_x_
12,022
https://github.com/huggingface/diffusers/issues/12022
https://api.github.com/repos/huggingface/diffusers/issues/12022
_flash_attention_3 in dispatch_attention_fn is not compatible with the latest flash-atten interface.
### Describe the bug [FA3] Don't return lse: https://github.com/Dao-AILab/flash-attention/commit/ed209409acedbb2379f870bbd03abce31a7a51b7 but in the current diffusers version, it is not updated. https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_dispatch.py#L608 when using the fa3 backend, di...
closed
completed
false
8
[ "bug", "stale" ]
[]
2025-07-30T12:13:50Z
2026-02-26T12:04:38Z
2026-02-26T12:04:38Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
hmzjwhmzjw
23,023,261
MDQ6VXNlcjIzMDIzMjYx
User
false
huggingface/diffusers
3,278,427,878
I_kwDOHa8MBc7DaNbm
12,025
https://github.com/huggingface/diffusers/issues/12025
https://api.github.com/repos/huggingface/diffusers/issues/12025
Invalid API call in Cosmos VAE Encoder
### Describe the bug At this line, the reshape appears to be called incorrectly. I believe it should either be `hidden_states.reshape(*)` or `torch.reshape(hidden_states, *)` https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_cosmos.py#L171 It appears like this part o...
closed
completed
false
4
[ "bug" ]
[]
2025-07-30T21:00:07Z
2025-08-02T14:54:02Z
2025-08-02T14:54:02Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
akhilg-nv
165,961,486
U_kgDOCeRfDg
User
false
huggingface/diffusers
3,279,891,593
I_kwDOHa8MBc7DfyyJ
12,029
https://github.com/huggingface/diffusers/issues/12029
https://api.github.com/repos/huggingface/diffusers/issues/12029
X-Omni-En model won't load
I am running this notebook: https://colab.research.google.com/#scrollTo=rRaMa09zUh66&fileId=https%3A//huggingface.co/X-Omni/X-Omni-En.ipynb And I am getting the following error. That notebook says to post an issue here. --------------------------------------------------------------------------- HTTPError ...
closed
completed
false
6
[ "stale" ]
[]
2025-07-30T17:38:34Z
2026-01-09T22:08:19Z
2026-01-09T22:08:19Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
Tylersuard
41,713,505
MDQ6VXNlcjQxNzEzNTA1
User
false
huggingface/diffusers
3,280,676,514
I_kwDOHa8MBc7Diyai
12,034
https://github.com/huggingface/diffusers/issues/12034
https://api.github.com/repos/huggingface/diffusers/issues/12034
Wan 2.2 5b i2v results poor quality compared to official Wan HF Space
### Describe the bug # diffusers result: <img width="1292" height="772" alt="Image" src="https://github.com/user-attachments/assets/35c30cdc-6c38-48bb-9a8d-71faa1d7896d" /> video links https://github.com/user-attachments/assets/4dd6d342-9dfb-4946-a714-641f5a4cc98d (this video might not play in the browser due to a ...
closed
completed
false
8
[ "bug" ]
[ "yiyixuxu" ]
2025-07-31T14:24:30Z
2025-11-15T20:12:32Z
2025-08-01T09:43:43Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
okaris
1,448,702
MDQ6VXNlcjE0NDg3MDI=
User
false
huggingface/diffusers
3,280,959,121
I_kwDOHa8MBc7Dj3aR
12,037
https://github.com/huggingface/diffusers/issues/12037
https://api.github.com/repos/huggingface/diffusers/issues/12037
Wan 2.2 WanTransformer3DModel not compatible with Lightx2v self-forcing guidance distilled loras
### Describe the bug The latest advancement in Wan has been self-forcing LoRAs, which allow getting extremely good results in just 4 steps. Although the comfy community was successful in using the Lightx2v cfg step distill Wan2.1 LoRAs on Wan2.2, I can't apply them to the transformers in any way. The suggested comb...
closed
completed
false
2
[ "bug" ]
[ "sayakpaul" ]
2025-07-31T15:51:28Z
2025-08-02T06:13:27Z
2025-08-02T06:13:27Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
luke14free
166,602
MDQ6VXNlcjE2NjYwMg==
User
false
huggingface/diffusers
3,281,015,033
I_kwDOHa8MBc7DkFD5
12,038
https://github.com/huggingface/diffusers/issues/12038
https://api.github.com/repos/huggingface/diffusers/issues/12038
Dataset structure for train_text_to_image_lora.py
Hello. I am trying to use the **train_text_to_image_lora.py** script following the instructions at https://github.com/huggingface/diffusers/tree/main/examples/text_to_image I get errors about the dataset structure and don't know what the issue is on my side. I have a folder **data** containing an **image** folder and a **csv** file. C:/...
closed
completed
true
2
[ "stale" ]
[]
2025-07-31T16:10:38Z
2026-01-09T22:16:39Z
2026-01-09T22:16:39Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
HripsimeS
42,246,765
MDQ6VXNlcjQyMjQ2NzY1
User
false
huggingface/diffusers
3,282,505,199
I_kwDOHa8MBc7Dpw3v
12,039
https://github.com/huggingface/diffusers/issues/12039
https://api.github.com/repos/huggingface/diffusers/issues/12039
Wan 2.2 VAE forward fails
Hey, with Wan2.1 we were able to pass just an RGB PIL image. With 2.2 I get ``` python def _conv_forward(self, input: Tensor, weight: Tensor, bias: Optional[Tensor]): if self.padding_mode != "zeros": return F.conv3d( F.pad( input, self._reversed_paddi...
closed
completed
false
12
[]
[ "yiyixuxu" ]
2025-08-01T04:28:43Z
2025-08-06T12:56:19Z
2025-08-06T12:56:19Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
a-r-r-o-w
72,266,394
MDQ6VXNlcjcyMjY2Mzk0
User
false
huggingface/diffusers
1,441,216,040
I_kwDOHa8MBc5V5zYo
1,204
https://github.com/huggingface/diffusers/issues/1204
https://api.github.com/repos/huggingface/diffusers/issues/1204
[Community] Can we composite Dreambooth network training?
Very impressed with Dreambooth capabilities. I have what I think is a feature request - or perhaps a clarification on what is and is not possible in training networks with Dreambooth. In particular, I was wondering if there was a way to composite two networks to enable embedding of two instances (e.g. an sks dog >and< ...
closed
completed
false
2
[ "question", "stale" ]
[ "pcuenca" ]
2022-11-09T01:59:05Z
2022-12-21T15:03:19Z
2022-12-21T15:03:19Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
felgryn
97,655,026
U_kgDOBdIY8g
User
false
huggingface/diffusers
3,285,462,623
I_kwDOHa8MBc7D1C5f
12,044
https://github.com/huggingface/diffusers/issues/12044
https://api.github.com/repos/huggingface/diffusers/issues/12044
AttributeError: 'bool' object has no attribute '__module__'. Did you mean: '__mod__'?
I am training the Flux.1-dev model and get this error. I found a solution that suggests downgrading diffusers to version 0.21.0, but then it would conflict with some other libraries. Is there any solution for this? ``` Traceback (most recent call last): File "/home/quyetnv/t2i/ai-toolkit/run.py", line 120, in <module> main() Fi...
closed
completed
false
3
[]
[]
2025-08-02T01:37:30Z
2025-08-21T01:27:19Z
2025-08-21T01:25:02Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
quyetnv98
47,714,907
MDQ6VXNlcjQ3NzE0OTA3
User
false
huggingface/diffusers
3,286,158,094
I_kwDOHa8MBc7D3ssO
12,047
https://github.com/huggingface/diffusers/issues/12047
https://api.github.com/repos/huggingface/diffusers/issues/12047
Fusing Lightx2v lora on Wan2.2 GGUF fails
### Describe the bug Fusing works fine on non-gguf versions of Wan2.2, but yield issues when used with the GGUF transformer which is what most consumers have to use due to memory constraints (which also make fusing quite important). ### Reproduction ```python import torch from diffusers import WanImageToVideoPipelin...
closed
completed
false
9
[ "bug" ]
[]
2025-08-02T17:51:06Z
2025-08-06T15:22:57Z
2025-08-06T15:19:24Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
luke14free
166,602
MDQ6VXNlcjE2NjYwMg==
User
false
huggingface/diffusers
1,441,466,789
I_kwDOHa8MBc5V6wml
1,205
https://github.com/huggingface/diffusers/issues/1205
https://api.github.com/repos/huggingface/diffusers/issues/1205
converting SD model to onnx
### Describe the bug Hi - I am trying to convert stable-diffusion-v1.4 to onnx using the below code. python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx" --fp16="True" The output I am getting is a model.onnx (430MB) and weights...
closed
completed
false
2
[ "bug" ]
[ "anton-l" ]
2022-11-09T06:30:14Z
2022-11-30T12:18:23Z
2022-11-30T12:18:23Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
harishprabhala
26,444,096
MDQ6VXNlcjI2NDQ0MDk2
User
false
huggingface/diffusers
3,287,039,666
I_kwDOHa8MBc7D7D6y
12,050
https://github.com/huggingface/diffusers/issues/12050
https://api.github.com/repos/huggingface/diffusers/issues/12050
`UNet2DConditionModel` : `qk_norm` setting in `config.json` is ignored
### Describe the bug Adding eg `"qk_norm": "rms_norm"` to config.json for a `UNet2DConditionModel` has no effect. This is because the value is not propagated by the `UNet2DContionalModel` initialization logic through to `Attention.__init__` in `src/diffusers/models/attention_processor.py`. ### Reproduction Default ...
open
null
false
5
[ "bug", "stale" ]
[]
2025-08-03T10:06:58Z
2026-01-09T15:15:18Z
null
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
damian0815
144,366
MDQ6VXNlcjE0NDM2Ng==
User
false
huggingface/diffusers
3,287,142,976
I_kwDOHa8MBc7D7dJA
12,052
https://github.com/huggingface/diffusers/issues/12052
https://api.github.com/repos/huggingface/diffusers/issues/12052
Wan 2.2 with LightX2V offloading tries to multiply tensors from different devices and fails
### Describe the bug After @sayakpaul great work in https://github.com/huggingface/diffusers/pull/12040 LightX2V now works. However what doesn't work is adding both a lora and offloading to the transformer_2. I can get away with either (i.e. offload both transformers but add a lora only to transformer and NOT to trans...
closed
completed
false
4
[ "bug" ]
[]
2025-08-03T12:43:13Z
2025-08-11T15:53:41Z
2025-08-08T07:51:48Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
luke14free
166,602
MDQ6VXNlcjE2NjYwMg==
User
false
huggingface/diffusers
3,287,287,219
I_kwDOHa8MBc7D8AWz
12,053
https://github.com/huggingface/diffusers/issues/12053
https://api.github.com/repos/huggingface/diffusers/issues/12053
Flux1.Dev Kohya Loras text encoder layers no more supported
Hello, I trained a Lora with Kohya SS and I have a problem of conversion, I thought it should have been managed by your conversion script ? ``` Loading adapter weights from state_dict led to unexpected keys found in the model: single_transformer_blocks.0.proj_out.lora_A.default_0.weight, single_transformer_blocks.0.p...
closed
completed
false
22
[]
[]
2025-08-03T15:47:59Z
2026-02-27T10:13:43Z
2026-02-27T10:13:43Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
christopher5106
6,875,375
MDQ6VXNlcjY4NzUzNzU=
User
false
huggingface/diffusers
3,288,731,263
I_kwDOHa8MBc7EBg5_
12,060
https://github.com/huggingface/diffusers/issues/12060
https://api.github.com/repos/huggingface/diffusers/issues/12060
Is there any DiT block defined in the huggingface/diffusers OR huggingface/transformers project?
**Is your feature request related to a problem? Please describe.** I want to run some experiments on a DiT-based flow-matching model, and I need an implementation of the common DiT block, but did not find it in either huggingface/diffusers or huggingface/transformers. Is there an implementation of it with just some ...
open
null
false
3
[ "stale" ]
[]
2025-08-04T09:40:43Z
2026-01-09T15:15:11Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
JohnHerry
8,011,802
MDQ6VXNlcjgwMTE4MDI=
User
false
huggingface/diffusers
3,290,456,768
I_kwDOHa8MBc7EIGLA
12,065
https://github.com/huggingface/diffusers/issues/12065
https://api.github.com/repos/huggingface/diffusers/issues/12065
Qwen Image | Image-To-Image + Editing + Inpainting
# Qwen Image | Editing Capabilities ![Image](https://github.com/user-attachments/assets/97141189-1a5a-4e8f-9e6a-2288e2c46e6e) > When it comes to image editing, Qwen-Image goes far beyond simple adjustments. It enables advanced operations such as style transfer, object insertion or removal, detail enhancement, text ed...
closed
completed
false
3
[]
[]
2025-08-04T18:50:38Z
2025-08-13T14:45:18Z
2025-08-04T19:10:16Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
ghunkins
12,562,057
MDQ6VXNlcjEyNTYyMDU3
User
false
huggingface/diffusers
3,290,566,900
I_kwDOHa8MBc7EIhD0
12,066
https://github.com/huggingface/diffusers/issues/12066
https://api.github.com/repos/huggingface/diffusers/issues/12066
Qwen Image incorrect device assignment during prompt encode
### Describe the bug In the new `QwenImagePipeline` method `_get_qwen_prompt_embeds` it does the following: ```py txt_tokens = self.tokenizer( txt, max_length=self.tokenizer_max_length + drop_idx, padding=True, truncation=True, return_tensors="pt" ).to(self.device) ``` I assume this is a typo a...
closed
completed
false
2
[ "bug" ]
[]
2025-08-04T19:38:02Z
2025-08-07T22:27:40Z
2025-08-07T22:27:40Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
vladmandic
57,876,960
MDQ6VXNlcjU3ODc2OTYw
User
false
huggingface/diffusers
1,441,551,773
I_kwDOHa8MBc5V7FWd
1,207
https://github.com/huggingface/diffusers/issues/1207
https://api.github.com/repos/huggingface/diffusers/issues/1207
AttributeError: /opt/conda/bin/python: undefined symbol: cudaRuntimeGetVersion
### Describe the bug ``` AttributeError: /opt/conda/bin/python: undefined symbol: cudaRuntimeGetVersion ``` ### Reproduction ``` %cd /workspace !git clone https://github.com/huggingface/diffusers.git ``` ``` %cd /workspace/diffusers/examples/dreambooth !pwd ``` ``` pip install -U -r requirements.t...
closed
completed
false
6
[ "bug" ]
[]
2022-11-09T07:22:27Z
2022-12-24T02:32:25Z
2022-11-15T11:53:55Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
0xdevalias
753,891
MDQ6VXNlcjc1Mzg5MQ==
User
false
huggingface/diffusers
3,292,205,997
I_kwDOHa8MBc7EOxOt
12,071
https://github.com/huggingface/diffusers/issues/12071
https://api.github.com/repos/huggingface/diffusers/issues/12071
flux kontext transformer single blocks forward behavior changed
### Describe the bug I observed that this line of code (https://github.com/huggingface/diffusers/blob/0454fbb30bfbe21aa4ea29c827c396bac57dc518/src/diffusers/models/transformers/transformer_flux.py#L88) was added for `First Block Cache`. However, it not only increases the computation, but also thoroughly changes the f...
closed
completed
false
3
[ "bug" ]
[]
2025-08-05T08:55:14Z
2025-08-05T11:47:53Z
2025-08-05T11:47:53Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
WingEdge777
16,758,743
MDQ6VXNlcjE2NzU4NzQz
User
false
huggingface/diffusers
3,293,162,769
I_kwDOHa8MBc7ESa0R
12,075
https://github.com/huggingface/diffusers/issues/12075
https://api.github.com/repos/huggingface/diffusers/issues/12075
Qwen Image prompt encoding is not padding to max seq len
### Describe the bug The pipeline method for QwenImagePipeline.encode_prompts is not padding correctly; it's padding by the longest sequence length in the batch, which leaves very very short embeds that are out of distribution for the models' training set. The padding should remain at 1024 tokens even after the syste...
open
null
false
22
[ "bug" ]
[]
2025-08-05T13:38:09Z
2026-02-03T11:13:06Z
null
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
bghira
59,658,056
MDQ6VXNlcjU5NjU4MDU2
User
false
huggingface/diffusers
3,294,612,975
I_kwDOHa8MBc7EX83v
12,078
https://github.com/huggingface/diffusers/issues/12078
https://api.github.com/repos/huggingface/diffusers/issues/12078
Problem with provided example validation input in the Flux Control finetuning example
### Describe the bug The help page for the Flux control finetuning example, https://github.com/huggingface/diffusers/blob/main/examples/flux-control/README.md, provides a sample validation input, a pose condition image [<img src="https://huggingface.co/api/resolve-cache/models/Adapter/t2iadapter/3c291e0547a1b17bed9342...
open
null
false
2
[ "bug", "stale" ]
[]
2025-08-05T22:29:35Z
2026-01-09T15:15:01Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
kzhang2
12,497,065
MDQ6VXNlcjEyNDk3MDY1
User
false
huggingface/diffusers
3,294,962,295
I_kwDOHa8MBc7EZSJ3
12,079
https://github.com/huggingface/diffusers/issues/12079
https://api.github.com/repos/huggingface/diffusers/issues/12079
API Suggestion: Expose Methods to Convert to Sample Prediction in Schedulers
**What API design would you like to have changed or added to the library? Why?** My proposal is for schedulers to expose `convert_to_sample_prediction` and `convert_to_prediction_type` methods, which would do the following: 1. `convert_to_sample_prediction`: Converts from a given `prediction_type` to `sample_predicti...
open
null
false
1
[ "stale" ]
[]
2025-08-06T02:24:46Z
2026-01-09T15:14:57Z
null
MEMBER
null
20260407T133413Z
2026-04-07T13:34:13Z
dg845
58,458,699
MDQ6VXNlcjU4NDU4Njk5
User
false
huggingface/diffusers
3,295,077,009
I_kwDOHa8MBc7EZuKR
12,080
https://github.com/huggingface/diffusers/issues/12080
https://api.github.com/repos/huggingface/diffusers/issues/12080
Qwen Image : Image Editing Inference
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]. The technical report and huggingface page describe qwen_image's image editing capabilities in detail, but the inference code does not have that capability. *...
closed
completed
false
1
[]
[]
2025-08-06T03:46:11Z
2025-08-06T17:35:25Z
2025-08-06T17:35:25Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
PranavAdlinge
56,697,400
MDQ6VXNlcjU2Njk3NDAw
User
false
huggingface/diffusers
3,295,630,526
I_kwDOHa8MBc7Eb1S-
12,082
https://github.com/huggingface/diffusers/issues/12082
https://api.github.com/repos/huggingface/diffusers/issues/12082
wan2.1 vae take more gpu memory after compile
### Describe the bug After `torch.compile`, the wan2.1 VAE consumes more GPU memory than without compilation, which is unexpected in my opinion. **compiled** <img width="3006" height="1158" alt="Image" src="https://github.com/user-attachments/assets/3eff903c-af4f-422a-b407-9afdd77ef843" /> **no-compile** <img width="2850" he...
open
null
false
4
[ "bug", "stale" ]
[]
2025-08-06T07:57:54Z
2026-02-03T15:19:43Z
null
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
gameofdimension
32,255,912
MDQ6VXNlcjMyMjU1OTEy
User
false
huggingface/diffusers
3,295,650,521
I_kwDOHa8MBc7Eb6LZ
12,083
https://github.com/huggingface/diffusers/issues/12083
https://api.github.com/repos/huggingface/diffusers/issues/12083
Qwen-Image long prompt will cause error
### Describe the bug When the token length is greater than 1024, it will be truncated to 1024. However, the length of RoPE is fixed at 1024 because the image takes up 32 len (for a 1024 width and height image). This causes the length of txt_freqs to be less than 1024. Therefore, x_rotated * freqs_cis will generate an ...
closed
completed
false
2
[ "bug" ]
[]
2025-08-06T08:03:23Z
2025-08-08T06:28:26Z
2025-08-08T06:28:26Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
akk-123
98,469,560
U_kgDOBd6GuA
User
false
huggingface/diffusers
3,296,306,911
I_kwDOHa8MBc7Eeabf
12,084
https://github.com/huggingface/diffusers/issues/12084
https://api.github.com/repos/huggingface/diffusers/issues/12084
Will `cosmos-transfer1` be supported in diffusers in the future?
Hi @a-r-r-o-w and @yiyixuxu :) First of all, thank you for recently enabling cosmos-predict1 models (text2world and video2world) in the diffusers library — it's super exciting to see them integrated! I was wondering if there are any plans to also support [cosmos-transfer1](https://github.com/nvidia-cosmos/cosmos-tr...
open
null
false
4
[ "stale" ]
[]
2025-08-06T11:22:28Z
2026-01-09T15:14:50Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
rebel-shshin
87,514,200
MDQ6VXNlcjg3NTE0MjAw
User
false
huggingface/diffusers
3,296,455,383
I_kwDOHa8MBc7Ee-rX
12,085
https://github.com/huggingface/diffusers/issues/12085
https://api.github.com/repos/huggingface/diffusers/issues/12085
WanVACETransformer3DModel load gguf error
### Describe the bug When attempting to load a model using WanVACETransformer3DModel.from_single_file(), I encounter a ValueError indicating that the model type is not compatible with FromOriginalModelMixin. The error suggests that WanVACETransformer3DModel is not in the list of supported model types. ### Reproductio...
closed
completed
false
0
[ "bug" ]
[]
2025-08-06T12:09:48Z
2025-08-06T12:43:42Z
2025-08-06T12:43:42Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
00Neil
76,234,364
MDQ6VXNlcjc2MjM0MzY0
User
false
huggingface/diffusers
3,300,136,658
I_kwDOHa8MBc7EtBbS
12,094
https://github.com/huggingface/diffusers/issues/12094
https://api.github.com/repos/huggingface/diffusers/issues/12094
[Wan2.2] pipeline_wan miss the 'shift' parameter which used by Wan2.2-A14B-diffusers.
**Firstly, I found that the quality of output using diffusers is poor** Later, I found that pipeline_wan in diffusers[0.34.0] did not support two-stage processing. I noticed that the community had already updated it, so I installed diffusers[0.35.0-dev] from source and it worked. Then I found that the scheduler...
closed
completed
false
7
[]
[]
2025-08-07T11:37:36Z
2025-08-10T08:43:27Z
2025-08-10T08:43:26Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
yvmilir
42,015,376
MDQ6VXNlcjQyMDE1Mzc2
User
false
huggingface/diffusers
3,300,343,252
I_kwDOHa8MBc7Etz3U
12,096
https://github.com/huggingface/diffusers/issues/12096
https://api.github.com/repos/huggingface/diffusers/issues/12096
WanVACEPipeline - doesn't work with apply_group_offloading
### Describe the bug When enable apply_group_offloading: ``` Traceback (most recent call last): File "D:\Experiments\Video_Outpaint\2__Outpaint.py", line 165, in <module> out = pipe( ^^^^^ File "C:\Users\BBCCA\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\utils\_contextlib.py", line...
closed
completed
false
12
[ "bug", "contributions-welcome", "group-offloading" ]
[]
2025-08-07T12:37:50Z
2025-12-06T00:28:02Z
2025-12-06T00:28:02Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
SlimRG
39,348,033
MDQ6VXNlcjM5MzQ4MDMz
User
false
huggingface/diffusers
3,300,406,129
I_kwDOHa8MBc7EuDNx
12,097
https://github.com/huggingface/diffusers/issues/12097
https://api.github.com/repos/huggingface/diffusers/issues/12097
Wan2.2 TI2V-5B VRAM OOM at the end
### Describe the bug After completing 50 steps of progress, the video memory usage skyrocketed from around 8GB to 26GB, resulting in very slow performance ### Reproduction ``` import torch import numpy as np from diffusers import WanImageToVideoPipeline, AutoencoderKLWan, ModularPipeline from diffusers.utils import ...
open
null
false
1
[ "bug", "stale" ]
[]
2025-08-07T12:55:44Z
2026-01-09T15:14:37Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
zhaoyun0071
35,762,050
MDQ6VXNlcjM1NzYyMDUw
User
false
huggingface/diffusers
3,300,415,796
I_kwDOHa8MBc7EuFk0
12,098
https://github.com/huggingface/diffusers/issues/12098
https://api.github.com/repos/huggingface/diffusers/issues/12098
Qwen image transformers doesn't currently support from_single_file (i.e. GGUFs)
Currently it looks like qwen image doesn't support gguf since it's missing a conversion map/function from gguf to diffuser. I'd love to help creating it if you can point me in the right direction, I tried looking into it but without guidance it's above my paygrade :)
closed
completed
false
4
[]
[]
2025-08-07T12:58:43Z
2025-08-08T19:45:00Z
2025-08-08T19:45:00Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
luke14free
166,602
MDQ6VXNlcjE2NjYwMg==
User
false
huggingface/diffusers
3,302,530,870
I_kwDOHa8MBc7E2J82
12,102
https://github.com/huggingface/diffusers/issues/12102
https://api.github.com/repos/huggingface/diffusers/issues/12102
Support training where width does not equal height for Qwen-Image
To support training where width does not equal height for Qwen-Image, the following code: ``` img_shapes = [ (1, args.resolution // vae_scale_factor // 2, args.resolution // vae_scale_factor // 2) ] * bsz noisy_model_input = noisy_model_input.permute(0, 2, 1, 3, 4) packed_noisy_model_input = QwenImagePipeline._pac...
open
null
false
2
[ "stale" ]
[]
2025-08-08T03:57:10Z
2026-02-03T15:19:33Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
yinguoweiOvO
56,142,257
MDQ6VXNlcjU2MTQyMjU3
User
false
huggingface/diffusers
3,303,303,390
I_kwDOHa8MBc7E5Gje
12,104
https://github.com/huggingface/diffusers/issues/12104
https://api.github.com/repos/huggingface/diffusers/issues/12104
IndexError: index 0 is out of bounds for dimension 0 with size 0
### Describe the bug When I test the mit-han-lab/nunchaku-flux.1-kontext-dev model, it runs normally in a non-concurrent scenario, but throws an error when I try to run it with concurrent requests. My GPU is a single RTX 4090D. How can I enable multi-concurrency support on a single GPU? Thank you in advance for yo...
closed
completed
true
1
[ "bug" ]
[]
2025-08-08T09:20:52Z
2025-08-17T22:22:37Z
2025-08-17T22:22:37Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
liushiton
115,603,416
U_kgDOBuP32A
User
false
huggingface/diffusers
3,306,188,208
I_kwDOHa8MBc7FEG2w
12,107
https://github.com/huggingface/diffusers/issues/12107
https://api.github.com/repos/huggingface/diffusers/issues/12107
accelerator.init_trackers error when try with a custom object such as list
### Describe the bug I set multiple prompts with nargs for argument "--validation_prompt " in "train_dreambooth.py": ` parser.add_argument( "--validation_prompt", type=str, default=["A photo of sks dog in a bucket", "A sks cat wearing a coat"], nargs="*", help="A prompt that...
open
null
false
1
[ "bug", "stale" ]
[]
2025-08-09T10:04:06Z
2026-01-09T15:14:27Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
micklexqg
13,776,012
MDQ6VXNlcjEzNzc2MDEy
User
false
huggingface/diffusers
3,306,859,561
I_kwDOHa8MBc7FGqwp
12,108
https://github.com/huggingface/diffusers/issues/12108
https://api.github.com/repos/huggingface/diffusers/issues/12108
Qwen Image and Chroma pipeline breaks using schedulers that enable flow matching by parameter.
### Describe the bug Several Schedulers support flow matching by using the prediction_type='flow_prediction" e.g. ``` pipe.scheduler = UniPCMultistepScheduler(prediction_type="flow_prediction", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True) ``` However Chroma and Qwen Image will not work with th...
closed
completed
false
1
[ "bug" ]
[]
2025-08-09T21:34:28Z
2026-01-22T01:19:00Z
2026-01-22T01:19:00Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
Vargol
62,868
MDQ6VXNlcjYyODY4
User
false
huggingface/diffusers
3,307,203,137
I_kwDOHa8MBc7FH-pB
12,110
https://github.com/huggingface/diffusers/issues/12110
https://api.github.com/repos/huggingface/diffusers/issues/12110
Bug in initialization of UNet1DModel GaussianFourier time projection
### Describe the bug TLDR: In the constructor for the UNet1dModel (line 104, seen [here](https://github.com/huggingface/diffusers/blob/v0.34.0/src/diffusers/models/unets/unet_1d.py#L104)), the embedding size is manually hardcoded to be 8 for no apparent good reason. Instead it should be block_out_channels[0]. Expla...
closed
completed
false
1
[ "bug" ]
[]
2025-08-10T05:45:35Z
2025-09-16T06:48:49Z
2025-09-16T06:48:49Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
SammyAgrawal
41,808,786
MDQ6VXNlcjQxODA4Nzg2
User
false
huggingface/diffusers
3,307,394,229
I_kwDOHa8MBc7FItS1
12,112
https://github.com/huggingface/diffusers/issues/12112
https://api.github.com/repos/huggingface/diffusers/issues/12112
Various Chroma pipeline issues
### Describe the bug Some minor issues that I have found while using the Chroma pipeline for the first time: - if `sentencepiece` is not installed, you get a quite cryptic error message. Other pipelines such as Flux give you a nice error message that `sentencepiece` has to be installed - `The config attributes {'guid...
open
null
false
4
[ "bug", "stale" ]
[]
2025-08-10T09:42:51Z
2026-02-03T15:19:29Z
null
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
dxqb
183,307,934
U_kgDOCu0Ong
User
false
huggingface/diffusers
3,307,522,406
I_kwDOHa8MBc7FJMlm
12,113
https://github.com/huggingface/diffusers/issues/12113
https://api.github.com/repos/huggingface/diffusers/issues/12113
WanImageToVideoPipeline: Given groups=1, weight of size [160, 12, 3, 3, 3], expected input[1, 3, 3, 258, 258] to have 12 channels, but got 3 channels instead
### Describe the bug Trying the example code with the following changes pipe.vae.enable_tiling() pipe.enable_sequential_cpu_offload() Uninstalled and installed diffusers and peft from source before trying the example code. ### Reproduction ```python import torch import numpy as np from diffusers import WanImageTo...
closed
completed
false
0
[ "bug" ]
[]
2025-08-10T12:23:57Z
2025-10-09T15:54:57Z
2025-10-09T15:54:57Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
nitinmukesh
2,102,186
MDQ6VXNlcjIxMDIxODY=
User
false
huggingface/diffusers
3,307,810,586
I_kwDOHa8MBc7FKS8a
12,116
https://github.com/huggingface/diffusers/issues/12116
https://api.github.com/repos/huggingface/diffusers/issues/12116
Attention masking in Chroma pipeline
### Describe the bug There is an issue with attention masking in the Chroma pipeline. With the prompt in your example here, https://huggingface.co/docs/diffusers/main/api/pipelines/chroma the difference is not very large, probably because there are enough meaningful tokens with some weight. But short prompts fail bec...
closed
completed
false
5
[ "bug" ]
[]
2025-08-10T17:44:11Z
2025-09-29T08:50:07Z
2025-09-29T08:50:07Z
CONTRIBUTOR
null
20260407T133413Z
2026-04-07T13:34:13Z
dxqb
183,307,934
U_kgDOCu0Ong
User
false
huggingface/diffusers
1,441,824,910
I_kwDOHa8MBc5V8ICO
1,212
https://github.com/huggingface/diffusers/issues/1212
https://api.github.com/repos/huggingface/diffusers/issues/1212
Community Integration: Making AIGC cheaper, faster, and more efficient.
**Is your feature request related to a problem? Please describe.** AIGC has recently risen to be one of the hottest topics in AI. Unfortunately, large hardware requirements and training costs are still a severe impediment to the rapid growth of the AIGC industry. The Stable Diffusion v1 version of the model requires 1...
closed
completed
false
18
[ "stale" ]
[]
2022-11-09T10:21:59Z
2023-03-30T16:58:22Z
2023-02-07T15:03:59Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
binmakeswell
61,670,638
MDQ6VXNlcjYxNjcwNjM4
User
false
huggingface/diffusers
3,308,304,867
I_kwDOHa8MBc7FMLnj
12,120
https://github.com/huggingface/diffusers/issues/12120
https://api.github.com/repos/huggingface/diffusers/issues/12120
How to train a lora with distilled flux model, such as flux-schnell???
**Is your feature request related to a problem? Please describe.** I can use flux as base model to train a lora, but it need 20 steps , it cost a lot of time , and I want to train a lora base on distill model to implement use fewer step make a better image, such as based on flux-schnell model train a lora it only nee...
closed
completed
true
2
[ "stale" ]
[]
2025-08-11T03:07:42Z
2026-01-09T23:20:30Z
2026-01-09T23:20:30Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
Johnson-yue
10,268,274
MDQ6VXNlcjEwMjY4Mjc0
User
false
huggingface/diffusers
3,309,578,412
I_kwDOHa8MBc7FRCis
12,123
https://github.com/huggingface/diffusers/issues/12123
https://api.github.com/repos/huggingface/diffusers/issues/12123
"Requested to load AutoencoderKL"
### Describe the bug I'm tryna generate images but it always crashes when it gets to this line. I'm no expert when it comes to stable diffusion so i don't really know what's going.. ### Reproduction got prompt model weight dtype torch.float16, manual cast: None model_type EPS Using pytorch attention in VAE Using pyt...
closed
completed
false
1
[ "bug" ]
[]
2025-08-11T11:17:40Z
2025-08-17T22:23:06Z
2025-08-17T22:23:06Z
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
ShadyRaion
112,407,809
U_kgDOBrM1AQ
User
false
huggingface/diffusers
3,310,011,998
I_kwDOHa8MBc7FSsZe
12,124
https://github.com/huggingface/diffusers/issues/12124
https://api.github.com/repos/huggingface/diffusers/issues/12124
For qwen-image training file, Maybe "shuffle" of dataloader should be "False" when custom_instance_prompts is not None and cache_latents is False?
### Describe the bug I think "shuffle" of dataloader should be "False" when custom_instance_prompts is not None and cache_latents is False. Otherwise, it will lead to errors in the correspondence between prompt embedding and image during training, and prompt will not be followed when performing the task of T2I. ### R...
open
null
false
4
[ "bug", "stale" ]
[]
2025-08-11T13:15:21Z
2026-02-03T15:19:24Z
null
NONE
null
20260407T133413Z
2026-04-07T13:34:13Z
yinguoweiOvO
56,142,257
MDQ6VXNlcjU2MTQyMjU3
User
false