| repo (string, 1 class) | github_id (int64, 1.27B–4.42B) | github_node_id (string, 18–24 chars) | number (int64, 8–13.7k) | html_url (string, 49–53 chars) | api_url (string, 59–63 chars) | title (string, 1–402 chars) | body (string, 1–62.9k chars, nullable) | state (string, 2 classes) | state_reason (string, 4 classes) | locked (bool) | comments_count (int64, 0–99) | labels (list, 0–5 items) | assignees (list, 0–5 items) | created_at (date, 2022-06-09 16:28:35 to 2026-05-11 21:29:10) | updated_at (date, 2022-06-12 22:18:01 to 2026-05-13 10:44:12) | closed_at (date, 2022-06-12 22:18:01 to 2026-05-13 10:44:12, nullable) | author_association (string, 3 classes) | milestone_title (string, 0 classes) | snapshot_id (string, 42 classes) | extracted_at (date, 2026-04-07 13:34:13 to 2026-05-13 11:35:24) | author_login (string, 3–28 chars) | author_id (int64, 1.54k–282M) | author_node_id (string, 12–20 chars) | author_type (string, 3 classes) | author_site_admin (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 2,782,716,793 | I_kwDOHa8MBc6l3ON5 | 10,542 | https://github.com/huggingface/diffusers/issues/10542 | https://api.github.com/repos/huggingface/diffusers/issues/10542 | Hunyuan Video Batch Size > 1 is broken again | ### Describe the bug
I reported this previously in #10453, and a fix was merged in #10454. But now after #10482 was merged, I get a similar error again.
### Reproduction
(copied from the previous issue report)
```python
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from ... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-01-12T22:06:12Z | 2025-01-14T04:55:07Z | 2025-01-14T04:55:07Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Nerogar | 3,390,934 | MDQ6VXNlcjMzOTA5MzQ= | User | false |
huggingface/diffusers | 1,428,092,944 | I_kwDOHa8MBc5VHvgQ | 1,055 | https://github.com/huggingface/diffusers/issues/1055 | https://api.github.com/repos/huggingface/diffusers/issues/1055 | No LatentDiffusionPipeline on the latest branch | ### Describe the bug
Latent diffusion is not found even after pip installing the latest dev branch
### Reproduction
git clone https://github.com/huggingface/diffusers.git
cd diffusers && pip install -e .
then run
```
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pr... | closed | completed | false | 1 | [
"bug"
] | [] | 2022-10-29T05:07:17Z | 2022-11-02T10:58:51Z | 2022-11-02T10:58:51Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | kyleliang919 | 21,994,498 | MDQ6VXNlcjIxOTk0NDk4 | User | false |
huggingface/diffusers | 2,783,087,443 | I_kwDOHa8MBc6l4otT | 10,550 | https://github.com/huggingface/diffusers/issues/10550 | https://api.github.com/repos/huggingface/diffusers/issues/10550 | [LoRA] loading LoRA into a quantized base model | Similar issues:
1. https://github.com/huggingface/diffusers/issues/10512
2. https://github.com/huggingface/diffusers/issues/10496
<details>
<summary>Reproduction</summary>
```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel, FluxPipeline
from... | closed | completed | false | 18 | [
"lora"
] | [
"sayakpaul"
] | 2025-01-13T06:03:49Z | 2025-01-16T02:52:53Z | 2025-01-15T11:49:46Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 2,783,160,684 | I_kwDOHa8MBc6l46ls | 10,553 | https://github.com/huggingface/diffusers/issues/10553 | https://api.github.com/repos/huggingface/diffusers/issues/10553 | All training scripts might be wrong when using gradients accumulation! | Here is a simple case:
```
loss = loss.mean()
accelerator.backward(loss)
if accelerator.sync_gradients:
params_to_clip = flux_transformer.parameters()
accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
```
should be u... | closed | completed | false | 7 | [] | [] | 2025-01-13T06:50:13Z | 2025-01-15T01:52:43Z | 2025-01-15T01:52:43Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chenbinghui1 | 11,517,207 | MDQ6VXNlcjExNTE3MjA3 | User | false |
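The gradient-accumulation concern in this report can be checked with plain arithmetic. The sketch below is pure Python (no torch or accelerate; the toy linear model and data are invented for illustration): it shows that summing per-micro-batch mean-loss gradients without dividing by the number of accumulation steps inflates the gradient, while dividing restores the full-batch result.

```python
# Toy model f(w) = w * x with squared-error loss; d(loss)/dw = 2 * x * (w*x - y).
# Compare the full-batch mean-loss gradient with naive accumulation over
# micro-batches, with and without dividing by the number of micro-batches.

def grad(w, batch):
    """Gradient of the mean squared error over one (micro-)batch."""
    return sum(2 * x * (w * x - y) for x, y in batch) / len(batch)

w = 0.5
data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 7.0)]
micro_batches = [data[:2], data[2:]]

full = grad(w, data)                              # reference: one big batch
naive = sum(grad(w, mb) for mb in micro_batches)  # accumulate without scaling
scaled = naive / len(micro_batches)               # divide by accumulation steps

print(full, naive, scaled)  # naive is 2x too large; scaled matches full
```

With two micro-batches, the naive sum is exactly twice the full-batch gradient, which is the distortion the report is worried about.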
huggingface/diffusers | 2,783,468,234 | I_kwDOHa8MBc6l6FrK | 10,555 | https://github.com/huggingface/diffusers/issues/10555 | https://api.github.com/repos/huggingface/diffusers/issues/10555 | TorchAO+diffusers | ### Describe the bug
I'm running the diffusers implementation of Flux Schnell on an H100 and I get the following errors; some might be due to me, but some not:
- fp8dq
```
File "/root/.pyenv/versions/3.11.10/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwa... | closed | completed | false | 8 | [
"bug",
"stale"
] | [] | 2025-01-13T09:42:22Z | 2025-05-29T06:50:04Z | 2025-05-29T06:50:03Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | christopher5106 | 6,875,375 | MDQ6VXNlcjY4NzUzNzU= | User | false |
huggingface/diffusers | 2,784,091,057 | I_kwDOHa8MBc6l8dux | 10,559 | https://github.com/huggingface/diffusers/issues/10559 | https://api.github.com/repos/huggingface/diffusers/issues/10559 | Difference between Controlnet inpainting for SD2 and SD3 | Firstly great work with adding SD3 support on diffusers!
I was trying to run the SD3 controlnet inpainting script : [https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py](https://github.com/huggingface/diffusers/blob/main/src/dif... | open | null | false | 1 | [
"stale"
] | [] | 2025-01-13T14:14:05Z | 2025-02-12T15:03:19Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Varghese-Kuruvilla | 54,114,316 | MDQ6VXNlcjU0MTE0MzE2 | User | false |
huggingface/diffusers | 1,428,118,799 | I_kwDOHa8MBc5VH10P | 1,056 | https://github.com/huggingface/diffusers/issues/1056 | https://api.github.com/repos/huggingface/diffusers/issues/1056 | mps support under unet_2d_condition.py | ### Describe the bug
mps doesn't support float64. my hacky solution was manually setting timesteps to torch.float32 (line 277)
```
# 1. time
timesteps = timestep
if not torch.is_tensor(timesteps):
# TODO: this requires sync between CPU and GPU. So try to pass timesteps as tensor... | closed | completed | false | 3 | [] | [
"pcuenca"
] | 2022-10-29T06:37:56Z | 2022-12-02T12:10:18Z | 2022-12-02T12:10:18Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | eor-w0w0w0 | 92,644,112 | U_kgDOBYWjEA | User | false |
huggingface/diffusers | 2,785,027,056 | I_kwDOHa8MBc6mACPw | 10,562 | https://github.com/huggingface/diffusers/issues/10562 | https://api.github.com/repos/huggingface/diffusers/issues/10562 | Loading popular anime LoRA causes incompatible keys error | ### Describe the bug
My understanding is that this LoRA applies to the T5 model layers, which is where everything is getting thrown off.
The model I've reuploaded from CivitAI is available on the hub [here](https://huggingface.co/bghira/test-models-for-bug-reports/blob/main/Anime%20v1.3.safetensors).
Maybe this make... | closed | completed | false | 10 | [
"bug"
] | [
"sayakpaul"
] | 2025-01-13T18:59:05Z | 2025-02-17T13:34:49Z | 2025-02-17T13:34:49Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | bghira | 59,658,056 | MDQ6VXNlcjU5NjU4MDU2 | User | false |
huggingface/diffusers | 2,786,130,754 | I_kwDOHa8MBc6mEPtC | 10,565 | https://github.com/huggingface/diffusers/issues/10565 | https://api.github.com/repos/huggingface/diffusers/issues/10565 | Different generation with `Diffusers` in I2V tasks for LTX-video | ### Describe the bug
Hello, I encountered an issue with the generation when attempting the I2V task using `Diffusers`. Is there any difference between the `diffusers` implementation and the `LTX-video-inference scripts` in the I2V task?
- The above is the result from the `inference.py`, and the following is the resu... | open | null | false | 13 | [
"bug"
] | [] | 2025-01-14T03:24:06Z | 2026-01-31T03:18:23Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Kaihui-Cheng | 100,404,740 | U_kgDOBfwOBA | User | false |
huggingface/diffusers | 2,786,184,598 | I_kwDOHa8MBc6mEc2W | 10,566 | https://github.com/huggingface/diffusers/issues/10566 | https://api.github.com/repos/huggingface/diffusers/issues/10566 | Unnecessary operations in `CogVideoXTransformer3DModel.forward()`? | ### Describe the bug
Here are a few lines of code in `CogVideoXTransformer3DModel.forward()`:
```py
# 3. Transformer blocks
...
if not self.config.use_rotary_positional_embeddings:
# CogVideoX-2B
hidden_states = self.norm_final(hidden_states)
else:
# ... | closed | completed | false | 2 | [
"bug",
"stale"
] | [] | 2025-01-14T04:01:20Z | 2025-02-13T22:11:26Z | 2025-02-13T22:11:26Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | townwish4git | 143,256,262 | U_kgDOCInqxg | User | false |
huggingface/diffusers | 2,786,383,859 | I_kwDOHa8MBc6mFNfz | 10,569 | https://github.com/huggingface/diffusers/issues/10569 | https://api.github.com/repos/huggingface/diffusers/issues/10569 | High memory consumption for HunyuanVideo on CPU | ### Describe the bug
We have a 4th-gen Xeon Scalable system that we are trying to run HunyuanVideo (via Diffusers) on. Remarkably, the demo code runs out of the box with no tweaks, which is a testament to the quality of the Intel PyTorch code :) However, during inference we see very high memory usage - over 180GB - to... | open | null | false | 2 | [
"bug",
"stale"
] | [] | 2025-01-14T06:41:24Z | 2025-02-13T15:03:11Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | bayley | 592,053 | MDQ6VXNlcjU5MjA1Mw== | User | false |
huggingface/diffusers | 1,428,369,156 | I_kwDOHa8MBc5VIy8E | 1,057 | https://github.com/huggingface/diffusers/issues/1057 | https://api.github.com/repos/huggingface/diffusers/issues/1057 | safety_checker and pipe checking and how to disable warning safety checker | ### Describe the bug
When disabling safety checker, diffusers spits out a wall of text, every time. This is an annoyance. We get it, you don't want people making boobies and stuff. Turn it off.
Additionally, when using non-standard safety checker class for better functionality, I get
```
You have passed a no... | closed | completed | false | 4 | [
"bug",
"stale"
] | [] | 2022-10-29T17:21:58Z | 2022-12-08T15:02:59Z | 2022-12-08T15:02:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | WASasquatch | 1,151,589 | MDQ6VXNlcjExNTE1ODk= | User | false |
huggingface/diffusers | 2,786,806,371 | I_kwDOHa8MBc6mG0pj | 10,573 | https://github.com/huggingface/diffusers/issues/10573 | https://api.github.com/repos/huggingface/diffusers/issues/10573 | StableDiffusionXLControlNetInpaintPipeline unable to use padding_mask_crop with multiple controlnets | ### Describe the bug
padding_mask_crop works with no controlnet and with 1 controlnet, but when we have multiple controlnets the library raises: ValueError: The image should be a PIL image when inpainting mask crop, but is of type <class 'list'>. I have double-checked that my other inputs are as required
-------------------... | open | null | false | 4 | [
"bug",
"stale"
] | [] | 2025-01-14T10:40:54Z | 2025-06-23T00:09:54Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | teoyangrui | 37,584,702 | MDQ6VXNlcjM3NTg0NzAy | User | false |
huggingface/diffusers | 2,786,880,182 | I_kwDOHa8MBc6mHGq2 | 10,575 | https://github.com/huggingface/diffusers/issues/10575 | https://api.github.com/repos/huggingface/diffusers/issues/10575 | Encounter "IndexError: index 51 is out of bounds for dimension 0 with size 51" when using DPMSolverMultistepScheduler | ### Describe the bug
I am trying to train a personal DDPM model on my dataset, using DPM-Solver to accelerate sampling, but I get the error shown in the title. What's the problem? Hope someone can help.
### Reproduction
scheduler = DPMSolverMultistepScheduler.from_config(noise_scheduler.config)
schedule... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-01-14T11:19:18Z | 2025-01-14T12:30:05Z | 2025-01-14T12:30:04Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ljw919 | 44,638,108 | MDQ6VXNlcjQ0NjM4MTA4 | User | false |
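One common cause of this kind of off-by-one IndexError, offered here only as a hedged guess rather than a confirmed diagnosis for this report, is calling the scheduler's step one more time than the number of timesteps prepared by `set_timesteps`. A plain-Python stand-in (the names are illustrative, not the diffusers internals):

```python
# The scheduler's step index walks past the end of its timestep/sigma arrays
# when the sampling loop runs more iterations than were scheduled.

num_inference_steps = 51
timesteps = list(range(num_inference_steps))  # stand-in for scheduler.timesteps

def step(i):
    return timesteps[i]  # IndexError once i == num_inference_steps

try:
    for i in range(num_inference_steps + 1):  # one step too many
        step(i)
    error = None
except IndexError as e:
    error = str(e)

print(error)
```

Checking that the loop iterates exactly `len(scheduler.timesteps)` times is the usual fix for this pattern.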
huggingface/diffusers | 2,786,941,964 | I_kwDOHa8MBc6mHVwM | 10,577 | https://github.com/huggingface/diffusers/issues/10577 | https://api.github.com/repos/huggingface/diffusers/issues/10577 | EDMDPMSolverMultistepScheduler init_noise_sigma | When using the EDM DPM scheduler, I have found that my results are qualitatively a lot better when starting sampling from unit Gaussian noise even though the sigma_max of my training distribution is 80. For reference, I am working on denoising trajectories not images, so better in this case just means the final sample ... | closed | completed | true | 0 | [] | [] | 2025-01-14T11:51:03Z | 2025-01-27T01:21:55Z | 2025-01-27T01:21:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | reeceomahoney | 66,252,930 | MDQ6VXNlcjY2MjUyOTMw | User | false |
huggingface/diffusers | 2,787,186,774 | I_kwDOHa8MBc6mIRhW | 10,580 | https://github.com/huggingface/diffusers/issues/10580 | https://api.github.com/repos/huggingface/diffusers/issues/10580 | About CrossAttnDownBlock2D in diffusers.models.unets.unet_2d_blocks | The CrossAttnDownBlock2D class is part of the downsampling path. In its forward method, sample is processed first by the self.resnets modules and then by the self.attentions modules, yet in __init__ the definition order of self.resnets and self.attentions is the reverse of the call order:
```
self.attentions = nn.ModuleList(attentions)
self.resnets = nn.ModuleList(resnets)
```
This means that when I inspect the network structure, the attentions modules appear before the resnets modules, which is not very friendly for someone trying to understand the network archit... | open | null | false | 5 | [] | [] | 2025-01-14T13:50:35Z | 2025-03-11T03:16:16Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Charging-up | 126,473,311 | U_kgDOB4nUXw | User | false |
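The ordering point in this report about CrossAttnDownBlock2D can be illustrated without torch. In the sketch below (plain Python; `Block` is a hypothetical stand-in, not the diffusers class), attribute definition order drives what inspecting the module displays, while the execution order is fixed separately in `forward`, so the two can disagree:

```python
class Block:
    def __init__(self):
        # Defined attention-first, mirroring the __init__ order in the report
        self.attentions = ["attn"]
        self.resnets = ["resnet"]

    def forward(self, trace):
        # Executed resnet-first, mirroring the forward order in the report
        trace.append("resnet")
        trace.append("attn")
        return trace

block = Block()
definition_order = list(vars(block))  # what printing the module reflects
execution_order = block.forward([])   # what actually runs

print(definition_order, execution_order)
```

The printed structure follows `definition_order`, so a reader skimming the repr sees attentions before resnets even though resnets run first.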
huggingface/diffusers | 2,788,753,698 | I_kwDOHa8MBc6mOQEi | 10,582 | https://github.com/huggingface/diffusers/issues/10582 | https://api.github.com/repos/huggingface/diffusers/issues/10582 | Flux Fill Color loss | Hi!
When I use the Flux Fill model to generate images, the resulting colors often do not match those of the images I provided, with some colors being lost. Could anyone tell me where the problem lies?
reference:

 of scheduling_flow_match_euler_discrete.py | ### Describe the bug
### Description
In the `index_for_timestep()` method, there's a precision issue when comparing floating-point timesteps with integer schedule_timesteps. The comparison `schedule_timesteps == timestep` can fail to find matching indices due to floating-point truncation, causing some timestep indices... | open | null | false | 4 | [
"bug",
"stale"
] | [] | 2025-01-15T09:20:44Z | 2025-04-04T08:46:07Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Liang-ZX | 42,173,433 | MDQ6VXNlcjQyMTczNDMz | User | false |
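The floating-point comparison failure described in this report is easy to reproduce in isolation. The sketch below (plain Python; `schedule` and `timestep` are illustrative values, not diffusers internals) contrasts an exact `==` match, which can miss after float truncation, with a tolerance-based match:

```python
import math

schedule = [1000, 966, 933, 900]  # integer schedule timesteps
timestep = 966.0000001            # float timestep after upstream arithmetic

exact = [i for i, t in enumerate(schedule) if t == timestep]
close = [i for i, t in enumerate(schedule)
         if math.isclose(t, timestep, abs_tol=0.5)]

print(exact, close)  # exact comparison finds nothing; tolerant match finds index 1
```

A tolerance (or rounding the float timestep before comparison) is the standard remedy for this class of index-lookup bug.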
huggingface/diffusers | 2,789,620,091 | I_kwDOHa8MBc6mRjl7 | 10,588 | https://github.com/huggingface/diffusers/issues/10588 | https://api.github.com/repos/huggingface/diffusers/issues/10588 | [LoRA] support loading Flux Control LoRAs with `bitsandbytes` quantization | https://github.com/huggingface/diffusers/pull/10578 fixed loading LoRAs into 4bit quantized models for Flux.
https://github.com/huggingface/diffusers/pull/10576 added a test to ensure Flux LoRAs can be loaded when 8bit `bitsandbytes` quantization is applied.
We still need to support all of this for Flux Control LoRAs... | closed | completed | false | 12 | [
"lora",
"quantization"
] | [
"sayakpaul"
] | 2025-01-15T11:58:51Z | 2025-04-14T13:52:38Z | 2025-04-14T13:52:37Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | sayakpaul | 22,957,388 | MDQ6VXNlcjIyOTU3Mzg4 | User | false |
huggingface/diffusers | 1,428,568,385 | I_kwDOHa8MBc5VJjlB | 1,059 | https://github.com/huggingface/diffusers/issues/1059 | https://api.github.com/repos/huggingface/diffusers/issues/1059 | Fp16 mixed precision requires a GPU | Wrong repo sorry. | closed | completed | false | 0 | [] | [] | 2022-10-30T02:57:27Z | 2022-10-30T03:20:33Z | 2022-10-30T03:20:33Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Campfirecrucifix | 65,754,334 | MDQ6VXNlcjY1NzU0MzM0 | User | false |
huggingface/diffusers | 2,790,155,610 | I_kwDOHa8MBc6mTmVa | 10,590 | https://github.com/huggingface/diffusers/issues/10590 | https://api.github.com/repos/huggingface/diffusers/issues/10590 | Sana fails with BFloat16 and tiled VAE decode | ### Describe the bug
Tried the new Sana 4K model; it fails when running in bfloat16 during VAE decode if tiled decode is enabled.
The issue is that `SanaMultiscaleAttnProcessor2_0` only does upcasting conditionally:
```py
if use_linear_attention:
# for linear attention upcast hidden_states to float32... | closed | completed | false | 9 | [
"bug"
] | [] | 2025-01-15T15:28:29Z | 2025-01-16T11:24:31Z | 2025-01-16T11:24:29Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,791,601,759 | I_kwDOHa8MBc6mZHZf | 10,591 | https://github.com/huggingface/diffusers/issues/10591 | https://api.github.com/repos/huggingface/diffusers/issues/10591 | Some wrong in sd3's lora training script | ### Describe the bug
https://github.com/huggingface/diffusers/blob/e8aacda762e311505ba05ae340af23b149e37af3/examples/research_projects/sd3_lora_colab/train_dreambooth_lora_sd3_miniature.py#L717
the transformer should not be converted to fp16 before accelerator.prepare when using mixed precision;
it will break the grad precision a... | open | null | false | 3 | [
"bug",
"stale"
] | [] | 2025-01-16T03:23:30Z | 2025-03-13T15:04:13Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | CuddleSabe | 61,224,076 | MDQ6VXNlcjYxMjI0MDc2 | User | false |
huggingface/diffusers | 2,791,775,941 | I_kwDOHa8MBc6mZx7F | 10,594 | https://github.com/huggingface/diffusers/issues/10594 | https://api.github.com/repos/huggingface/diffusers/issues/10594 | Issue with Using Multiple Controls (Depth and Canny) with LoRA on FLUX.1-dev Model | ### Describe the bug
When attempting to use multiple control images (Depth and Canny) with LoRA on the FLUX.1-dev model, an error occurs during execution. The documentation indicates that multiple control images in PIL format can be supplied, but the pipeline throws a runtime error. Notably, the pipeline functions cor... | open | null | false | 12 | [
"bug"
] | [] | 2025-01-16T06:00:05Z | 2025-03-26T16:38:08Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | pramishp | 15,194,546 | MDQ6VXNlcjE1MTk0NTQ2 | User | false |
huggingface/diffusers | 2,791,910,459 | I_kwDOHa8MBc6maSw7 | 10,596 | https://github.com/huggingface/diffusers/issues/10596 | https://api.github.com/repos/huggingface/diffusers/issues/10596 | Invalid shape error in FluxControlPipeline | ### Describe the bug
https://github.com/huggingface/diffusers/blob/b0c8973834717f8f52ea5384a8c31de3a88f4d59/src/diffusers/pipelines/flux/pipeline_flux_control.py#L761
https://github.com/huggingface/diffusers/blob/b0c8973834717f8f52ea5384a8c31de3a88f4d59/src/diffusers/pipelines/flux/pipeline_flux.py#L828
### Reproduct... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-01-16T07:29:29Z | 2025-01-16T12:08:41Z | 2025-01-16T12:08:39Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | chenxiao111222 | 154,797,505 | U_kgDOCToFwQ | User | false |
huggingface/diffusers | 2,794,229,674 | I_kwDOHa8MBc6mjI-q | 10,599 | https://github.com/huggingface/diffusers/issues/10599 | https://api.github.com/repos/huggingface/diffusers/issues/10599 | [LoRA] "Incompatible keys detected" error when open popular LoRA models from civitai | ### Describe the bug
Hello!
I tried to load the following models (Flux.1 D):
https://civitai.com/models/332248?modelVersionId=1086989
https://civitai.com/models/290836?modelVersionId=981868
But diffusers raised the error - "Incompatible keys detected"
### Reproduction
import torch
from diffusers import FluxPipeli... | closed | completed | false | 3 | [
"bug"
] | [
"sayakpaul"
] | 2025-01-17T01:38:47Z | 2025-02-10T13:17:26Z | 2025-02-10T13:17:25Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | RIOFornium | 15,143,214 | MDQ6VXNlcjE1MTQzMjE0 | User | false |
huggingface/diffusers | 1,428,571,289 | I_kwDOHa8MBc5VJkSZ | 1,060 | https://github.com/huggingface/diffusers/issues/1060 | https://api.github.com/repos/huggingface/diffusers/issues/1060 | Failed to load safety-checker on scripts/convert_original_stable_diffusion_to_diffusers.py | ### Describe the bug
Trying to change my ckpt model into diffusers model but failed for lacking the cache of safety-checker
### Reproduction
Clone the code of diffusers from GitHub, then get into directory of diffusers and execute the following command:
`python3 scripts/convert_original_stable_diffusion_to_di... | closed | completed | false | 8 | [
"bug",
"stale"
] | [] | 2022-10-30T03:06:52Z | 2022-12-12T15:04:00Z | 2022-12-12T15:04:00Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | AkiKagura | 49,899,720 | MDQ6VXNlcjQ5ODk5NzIw | User | false |
huggingface/diffusers | 2,794,357,697 | I_kwDOHa8MBc6mjoPB | 10,601 | https://github.com/huggingface/diffusers/issues/10601 | https://api.github.com/repos/huggingface/diffusers/issues/10601 | redux | Can diffusers 0.31.0 support redux?
| closed | completed | false | 6 | [
"stale"
] | [] | 2025-01-17T02:54:38Z | 2025-02-16T16:01:29Z | 2025-02-16T16:01:28Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Jay-9-c | 195,251,966 | U_kgDOC6NO_g | User | false |
huggingface/diffusers | 2,797,540,309 | I_kwDOHa8MBc6mvxPV | 10,606 | https://github.com/huggingface/diffusers/issues/10606 | https://api.github.com/repos/huggingface/diffusers/issues/10606 | pred_original_sample in FlowMatchEulerDiscreteScheduler | Will pred_original_sample be supported in FlowMatchEulerDiscreteScheduler? How to get predicted x_0? | closed | completed | false | 2 | [] | [] | 2025-01-19T10:02:22Z | 2025-02-14T12:21:33Z | 2025-02-07T13:34:25Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | haofanwang | 18,741,068 | MDQ6VXNlcjE4NzQxMDY4 | User | false |
huggingface/diffusers | 2,798,289,488 | I_kwDOHa8MBc6myoJQ | 10,608 | https://github.com/huggingface/diffusers/issues/10608 | https://api.github.com/repos/huggingface/diffusers/issues/10608 | HunyuanVideoPipeline can't run on multi-GPU | ### Describe the bug
With the code provided in the doc, HunyuanVideoPipeline raises "Expected all tensors to be on the same device" error on multi-GPU platform.
### Reproduction
With the code provided in the doc, HunyuanVideoPipeline raises "Expected all tensors to be on the same device" error on multi-GPU platform.... | open | null | false | 4 | [
"bug",
"stale"
] | [] | 2025-01-20T06:15:19Z | 2025-03-13T15:04:07Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lyp741 | 10,540,431 | MDQ6VXNlcjEwNTQwNDMx | User | false |
huggingface/diffusers | 1,428,597,548 | I_kwDOHa8MBc5VJqss | 1,061 | https://github.com/huggingface/diffusers/issues/1061 | https://api.github.com/repos/huggingface/diffusers/issues/1061 | Update Textual Inversion example instruction to Stable Diffusion v1.5 | ### Describe the bug
Currently the Textual Inversion example uses Stable Diffusion v1.5 in the command, but the instructions say Stable Diffusion v1.4
https://github.com/huggingface/diffusers/blob/95414bd6bf9bb34a312a7c55f10ba9b379f33890/examples/textual_inversion/README.md?plain=1#L32
https://github.com/hugging... | closed | completed | false | 2 | [
"bug"
] | [
"patil-suraj"
] | 2022-10-30T04:32:04Z | 2022-11-02T13:03:19Z | 2022-11-02T13:03:04Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | gau-nernst | 26,946,864 | MDQ6VXNlcjI2OTQ2ODY0 | User | false |
huggingface/diffusers | 2,799,448,759 | I_kwDOHa8MBc6m3DK3 | 10,612 | https://github.com/huggingface/diffusers/issues/10612 | https://api.github.com/repos/huggingface/diffusers/issues/10612 | Loading A LoRa into NF4 Quantized Flux Fill Pipeline Gives an Error | ### Describe the bug
When I try to load a LoRA, such as `alimama-creative/FLUX.1-Turbo-Alpha`, into the NF4-quantized Flux Fill pipeline, it gives an error
### Reproduction
```python
from diffusers import FluxPipeline,FluxPriorReduxPipeline, FluxFillPipeline, FluxTransformer2DModel
from diffusers import BitsAndBytesConfi... | open | null | false | 2 | [
"bug",
"wip"
] | [] | 2025-01-20T14:42:33Z | 2025-02-20T20:37:26Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | hamzaakyildiz | 69,676,637 | MDQ6VXNlcjY5Njc2NjM3 | User | false |
huggingface/diffusers | 2,800,481,175 | I_kwDOHa8MBc6m6_OX | 10,614 | https://github.com/huggingface/diffusers/issues/10614 | https://api.github.com/repos/huggingface/diffusers/issues/10614 | Adding AutoencoderKL model returns option request | ### Model/Pipeline/Scheduler description
## Environment
Using `diffusers==0.32.2` and Pytorch `2.5.1`
## Context
I am developing an AutoencoderKL fine-tuning script and have finished the single-GPU training part, but the current design of the AutoencoderKL model makes distributed training almost impossible.... | open | null | false | 1 | [
"stale"
] | [] | 2025-01-21T00:57:40Z | 2025-02-20T15:03:09Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lin-tianyu | 73,957,393 | MDQ6VXNlcjczOTU3Mzkz | User | false |
huggingface/diffusers | 2,800,619,085 | I_kwDOHa8MBc6m7g5N | 10,616 | https://github.com/huggingface/diffusers/issues/10616 | https://api.github.com/repos/huggingface/diffusers/issues/10616 | Accelerate.__init__() got an unexpected keyword argument 'logging_dir' | ### Describe the bug
I'm trying to **train** an unconditional diffusion model on a greyscale image dataset. I am using [diffusers_training_example.ipynb](https://huggingface.co/docs/diffusers/v0.32.2/training/unconditional_training) on Google Colab connected to my local GPU. When running the ‘Let's train!’ cell I am g... | closed | completed | false | 5 | [
"bug",
"stale"
] | [] | 2025-01-21T03:31:01Z | 2025-02-20T20:19:19Z | 2025-02-20T20:19:18Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DavidGill159 | 116,177,739 | U_kgDOBuy7Sw | User | false |
huggingface/diffusers | 1,428,659,628 | I_kwDOHa8MBc5VJ52s | 1,062 | https://github.com/huggingface/diffusers/issues/1062 | https://api.github.com/repos/huggingface/diffusers/issues/1062 | [Dreambooth] can't repeat paper results | Thank you for your great work.
Using dog images (Dog toy example data) I am trying to reproduce the results shown in the paper.
What I did:
Following the instructions, I fine-tuned stable-diffusion-v1-5 using 120 other dog images (randomly chosen from the Stanford Dogs Dataset); I also fine-tuned the text model... | closed | completed | false | 8 | [
"stale"
] | [] | 2022-10-30T07:53:56Z | 2022-12-21T15:03:25Z | 2022-12-21T15:03:25Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | orydatadudes | 64,726,228 | MDQ6VXNlcjY0NzI2MjI4 | User | false |
huggingface/diffusers | 2,802,088,876 | I_kwDOHa8MBc6nBHus | 10,621 | https://github.com/huggingface/diffusers/issues/10621 | https://api.github.com/repos/huggingface/diffusers/issues/10621 | Loading a Lora on quantized model ? TorchaoLoraLinear.__init__() missing 1 required keyword-only argument: 'get_apply_tensor_subclass' | ```
import time
import torch
from diffusers import FluxPipeline
# These names come from torchao and were missing from the original snippet:
from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, PerRow
pipe = FluxPipeline.from_pretrained(
"black-forest-labs/FLUX.1-schnell",
torch_dtype=torch.bfloat16,
).to("cuda")
quantize_(pipe.transformer, float8_dynamic_activation_float8_weight(granularity=PerRow()))
pipe.load_lora_weights('Octree/flux-sch... | open | reopened | false | 13 | [
"stale"
] | [] | 2025-01-21T14:59:26Z | 2025-07-21T08:32:58Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | christopher5106 | 6,875,375 | MDQ6VXNlcjY4NzUzNzU= | User | false |
huggingface/diffusers | 2,803,323,389 | I_kwDOHa8MBc6nF1H9 | 10,625 | https://github.com/huggingface/diffusers/issues/10625 | https://api.github.com/repos/huggingface/diffusers/issues/10625 | Add support for PPVCtrl | ### Model/Pipeline/Scheduler description
I recently came across an impressive work called PPVCtrl, a controllable video generation model. It leverages an auxiliary condition encoder to transform a text-to-video generation model into a customizable video generator, all without retraining the original generator. It's ak... | open | null | false | 2 | [] | [] | 2025-01-22T04:11:28Z | 2025-03-25T15:04:32Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | owlowlohh | 195,999,316 | U_kgDOC662VA | User | false |
huggingface/diffusers | 2,806,810,763 | I_kwDOHa8MBc6nTIiL | 10,634 | https://github.com/huggingface/diffusers/issues/10634 | https://api.github.com/repos/huggingface/diffusers/issues/10634 | The huggingface repo need to be fixed for Sana 2K and 4K models | ### Describe the bug
Hello @lawrence-cj ,
I am using Sana via diffusers. The issue applies to both of these repos, and maybe to 512/1024, but I have not tested those.
if inference_type == "Sana 4K":
model_path = "Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers"
else:
model_path = "Efficient-Larg... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-01-23T12:42:36Z | 2025-03-10T03:24:39Z | 2025-03-10T03:24:39Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,807,532,463 | I_kwDOHa8MBc6nV4uv | 10,635 | https://github.com/huggingface/diffusers/issues/10635 | https://api.github.com/repos/huggingface/diffusers/issues/10635 | Expand layerwise upcasting with optional white-list to allow Torch/GPU to perform native fp8 ops where possible | PR #10347 adds native torch fp8 as storage dtype and performs upcasting/downcasting to compute dtype in pre-forward/post-forward as needed.
however, modern gpu architectures (starting with hopper in 2022) actually do implement many ops natively in fp8.
and torch is extending the number of supported ops in each release.
r... | closed | completed | false | 6 | [
"enhancement",
"wip",
"performance"
] | [] | 2025-01-23T17:43:06Z | 2025-02-13T17:27:28Z | 2025-02-13T17:27:28Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false |
huggingface/diffusers | 2,807,770,835 | I_kwDOHa8MBc6nWy7T | 10,636 | https://github.com/huggingface/diffusers/issues/10636 | https://api.github.com/repos/huggingface/diffusers/issues/10636 | Bug inside value_guided_sampling.py | ### Describe the bug
There's a bug here: https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/experimental/rl/value_guided_sampling.py#L57-L67
The **means** and **stds** should be computed across each of the individual dimensions in the `observations`, `actions` space a... | open | null | false | 4 | [
"bug",
"stale"
] | [] | 2025-01-23T19:46:48Z | 2025-02-23T15:02:37Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rdesc | 39,059,473 | MDQ6VXNlcjM5MDU5NDcz | User | false |
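The normalization point in this report can be illustrated with a small example. The sketch below (plain Python; the data values are invented) shows how per-dimension statistics differ from statistics pooled over all dimensions when the dimensions, as in observation/action spaces, have very different scales:

```python
# Two samples, two dimensions with very different scales.
data = [[1.0, 100.0],
        [3.0, 300.0]]

n = len(data)
# Per-dimension means: computed down each column, one value per dimension.
per_dim_mean = [sum(row[d] for row in data) / n for d in range(2)]
# Pooled mean: one value over every entry, which the large dimension dominates.
pooled_mean = sum(v for row in data for v in row) / (n * 2)

print(per_dim_mean, pooled_mean)
```

Normalizing with the pooled statistic would shift the small-scale dimension far off center, which is why per-dimension means and stds are the right choice here.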
huggingface/diffusers | 2,807,832,264 | I_kwDOHa8MBc6nXB7I | 10,637 | https://github.com/huggingface/diffusers/issues/10637 | https://api.github.com/repos/huggingface/diffusers/issues/10637 | Issues with FlowMatchEulerDiscreteScheduler.set_timesteps() | ### Describe the bug
Why does `num_inference_steps` have the default `None`? It's not an `Optional`. It cannot be `None`. This leads to weird error messages if you skip this parameter.
https://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-01-23T20:22:51Z | 2025-02-16T15:29:08Z | 2025-02-16T15:29:07Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dxqb | 183,307,934 | U_kgDOCu0Ong | User | false |
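The API point in this report, that a required parameter defaulting to `None` yields a confusing downstream error, can be sketched as follows (plain Python; the function bodies are illustrative, not the actual diffusers signatures):

```python
def set_timesteps_bad(num_inference_steps=None):
    # Fails deep inside with a TypeError that never names the parameter.
    return list(range(num_inference_steps))

def set_timesteps_good(num_inference_steps=None):
    # Fails early with a message that names exactly what is missing.
    if num_inference_steps is None:
        raise ValueError("`num_inference_steps` must be provided")
    return list(range(num_inference_steps))

try:
    set_timesteps_bad()
except TypeError as e:
    bad_msg = str(e)   # cryptic: mentions 'NoneType', not the parameter

try:
    set_timesteps_good()
except ValueError as e:
    good_msg = str(e)  # states which argument is missing

print(bad_msg, good_msg)
```

Either making the parameter truly required (no default) or validating it up front would produce the clearer error.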
huggingface/diffusers | 2,807,972,880 | I_kwDOHa8MBc6nXkQQ | 10,638 | https://github.com/huggingface/diffusers/issues/10638 | https://api.github.com/repos/huggingface/diffusers/issues/10638 | Problem running SDXL schedule scheduling_dpmsolver_multistep.py | ### Describe the bug
Tried running with Euler and got same message as noted below:
Get funny message as if I'm running a different scheduler and it appears that there's an indexing problem running it:
inference steps: 30
output below:
### Reproduction
```python
inference_model = 'John6666/stoiqo-new-reality-sdxl... | closed | completed | false | 8 | ["bug"] | [] | 2025-01-23T21:39:03Z | 2025-01-23T23:40:41Z | 2025-01-23T23:40:41Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ukaprch | 107,368,096 | U_kgDOBmZOoA | User | false
huggingface/diffusers | 2,808,296,605 | I_kwDOHa8MBc6nYzSd | 10,639 | https://github.com/huggingface/diffusers/issues/10639 | https://api.github.com/repos/huggingface/diffusers/issues/10639 | Add support to Lumina-Image 2.0 | ### Model/Pipeline/Scheduler description
Lumina-Image 2.0 is the latest model in the Lumina family and will be released very soon (https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0). It is a 2-B parameter Diffusion Transformer that significantly improves instruction-following and generates higher-quality, more divers... | closed | completed | false | 3 | [] | [] | 2025-01-24T01:36:21Z | 2025-02-16T15:27:50Z | 2025-02-16T15:27:49Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zhuole1025 | 53,815,869 | MDQ6VXNlcjUzODE1ODY5 | User | false |
huggingface/diffusers | 1,428,698,638 | I_kwDOHa8MBc5VKDYO | 1,064 | https://github.com/huggingface/diffusers/issues/1064 | https://api.github.com/repos/huggingface/diffusers/issues/1064 | LMSDiscreteScheduler.add_noise() returns error | ### Describe the bug
When I try to run the code:
```
timesteps = torch.tensor([num_inference_steps - start_step]*batch_size, dtype=torch.long, device=device)
noise = torch.randn(latents.shape, generator=generator, device=device)
latents = scheduler.add_noise(latents, noise, timesteps).to(de... | closed | completed | false | 4 | ["bug"] | ["anton-l"] | 2022-10-30T09:38:25Z | 2022-11-03T09:52:18Z | 2022-11-02T19:32:12Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vtushevskiy | 29,698,497 | MDQ6VXNlcjI5Njk4NDk3 | User | false
huggingface/diffusers | 2,809,159,689 | I_kwDOHa8MBc6ncGAJ | 10,640 | https://github.com/huggingface/diffusers/issues/10640 | https://api.github.com/repos/huggingface/diffusers/issues/10640 | FluxInpaintPipeline error when im using DataParallel to use multiple gpus | ### Describe the bug
line 175, in forward
for t in chain(self.module.parameters(), self.module.buffers()):
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/tensorflow/lib/python3.12/site-packages/diffusers/configuration_utils.py", line 143, in __getattr__
raise AttributeError(f"'{type(self).__name__}' ob... | closed | not_planned | false | 5 | ["bug"] | [] | 2025-01-24T11:02:12Z | 2025-02-23T23:39:05Z | 2025-02-23T15:38:37Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Himasnhu-AT | 117,301,124 | U_kgDOBv3fhA | User | false
huggingface/diffusers | 2,810,947,287 | I_kwDOHa8MBc6ni6bX | 10,650 | https://github.com/huggingface/diffusers/issues/10650 | https://api.github.com/repos/huggingface/diffusers/issues/10650 | Lumina: RuntimeError: shape '[2, 4, 67, 2, 120, 2]' is invalid for input of size 259200 | ### Describe the bug
If I use width=1920 and height=1080, error reported
Documentation says both should be divisible by 8
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/lumina/pipeline_lumina.py
if height % 8 != 0 or width % 8 != 0:
raise ValueError(f"`height` and `width` have... | closed | completed | false | 1 | ["bug"] | [] | 2025-01-25T12:03:18Z | 2025-01-27T19:47:02Z | 2025-01-27T19:47:02Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false
huggingface/diffusers | 2,811,371,735 | I_kwDOHa8MBc6nkiDX | 10,653 | https://github.com/huggingface/diffusers/issues/10653 | https://api.github.com/repos/huggingface/diffusers/issues/10653 | The module 'HunyuanVideoTransformer3DModel' has been loaded in `bitsandbytes` 8bit and moving it to cpu via `.to()` is not supported. Module is still on cuda:0 | ### Describe the bug
1. enable_model_cpu_offload works with 4bit but not with 8bit. Is this the expected behavior or an issue?
_The module 'HunyuanVideoTransformer3DModel' has been loaded in bitsandbytes 8bit and moving it to cpu via .to() is not supported. Module is still on cuda:0_
2. device_map="balanced", also do... | closed | completed | false | 4 | ["bug", "stale"] | [] | 2025-01-26T07:28:15Z | 2025-05-04T03:12:15Z | 2025-05-04T03:12:15Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false
huggingface/diffusers | 2,811,484,054 | I_kwDOHa8MBc6nk9eW | 10,655 | https://github.com/huggingface/diffusers/issues/10655 | https://api.github.com/repos/huggingface/diffusers/issues/10655 | How to use custon dataset in train_dreambooth_flux.py. | Hi. what if i want to train two images with two different prompts. somethink like m1.jpeg , m1.txt ; m2.jpeg, m2.txt.
the default example only shows all images share one instant prompt. thanks for the help! | closed | completed | false | 3 | [] | [] | 2025-01-26T11:53:01Z | 2025-01-27T19:43:55Z | 2025-01-27T19:43:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rooooc | 22,583,667 | MDQ6VXNlcjIyNTgzNjY3 | User | false |
huggingface/diffusers | 2,811,756,157 | I_kwDOHa8MBc6nl_59 | 10,656 | https://github.com/huggingface/diffusers/issues/10656 | https://api.github.com/repos/huggingface/diffusers/issues/10656 | ControlNet union pipeline fails on multi-model | ### Describe the bug
All controlnet types are typically defined inside pipeline as below (example from `StableDiffusionXLControlNetPipeline`):
> controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel],
however, StableDiffusionXLControlNetUnionPipeline pipeline define... | closed | completed | false | 17 | ["bug", "stale"] | [] | 2025-01-26T19:51:39Z | 2025-02-26T17:55:48Z | 2025-02-26T17:55:46Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false
huggingface/diffusers | 2,811,979,224 | I_kwDOHa8MBc6nm2XY | 10,658 | https://github.com/huggingface/diffusers/issues/10658 | https://api.github.com/repos/huggingface/diffusers/issues/10658 | add provider_options in onnxruntime.InferenceSession | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...].
We only have provider and provider_options, but we also need to config provider_options in some scenarios
https://onnxruntime.ai/docs/execution-providers/QNN... | closed | completed | false | 0 | [] | [] | 2025-01-27T02:50:07Z | 2025-01-27T19:46:19Z | 2025-01-27T19:46:18Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | xieofxie | 2,876,650 | MDQ6VXNlcjI4NzY2NTA= | User | false |
huggingface/diffusers | 2,812,223,940 | I_kwDOHa8MBc6nnyHE | 10,659 | https://github.com/huggingface/diffusers/issues/10659 | https://api.github.com/repos/huggingface/diffusers/issues/10659 | Not able to generate any good output using ConsisID | ### Describe the bug
I tried with 5-6 different image and prompts. Some do produce output but very bad.
https://github.com/user-attachments/assets/e9eccd72-a3fb-4ba0-86a3-04dc6f28a7f9
### Reproduction
tried with higher num_frames too.
```
import torch
from diffusers import ConsisIDPipeline
from diffusers.pipelines... | closed | completed | false | 6 | ["bug"] | [] | 2025-01-27T06:57:02Z | 2025-01-27T11:06:54Z | 2025-01-27T08:22:01Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false
huggingface/diffusers | 1,428,725,581 | I_kwDOHa8MBc5VKJ9N | 1,066 | https://github.com/huggingface/diffusers/issues/1066 | https://api.github.com/repos/huggingface/diffusers/issues/1066 | HuggingFaceDocBuilderDev makes documents for PR in forks | ### Describe the bug
HuggingFaceDocBuilderDev makes documents for PR in forks like this https://github.com/shirayu/diffusers/pull/1
The URL is ``https://moon-ci-docs.huggingface.co/docs/diffusers/pr_1/en/index``.
### Reproduction
See https://github.com/shirayu/diffusers/pull/1
### Logs
_No response_
### System I... | closed | completed | false | 5 | ["bug"] | [] | 2022-10-30T10:46:11Z | 2022-11-02T14:21:15Z | 2022-11-02T14:21:15Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | shirayu | 963,961 | MDQ6VXNlcjk2Mzk2MQ== | User | false
huggingface/diffusers | 2,812,381,022 | I_kwDOHa8MBc6noYde | 10,662 | https://github.com/huggingface/diffusers/issues/10662 | https://api.github.com/repos/huggingface/diffusers/issues/10662 | Feature Request: Image-to-Image Fine-Tuning Example | Hello, and thank you for maintaining this amazing repository!
While working with the Diffusers library, I noticed there is a folder containing fine-tuning examples for text-to-image models but not for image-to-image fine-tuning.
Since image-to-image models have many use cases (e.g., style transfer, image restoration, ... | closed | completed | false | 6 | [] | [] | 2025-01-27T08:33:39Z | 2025-02-07T08:27:44Z | 2025-02-07T08:27:43Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | YanivDorGalron | 89,192,632 | MDQ6VXNlcjg5MTkyNjMy | User | false |
huggingface/diffusers | 1,428,774,727 | I_kwDOHa8MBc5VKV9H | 1,067 | https://github.com/huggingface/diffusers/issues/1067 | https://api.github.com/repos/huggingface/diffusers/issues/1067 | ValueError: The tokenizer already contains the token ... Please pass a different `placeholder_token` that is not already in the tokenizer. | Just a quick question regarding tokens:
I have a model trained on a new token and wish to add more detail and add extra run cycles to it. I get the above advice/error
How do I go about fixing this issue, or am I best to retrain from scratch? | closed | completed | false | 3 | ["stale"] | [] | 2022-10-30T12:49:50Z | 2022-12-09T15:03:55Z | 2022-12-09T15:03:55Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | OzzyD | 10,157,325 | MDQ6VXNlcjEwMTU3MzI1 | User | false
huggingface/diffusers | 2,815,909,961 | I_kwDOHa8MBc6n12BJ | 10,671 | https://github.com/huggingface/diffusers/issues/10671 | https://api.github.com/repos/huggingface/diffusers/issues/10671 | Deterministic issue in DiffusionPipeline when getting dtype or device | ### Describe the bug
This issue is a follow-up of this [PR](https://github.com/huggingface/diffusers/pull/10670).
The idea was to fix an issue leading to the following error message:
```
RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
```
This was due to a dtype mismatch between image in... | closed | completed | false | 4 | ["bug", "stale"] | [] | 2025-01-28T14:50:31Z | 2025-03-15T02:21:00Z | 2025-03-15T02:21:00Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dimitribarbot | 1,696,443 | MDQ6VXNlcjE2OTY0NDM= | User | false
huggingface/diffusers | 2,816,170,189 | I_kwDOHa8MBc6n21jN | 10,672 | https://github.com/huggingface/diffusers/issues/10672 | https://api.github.com/repos/huggingface/diffusers/issues/10672 | Please support callback_on_step_end for following pipelines | **Is your feature request related to a problem? Please describe.**
Missing callback_on_step_end in these pipeline takes away the capability to show the progress in UI
**Describe the solution you'd like.**
Please support callback_on_step_end
**Describe alternatives you've considered.**
N.A.
**Additional context.**
1.... | closed | completed | false | 2 | ["good first issue", "help wanted", "contributions-welcome"] | [] | 2025-01-28T16:26:56Z | 2025-02-16T17:28:58Z | 2025-02-16T17:28:58Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false
huggingface/diffusers | 2,816,238,634 | I_kwDOHa8MBc6n3GQq | 10,673 | https://github.com/huggingface/diffusers/issues/10673 | https://api.github.com/repos/huggingface/diffusers/issues/10673 | AuraFlow pipeline: RuntimeError: shape '[2, 4, 67, 2, 120, 2]' is invalid for input of size 259200 | ### Describe the bug
1920x1080 (w x h) throws error
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/aura_flow/pipeline_aura_flow.py
if height % 8 != 0 or width % 8 != 0:
raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
Same as
https://githu... | closed | completed | false | 0 | ["bug"] | [] | 2025-01-28T16:53:34Z | 2025-01-29T23:11:56Z | 2025-01-29T23:11:56Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false
huggingface/diffusers | 2,816,477,717 | I_kwDOHa8MBc6n4AoV | 10,674 | https://github.com/huggingface/diffusers/issues/10674 | https://api.github.com/repos/huggingface/diffusers/issues/10674 | FluxPipeline is not working with GGUF :( | ### Describe the bug
cpu offload is not working for Flux-GGUF, Works fine for AuraFlow-GGUF pipeline.
### Reproduction
```
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from diffusers import GGUFQuantizationConfig
model_id = "ostris/Flex.1-alpha"
dtype = torch.bfloat16
transformer_path = "... | closed | completed | false | 8 | ["bug"] | [] | 2025-01-28T18:50:40Z | 2025-02-06T11:01:16Z | 2025-02-06T07:25:03Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false
huggingface/diffusers | 2,816,666,039 | I_kwDOHa8MBc6n4um3 | 10,675 | https://github.com/huggingface/diffusers/issues/10675 | https://api.github.com/repos/huggingface/diffusers/issues/10675 | Difference in Flux scheduler configuration max_shift | ### Describe the bug
Could you please check if the value of 1.16 here...
https://github.com/huggingface/diffusers/blob/658e24e86c4c52ee14244ab7a7113f5bf353186e/src/diffusers/pipelines/flux/pipeline_flux.py#L78
...is intentional or maybe a typo?
`max_shift` is 1.15 both in the model configuration...
https://huggingfa... | closed | completed | false | 2 | ["bug", "good first issue", "help wanted", "contributions-welcome"] | [] | 2025-01-28T20:35:58Z | 2025-02-18T06:54:58Z | 2025-02-18T06:54:58Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dxqb | 183,307,934 | U_kgDOCu0Ong | User | false
huggingface/diffusers | 2,816,836,199 | I_kwDOHa8MBc6n5YJn | 10,677 | https://github.com/huggingface/diffusers/issues/10677 | https://api.github.com/repos/huggingface/diffusers/issues/10677 | Support for training with Grayscale images? | I am trying to train an unconditional diffusion model on grayscale images using your [pipeline](https://huggingface.co/docs/diffusers/training/unconditional_training). When running training with the default parameters I discovered inferred images that contained colour (specifically green). Where it learnt such colours ... | open | null | false | 1 | ["stale"] | [] | 2025-01-28T22:25:19Z | 2025-02-28T15:02:57Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DavidGill159 | 116,177,739 | U_kgDOBuy7Sw | User | false
huggingface/diffusers | 2,818,365,488 | I_kwDOHa8MBc6n_Ngw | 10,679 | https://github.com/huggingface/diffusers/issues/10679 | https://api.github.com/repos/huggingface/diffusers/issues/10679 | possibly to avoid `from_single_file` loading in fp32 to save RAM | ### Describe the bug
When loading a model using `from_single_file()`, the RAM usage is really high possibly because the weights are loaded in FP32 before conversion.
### Reproduction
```python
import threading
import time
import psutil
import torch
from huggingface_hub import hf_hub_download
from diffusers import ... | open | null | false | 14 | ["bug"] | ["DN6"] | 2025-01-29T14:20:56Z | 2025-06-13T09:17:26Z | null | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | asomoza | 5,442,875 | MDQ6VXNlcjU0NDI4NzU= | User | false
huggingface/diffusers | 1,428,778,964 | I_kwDOHa8MBc5VKW_U | 1,068 | https://github.com/huggingface/diffusers/issues/1068 | https://api.github.com/repos/huggingface/diffusers/issues/1068 | ERROR:torch.distributed.elastic.multiprocessing.api:failed? | Hello
I've found some problems it`s Before Make Classes, After Finish train
If make Classes Image
```
The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_cpu_threads_per_process` was set to `4` to improve out-of-box performance
To avoid this warning pass in... | closed | completed | false | 12 | ["stale"] | ["patil-suraj"] | 2022-10-30T13:01:03Z | 2023-02-27T15:04:25Z | 2023-02-27T15:04:25Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | A-Polyana | 58,426,735 | MDQ6VXNlcjU4NDI2NzM1 | User | false
huggingface/diffusers | 2,818,401,820 | I_kwDOHa8MBc6n_WYc | 10,680 | https://github.com/huggingface/diffusers/issues/10680 | https://api.github.com/repos/huggingface/diffusers/issues/10680 | stabilityai/stable-diffusion-2-1-base is missing diffusion_pytorch_model.fp16.bin | Got this warning on my console
```
stabilityai/stable-diffusion-2-1-base is missing diffusion_pytorch_model.fp16.bin
```
Was asked to raise this issue, can you please upload the necessary checkpoints in the hugging face repo? | closed | completed | false | 5 | [] | [] | 2025-01-29T14:35:31Z | 2025-01-30T20:52:19Z | 2025-01-30T18:19:43Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rohit901 | 30,185,369 | MDQ6VXNlcjMwMTg1MzY5 | User | false |
huggingface/diffusers | 2,818,826,837 | I_kwDOHa8MBc6oA-JV | 10,683 | https://github.com/huggingface/diffusers/issues/10683 | https://api.github.com/repos/huggingface/diffusers/issues/10683 | Would anyone consider a diffusers export_to_frames utility fuction? | **Is your feature request related to a problem? Please describe.**
The current `export_to_video` function in Hugging Face's Diffusers library exports a compressed video, but it's not straightforward for users to obtain raw, lossless PNG frames from a list of frames. This can be a problem for users who need to work with... | open | null | false | 4 | ["stale"] | [] | 2025-01-29T17:30:21Z | 2025-03-26T15:04:10Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | lovetillion | 61,259,492 | MDQ6VXNlcjYxMjU5NDky | User | false
huggingface/diffusers | 2,821,188,603 | I_kwDOHa8MBc6oJ-v7 | 10,689 | https://github.com/huggingface/diffusers/issues/10689 | https://api.github.com/repos/huggingface/diffusers/issues/10689 | Support IPAdapter for all Flux pipelines- not only for txt2img | I see diffusers recently merged the IPAdapter for Flux pipelines, but only for txt2img pipeline. https://github.com/huggingface/diffusers/issues/9825
The feature request is about supporting IPAdapter to all Flux pipelines as img2img, sketch2img and more..
It will be great to have load_ip_adapter and unload_ip_adapter ... | closed | completed | false | 5 | ["wip", "contributions-welcome", "roadmap"] | ["hlky"] | 2025-01-30T15:50:22Z | 2025-03-31T05:39:40Z | 2025-03-31T05:39:39Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | almog2065 | 81,307,522 | MDQ6VXNlcjgxMzA3NTIy | User | false
huggingface/diffusers | 2,821,211,606 | I_kwDOHa8MBc6oKEXW | 10,690 | https://github.com/huggingface/diffusers/issues/10690 | https://api.github.com/repos/huggingface/diffusers/issues/10690 | SDXL InPainting: Mask blur option is negated by forced binarization. | The SDXL InPainting pipeline's documentation suggests using `pipeline.mask_processor.blur()` for creating soft masks, but this functionality is effectively broken due to the implementation order. Please let me know if I'm missing something here. Based on my testing, whether I use a blurred mask or blur them with the bu... | closed | completed | false | 3 | ["bug"] | [] | 2025-01-30T15:59:46Z | 2025-02-22T04:10:12Z | 2025-02-22T04:10:10Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zacheryvaughn | 114,719,371 | U_kgDOBtZ6iw | User | false
huggingface/diffusers | 2,822,472,194 | I_kwDOHa8MBc6oO4IC | 10,695 | https://github.com/huggingface/diffusers/issues/10695 | https://api.github.com/repos/huggingface/diffusers/issues/10695 | DDIMInverseScheduler.step: Incorrect Previous Timestep Calculation | ### Describe the bug
There is a bug in DDIMInverseScheduler.step related to how the previous timestep is computed.
The inverse scheduler should move from timestep $t+1$ to $t$, but currently, the code is mistakenly computing $t-1$ instead of $t+1$.
https://github.com/huggingface/diffusers/blob/89e4d6219805975bd7d253a... | open | null | false | 2 | ["bug"] | [] | 2025-01-31T05:20:59Z | 2025-03-08T15:02:57Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | stop1one | 51,412,035 | MDQ6VXNlcjUxNDEyMDM1 | User | false
huggingface/diffusers | 2,823,880,480 | I_kwDOHa8MBc6oUP8g | 10,697 | https://github.com/huggingface/diffusers/issues/10697 | https://api.github.com/repos/huggingface/diffusers/issues/10697 | Inconsistent random transform between source and target image in train_instruct_pix2pix | ### Describe the bug
Currently, random cropping and random flipping in `train_transform` of [train_instruct_pix2pix.py](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py#L701) and [train_instruct_pix2pix_sdxl.py](https://github.com/huggingface/diffusers/blob/main/ex... | closed | completed | false | 0 | ["bug"] | [] | 2025-01-31T16:04:18Z | 2025-01-31T18:29:30Z | 2025-01-31T18:29:30Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Luvata | 17,178,612 | MDQ6VXNlcjE3MTc4NjEy | User | false
huggingface/diffusers | 1,310,764,393 | I_kwDOHa8MBc5OIK1p | 107 | https://github.com/huggingface/diffusers/issues/107 | https://api.github.com/repos/huggingface/diffusers/issues/107 | [Feature request] Tensor to Image post-processing integrated with the library | For the image diffusers, the outputs are tensors that need to be pre-processed to become useful as images. Different models and schedulers may require different post-processing for the images that the user may not be aware about.
For that, the API for pipelines could have a `output_type` option that the user could c... | closed | completed | false | 1 | [] | [] | 2022-07-20T09:48:08Z | 2022-07-20T19:15:30Z | 2022-07-20T19:15:29Z | MEMBER | null | 20260407T133413Z | 2026-04-07T13:34:13Z | apolinario | 788,417 | MDQ6VXNlcjc4ODQxNw== | User | false |
huggingface/diffusers | 1,428,823,039 | I_kwDOHa8MBc5VKhv_ | 1,070 | https://github.com/huggingface/diffusers/issues/1070 | https://api.github.com/repos/huggingface/diffusers/issues/1070 | Stable diffusion inpainting new version faces a bug | ### Describe the bug
I used to use [stable diffusion inpainting legacy](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py). When I used num_images_per_prompt = 2, the model generated two similar images, while I expected to get two di... | closed | completed | false | 7 | ["bug"] | [] | 2022-10-30T14:13:45Z | 2023-07-07T04:37:11Z | 2022-11-05T08:47:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | FBehrad | 41,340,554 | MDQ6VXNlcjQxMzQwNTU0 | User | false
huggingface/diffusers | 2,825,916,177 | I_kwDOHa8MBc6ocA8R | 10,705 | https://github.com/huggingface/diffusers/issues/10705 | https://api.github.com/repos/huggingface/diffusers/issues/10705 | Reusing the same pipeline (FluxPipeline) increase the inference duration | ### Describe the bug
So I create the pipe and use it to generate multiple image with same settings. During first inference it take 8 min, next 30 min. VRAM usage remains the same.
Tested on 8 GB + 8 GB
P.S. I have used AuraFlow, Sana, Hunyuan, LTX, Cog, and several other pipeline but didn't encounter this issue with... | closed | completed | false | 12 | ["bug"] | [] | 2025-02-02T17:28:45Z | 2025-02-03T17:43:44Z | 2025-02-03T15:44:17Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false
huggingface/diffusers | 2,827,122,655 | I_kwDOHa8MBc6ognff | 10,707 | https://github.com/huggingface/diffusers/issues/10707 | https://api.github.com/repos/huggingface/diffusers/issues/10707 | Annotate return type of DiffusionPipeline.from_pretrained as Self | None | DiffusionPipeline.from_pretrained returns Any | None. Return type should be explicitly annotated as Self | None. | closed | completed | false | 1 | ["enhancement", "good first issue", "contributions-welcome"] | [] | 2025-02-03T10:36:12Z | 2025-02-04T18:59:33Z | 2025-02-04T18:59:33Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | gabayben | 9,704,848 | MDQ6VXNlcjk3MDQ4NDg= | User | false
huggingface/diffusers | 2,828,141,171 | I_kwDOHa8MBc6okgJz | 10,710 | https://github.com/huggingface/diffusers/issues/10710 | https://api.github.com/repos/huggingface/diffusers/issues/10710 | Is DDUF format supported? | I checked this PR, https://github.com/huggingface/diffusers/pull/10037 and it is merged
```
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"DDUF/FLUX.1-dev-DDUF", dduf_file="FLUX.1-dev.dduf", torch_dtype=torch.bfloat16
)
image = pipe(
"photo a cat holding a sig... | closed | completed | false | 4 | [] | [] | 2025-02-03T17:42:37Z | 2025-02-23T17:56:26Z | 2025-02-20T18:15:35Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 2,829,138,521 | I_kwDOHa8MBc6ooTpZ | 10,712 | https://github.com/huggingface/diffusers/issues/10712 | https://api.github.com/repos/huggingface/diffusers/issues/10712 | StableDiffusion3 pipeline RuntimeError when using prompt_embeds | ### Describe the bug
**StableDiffusion3** pipeline throws a RuntimeError when using `prompt_embeds` in lieu of `prompt` when using `num_images_per_prompt > 1`.
I am attempting to generate images using the StableDiffusion3 pipeline with some precomputed prompt embeddings. The prompt embeddings using the `.encode_promp... | open | null | false | 3 | ["bug"] | ["yiyixuxu"] | 2025-02-04T04:58:29Z | 2025-12-27T01:44:18Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | pjjajal | 61,586,397 | MDQ6VXNlcjYxNTg2Mzk3 | User | false
huggingface/diffusers | 2,829,919,061 | I_kwDOHa8MBc6orSNV | 10,715 | https://github.com/huggingface/diffusers/issues/10715 | https://api.github.com/repos/huggingface/diffusers/issues/10715 | Kolors piplines produce black images on ROCm | ### Describe the bug
Generating images with Kolors pipelines produces black images on the torch ROCm backend.
fp16 VAE fix model does not appear to solve the issue.
The normal SDXL pipeline works fine, the code for VAE decoding is basically identical there, so I am not sure if it is related to the VAE.
I would like... | closed | completed | false | 16 | ["bug", "stale"] | [] | 2025-02-04T11:30:50Z | 2025-03-08T08:55:48Z | 2025-03-08T08:26:27Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Teriks | 14,919,098 | MDQ6VXNlcjE0OTE5MDk4 | User | false
huggingface/diffusers | 1,428,989,227 | I_kwDOHa8MBc5VLKUr | 1,072 | https://github.com/huggingface/diffusers/issues/1072 | https://api.github.com/repos/huggingface/diffusers/issues/1072 | Best Way To Load Multiple Fine-Tuned Models? | ### Describe the bug
I am trying to load upto 4 fine-tuned models using pipeline,
Here is what my code looks like,
```
pipe1 = StableDiffusionPipeline.from_pretrained(model_path1, revision="fp16", torch_dtype=torch.float16)
pipe2 = StableDiffusionPipeline.from_pretrained(model_path2, revision="fp16", torch_dty... | closed | completed | false | 12 | ["bug", "stale"] | [] | 2022-10-30T19:24:55Z | 2022-12-29T05:26:57Z | 2022-12-29T05:26:57Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | adhikjoshi | 11,740,719 | MDQ6VXNlcjExNzQwNzE5 | User | false
huggingface/diffusers | 2,832,296,385 | I_kwDOHa8MBc6o0WnB | 10,722 | https://github.com/huggingface/diffusers/issues/10722 | https://api.github.com/repos/huggingface/diffusers/issues/10722 | RuntimeError: The size of tensor a (4608) must match the size of tensor b (5120) at non-singleton dimension 2 during DreamBooth Training with Prior Preservation | ### Describe the bug
I am trying to run "train_dreambooth_lora_flux.py" on my dataset, but the error will happen if --with_prior_preservation is used.
**Who can help me? Thanks!**
### Reproduction
python ./examples/dreambooth/train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--ins... | open | null | false | 6 | ["bug", "stale"] | [] | 2025-02-05T08:48:35Z | 2025-05-24T12:47:55Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | yinguoweiOvO | 56,142,257 | MDQ6VXNlcjU2MTQyMjU3 | User | false
huggingface/diffusers | 2,833,508,773 | I_kwDOHa8MBc6o4-ml | 10,729 | https://github.com/huggingface/diffusers/issues/10729 | https://api.github.com/repos/huggingface/diffusers/issues/10729 | AttributeError: _hf_hook caused by delattr in hooks.remove_hook_from_module() | ### System Info
```Shell
- `Accelerate` version: 1.4.0.dev0
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.39
- `accelerate` bash location: /workspaces/.venv_py311/bin/accelerate
- Python version: 3.11.11
- Numpy version: 1.26.3
- PyTorch version (GPU?): 2.5.1+rocm6.2 (True)
- System RAM: 1007.70 GB
- GPU type:... | open | null | false | 17 | ["bug", "Good second issue"] | [] | 2025-01-13T16:44:49Z | 2025-09-05T16:45:44Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | eppaneamd | 186,286,759 | U_kgDOCxqCpw | User | false
huggingface/diffusers | 2,834,777,498 | I_kwDOHa8MBc6o90Wa | 10,733 | https://github.com/huggingface/diffusers/issues/10733 | https://api.github.com/repos/huggingface/diffusers/issues/10733 | A simple finetune script train_text_to_image.py can not run in kaggle | ### Describe the bug

As error message stated : Duplicate GPU detected : Rank2 and Rank0 both on cuda device 40.
(The script and arguments are exactly the same as example in README.md)

init_image=utils.load_ima... | closed | completed | false | 2 | ["bug"] | [] | 2025-02-06T13:26:12Z | 2025-02-06T13:38:24Z | 2025-02-06T13:38:22Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | SuffixAutomata | 121,671,093 | U_kgDOB0CNtQ | User | false
huggingface/diffusers | 2,835,743,933 | I_kwDOHa8MBc6pBgS9 | 10,738 | https://github.com/huggingface/diffusers/issues/10738 | https://api.github.com/repos/huggingface/diffusers/issues/10738 | Scheduler sigma index out-of-bounds | ### Describe the bug
This is a continuation of issue #10266 with related pr #10267 which provided a fix for some, but not all schedulers
specifically, it fixed `DPMSolverMultistepInverseScheduler`, but NOT `DPMSolverMultistepScheduler`
### Reproduction
sampler configurations that produce error:
```log
class=DPMSolve... | closed | completed | false | 0 | ["bug"] | [] | 2025-02-06T14:47:25Z | 2025-02-12T20:33:58Z | 2025-02-12T20:33:58Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | vladmandic | 57,876,960 | MDQ6VXNlcjU3ODc2OTYw | User | false
huggingface/diffusers | 1,429,425,332 | I_kwDOHa8MBc5VM0y0 | 1,074 | https://github.com/huggingface/diffusers/issues/1074 | https://api.github.com/repos/huggingface/diffusers/issues/1074 | How to save jax models | I use a model trained by pytorch, how can I convert it to jax for inference | closed | completed | false | 5 | [] | [] | 2022-10-31T08:02:25Z | 2022-11-02T12:17:00Z | 2022-11-02T12:17:00Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dingjingzhen | 21,244,263 | MDQ6VXNlcjIxMjQ0MjYz | User | false |
huggingface/diffusers | 2,836,002,471 | I_kwDOHa8MBc6pCfan | 10,741 | https://github.com/huggingface/diffusers/issues/10741 | https://api.github.com/repos/huggingface/diffusers/issues/10741 | FluxControlNetImg2ImgPipeline doesn't support generating more than one image | ### Describe the bug
The FluxControlNetImg2ImgPipeline does not support generating more than one image.
The error encountered is: RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
I figured out that the control_mode needs to be sent as a ... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-02-06T16:24:17Z | 2025-03-09T15:02:48Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | liorRabkin | 99,609,193 | U_kgDOBe_qaQ | User | false |
huggingface/diffusers | 2,838,088,164 | I_kwDOHa8MBc6pKcnk | 10,743 | https://github.com/huggingface/diffusers/issues/10743 | https://api.github.com/repos/huggingface/diffusers/issues/10743 | Support zero-3 for FLUX training | ### Describe the bug
Due to memory limitations, I am attempting to use Zero-3 for Flux training on 8 GPUs with 32GB each. I encountered a bug similar to the one reported in this issue: https://github.com/huggingface/diffusers/issues/1865. I made modifications based on the solution proposed in this pull request: https:... | closed | completed | false | 9 | [
"bug"
] | [] | 2025-02-07T12:50:44Z | 2025-10-27T09:33:59Z | 2025-10-27T09:33:59Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | xiaoyewww | 50,870,160 | MDQ6VXNlcjUwODcwMTYw | User | false |
huggingface/diffusers | 2,838,149,508 | I_kwDOHa8MBc6pKrmE | 10,744 | https://github.com/huggingface/diffusers/issues/10744 | https://api.github.com/repos/huggingface/diffusers/issues/10744 | Confusion between class and instance attributes | It might not be a bug but it is still a problem.
The method [load_lora_weights()](https://github.com/huggingface/diffusers/blob/464374fb87610c53b2cf81e08d3df628fada3ce4/src/diffusers/loaders/lora_pipeline.py#L1546) uses the instance attributes `self.transformer_name` and `self._control_lora_supported_norm_keys` but la... | closed | completed | false | 3 | [] | [] | 2025-02-07T13:21:11Z | 2025-02-19T13:51:51Z | 2025-02-19T13:51:51Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | christopher5106 | 6,875,375 | MDQ6VXNlcjY4NzUzNzU= | User | false |
huggingface/diffusers | 2,838,477,939 | I_kwDOHa8MBc6pL7xz | 10,745 | https://github.com/huggingface/diffusers/issues/10745 | https://api.github.com/repos/huggingface/diffusers/issues/10745 | Unloading multiple loras: norms do not return to their original values | When unloading from multiple loras on flux pipeline, I believe that the norm layers are not restored [here](https://github.com/huggingface/diffusers/blob/464374fb87610c53b2cf81e08d3df628fada3ce4/src/diffusers/loaders/lora_pipeline.py#L1575).
Shouldn't we have:
```python
if len(transformer_norm_state_dict) > 0... | open | null | false | 26 | [
"stale"
] | [] | 2025-02-07T15:43:12Z | 2025-03-17T15:03:25Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | christopher5106 | 6,875,375 | MDQ6VXNlcjY4NzUzNzU= | User | false |
huggingface/diffusers | 2,839,440,283 | I_kwDOHa8MBc6pPmub | 10,748 | https://github.com/huggingface/diffusers/issues/10748 | https://api.github.com/repos/huggingface/diffusers/issues/10748 | NaN in DPMSolverMultistepInverseScheduler | Hi, everyone, I'm new to diffusers. I'm trying to use DPMSolverMultistepInverseScheduler for DDIM inversion. The applied config is:
```python
dpmpp_2m_sde_karras_scheduler_inv = DPMSolverMultistepInverseScheduler(
num_train_timesteps=1000,
beta_start=0.00085,
beta_end=0.012,
algorithm_type="sde-dpmsolve... | open | null | false | 3 | [
"stale"
] | [] | 2025-02-08T03:15:55Z | 2025-10-19T05:55:04Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | DekuLiuTesla | 48,622,392 | MDQ6VXNlcjQ4NjIyMzky | User | false |
huggingface/diffusers | 2,840,137,065 | I_kwDOHa8MBc6pSQ1p | 10,749 | https://github.com/huggingface/diffusers/issues/10749 | https://api.github.com/repos/huggingface/diffusers/issues/10749 | Please add support for GGUF in Lumina2 pipeline | **Is your feature request related to a problem? Please describe.**
GGUF weights are already available; please add support in the pipeline:
https://huggingface.co/calcuis/lumina-gguf/tree/main
**Describe the solution you'd like.**
```
import torch
from diffusers import Lumina2Text2ImgPipeline, Lumina2Transformer2DModel
bfl_repo = "A... | closed | completed | false | 2 | [] | [] | 2025-02-08T16:42:05Z | 2025-02-12T13:24:52Z | 2025-02-12T13:24:50Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | nitinmukesh | 2,102,186 | MDQ6VXNlcjIxMDIxODY= | User | false |
huggingface/diffusers | 1,429,433,488 | I_kwDOHa8MBc5VM2yQ | 1,075 | https://github.com/huggingface/diffusers/issues/1075 | https://api.github.com/repos/huggingface/diffusers/issues/1075 | Using jax inference | How can I generate multiple images from a single prompt when using JAX to accelerate txt2img? | closed | completed | false | 3 | [
"stale"
] | [] | 2022-10-31T08:09:12Z | 2022-11-30T15:34:55Z | 2022-11-30T15:34:54Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | dingjingzhen | 21,244,263 | MDQ6VXNlcjIxMjQ0MjYz | User | false |
huggingface/diffusers | 2,841,251,191 | I_kwDOHa8MBc6pWg13 | 10,752 | https://github.com/huggingface/diffusers/issues/10752 | https://api.github.com/repos/huggingface/diffusers/issues/10752 | Attempting to Unscale FP16 Gradients Bug | ### Describe the bug
Hello, I have the following error when trying to train a LoRA with SDXL:
```
ValueError: Attempting to unscale FP16 gradients.
Traceback (most recent call last):
File "/nfs/horai.dgpsrv/year/zling/diffusers/examples/dreambooth/train_dreambooth_lora_sdxl.py", line 1994, in <module>
main(args... | closed | completed | false | 7 | [
"bug",
"training"
] | [] | 2025-02-10T03:40:48Z | 2025-02-13T01:30:23Z | 2025-02-11T02:48:46Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | iszihan | 36,280,571 | MDQ6VXNlcjM2MjgwNTcx | User | false |
huggingface/diffusers | 2,841,362,583 | I_kwDOHa8MBc6pW8CX | 10,754 | https://github.com/huggingface/diffusers/issues/10754 | https://api.github.com/repos/huggingface/diffusers/issues/10754 | eos_token_id for Textual Inversion | ### Describe the bug
Hi, I implemented textual inversion following this [link](https://huggingface.co/docs/diffusers/v0.32.2/en/training/text_inversion), but I think there is something wrong with `eos_token_id` in stable-diffusion-v1-5 text encoder [config](https://huggingface.co/stable-diffusion-v1-5/stable-diffusio... | open | null | false | 1 | [
"bug",
"stale"
] | [] | 2025-02-10T05:20:05Z | 2025-03-12T15:03:16Z | null | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | solim-i | 198,528,727 | U_kgDOC9VO1w | User | false |
huggingface/diffusers | 2,841,368,415 | I_kwDOHa8MBc6pW9df | 10,755 | https://github.com/huggingface/diffusers/issues/10755 | https://api.github.com/repos/huggingface/diffusers/issues/10755 | Difference in Output When Using PIL.Image vs numpy.array for Image and Mask Input. | hi.
I get different results when providing the image and mask as input using PIL.Image versus numpy.array. Why does this happen?
Is there an issue with my normalization method?
| pillow | array |
|---|---|
| (image) | (image) |
### Open source status
- [X] The model implementation is available
- [X] The model weights are av... | closed | completed | false | 3 | [
"stale"
] | [] | 2022-10-31T08:50:11Z | 2022-12-08T15:02:56Z | 2022-12-08T15:02:56Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | zcswdt | 43,515,926 | MDQ6VXNlcjQzNTE1OTI2 | User | false |
huggingface/diffusers | 2,844,251,034 | I_kwDOHa8MBc6ph9Oa | 10,760 | https://github.com/huggingface/diffusers/issues/10760 | https://api.github.com/repos/huggingface/diffusers/issues/10760 | Import Bug: no attribute OnnxStableDiffusionInpaintPipelineLegacy | ### Describe the bug
I have a script that calls `from diffusers import *` and I get this error.
```
Traceback (most recent call last):
File "/nfs/horai.dgpsrv/year/zling/PairCustomization/evaluation_scripts/evaluate_with_controlnet.py", line 7, in <module>
from modules import *
File "/nfs/horai.dgpsrv/year/zli... | closed | completed | false | 1 | [
"bug"
] | [] | 2025-02-11T04:36:00Z | 2025-02-11T04:50:35Z | 2025-02-11T04:50:34Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | iszihan | 36,280,571 | MDQ6VXNlcjM2MjgwNTcx | User | false |
huggingface/diffusers | 2,844,758,342 | I_kwDOHa8MBc6pj5FG | 10,763 | https://github.com/huggingface/diffusers/issues/10763 | https://api.github.com/repos/huggingface/diffusers/issues/10763 | Bug in pipeline_utils.py with different Python versions | ### Describe the bug
In Python versions earlier than 3.10, generics from the standard **typing** module, such as `List` and `Tuple`, do not have the **__name__** attribute. As a result, an issue occurs in diffusers versions (>=0.32.2).
One way to solve this issue is proposed in https://github.com/huggingface/diffusers/pull/10762
#... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-02-11T09:11:44Z | 2025-02-13T02:44:30Z | 2025-02-13T02:44:28Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | rebel-kblee | 119,555,851 | U_kgDOByBHCw | User | false |
huggingface/diffusers | 2,845,838,830 | I_kwDOHa8MBc6poA3u | 10,767 | https://github.com/huggingface/diffusers/issues/10767 | https://api.github.com/repos/huggingface/diffusers/issues/10767 | FlashVideo <> Diffusers | ### Model/Pipeline/Scheduler description
There is a brilliant new two-stage video model released under the MIT license: FlashVideo.
Would be amazing to see this in Diffusers!
https://github.com/user-attachments/assets/b53e2a39-3127-4027-a1b2-9f435d20da60
The left panel is the Stage I video (240p) and the right pane... | open | null | false | 4 | [
"stale"
] | [] | 2025-02-11T16:11:55Z | 2025-03-16T15:03:00Z | null | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | ghunkins | 12,562,057 | MDQ6VXNlcjEyNTYyMDU3 | User | false |
huggingface/diffusers | 2,845,868,433 | I_kwDOHa8MBc6poIGR | 10,768 | https://github.com/huggingface/diffusers/issues/10768 | https://api.github.com/repos/huggingface/diffusers/issues/10768 | Hunyuan video does not support negative prompt? | The official repo already supports a negative prompt in inference, https://github.com/Tencent/HunyuanVideo/blob/main/hyvideo/inference.py#L505,
does diffusers have any plans to add this feature? | closed | completed | false | 6 | [] | [] | 2025-02-11T16:22:35Z | 2025-02-21T01:18:16Z | 2025-02-21T01:18:16Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | trouble-maker007 | 73,164,596 | MDQ6VXNlcjczMTY0NTk2 | User | false |
huggingface/diffusers | 2,846,690,059 | I_kwDOHa8MBc6prQsL | 10,772 | https://github.com/huggingface/diffusers/issues/10772 | https://api.github.com/repos/huggingface/diffusers/issues/10772 | Sana Controlnet Support | **Is your feature request related to a problem? Please describe.**
The first controlnet for Sana has appeared, so the feature is to add the sana controlnet to the diffusers pipeline https://github.com/NVlabs/Sana/blob/main/asset/docs/sana_controlnet.md
**Describe the solution you'd like.**
Be able to use the sana cont... | closed | completed | false | 5 | [
"help wanted",
"Good second issue",
"contributions-welcome",
"roadmap"
] | [] | 2025-02-11T22:39:10Z | 2025-04-13T13:49:40Z | 2025-04-13T13:49:40Z | NONE | null | 20260407T133413Z | 2026-04-07T13:34:13Z | jloveric | 5,200,721 | MDQ6VXNlcjUyMDA3MjE= | User | false |
huggingface/diffusers | 2,847,026,902 | I_kwDOHa8MBc6psi7W | 10,775 | https://github.com/huggingface/diffusers/issues/10775 | https://api.github.com/repos/huggingface/diffusers/issues/10775 | Support multiple IP adapter in Flux | When I pass the weights in the form of [0.4, 0.4], it tells me "Expected list of 19 scales, got 2."
pipe.set_ip_adapter_scale([0.4, 0.4]) | closed | completed | false | 6 | [
"roadmap"
] | [
"hlky"
] | 2025-02-12T02:38:58Z | 2025-02-25T09:51:17Z | 2025-02-25T09:51:17Z | CONTRIBUTOR | null | 20260407T133413Z | 2026-04-07T13:34:13Z | Honey-666 | 76,274,942 | MDQ6VXNlcjc2Mjc0OTQy | User | false |