repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 ⌀ | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 1,275 | How to use @xenova/transformers in a musl-based environment? | ### Question
Hi,
I encountered the following error when using @xenova/transformers:
```bash
Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /app/node_modules/onnxruntime-node/bin/napi-v3/linux/x64//libonnxruntime.so.1.14.0)
```
After investigating the issue, I found that it was caused by using the Node Alpine Docker image.
(https://github.com/huggingface/transformers.js/issues/555)
(https://github.com/huggingface/transformers.js/issues/376)
Since Alpine Linux uses musl as its standard C library, and @xenova/transformers depends on onnxruntime-node (which is built against glibc), this incompatibility appears to be the root cause.
I confirmed this by switching to the node:slim image (which uses glibc), and the error was resolved.
However, I would really like to use @xenova/transformers in a musl-based environment (e.g., Alpine).
Is there currently any way to run it on Alpine using musl?
If not, are there any plans to support musl or an alternative backend (e.g., onnxruntime-web with WASM) in Node.js?
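One possible workaround (an untested assumption on my part, not an official recommendation) would be Alpine's glibc compatibility shim, along these lines:

```dockerfile
# Untested sketch: add the glibc compatibility layer and C++ runtime that
# onnxruntime-node's prebuilt binary links against, on top of node:alpine.
FROM node:20-alpine
RUN apk add --no-cache gcompat libstdc++
```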
Thanks in advance! | https://github.com/huggingface/transformers.js/issues/1275 | closed | [
"question"
] | 2025-04-07T06:34:51Z | 2025-10-07T21:23:36Z | null | ezcolin2 |
huggingface/open-r1 | 583 | num_iterations in GRPOConfig does NOT DO what it is supposed to DO | Hi @qgallouedec and @lewtun
Thanks again for the amazing work! I got the chance to try the v0.16.0 trl release in open-r1.
I was excited about `num_iterations`, which was supposed to make the training 6 times faster. One simply needs something like:
```python
training_args = GRPOConfig(..., num_iterations=4)
```
But I did not see this happen. Using this simple recipe, it takes 58 steps and about 3 hours and 30 minutes to train the model on 8 A100 GPUs with `num_iterations=1`. But increasing it to `num_iterations=4` linearly increases the number of steps to 232 and increases the training time to 4 hours and 20 minutes under the same exact setup.
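For what it's worth, the numbers above are consistent with the step count simply scaling linearly with `num_iterations` (my reading of the semantics, which may be wrong): each sampled batch is reused `num_iterations` times for optimization, so optimizer steps multiply while generation passes stay fixed.

```python
# Assumed semantics (may be wrong): every generated batch is optimized over
# num_iterations times, so total optimizer steps scale linearly while the
# number of (expensive) generation passes stays constant.
def total_optimizer_steps(generation_batches: int, num_iterations: int) -> int:
    return generation_batches * num_iterations

print(total_optimizer_steps(58, 1))  # 58
print(total_optimizer_steps(58, 4))  # 232 -- exactly the step count observed
```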
Am I missing something here? Are we not supposed to re-use the generated data across multiple steps? Then why has the training time increased? | https://github.com/huggingface/open-r1/issues/583 | closed | [] | 2025-04-06T15:57:43Z | 2025-04-12T06:00:21Z | null | ahatamiz |
huggingface/agents-course | 412 | [QUESTION] - Dummy Agent Library | _---
Do you see the issue?
The answer was hallucinated by the model. We need to stop to actually execute the function! Let’s now stop on “Observation” so that we don’t hallucinate the actual function response.
---_
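To make the quoted point concrete, here is a minimal sketch (illustrative strings only, not the course's actual code) of a completion where the model invents the Observation itself, and of the truncation that prevents it:

```python
# Without a stop sequence, the LLM keeps generating and writes a plausible
# but invented tool result after "Observation:" -- that is the hallucination.
completion = (
    'Action: {"tool": "get_weather", "args": {"city": "Paris"}}\n'
    "Observation: It is sunny and 25 degrees."  # invented by the model!
)

# Stop-and-parse: cut the text at "Observation:" so the *real* tool output
# can be executed and appended instead.
kept = completion.split("Observation:")[0] + "Observation:"
print(kept)
```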
Can someone explain how the system is hallucinating in this example? I am kind of stuck on this. | https://github.com/huggingface/agents-course/issues/412 | open | [
"question"
] | 2025-04-06T09:44:14Z | 2025-04-06T09:44:14Z | null | NewTonDBA |
huggingface/lerobot | 940 | Possible mismatch in observations.state metadata in Libero datasets on Hugging Face | Hello,
I believe there might be a mistake in the Libero datasets hosted on huggingface/datasets.
Specifically, the issue is with the `observations.state` column. According to `meta/info.json`, the structure is described as:
```
"observation.state": {
"dtype": "float32",
"shape": [
8
],
"names": {
"motors": [
"x",
"y",
"z",
"rx",
"ry",
"rz",
"rw",
"gripper"
]
}
}
```
However, when I check the values in the `observations.state` column, the last two values appear to be negatives of each other. It seems like those two values are `robot0_gripper_qpos` from the environment observations. When I compare them with the observations from the environment, the first three values in the column are `robot0_eef_pos`, and the next three seem to be `robot0_eef_quat` (rx, ry, rz, rw) converted to an axis-angle representation.
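For reference, here is the plain-stdlib conversion I used to compare the quaternion against the values in the column (hypothetical helper, not LeRobot code):

```python
import math

# Convert a unit quaternion (rx, ry, rz, rw) to its axis-angle vector
# (axis scaled by rotation angle) for comparison with observations.state.
def quat_to_axis_angle(rx, ry, rz, rw):
    rw = max(-1.0, min(1.0, rw))
    angle = 2.0 * math.acos(rw)
    s = math.sqrt(max(1.0 - rw * rw, 1e-12))  # guard against rw ~ +/-1
    return tuple(c / s * angle for c in (rx, ry, rz))

# A 90-degree rotation about z should give approximately (0, 0, pi/2):
print(quat_to_axis_angle(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
```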
Could you please clarify or confirm whether this is an intended design or a labeling error?
Thanks for your work on LeRobot datasets! | https://github.com/huggingface/lerobot/issues/940 | closed | [
"question",
"dataset",
"stale"
] | 2025-04-06T04:18:55Z | 2025-10-19T02:32:09Z | null | ozgraslan |
huggingface/diffusers | 11,208 | MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline | ### Describe the bug
When using `StableDiffusion3ControlNetInpaintingPipeline` with `SD3MultiControlNetModel`, I receive an error:
`NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.`
### Reproduction
Example reproduction code:
```python
import os
import torch
from diffusers.utils import load_image
from diffusers.pipelines import StableDiffusion3ControlNetInpaintingPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel
from transformers import T5EncoderModel
# Load images
image = load_image(
"https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog.png"
)
mask = load_image(
"https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog_mask.png"
)
# Initialize ControlNet models
controlnetA = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Pose")
controlnetB = SD3ControlNetModel.from_pretrained("alimama-creative/SD3-Controlnet-Inpainting", use_safetensors=True, extra_conditioning_channels=1)
controlnet = SD3MultiControlNetModel([controlnetA, controlnetB])
# Load transformer and text encoder
nf4_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
model_id = "stabilityai/stable-diffusion-3.5-large-turbo"
model_nf4 = SD3Transformer2DModel.from_pretrained(model_id, subfolder="transformer", quantization_config=nf4_config, torch_dtype=torch.bfloat16)
t5_nf4 = T5EncoderModel.from_pretrained("diffusers/t5-nf4", torch_dtype=torch.bfloat16)
# Initialize pipeline
pipe = StableDiffusion3ControlNetInpaintingPipeline.from_pretrained(
"stabilityai/stable-diffusion-3.5-large-turbo",
token=os.getenv("HF_TOKEN"),
controlnet=controlnet,
transformer=model_nf4,
text_encoder_3=t5_nf4,
torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
# This fails with NotImplementedError
result_image = pipe(
prompt="a cute dog with a hat",
negative_prompt="low quality, bad anatomy",
control_image=[image, image],
num_inference_steps=30,
guidance_scale=7.5,
controlnet_conditioning_scale=[1.0, 1.0],
output_type="pil",
).images[0]
```
### Logs
```shell
Error
NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.
Error occurs in `diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py` at line 1026. *Full error code*:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[1], line 41
38 pipe.enable_model_cpu_offload()
40 # This fails with NotImplementedError
---> 41 result_image = pipe(
42 prompt="a cute dog with a hat",
43 negative_prompt="low quality, bad anatomy",
44 control_image=[image, image],
45 num_inference_steps=30,
46 guidance_scale=7.5,
47 controlnet_conditioning_scale=[1.0, 1.0],
48 output_type="pil",
49 ).images[0]
File ~/miniconda3/envs/bnb310/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File ~/miniconda3/envs/bnb310/lib/python3.10/site-packages/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py:1026, in StableDiffusion3ControlNetInpaintingPipeline.__call__(self, prompt, prompt_2, prompt_3, height, width, num_inference_steps, sigmas, guidance_scale, control_guidance_start, control_guidance_end, control_image, control_mask, controlnet_conditioning_scale, controlnet_pooled_projections, negative_prompt, negative_prompt_2, negative_prompt_3, num_images_per_prompt, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, output_type, return_dict, joint_attention_kwargs, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, max_sequence_length)
1023 width = latent_width * self.vae_scale_factor
1025 elif isinstance(self.controlnet, SD3MultiControlNetModel):
-> 1026 raise NotImplementedError("MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.")
1027 else:
1028 assert False
NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.
Expected Behavior
I expect `StableDiffusion3ControlNetInpaintingPipeline` to support `SD3MultiControlNetModel`
```
### System Info
Versions
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0]
PyTorch version: 2.2.0+cu118
CUDA version: 11.8
Diffusers version: 0.32.2
Transformers version: 4.50.3
Accelerate version: 1.7.0.dev0
### Who can help?
@yiyixuxu @sayakpaul | https://github.com/huggingface/diffusers/issues/11208 | open | [
"bug",
"help wanted",
"Good Example PR",
"contributions-welcome"
] | 2025-04-04T12:39:10Z | 2025-05-11T15:03:00Z | 5 | DanilaAniva |
huggingface/sentence-transformers | 3,308 | How to load locally saved transformer models into sentence transformer? | I’ve made some modifications to the NVEMBEDV2 model architecture and saved the updated version locally using `model.save_pretrained()`. However, when I try to wrap the saved model in a SentenceTransformer, I encounter a `KeyError: 'NVEmbedConfig'`.
I checked the documentation, and while loading pretrained models seems straightforward, I’m unsure how to handle models with a custom configuration and type. Is there a guide on how to properly load and integrate a locally modified transformer model into SentenceTransformer?
I'm attaching a simple notebook for reproducibility and also the error. Thanks!
[issue.ipynb.txt](https://github.com/user-attachments/files/19589812/issue.ipynb.txt)
[requirements.txt](https://github.com/user-attachments/files/19589811/requirements.txt) | https://github.com/huggingface/sentence-transformers/issues/3308 | open | [] | 2025-04-03T15:11:20Z | 2025-04-08T15:48:26Z | null | samehkhattab |
huggingface/datasets | 7,497 | How to convert videos to images? | ### Feature request
Does anyone know how to extract the images from the videos?
### Motivation
I am trying to use openpi (https://github.com/Physical-Intelligence/openpi) to finetune on my LeRobot dataset (v2.0 and v2.1). I find that although both are labeled v2.0, they are different. It seems like LeRobot v2.0 has two variants: in one, the data files include the image info directly, and in the other, the data and the videos are stored separately.
Does anyone know how to extract the images from the videos?
| https://github.com/huggingface/datasets/issues/7497 | open | [
"enhancement"
] | 2025-04-03T07:08:39Z | 2025-04-15T12:35:15Z | null | Loki-Lu |
huggingface/blog | 2,781 | How to submit revised version of Arxiv paper (v2) to Daily Papers | I would like to submit a revised version (v2) of our arXiv paper to Daily Papers, but the original submission (v1) was uploaded too long ago, so it's not eligible through the regular submission form.
However, this v2 version was recently accepted to CVPR 2025, and it is a completely different paper compared to v1, both in content and contributions. It is based on a completely new idea and contains significant updates and improvements over the original version.
Is there any way we can submit this revised version (v2) to Daily Papers? | https://github.com/huggingface/blog/issues/2781 | closed | [] | 2025-04-02T09:20:30Z | 2025-11-03T15:22:36Z | null | eveningglow |
huggingface/lerobot | 927 | How to train a model for VLN? | ### System Info
```Shell
To control four-legged dogs.
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
rt
### Expected behavior
tret | https://github.com/huggingface/lerobot/issues/927 | closed | [
"question"
] | 2025-04-01T13:26:20Z | 2025-04-01T15:50:04Z | null | lucasjinreal |
huggingface/agents-course | 391 | [QUESTION] UNIT-3 not yet published ? | <img width="1440" alt="Image" src="https://github.com/user-attachments/assets/aa8ed881-f998-4c63-805f-8af936d630c5" /> | https://github.com/huggingface/agents-course/issues/391 | closed | [
"question"
] | 2025-04-01T11:24:07Z | 2025-04-30T04:50:26Z | null | ynareshkalyan21 |
huggingface/hub-docs | 1,664 | Page: "how to be registered as a provider"? | https://github.com/huggingface/hub-docs/issues/1664 | closed | [] | 2025-04-01T10:55:01Z | 2025-04-03T13:03:26Z | null | hanouticelina | |
huggingface/lerobot | 926 | [Question] Deploy leRobot for a delta kinematic | Bonjour everyone,
I'm currently working on the development of an **open source delta robot** via ROS.
I'm wondering if any of you have pointers to help me integrate the LeRobot ACT algorithm with the custom kinematics of my delta.
At the moment, the inverse kinematics is handled by a Marlin CNC firmware (on an Arduino Mega), so we communicate via G-code, but we are considering moving to micro-ROS for direct angular control of the stepper motors and better ROS integration.
| https://github.com/huggingface/lerobot/issues/926 | closed | [
"question"
] | 2025-04-01T09:46:29Z | 2025-04-28T10:57:31Z | null | man0n0n0 |
huggingface/optimum | 2,220 | optimum-cli diffusion policy model issue | ### System Info
```shell
Hi,
Trying to export a diffusion policy model to ONNX format. From the error message and the printed list of model types, it looks like the “diffusion” model type cannot be exported to ONNX.
Is there a way to get around this?
optimum-cli export onnx --model lerobot/diffusion_pusht --task reinforcement-learning /onnx/
Traceback (most recent call last):
File "/optimum-cli", line 8, in
sys.exit(main())
File "/python3.10/site-packages/optimum/commands/optimum_cli.py", line 208, in main
service.run()
File "/python3.10/site-packages/optimum/commands/export/onnx.py", line 265, in run
main_export(
File "/python3.10/site-packages/optimum/exporters/onnx/main.py", line 272, in main_export
config = AutoConfig.from_pretrained(
File "/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1008, in from_pretrained
raise ValueError(
ValueError: Unrecognized model in lerobot/diffusion_pusht. Should have a model_type key in its config.json, or contain one of the following strings in its name:
Model type form config.json:
"type": "diffusion"
Supported Models:
albert, align, altclip, audio-spectrogram-transformer, autoformer, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, git, glpn, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, graphormer, grounding-dino, groupvit, hiera, hubert, ibert, idefics, idefics2, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llava, llava-next-video, llava_next, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mistral, mixtral, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, pix2struct, plbart, poolformer, pop2piano, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rwkv, sam, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, 
siglip, siglip_vision_model, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, time_series_transformer, timesformer, timm_backbone, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zoedepth
Thanks
To reproduce
Download model from HF
Use optimum-cli to export the model
Platform
Linux
OS Version
Ubuntu 22.04.4 LTS
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.21.0
ONNX Runtime API
Python
Architecture
ARM64
Execution Provider
CUDA
Execution Provider Library Version
12.4
```
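To spell out what the traceback is saying (grounded only in the error text above): `AutoConfig.from_pretrained` dispatches on a `model_type` key, but the LeRobot checkpoint's config.json carries a `type` key instead, so transformers cannot resolve a config class.

```python
# Shape of the shipped config, per the error message ("type": "diffusion"),
# versus the key AutoConfig actually looks for.
lerobot_config = {"type": "diffusion"}
print("model_type" in lerobot_config)  # False -> the ValueError above
```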
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
To reproduce
Download model from HF
Use optimum-cli to export the model
### Expected behavior
onnx export to succeed | https://github.com/huggingface/optimum/issues/2220 | closed | [
"bug"
] | 2025-04-01T04:59:53Z | 2025-06-11T13:57:20Z | 1 | kraza8 |
huggingface/lerobot | 923 | Cannot install Lerobot | I am getting an error while the installation is building the `av` wheel. It does not get past this part of the installation. | https://github.com/huggingface/lerobot/issues/923 | closed | [
"documentation",
"question",
"dependencies"
] | 2025-03-31T18:26:16Z | 2025-07-03T01:32:17Z | null | Prasit7 |
huggingface/open-r1 | 564 | How to evaluate pass@16 for aime 2024 benchmark? | https://github.com/huggingface/open-r1/issues/564 | open | [] | 2025-03-31T09:27:02Z | 2025-03-31T09:27:02Z | null | Cppowboy | |
huggingface/diffusers | 11,176 | How to use attention_mask and encoder_attention_mask or apply prompts to specific areas in the image? | Hi, I'm aware of the attention_mask and encoder_attention_mask that exist in the forward function of the UNet2DConditionModel, yet there are no examples of how to use them.
I would appreciate some help on that, thank you in advance
@patrickvonplaten @Birch-san | https://github.com/huggingface/diffusers/issues/11176 | open | [
"stale"
] | 2025-03-30T16:56:40Z | 2025-04-30T15:03:34Z | null | alexblattner |
huggingface/lerobot | 920 | [Question] How to convert dataset locally | I've noticed that `convert_dataset_v20_to_v21.py` converts LeRobot datasets from v2.0 to v2.1 that have already been pushed to the hub. But is there a script for doing this with a local dataset? | https://github.com/huggingface/lerobot/issues/920 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-30T13:32:50Z | 2025-10-13T02:30:26Z | null | Frozenkiddo |
huggingface/lerobot | 919 | [Question] Why does "action" exist? | I am a beginner and I am very confused about this. What I understand is that during my entire operation, data is sampled at fixed time intervals, like a signal being recorded. I only have observations, so what does `action` mean? Many datasets in the project have a column titled `action`. Moreover, according to the project's documentation, `action` means the target of the movement. However, this target never seems to match the values in the observation; it looks like the robot never reaches its target. I am completely confused. | https://github.com/huggingface/lerobot/issues/919 | closed | [
"question"
] | 2025-03-30T10:45:57Z | 2025-03-31T07:50:19Z | null | ipc-robot |
huggingface/trl | 3,179 | How to resume from the last checkpoint? | I want to continue training from the last checkpoint. How should I do it? I set resume_from_checkpoint=True in the GRPOConfig, but based on the output, it seems to start training from the first step. Do I also need to change the model to the checkpoint path? | https://github.com/huggingface/trl/issues/3179 | closed | [
"❓ question",
"🏋 GRPO"
] | 2025-03-30T02:30:47Z | 2025-03-30T04:35:58Z | null | Tuziking |
huggingface/diffusers | 11,168 | Sage Attention for diffuser library | **Is your feature request related to a problem? No
**Describe the solution you'd like.**
Incorporate a way to add sage attention to the diffusers library: Flux pipeline, Wan pipeline, etc.
**Describe alternatives you've considered.**
None
**Additional context.**
When I incorporated sage attention in the flux pipeline (text to image) I achieved a 16% speed advantage vs no sage attention.
My environment was the same save for including / excluding sage attention in my 4 image benchmark creation.
How to incorporate sage attention? We must consider that this only applies to the Transformer. With this in mind I did the following to the FluxPipeline. Obviously there must be a way to do this via a variable of sorts so that we may/may not run it:
Need some kind of indicator to decide whether to include or not! This must be done before the denoising step in the model pipeline.
```python
import torch.nn.functional as F

sage_function = False
try:
    from sageattention import sageattn
    self.transformer.scaled_dot_product_attention = F.scaled_dot_product_attention = sageattn
    sage_function = True
except ImportError:
    pass

# 6. Denoising loop
with self.progress_bar(total=num_inference_steps) as progress_bar:
    for i, t in enumerate(timesteps):
        if self.interrupt:
            continue
```
After the denoising step we must remove sage attention, or else we get a VAE error because Sage Attention only accepts torch.float16 or torch.bfloat16 dtypes, which the VAE doesn't use:
```python
if output_type == "latent":
    image = latents
else:
    if sage_function:
        self.transformer.scaled_dot_product_attention = F.scaled_dot_product_attention = torch._C._nn.scaled_dot_product_attention
```
Hopefully this helps.
| https://github.com/huggingface/diffusers/issues/11168 | open | [
"wip"
] | 2025-03-28T20:39:30Z | 2025-06-23T05:59:27Z | 12 | ukaprch |
huggingface/agents-course | 381 | [QUESTION] LLM or Agent? | In the tutorial, a lot of the content leads to a wrong concept of LLMs vs. Agents.
```
The Stop and Parse Approach
One key method for implementing actions is the stop and parse approach. This method ensures that the agent’s output is structured and predictable:
Generation in a Structured Format:
The agent outputs its intended action in a clear, predetermined format (JSON or code).
Halting Further Generation:
Once the action is complete, the agent stops generating additional tokens. This prevents extra or erroneous output.
Parsing the Output:
An external parser reads the formatted action, determines which Tool to call, and extracts the required parameters.
For example, an agent needing to check the weather might output:
```
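For context, my understanding of the "parsing the output" step quoted above is something like this (illustrative sketch, not the course's code): the LLM only produces the structured text, and an external parser does the rest.

```python
import json

# The model's output stops after the structured action; everything below is
# ordinary program code, not the LLM.
action_text = '{"action": "get_weather", "action_input": {"location": "London"}}'

parsed = json.loads(action_text)    # external parser reads the action
tool_name = parsed["action"]        # decides which Tool to call
tool_args = parsed["action_input"]  # extracts the required parameters
print(tool_name, tool_args)
```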
Can the agent itself output this, or does the author mean the LLM? | https://github.com/huggingface/agents-course/issues/381 | closed | [
"question"
] | 2025-03-28T15:36:45Z | 2025-04-30T04:50:54Z | null | joshhu |
huggingface/lerobot | 912 | [Question] When will MultiLeRobotDataset be available? | Hello, the MultiLeRobotDataset is very useful for training on large amounts of data; without it, training complex tasks would be difficult. However, I noticed that after the Simplify configs (#550) commit on January 31st, MultiLeRobotDataset has been marked as unavailable (raise NotImplementedError("The MultiLeRobotDataset isn't supported for now.")). Could you please let me know approximately when this functionality will be restored, or why it has been made unavailable?
| https://github.com/huggingface/lerobot/issues/912 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-28T09:16:06Z | 2025-10-22T02:30:53Z | null | Vacuame |
huggingface/agents-course | 380 | [QUESTION] Question on using HuggingFace space | First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer you can ask here, please **be specific**.
I am taking the AI Agents course now.
I am having trouble using a Hugging Face Space.
I am studying this course at my company, so I had to open ports in the firewall.
I opened these ports (80, 443, 8080), referring to the following guide:
(https://huggingface.co/docs/hub/en/spaces-overview)
But my Edge browser window cannot display anything.
Is there anything I'm missing?
Thank you for opening this course.

| https://github.com/huggingface/agents-course/issues/380 | closed | [
"question"
] | 2025-03-28T08:28:23Z | 2025-04-30T04:47:14Z | null | kjh0303 |
huggingface/Math-Verify | 47 | Question: How to configure `verify` for strict multi-part answer checking? | Hi Math-Verify Team,
I'm currently using `math-verify` for evaluating LLM outputs, specifically for questions that might require multiple answers (e.g., "Find all X...").
I've observed that the `verify` function in `grader.py`, which seems to use logic similar to `any(product(gold, target))`, can return `True` even if the prediction only contains a subset of the required answers.
**Example Observation:**
In my setup:
* Ground Truth: `"1331 and 1728"` (appears to parse into something like `[1331, 1728]`)
* Prediction: `"1728"` (parses to `[1728]`)
* Result: `verify` returns `True`.
While this makes sense if checking for *any* overlap, it seems too lenient for "find all" type questions where an exact match of all required elements is needed. This can lead to inflated scores or misleading reward signals in my use case.
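Concretely, the stricter behavior I'm after would be something like the sketch below (outside `math-verify`, with plain equality standing in for mathematical equivalence):

```python
# Strict multi-part check: every gold answer must be matched by a distinct
# predicted answer, and vice versa (a set-style comparison with a pluggable
# equivalence test instead of `any(...)` over the product).
def strict_verify(gold, pred, equiv=lambda a, b: a == b):
    if len(gold) != len(pred):
        return False
    used = set()
    for g in gold:
        match = next((i for i, p in enumerate(pred)
                      if i not in used and equiv(g, p)), None)
        if match is None:
            return False
        used.add(match)
    return True

print(strict_verify([1331, 1728], [1728]))        # False -- a subset is not enough
print(strict_verify([1331, 1728], [1728, 1331]))  # True -- order-insensitive
```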
**Question:**
Is there an existing configuration option or a recommended way within `math-verify` (perhaps via specific `ExtractionConfig` settings or ground truth formatting) to enforce a stricter check? Specifically, I'd like to verify if the *set* of predicted answers exactly matches the *set* of ground truth answers (considering mathematical equivalence).
Or is the current behavior the intended default, and handling stricter set-based validation would require custom logic outside `verify` or modifications to the library?
Any clarification or guidance on the best practice for achieving strict multi-part answer verification with `math-verify` would be greatly appreciated!
Thanks! | https://github.com/huggingface/Math-Verify/issues/47 | closed | [] | 2025-03-27T16:54:52Z | 2025-07-01T19:31:51Z | null | TweedBeetle |
huggingface/transformers.js | 1,259 | 3.2.4 has wrong env check in transformers.web.js | ### Question
## Background
I have developed a Chrome extension following the [example](https://github.com/huggingface/transformers.js/tree/main/examples/extension). The example used the package @xenova/transformers.
## Motivation
It seems that multithreading works now. [Issue](https://github.com/huggingface/transformers.js/issues/928) [Issue2](https://github.com/huggingface/transformers.js/issues/882)
## Question
I changed the package from **@xenova/transformers@2.17.2** to **@huggingface/transformers@3.4.1**. It shows an error, **TypeError: sharp__WEBPACK_IMPORTED_MODULE_4__ is not a function**, which had not been shown before. Can anyone help?
## Code (background.js)
```
// import { pipeline, env } from '@xenova/transformers';
// env.localModelPath = './';
// env.allowRemoteModels = false;
// env.backends.onnx.wasm.numThreads = 1;
import { env, pipeline } from '@huggingface/transformers';
env.localModelPath = './';
class ImagePipelineSingleton {
static task = 'image-classification';
static model = '/deepfake/';
static instance = null;
static async getInstance() {
try {
if (this.instance === null) {
this.instance = await pipeline(this.task, this.model);
}
} catch (error) {
console.error("Initialization error:", error);
}
return this.instance;
}
}
...
try{
let model = await ImagePipelineSingleton.getInstance();
let classification = await model(url);
}catch (error) {
console.error("image processing error:", error); //error here
}
...
```
## Folder Structure
- deepfake
- onnx
- model_quantized.onnx | https://github.com/huggingface/transformers.js/issues/1259 | closed | [
"question"
] | 2025-03-27T07:35:23Z | 2025-07-02T04:45:26Z | null | sanixa |
huggingface/datasets | 7,480 | HF_DATASETS_CACHE ignored? | ### Describe the bug
I'm struggling to get things to respect HF_DATASETS_CACHE.
Rationale: I'm on a system that uses NFS for the home directory, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead of honoring HF_DATASETS_CACHE, the download seems to rely mostly on HF_HUB_CACHE.
Current version: 3.2.1dev. In the process of testing 3.4.0
### Steps to reproduce the bug
[Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]
dump.py:
```python
from datasets import load_dataset
dataset = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train")
```
Repro steps
```bash
# ensure no cache
$ mv ~/.cache/huggingface ~/.cache/huggingface.bak
$ export HF_DATASETS_CACHE=/tmp/roller/datasets
$ rm -rf ${HF_DATASETS_CACHE}
$ env | grep HF | grep -v TOKEN
HF_DATASETS_CACHE=/tmp/roller/datasets
$ python dump.py
# (omitted for brevity)
# (while downloading)
$ du -hcs ~/.cache/huggingface/hub
18G hub
18G total
# (after downloading)
$ du -hcs ~/.cache/huggingface/hub
```
It's a shame because datasets supports s3 (which I could really use right now) but hub does not.
### Expected behavior
* ~/.cache/huggingface/hub stays empty
* /tmp/roller/datasets becomes full of stuff
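For completeness, my current workaround attempt (assuming, as I understand it, that hub downloads honor `HF_HUB_CACHE`/`HF_HOME` rather than `HF_DATASETS_CACHE`):

```shell
# Redirect the hub cache itself to local disk, since that is where the
# parquet files are actually being written.
export HF_HOME=/tmp/roller/hf
export HF_HUB_CACHE=/tmp/roller/hf/hub
echo "$HF_HUB_CACHE"
```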
### Environment info
[Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results] | https://github.com/huggingface/datasets/issues/7480 | open | [] | 2025-03-26T17:19:34Z | 2025-10-23T15:59:18Z | 8 | stephenroller |
huggingface/transformers.js | 1,258 | Tokenizer encode and decode get different token ids and text, missing word_ids | ### Question
```js
import { AutoTokenizer } from '@huggingface/transformers';
const tokenizer = await AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1')
console.log(tokenizer.encode(" e.g., ♩"))
console.log(tokenizer.decode([105]))
console.log(tokenizer.encode("♩"))
```
```
[ 312, 3588, 1042, 30717, 105 ]
�
[ 21315, 105 ]
```
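For what it's worth, this looks consistent with byte-level tokenization rather than a bug: "♩" is three UTF-8 bytes, so a byte-level BPE vocabulary can split it across tokens, and decoding token 105 alone yields an incomplete byte sequence (hence the "�"). A quick plain-Node check (no transformers.js involved):

```javascript
// "♩" (U+2669) encodes to 3 UTF-8 bytes, so a byte-level BPE vocabulary can
// legitimately split one visible character across multiple tokens.
const bytes = new TextEncoder().encode("♩");
console.log(bytes.length); // 3

// Decoding only part of the byte sequence yields the replacement character,
// which matches the lone "�" printed for token 105 above.
console.log(new TextDecoder().decode(bytes.slice(0, 2))); // "�"
```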
How do I encode the words, loop over them, and return each as a single token?
Right now ♩ is returning 2 tokens, which is confusing.
So is this a bug or something?
I guess I need word_ids? | https://github.com/huggingface/transformers.js/issues/1258 | closed | [
"question"
] | 2025-03-26T10:44:12Z | 2025-03-31T20:18:45Z | null | liho00 |
huggingface/lerobot | 905 | Supporting selection of obs and action keys in dataset | Hi all, thanks a lot for the framework.
Currently, it seems the LeRobotDataset format requires users to have a fixed state/environment state/images or actions defined in their dataset. However, this means that for multiple similar applications, the user has to record different datasets with different state or action definitions.
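As a stopgap, I've been considering a thin wrapper along these lines (hypothetical sketch, not LeRobot API), which filters each returned item down to the chosen keys without re-recording the dataset:

```python
# Hypothetical wrapper: keep only a selected subset of keys from each item
# of an underlying map-style dataset.
class KeyFilteredDataset:
    def __init__(self, dataset, keys):
        self.dataset = dataset
        self.keys = set(keys)

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        item = self.dataset[idx]
        return {k: v for k, v in item.items() if k in self.keys}

# Toy usage with a plain list of dicts standing in for the real dataset:
toy = [{"observation.state": [0.1], "action": [0.2], "extra": 1}]
filtered = KeyFilteredDataset(toy, ["observation.state", "action"])
print(filtered[0])  # {'observation.state': [0.1], 'action': [0.2]}
```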
Is it possible to select certain keys from the state or actions similar to how we can do in robomimic?
https://github.com/ARISE-Initiative/robomimic/blob/master/robomimic/config/default_templates/bc_transformer.json#L107-L113 | https://github.com/huggingface/lerobot/issues/905 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-26T08:12:10Z | 2025-10-10T02:27:27Z | null | Mayankm96 |
huggingface/chat-ui | 1,772 | USE_LOCAL_WEBSEARCH No results found for this search query | ## Bug description
With `USE_LOCAL_WEBSEARCH=true`, Web Search always reports _No results found for this search query_.
## Steps to reproduce
- enable search
- enter and submit question
## Screenshots
<img width="488" alt="Image" src="https://github.com/user-attachments/assets/b948b629-ff67-4edb-9f7c-25ca9d3d1325" />
## Context
I'm running chat-ui-db using podman on an M1 Macbook. I'm using LM Studio as the model provider.
`podman run --rm --mount type=bind,source="$(pwd)/.env.local",target=/app/.env.local -v chat-ui:/data -p 3000:3000 ghcr.io/huggingface/chat-ui-db`
### Logs
<!-- Add any logs that are relevant to your issue. Could be browser or server logs. Wrap in code blocks. -->
```
{"level":50,"time":1742937489975,"pid":18,"hostname":"bbd76a6649ad","msg":"No results found for this search query"}
```
### Specs
- **OS**: macOS 15.3.1 (24D70)
- **Browser**: Firefox 136.0.2 (aarch64)
- **chat-ui commit**: ghcr.io/huggingface/chat-ui-db f679ed220b9b
### Config
_.env.local_
```
HF_TOKEN=hf_...
MODELS=`[
{
"name": "LM Studio",
"endpoints": [{
"type" : "openai",
"baseURL": "http://host.docker.internal:1234/v1"
}],
},
]`
USE_LOCAL_WEBSEARCH=true
WEBSEARCH_JAVASCRIPT=true
``` | https://github.com/huggingface/chat-ui/issues/1772 | open | [
"bug",
"help wanted",
"websearch"
] | 2025-03-25T21:28:11Z | 2025-10-22T21:13:54Z | 6 | brechtm |
huggingface/chat-ui | 1,771 | Client disconnects before response is received | ## Bug description
If an answer takes several minutes to complete, the chat-ui client simply disconnects. The disconnection seems to happen at the 1-minute mark, but I'm not certain.
## Steps to reproduce
Ask your LLM a riddle but change it a little, so it becomes confused and thinks for a while:
A man and a goat are on one side of a river with a boat. How do they get across?
Notice that the response is terminated during the thinking/reasoning phase.
The LM Studio logs indicate that the client disconnects, so it terminates the response at that point.
## Screenshots
## Context
### Logs
<!-- Add any logs that are relevant to your issue. Could be browser or server logs. Wrap in code blocks. -->
This request is terminated at 1 min in the browser.
```
curl 'https://example.com/conversation/67e1af3d9becaf215b19d526' \
-X 'POST' \
-H 'Content-Type: multipart/form-data; boundary=----WebKitFormBoundarywFDiAu9glkYBEPBf' \
-H 'Accept: */*' \
--data-binary $'------WebKitFormBoundarywFDiAu9glkYBEPBf\r\nContent-Disposition: form-data; name="data"\r\n\r\n{"id":"91f280d4-9852-4453-b941-582eb531e911","is_retry":true,"is_continue":false,"web_search":false,"tools":[]}\r\n------WebKitFormBoundarywFDiAu9glkYBEPBf--\r\n'
```
### Specs
- **OS**: OS X
- **Browser**: Orion
- **chat-ui commit**: chat-ui-db image: `ghcr.io/huggingface/chat-ui-db@sha256:a69b02884d0de64bb60d8011828b0e4be778673cadfc5f783fe6df14fa737504`
### Config
<!-- Add the environment variables you've used to setup chat-ui, making sure to redact any secrets. -->
## Notes
How do I configure these timeouts? | https://github.com/huggingface/chat-ui/issues/1771 | open | [
"bug"
] | 2025-03-25T19:14:54Z | 2025-06-14T13:46:28Z | 3 | drewwells |
huggingface/datasets | 7,477 | What is the canonical way to compress a Dataset? | Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset?
Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https://github.com/huggingface/datasets/issues/7047)].
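One manual pattern (an assumption on my part, not a built-in datasets feature) is to have each writer process own exactly one shard file, splitting rows contiguously; the boundary arithmetic is simple:

```python
def shard_bounds(n_rows: int, num_shards: int, index: int) -> tuple[int, int]:
    # contiguous split: the first (n_rows % num_shards) shards get one
    # extra row, so every row lands in exactly one shard
    q, r = divmod(n_rows, num_shards)
    start = index * q + min(index, r)
    end = start + q + (1 if index < r else 0)
    return start, end

# each worker would then write rows[start:end] to its own parquet file,
# e.g. f"shard-{index:05d}.parquet" (a hypothetical naming scheme),
# giving concurrent writes plus parquet's on-disk compression
print([shard_bounds(10, 3, i) for i in range(3)])  # [(0, 4), (4, 7), (7, 10)]
```

The cost is managing the shard files yourself; readers can still treat the directory of shards as one dataset.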
Am I missing something?
And if so, why is this not the standard/default way that `Dataset`'s work as they do in Xarray, Ray Data, Composer, etc.? | https://github.com/huggingface/datasets/issues/7477 | open | [] | 2025-03-25T16:47:51Z | 2025-04-03T09:13:11Z | null | eric-czech |
huggingface/lerobot | 901 | Any tutorial on how to run experiments in the SimXArm environment? | https://github.com/huggingface/lerobot/issues/901 | closed | [] | 2025-03-25T13:29:59Z | 2025-03-25T16:42:11Z | null | chenkang455 |
huggingface/chat-ui | 1,765 | `truncate` parameter ignored for OpenAI chat_completions endpoint | ## Bug description
The `truncate` parameter in the ChatUI configuration is not being applied when using the OpenAI chat_completions endpoint.
## Root Cause
The issue arises because the chat_completions endpoint does not utilize the `buildPrompt` function where the `truncate` parameter is handled. The logic for truncation lives solely within `buildPrompt` and is therefore bypassed entirely when processing chat_completions requests. This means no truncation mechanism is applied to the chat history before it is sent to vllm-openai or OpenAI.
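For anyone patching this locally, the missing behavior amounts to trimming the message history before the request is sent. A hypothetical sketch of that logic (plain Python rather than chat-ui's actual TypeScript; `count_tokens` is an assumed tokenizer callback):

```python
def truncate_messages(messages, truncate, count_tokens):
    # drop the oldest non-system messages until the estimated token
    # count fits within the `truncate` budget
    result = list(messages)
    total = lambda: sum(count_tokens(m["content"]) for m in result)
    while len(result) > 1 and total() > truncate:
        drop_at = 1 if result[0]["role"] == "system" else 0
        result.pop(drop_at)
    return result

msgs = [
    {"role": "system", "content": "sys"},
    {"role": "user", "content": "aaaa"},
    {"role": "assistant", "content": "bbbb"},
    {"role": "user", "content": "cccc"},
]
# char count stands in for a real tokenizer here
print(truncate_messages(msgs, 8, len))  # keeps the system prompt and the newest message
```

A real fix would presumably live next to where the chat_completions request is constructed, reusing the same token counter the prompt path uses.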
#1654 | https://github.com/huggingface/chat-ui/issues/1765 | open | [
"bug"
] | 2025-03-25T10:13:40Z | 2025-03-25T10:20:33Z | 0 | calycekr |
huggingface/finetrainers | 350 | how to train wan using 8 GPUs | I notice that there are only 4-GPU scripts; even when I modify the script for 8-GPU training, I get some errors. | https://github.com/huggingface/finetrainers/issues/350 | open | [] | 2025-03-25T05:02:18Z | 2025-05-06T14:54:50Z | null | tanshuai0219 |
huggingface/diffusers | 11,147 | [LTX0.9.5] make LTX0.9.5 works with text-to-video | see more context here https://github.com/huggingface/diffusers/issues/11143#issuecomment-2747390564 | https://github.com/huggingface/diffusers/issues/11147 | closed | [
"help wanted"
] | 2025-03-24T09:56:47Z | 2025-04-04T14:43:16Z | 9 | yiyixuxu |
huggingface/search-and-learn | 47 | How to run this project on CPU? | Hello, I'd like to run this project's code on CPU.
The graphics card I have now is a 4060 Ti, but even with the lightest options (minimum batch size, the 1.5B model, etc.), I couldn't run the project due to memory capacity issues.
So I want to move this project to CPU and see the results, even if it takes some time.
However, even though all settings and code have been checked, the flash attention backend is selected automatically, and I'm having trouble resolving the error.
So I would like to ask whether this project can be run on CPU just by changing the vLLM settings. | https://github.com/huggingface/search-and-learn/issues/47 | open | [] | 2025-03-24T01:13:44Z | 2025-03-24T01:13:44Z | null | pss0204 |
huggingface/datasets | 7,473 | Webdataset data format problem | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.)
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("ejschwartz/idioms")
```
### Expected behavior
The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | https://github.com/huggingface/datasets/issues/7473 | closed | [] | 2025-03-21T17:23:52Z | 2025-03-21T19:19:58Z | 1 | edmcman |
huggingface/datasets | 7,470 | Is it possible to shard a single-sharded IterableDataset? | I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not.
Say we have a process, e.g. a database query, that can return data in a slightly different order each time. So the initial query needs to be run by a single thread (not to mention that running it multiple times incurs more cost too). But the results are also big enough that we don't want to materialize them entirely, and instead stream them with an IterableDataset.
But after we have the results we want to split it up across workers to parallelize processing.
Is something like this possible to do?
Here's a failed attempt. The end result should be that each of the shards has unique data, but unfortunately with this attempt the generator gets run once in each shard and the results end up with duplicates...
```python
import random
import datasets
def gen():
print('RUNNING GENERATOR!')
items = list(range(10))
random.shuffle(items)
yield from items
ds = datasets.IterableDataset.from_generator(gen)
print('dataset contents:')
for item in ds:
print(item)
print()
print('dataset contents (2):')
for item in ds:
print(item)
print()
num_shards = 3
def sharded(shard_id):
for i, example in enumerate(ds):
if i % num_shards in shard_id:
yield example
ds1 = datasets.IterableDataset.from_generator(
sharded, gen_kwargs={'shard_id': list(range(num_shards))}
)
for shard in range(num_shards):
print('shard', shard)
for item in ds1.shard(num_shards, shard):
print(item)
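# ----------------------------------------------------------------------
# Addendum (not part of the failed attempt above): the duplicates appear
# to come from gen() shuffling differently on each rerun, since each
# shard re-runs the generator. If the stream is made deterministic, a
# plain round-robin slice does yield disjoint shards -- though the
# underlying generator still runs once per shard, so an expensive query
# would still be repeated.
import itertools

def deterministic_gen(seed=0):
    rng = random.Random(seed)  # fixed seed: every rerun yields the same order
    items = list(range(10))
    rng.shuffle(items)
    yield from items

def round_robin_shard(shard_id, num_shards=3):
    return itertools.islice(deterministic_gen(), shard_id, None, num_shards)

parts = [list(round_robin_shard(i)) for i in range(3)]
assert sorted(itertools.chain.from_iterable(parts)) == list(range(10))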
``` | https://github.com/huggingface/datasets/issues/7470 | closed | [] | 2025-03-21T04:33:37Z | 2025-11-22T07:55:43Z | 6 | jonathanasdf |
huggingface/lerobot | 884 | [Question] Support of PointCloud | Hi,
I'm currently developing a plugin for lerobot and would like to know if there are any plans to support PointCloud data.
Additionally, I'd like to ask if there is a recommended storage format for handling PointCloud data within the project.
Looking forward to your response.
Thanks | https://github.com/huggingface/lerobot/issues/884 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-21T04:29:15Z | 2025-10-07T02:26:39Z | null | yilin404 |
huggingface/inference-benchmarker | 4 | Can i use local model's tokenizer and local dataset? | Hello, may I specify the paths of the locally downloaded model and dataset through the ./inference-benchmarker command, instead of accessing Hugging Face via the network? | https://github.com/huggingface/inference-benchmarker/issues/4 | open | [
"question"
] | 2025-03-21T01:55:03Z | 2025-03-27T18:44:04Z | null | handsome-chips |
huggingface/video-dataset-scripts | 20 | parquet file how to convert to Training Dataset Format for finetrainers | How can a parquet file be converted to the training dataset format that finetrainers expects? | https://github.com/huggingface/video-dataset-scripts/issues/20 | closed | [] | 2025-03-20T16:22:39Z | 2025-04-10T17:46:06Z | null | kanghua309 |
huggingface/trl | 3,114 | What is the reason for using only one GPU when integrating with vLLM? | At [this line](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L507) of the code, a single GPU device is specified when using vLLM. However, it is quite common to use a single vLLM instance with multiple GPUs.
1. What is the reason that the code is designed to only select a single GPU?
2. Where does the '**device**' parameter of this LLM interface eventually get passed? When I stepped into this function, I couldn't find the corresponding parameter handling (this might be a very basic question).
3. When I changed the '**device**' parameter to **tensor_parallel_size** (and also set the world_size and other parameters), an error occurred.
I've noticed that some other PRs have made modifications to the multi-GPU usage of vllm, but not at the interface where [LLM is used](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L507). I'm curious about the reasons behind this.
If anyone is willing to answer me, I would be very grateful. | https://github.com/huggingface/trl/issues/3114 | closed | [
"❓ question",
"🏋 GRPO"
] | 2025-03-19T16:20:03Z | 2025-04-05T17:01:33Z | null | spencergotowork |
huggingface/smollm | 67 | How to fine tune smolvlm on OCR | Is there any guide to fine-tune SmolVLM on OCR like in https://huggingface.co/ds4sd/SmolDocling-256M-preview | https://github.com/huggingface/smollm/issues/67 | open | [
"Image"
] | 2025-03-19T14:17:33Z | 2025-07-29T13:09:05Z | null | abdelkareemkobo |
huggingface/peft | 2,436 | Fine-tuning with Multiple LoRAs | Thanks for your valuable work!
I would like to know if it's possible to jointly train two LoRAs while only loading one base model. The overall output depends on the respective outputs of LoRA1 and LoRA2. For example, logits1 is obtained from the base model with LoRA1, and logits2 is obtained from the base model with LoRA2. I have tried the following code
```python
model.add_adapter(lora_1)
model.add_adapter(lora_2)
model.enable_adapters()
model.set_adapter("lora_1")
logits1 = model(input_ids).logits # use model with lora1 to get output
model.set_adapter("lora_2")
logits2 = model(input_ids).logits # use model with lora2 to get output
logits = logits1+logits2
loss=loss_fct(logits, labels)
loss.backward()
```
but it seems there might be some issues:
1. Once set_adapter(lora2) is called, LoRA1 no longer receives gradients;
2. If I modify the source code of set_adapter to make both requires_grad=True, would that be correct?
What I'm confused about is, after I execute set_adapter(lora2), does the model perform computations using the base model with LoRA2 (as I hope), or does it use the base model with both LoRA1 and LoRA2 combined?
I'm looking forward to your help! Thank you! | https://github.com/huggingface/peft/issues/2436 | closed | [] | 2025-03-19T13:49:28Z | 2025-07-19T05:45:12Z | 7 | xymou |
huggingface/setfit | 590 | How do I disable requests to huggingface.co:443 after training? | I'm currently evaluating setfit in a proof of concept situation. Unfortunately, I'm working behind a company firewall, where I do not have access to the world wide web, only to company-internal URLs.
That's a bit annoying in terms of downloading models, but I can work around that. More importantly, it seems there are calls to huggingface.co:443 after the training is done, which obviously cannot succeed due to the blocked internet access.
That wouldn't be big problem if the timeout were 1 minute or so, but it seems to be more like 5-10 minutes, which is a lot of time wasted just waiting for the results.
How can I disable these blocking HTTP requests?
My minimal training pipeline looks somewhat like this (shortened for readability, especially data loading):
```
model = SetFitModel.from_pretrained(
"/local/path/local-bge-small-en-v1.5",
local_files_only=True,
multi_target_strategy="multi-output",
)
train_dataset, test_dataset = a_bunch_of_loading_and_sampling_code_thats_irrelevant_here()
args = TrainingArguments(
batch_size=128,
num_epochs=10,
report_to=None
)
trainer = Trainer(
model=model,
args=args,
train_dataset=train_dataset,
metric="f1",
callbacks=None,
column_mapping={"column": "mapping"},
metric_kwargs={"average": "samples"}
)
trainer.train()
```
After all training steps are done, I get the following console logs:
```
INFO:sentence_transformers.trainer:Saving model checkpoint to checkpoints/checkpoint-258
INFO:sentence_transformers.SentenceTransformer:Save model to checkpoints/checkpoint-258
Request [id]: GET https://huggingface.co/api/models/setfit-test/local-bge-small-en-v1.5 (authenticated: False)
DEBUG:huggingface_hub.utils._http:Request [id]: GET https://huggingface.co/api/models/setfit-test/local-bge-small-en-v1.5 (authenticated: False)
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
```
Then nothing happens for about 10 minutes, before I get a "Batches: 100% [tqdm progress bar]", which however finishes almost immediately.
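One workaround worth trying (an assumption on my part; I have not verified it against setfit specifically): the Hugging Face Hub client honors an offline switch that makes it resolve everything from the local cache without attempting any HTTP request, which should also remove the long connection timeout:

```python
import os

# must be set before setfit / huggingface_hub are imported;
# HF_HUB_OFFLINE makes the hub client use cached files without
# attempting any HTTP request to huggingface.co
os.environ["HF_HUB_OFFLINE"] = "1"
print(os.environ["HF_HUB_OFFLINE"])
```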
Is there any parameter I can set to disable this call to huggingface? "report_to=None" or "callbacks=None" don't seem to do the trick. | https://github.com/huggingface/setfit/issues/590 | open | [] | 2025-03-19T08:42:12Z | 2025-03-19T18:44:12Z | null | AdrianSchneble |
huggingface/diffusers | 11,114 | channel inconsistency in cogvideo Lora training example | ### Describe the bug
While using the training script from (https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_image_to_video_lora.py),
I made a dataset as described in the README and ran training,
but a bug occurred in the forward pass: the model's in-channels is 16, while the model input has 32 channels.
How can I fix it?
### Reproduction
```python
# Sample noise that will be added to the latents
noise = torch.randn_like(video_latents)

# Add noise to the model input according to the noise magnitude at each timestep
# (this is the forward diffusion process)
noisy_video_latents = scheduler.add_noise(video_latents, noise, timesteps)
noisy_model_input = torch.cat([noisy_video_latents, image_latents], dim=2)

# Prepare rotary embeds
image_rotary_emb = (
    prepare_rotary_positional_embeddings(
        height=args.height,
        width=args.width,
        num_frames=num_frames,
        vae_scale_factor_spatial=vae_scale_factor_spatial,
        patch_size=model_config.patch_size,
        attention_head_dim=model_config.attention_head_dim,
        device=accelerator.device,
    )
    if model_config.use_rotary_positional_embeddings
    else None
)

# Predict the noise residual
model_output = transformer(
    hidden_states=noisy_model_input,
    encoder_hidden_states=prompt_embeds,
    timestep=timesteps,
    image_rotary_emb=image_rotary_emb,
    return_dict=False,
)[0]
```
### Logs
```shell
[rank0]: File "train_cogvideox_i_t2v_lora_raw.py", line 1426, in main
[rank0]: model_output = transformer(
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1643, in forward
[rank0]: else self._run_ddp_forward(*inputs, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1459, in _run_ddp_forward
[rank0]: return self.module(*inputs, **kwargs) # type: ignore[index]
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/accelerate/utils/operations.py", line 819, in forward
[rank0]: return model_forward(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/accelerate/utils/operations.py", line 807, in __call__
[rank0]: return convert_to_fp32(self.model_forward(*args, **kwargs))
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/diffusers/models/transformers/cogvideox_transformer_3d.py", line 476, in forward
[rank0]: hidden_states = self.patch_embed(encoder_hidden_states, hidden_states)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/diffusers/models/embeddings.py", line 715, in forward
[rank0]: image
``` | https://github.com/huggingface/diffusers/issues/11114 | open | [
"bug",
"stale"
] | 2025-03-19T07:55:00Z | 2025-04-18T15:02:52Z | 2 | MrTom34 |
huggingface/trl | 3,109 | where is file https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py | ### Reproduction
```python
from trl import ...
```
outputs:
```
Traceback (most recent call last):
File "example.py", line 42, in <module>
...
```
### System Info
https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py
### Checklist
- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [x] I have included my system information
- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any traceback provided is complete | https://github.com/huggingface/trl/issues/3109 | closed | [
"🐛 bug",
"🏋 SFT"
] | 2025-03-19T02:20:26Z | 2025-03-19T02:22:23Z | null | zh794390558 |
huggingface/transformers.js | 1,245 | QuestionAnsweringOutput does not return start/end index | ### Question
Question/Answering pipeline does not seem to return start/end index.
console output example
`{ answer: 'anywhere', score: 0.8719829671013909 }`
source code in pipeline.js
```
class QuestionAnsweringPipeline ...
// TODO add start and end?
// NOTE: HF returns character index
toReturn.push({
answer, score
});
```
| https://github.com/huggingface/transformers.js/issues/1245 | open | [
"question"
] | 2025-03-18T21:20:25Z | 2025-03-18T21:20:25Z | null | sleep9 |
huggingface/transformers.js | 1,243 | Transformer.js compatibility with Angular17 | ### Question
I want to add Transformers.js to an Angular 17 project. I'm getting several errors; can someone guide me on how to add Transformers.js to an Angular project?
"question"
] | 2025-03-18T16:15:30Z | 2025-03-24T21:27:11Z | null | AnuragPant01 |
huggingface/diffusers | 11,108 | Is there a way to generate a single image using multiple GPUs? | This is related to #2977 and #3392, but I would like to know how to generate a single image using multiple GPUs. If such a method does not exist, I would also like to know if Accelerate's [Memory-efficient pipeline parallelism](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference#memory-efficient-pipeline-parallelism-experimental) can be applied to this. | https://github.com/huggingface/diffusers/issues/11108 | closed | [
"stale"
] | 2025-03-18T13:43:05Z | 2025-05-02T21:00:31Z | 12 | suzukimain |
huggingface/lerobot | 876 | Multiple GPU Training Support | Hi, lerobot team!
Thanks for the great work and organized content.
Are there plans to support PyTorch's Distributed Data Parallel (DDP) training in this framework? | https://github.com/huggingface/lerobot/issues/876 | closed | [
"enhancement",
"question",
"stale"
] | 2025-03-18T12:44:43Z | 2025-10-07T02:26:45Z | null | kingchou007 |
huggingface/open-r1 | 521 | How to use my own dataset in sft? | Could you please give an instruction/demo on how to use my own dataset (any column name) to apply sft? | https://github.com/huggingface/open-r1/issues/521 | open | [] | 2025-03-18T11:38:19Z | 2025-03-18T14:21:36Z | null | dongdongzhaoUP |
huggingface/diffusers | 11,103 | Which repo should I use for LTX-Video 0.9.5 diffusers | I see the changes are merged
I checked the repo and it is empty:
https://huggingface.co/Lightricks/LTX-Video-0.9.5/tree/main
I noticed that the test pipeline uses:
repo = "YiYiXu/ltx-95"
So can I safely assume that the above can be used?
@yiyixuxu | https://github.com/huggingface/diffusers/issues/11103 | closed | [] | 2025-03-18T10:50:41Z | 2025-03-18T11:00:34Z | 2 | nitinmukesh |
huggingface/trl | 3,103 | How are LoRA parameters used in vLLM generation? (_move_model_to_vllm in GRPO trainer) | From the following code, I do not see the process of moving the LoRA training parameters to vLLM. How is it guaranteed that generation uses the latest parameters? Can someone help explain?
<img width="1123" alt="Image" src="https://github.com/user-attachments/assets/62cacf0a-0197-4210-b326-c4e24b9b6701" />
And I printed the vllm loaded model, and I didn't see LORA-related parameters either.
<img width="1157" alt="Image" src="https://github.com/user-attachments/assets/8d085743-97b9-4d9e-9c4b-558153a6cb05" />
Moreover, LoRARequest is also not seen in the generation calls.
<img width="1117" alt="Image" src="https://github.com/user-attachments/assets/3193f66f-607d-4b0b-8903-f5f1b45d7adc" />
| https://github.com/huggingface/trl/issues/3103 | closed | [
"❓ question",
"⚡ PEFT"
] | 2025-03-18T09:24:48Z | 2025-03-24T18:32:19Z | null | cuiyuhao1996 |
huggingface/datasets | 7,457 | Document the HF_DATASETS_CACHE env variable | ### Feature request
Hello,
I have a use case where my team is sharing models and dataset in shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mention the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`.
It should be nice to add `HF_DATASETS_CACHE` to datasets documentation if it's an intended feature.
If it's not, I think a depreciation warning would be appreciated.
### Motivation
This variable is fully working and similar to what `HF_HUB_CACHE` does for models, so it's nice to know that this exists. This seems to be a quick change to implement.
### Your contribution
I could contribute since this is only affecting a small portion of the documentation | https://github.com/huggingface/datasets/issues/7457 | closed | [
"enhancement"
] | 2025-03-17T12:24:50Z | 2025-05-06T15:54:39Z | 4 | LSerranoPEReN |
huggingface/transformers | 36,762 | When what needs to be loaded is in the cache directory, there is no need to make a request to the remote | ### Feature request
When what needs to be loaded is in the cache directory, there is no need to make a request to the remote.
### Motivation
I noticed that when `AutoTokenizer` loads a file using `from_pretrained`, it first tries to load it from a cached directory when `pretrained_model_name_or_path` is a model_id (such as gpt2).
However, `commit_hash` is `None` by default; e.g., `AutoTokenizer` will call `get_tokenizer_config` to load the configuration file, where the code to get `commit_hash` is: `commit_hash = kwargs.get("_commit_hash", None)`.
Since it is None, the `cached_file` method doesn't know where the corresponding file is actually stored, so it uses the `hf_hub_download` method to request the corresponding `commit_hash` first.
Although this request is very simple and infrequent, **in offline environments (e.g., a company or school intranet that does not allow access to the extranet), it will report an error.**
I know I can copy files from the cache to my project directory, but the host is usually used by multiple people, which means it may have to be copied many times, which defeats the purpose of using a cached directory in the first place.
### Your contribution
**I suggest changing `commit_hash = kwargs.get("_commit_hash", None)` to `commit_hash = kwargs.get("_commit_hash", "main")`**.
"Feature request"
] | 2025-03-17T11:20:24Z | 2025-03-19T15:49:04Z | null | JinFish |
huggingface/diffusers | 11,086 | RuntimeError after using apply_group_offloading on diffusers: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same | Can anyone help me?
I used the Wan diffusers pipeline and applied apply_group_offloading following https://huggingface.co/docs/diffusers/main/en/optimization/memory.
The code is as follows:
```
image_encoder = CLIPVisionModel.from_pretrained(local_model_path, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(local_model_path, subfolder="vae", torch_dtype=torch.float32)
scheduler_b = UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=5.0)
pipe = WanImageToVideoPipeline.from_pretrained(local_model_path, vae=vae, image_encoder=image_encoder, scheduler=scheduler_b, torch_dtype=torch.bfloat16)
pipe.transformer.enable_group_offload(onload_device=torch.device("cuda"), offload_device=torch.device("cpu"), offload_type="block_level", num_blocks_per_group=1, use_stream=True)
apply_group_offloading(pipe.text_encoder, onload_device=torch.device("cuda"), offload_type="block_level", num_blocks_per_group=1, use_stream=True)
apply_group_offloading(pipe.vae, onload_device=torch.device("cuda"), offload_type="block_level", num_blocks_per_group=1, use_stream=True)
apply_group_offloading(pipe.image_encoder, onload_device=torch.device("cuda"), offload_type="block_level", num_blocks_per_group=1, use_stream=True)
```
Then print the device information:
```
Before apply_offload:
text_encoder device: cpu
transformer device: cpu
vae device: cpu
image_encoder device: cpu
start to group_offload_block_1_stream
After apply_offload:
text_encoder device: cpu
transformer device: cpu
vae device: cpu
image_encoder device: cpu
```
Finally, an exception is thrown:
```
return F.conv3d(
^^^^^^^^^
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```
Does anyone know how to fix this? Thanks a lot. | https://github.com/huggingface/diffusers/issues/11086 | open | [
"stale"
] | 2025-03-17T11:03:48Z | 2025-04-16T15:03:36Z | 5 | tiga-dudu |
huggingface/trl | 3,093 | How to use a custom function as the reward model for PPO training | The new version of TRL's PPOTrainer requires a Module as the reward model, but I need a custom function to compute the reward. I tried downgrading TRL to 0.11.4, but the old version does not seem to support PEFT models; I get the following error:
ValueError: model must be a PreTrainedModelWrapper, got <class 'peft.peft_model.PeftModelForCausalLM'> - supported architectures are: (<class 'trl.models.modeling_value_head.AutoModelForCausalLMWithValueHead'>, <class 'trl.models.modeling_value_head.AutoModelForSeq2SeqLMWithValueHead'>)
However, I see the is_peft_model parameter in PPOConfig, but there is no peft_config parameter in PPOTrainer.
So I am quite stuck now. Could anyone help me with this?
| https://github.com/huggingface/trl/issues/3093 | open | [
"❓ question",
"🏋 PPO",
"⚡ PEFT"
] | 2025-03-16T09:02:25Z | 2025-03-20T10:33:02Z | null | JWQZ |
huggingface/ai-deadlines | 19 | How to know the rankings of a conference? | @NielsRogge, may I know where we can get the conference rankings? | https://github.com/huggingface/ai-deadlines/issues/19 | closed | [] | 2025-03-15T18:32:34Z | 2025-03-15T21:45:02Z | null | julurisaichandu |
huggingface/diffusers | 11,063 | prepare_attention_mask - incorrect padding? | ### Describe the bug
I'm experimenting with attention masking in Stable Diffusion (so that padding tokens aren't considered for cross attention), and I found that UNet2DConditionModel doesn't work when given an `attention_mask`.
https://github.com/huggingface/diffusers/blob/8ead643bb786fe6bc80c9a4bd1730372d410a9df/src/diffusers/models/attention_processor.py#L740
For the attn1 blocks (self-attention), the target sequence length is different from the current length (target 4096, but it's only 77 for a typical CLIP output). The padding routine pads by *adding* `target_length` zeros to the end of the last dimension, which results in a sequence length of 4096 + 77, rather than the desired 4096. I think it should be:
```diff
- attention_mask = F.pad(attention_mask, (0, target_length), value=0.0)
+ attention_mask = F.pad(attention_mask, (0, target_length - current_length), value=0.0)
```
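The length arithmetic can be checked with plain Python lists standing in for tensors (a standalone illustration, not diffusers code):

```python
def pad_mask(mask, target_length, buggy=False):
    # buggy: appends target_length zeros (what the current code does)
    # fixed: appends only the missing amount, target_length - len(mask)
    pad_amount = target_length if buggy else target_length - len(mask)
    return mask + [0.0] * pad_amount

mask = [1.0] * 77  # a CLIP-length attention mask
print(len(pad_mask(mask, 4096, buggy=True)))  # 4173 == 4096 + 77, the reported failure
print(len(pad_mask(mask, 4096)))              # 4096, the desired length
```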
`encoder_attention_mask` works fine - it's passed to the attn2 block and no padding ends up being necessary.
It seems that this would additionally fail if current_length were greater than target_length, since you can't pad by a negative amount, but I don't know that that's a practical concern.
(I know that particular masking isn't even semantically valid, but that's orthogonal to this issue!)
### Reproduction
```python
# given a Stable Diffusion pipeline
# given te_mask = tokenizer_output.attention_mask
pipeline.unet(latent_input, timestep, text_encoder_output, attention_mask=te_mask).sample
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.28.1
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- PEFT version: not installed
- Bitsandbytes version: 0.45.2
- Safetensors version: 0.5.2
- xFormers version: 0.0.29.post2
- Accelerator: NVIDIA GeForce RTX 3060, 12288 MiB
NVIDIA GeForce RTX 4060 Ti, 16380 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11063 | open | [
"bug",
"stale"
] | 2025-03-14T19:01:01Z | 2025-04-14T15:03:14Z | 2 | cheald |
huggingface/transformers.js | 1,237 | Using pipeline API in Mobile Devices | ### Question
How can I get the pipeline running on mobile devices?
Like here:
pipeline('background-removal', 'briaai/RMBG-1.4', { device: "webgpu" })
Or does it depend on the model available?
I can't find documentation about the pipeline API options, like 'device' and other params... | https://github.com/huggingface/transformers.js/issues/1237 | open | [
"question"
] | 2025-03-14T17:55:27Z | 2025-05-11T19:58:39Z | null | LuSrodri |
huggingface/autotrain-advanced | 869 | How to fine-tune a custom model for Ollama? | Probably a stupid question, but I'm trying to upload a .csv dataset and fine-tune an 8B model in Autotrain. But when I add the model name taken from Ollama (e.g. deepseek-r1:8b or DeepSeek-R1-Distill-Llama-8B-NexaQuant) and try to train, I get an error.
```
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for LLMTrainingParams
token
  Input should be a valid string [type=string_type, input_value=<starlette.templating._Te...bject at 0x7f7e9daa3a00>, input_type=_TemplateResponse]
    For further information visit https://errors.pydantic.dev/2.10/v/string_type
```
I'm too stupid to know what's wrong or how to correct it, so any help gratefully received. I can fine-tune with existing models in the drop-down list OK, so the setup seems to be working. | https://github.com/huggingface/autotrain-advanced/issues/869 | closed | [
"stale"
] | 2025-03-14T14:46:23Z | 2025-05-03T15:01:33Z | null | nigelp |
huggingface/diffusers | 11,060 | `prepare_image` in Kandinsky pipelines doesn't support `torch.Tensor` | Hi, I want to report a bug in Kandinsky pipelines.
https://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L413-L420
According to the above contents, elements in `image` can be either `PIL.Image.Image` or `torch.Tensor`.
https://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L98-L104
However, the `prepare_image` function is only for `PIL.Image.Image`, and does not support `torch.Tensor`.
Can you resolve this problem by implementing an image resize function for `torch.Tensor`? | https://github.com/huggingface/diffusers/issues/11060 | closed | [
"good first issue",
"help wanted"
] | 2025-03-14T10:34:30Z | 2025-04-21T18:41:10Z | 1 | dk-hong |
huggingface/Math-Verify | 39 | How to choose ExprExtractionConfig() and LatexExtractionConfig() | Hi. Thanks for your awesome tool.
I want to ask how I should set the configuration when the answer may be either LaTeX or a plain expression. I found that in the case below (without $$ $$), the output is false when the expected result is true.
```python
from math_verify import parse, verify
gold = parse("\\frac{\sqrt{3}}{3}")
answer = parse("sqrt(3)/3")
# Order here is important!
verify(gold, answer)
``` | https://github.com/huggingface/Math-Verify/issues/39 | closed | [] | 2025-03-13T23:36:27Z | 2025-04-28T20:42:03Z | null | Zhuofeng-Li |
huggingface/diffusers | 11,055 | Training on unconditional image generation creates colorized images | ### Describe the bug
Hi, I'm trying to follow the unconditional image generation tutorial on my own dataset, and I'm getting weirdly colored images. I originally thought it was due to RGB/BGR channel order, but I've switched it around and got the same result. Do you have any suggestions on how to fix it?
### Reproduction
NA
### Logs
```shell
```
### System Info
NA
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11055 | open | [
"bug",
"stale"
] | 2025-03-13T20:47:22Z | 2025-04-13T15:02:53Z | 1 | esizikova-fda |
huggingface/lerobot | 860 | Modify camera async_read/read API to return a dictionary instead of tuple for better compatability? | Currently the intel real sense camera api supports returning either a single rgb image or a rgb image and depth image as a 2-uple
https://github.com/huggingface/lerobot/blob/3c0a209f9fac4d2a57617e686a7f2a2309144ba2/lerobot/common/robot_devices/cameras/intelrealsense.py#L440-L443
However this is not super compatible to work with since not all cameras might return two values (open cv one only does rgb?). For a potentially better API would it be possible to have the async read / read functions always return a dictionary instead with some standard names and data types for the types of image data returned?
e.g.
```
return dict(rgb=..., depth=...)
```
This way it is also easier for me to check whether the returned data has depth data. The current solution is a bit complicated, as I need to check if it's the IntelRealSenseCamera and whether its config has use_depth=True.
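A rough sketch of the suggested dict-based return, with payloads stubbed out (the function name and `use_depth` parameter are hypothetical, not lerobot code):

```python
from typing import Any, Dict

def async_read(use_depth: bool = False) -> Dict[str, Any]:
    """Sketch: always return a dict, adding keys only for streams the camera provides."""
    frame: Dict[str, Any] = {"rgb": "rgb-frame-placeholder"}
    if use_depth:
        frame["depth"] = "depth-frame-placeholder"
    return frame

# Callers can then feature-test the frame instead of inspecting the camera class/config:
frame = async_read(use_depth=True)
if "depth" in frame:
    depth = frame["depth"]
```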
Thanks! | https://github.com/huggingface/lerobot/issues/860 | closed | [
"enhancement",
"question"
] | 2025-03-13T18:44:20Z | 2025-05-26T09:28:48Z | null | StoneT2000 |
huggingface/transformers.js | 1,230 | Using background-removal pipeline produces images with 50% opacity | ### Question
I have an issue using the background-removal pipeline. Some models return exactly the same image, but at 50% opacity (RGBA: [X, Y, Z, 127]). Other models return an error like this: Uncaught Error: Unsupported model type: null transformers:1:670067.
How can I proceed?
"question"
] | 2025-03-13T17:00:13Z | 2025-03-25T22:28:37Z | null | LuSrodri |
huggingface/lerobot | 858 | DATASET conversion from V.16 to V2.0 ❌❌❌ |
Hi @aliberts @Cadene
Thanks for your amazing work. I have one question: I forked the lerobot repo and am training some policies. Now I want to convert from v1.6 to v2.0, but my episodes are in .pth format, not parquet. I checked the existing issues and didn't find anything; right now the conversion only accepts parquet.
Can you please help me here?
Thanks
### Information
- [x] One of the scripts in the examples/ folder of LeRobot
- [x] My own task or dataset (give details below)
### Reproduction
Tried convert_v1_to_v2.py.
But it expects only parquet, while mine is .pth.
### Expected behavior
 | https://github.com/huggingface/lerobot/issues/858 | closed | [
"question",
"dataset",
"stale"
] | 2025-03-13T15:22:51Z | 2025-10-07T02:26:46Z | null | Kacchan16 |
huggingface/optimum | 2,215 | not able to convert DeepSeek-R1 into Onnx using optimum-cli | ### System Info
```shell
v1.24.0
```
### Who can help?
@michaelbenayoun
I'm trying to convert DeepSeek-R1 into a onnx format, but i'm being presented with
> ValueError: Loading deepseek-ai/DeepSeek-R1 requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
I'm trying to do this using optimum-cli
`optimum-cli export onnx --model deepseek-ai/DeepSeek-R1 --task causal-lm C:\DeepSeek-R1-Onnx`
Can i somehow enable this using cli, or do i have to manually download the model into my system and using cli i would have to perform onnx instead of repo link
If yes, how can I enable trust_remote_code=True once I download the repo?
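As a hedged pointer (verify with `optimum-cli export onnx --help` on your version): the exporter has historically exposed a flag for trusting remote code, so something like the following may avoid the manual download:

```shell
optimum-cli export onnx --model deepseek-ai/DeepSeek-R1 --trust-remote-code --task causal-lm C:\DeepSeek-R1-Onnx
```

If the flag isn't available in your version, pointing `--model` at a locally downloaded (and vetted) directory is the fallback.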
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
optimum-cli export onnx --model deepseek-ai/DeepSeek-R1 --task causal-lm C:\DeepSeek-R1-Onnx
Running this command doesn't provide an output
### Expected behavior
The conversion should start for DeepSeek-R1 to ONNX | https://github.com/huggingface/optimum/issues/2215 | open | [
"bug"
] | 2025-03-13T07:07:10Z | 2025-05-13T11:13:36Z | 1 | volcano619 |
huggingface/trl | 3,066 | How to switch on the multi-GPU for GRPOTrainer? | Issue:
OOM errors during GRPO training - Need multi-GPU support for combined VRAM
Problem Description:
I'm encountering Out-of-Memory (OOM) errors while using GRPOTrainer to train reasoning capabilities similar to DeepSeek R1.
My Question:
How to switch on multi-GPU support for GRPOTrainer to utilize the combined VRAM across multiple GPUs (e.g., 40GB × 8 cards = 320GB total VRAM)?
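Since GRPOTrainer builds on transformers/accelerate, one common direction (a sketch — exact flags depend on your accelerate version and machine config; the script name is made up) is to launch the training script through accelerate:

```shell
# Hypothetical launch; adjust the script name and process count to your setup
accelerate config                                    # describe the machine once
accelerate launch --multi_gpu --num_processes 8 train_grpo.py
```

Note that plain data-parallel launching replicates the model per GPU rather than pooling VRAM; combining memory across GPUs typically requires a sharding backend such as DeepSpeed ZeRO or FSDP, configured through accelerate.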
Thank you! | https://github.com/huggingface/trl/issues/3066 | closed | [
"🏋 GRPO"
] | 2025-03-13T05:01:12Z | 2025-04-05T17:04:50Z | null | tjoymeed |
huggingface/agents-course | 314 | [QUESTION] agent.run(stream=True) How get finall result | agent = CodeAgent(
tools=[],
model=model,
max_steps=10,
verbosity_level=2
)
response = agent.run(
"""
descripe image
""",
images=image_urls,
stream=True
)
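Assuming `stream=True` makes `run` return a generator of intermediate steps, with the final result as the last item yielded, the usual consumption pattern is to iterate and keep the last item — sketched here with a stand-in generator rather than the real agent:

```python
def fake_run_stream():
    """Stand-in for agent.run(..., stream=True): yields steps; last item is the result."""
    yield "step 1: plan"
    yield "step 2: execute tool call"
    yield "final answer"

final = None
for step in fake_run_stream():
    final = step  # keep only the most recent item
print(final)  # final answer
```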
print()??? | https://github.com/huggingface/agents-course/issues/314 | open | [
"question"
] | 2025-03-13T02:32:47Z | 2025-03-13T02:32:47Z | null | via007 |
huggingface/diffusers | 11,046 | flux pipeline inference with controlnet, inpainting, plus ip-adapter | ### Describe the bug
Hi, I would like to utilize flux pipeline. But for now, I have gpu issues to use origin flux pipeline.
If I would like to use nf4 version, How can I set up the inference file on controlnet, inpainting, ip-adapter?
Do I use Fluxcontrol depth or canny and mask, ip-adapter model? or fluxcontrol, fluxfill, ip-adapter?
Thanks,
@hlky, @sayakpaul
### Reproduction
```python
import torch
from diffusers import FluxControlInpaintPipeline
from diffusers.models.transformers import FluxTransformer2DModel
from transformers import T5EncoderModel
from diffusers.utils import load_image, make_image_grid
from image_gen_aux import DepthPreprocessor  # https://github.com/huggingface/image_gen_aux
from PIL import Image
import numpy as np

access_token = ""
pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev",
    torch_dtype=torch.bfloat16, token=access_token)

# use following lines if you have GPU constraints
# ---------------------------------------------------------------
transformer = FluxTransformer2DModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="transformer", torch_dtype=torch.bfloat16
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="text_encoder_2", torch_dtype=torch.bfloat16
)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()
# ---------------------------------------------------------------
pipe.to("cuda")

prompt = "a blue robot sad expressions"
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")
head_mask = np.zeros_like(image)
head_mask[65:580, 300:642] = 255
mask_image = Image.fromarray(head_mask)

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(image)[0].convert("RGB")

output = pipe(
    prompt=prompt,
    image=image,
    control_image=control_image,
    mask_image=mask_image,
    num_inference_steps=30,
    strength=1,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
make_image_grid([image, control_image, mask_image, output.resize(image.size)], rows=1, cols=4).save("output.png")
```
Changing depth to canny, and adding an IP-Adapter?
### Logs
```shell
```
### System Info
.
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11046 | open | [
"bug",
"stale"
] | 2025-03-12T20:14:01Z | 2025-04-12T15:02:52Z | 1 | john09282922 |
huggingface/lerobot | 854 | How to train diffusion policy in only state space, no images? | I have been having a lot of trouble trying to only train a model on purely a state space task so there are no images involved. I have already looked through every tutorial and most source code files and just can not get this working.
I have a script that creates a LeRobotDataset through human demonstrations. The script is simplified and only contains the relevant information. I simply record 10 demonstrations to create a LeRobotDataset from. There are no images the only observations is a (31, ) numpy float array.
```
feature_dict = {
"next.reward": {
"dtype": "float",
"shape": (1,),
"names": None,
},
"action": {
"dtype": "float64",
"shape": (5, 1),
"names": None
},
"next.success": {
"dtype": "bool",
"shape": (1,),
"names": None,
},
# "timestamp": {
# "dtype": "float32",
# "shape": (1, ),
# "names": None,
# },
"observation.environment_state": {
"dtype": "float64",
"shape": (31, ),
"names": None
},
}
dataset_le_name = "second_save"
dataset_dir = os.path.join(os.path.dirname(__file__), "./files/", dataset_le_name)
le_dataset = LeRobotDataset.create(
repo_id=dataset_le_name,
fps=500,
root=dataset_dir,
features=feature_dict
)
env.reset()
for _ in range(10):
while True:
step_start = time.time()
obs, reward, terminated, _, _ = env.step(None)
action = teleoperate_command()
frame = {
"action": torch.from_numpy(action),
"next.reward": np.array([reward]),
"next.success": np.array([not terminated]),
#"timestamp": np.array([env.unwrapped.sim_object.data.time], dtype=np.float32).reshape(1,),
"observation.environment_state": obs,
"task": "flick switch"
}
le_dataset.add_frame(frame)
if terminated:
print("Task completed")
break
le_dataset.save_episode()
```
This script works fine and is able to create the dataset with no errors. But then when I try to train a diffusion policy from scratch, the exact example script from https://github.com/huggingface/lerobot/blob/main/examples/3_train_policy.py
```# Create a directory to store the training checkpoint.
output_directory = Path("outputs/train/example_pusht_diffusion")
output_directory.mkdir(parents=True, exist_ok=True)
# # Select your device
device = torch.device("cuda")
# Number of offline training steps (we'll only do offline training for this example.)
# Adjust as you prefer. 5000 steps are needed to get something worth evaluating.
training_steps = 5000
log_freq = 1
# When starting from scratch (i.e. not from a pretrained policy), we need to specify 2 things before
# creating the policy:
# - input/output shapes: to properly size the policy
# - dataset stats: for normalization and denormalization of input/outputs
dataset_le_name = "second_save"
dataset_dir = os.path.join(os.path.dirname(__file__), "./files/imitationDataset", dataset_le_name)
dataset_metadata = LeRobotDatasetMetadata(dataset_le_name, root=dataset_dir)
features = dataset_to_policy_features(dataset_metadata.features)
output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}
input_features = {key: ft for key, ft in features.items() if key not in output_features}
print(input_features)
# Policies are initialized with a configuration class, in this case `DiffusionConfig`. For this example,
# we'll just use the defaults and so no arguments other than input/output features need to be passed.
cfg = DiffusionConfig(input_features=input_features, output_features=output_features)
# We can now instantiate our policy with this config and the dataset stats.
policy = DiffusionPolicy(cfg, dataset_stats=dataset_metadata.stats)
```
I keep getting the error
```Traceback (most recent call last):
File "path/trainDiffusion.py", line 105, in <module>
main()
File "path/trainDiffusion.py", line 44, in main
policy = DiffusionPolicy(cfg, dataset_stats=dataset_metadata.stats)
File "path/lerobot/lerobot/common/policies/diffusion/modeling_diffusion.py", line 70, in __init__
config.validate_features()
File "pathlerobot/lerobot/common/policies/diffusion/configuration_diffusion.py", line 220, in validate_features
first_image_key, first_image_ft = next(iter(self.image_features.items()))
StopIteration
```
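The traceback bottoms out in `next(iter(...))` over what appears to be an empty dict of image features — a standalone reproduction of that Python behavior (not lerobot code):

```python
image_features = {}  # a state-only dataset yields no camera keys

try:
    first_image_key, first_image_ft = next(iter(image_features.items()))
except StopIteration:
    result = "StopIteration: the dict has no items to unpack"

print(result)
```

So any path that reaches that validation with zero image features will raise, regardless of the rest of the config.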
Looking at the source code it seems its always checking for image features in the validate feature function, but I just want to train a diffusion policy with no images. How do I do this? | https://github.com/huggingface/lerobot/issues/854 | closed | [
"question",
"policies",
"stale"
] | 2025-03-12T16:01:19Z | 2025-10-26T02:30:57Z | null | Nicholas-Baldassini |
huggingface/diffusers | 11,045 | Crash when loading Flux Schnell 1 model with train_dreambooth_lora_flux | ### Describe the bug
When using the `Diffusers/example/dreambooth/train_dreambooth_lora_flux` script with the Flux Schnell 1 model, the process consistently crashes during the transformer shard loading at 33% (1/3), causing my entire Google JupyterLab kernel to crash.
**Question:** Is this related to using the Flux Schnell model instead of a Dev model? Is there a known incompatibility?
**Logs:**
```
03/12/2025 14:14:26 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: bf16
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{'use_karras_sigmas', 'shift_terminal', 'use_beta_sigmas', 'time_shift_type', 'invert_sigmas', 'use_exponential_sigmas'} was not found in config. Values will be initialized to default values.
Loading checkpoint shards:   0%|                 | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:  50%|████████        | 1/2 [00:13<00:13, 13.01s/it]
Loading checkpoint shards: 100%|████████████████| 2/2 [00:25<00:00, 12.53s/it]
Loading checkpoint shards: 100%|████████████████| 2/2 [00:25<00:00, 12.60s/it]
Instantiating AutoencoderKL model under default dtype torch.float32.
All model checkpoint weights were used when initializing AutoencoderKL.
All the weights of AutoencoderKL were initialized from the model checkpoint at /home/jupyter/flux_model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use AutoencoderKL for predictions without further training.
Instantiating FluxTransformer2DModel model under default dtype torch.float32.
{'out_channels', 'axes_dims_rope'} was not found in config. Values will be initialized to default values.
Loading checkpoint shards:   0%|                 | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:  33%|█████▎          | 1/3 [00:26<00:52, 26.10s/it]
```
### Reproduction
```shell
export MODEL_NAME="black-forest-labs/FLUX.1-schnell"
export INSTANCE_DIR="images"
export OUTPUT_DIR="output"
accelerate launch train_dreambooth_flux.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --guidance_scale=1 \
  --gradient_accumulation_steps=4 \
  --optimizer="prodigy" \
  --learning_rate=1. \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0"
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.10.16
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.3
- Transformers version: 4.49.0
- Accelerate version: 1.4.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA L4, 23034 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11045 | closed | [
"bug",
"stale"
] | 2025-03-12T15:08:11Z | 2025-05-07T15:18:15Z | 4 | rleygonie |
huggingface/diffusers | 11,043 | When will we be getting Quanto support for Wan 2.1? | The diffusers quantizers module currently doesn't contain an entry for Quanto:
https://github.com/huggingface/diffusers/tree/main/src/diffusers/quantizers
Isn't this needed to perform requantization on a quantized Transformer for WAN 2.1?
Currently we can't do this due to missing Quanto quantizer after we've quantized and stored a Transformer:
```python
print('Quantize transformer')

class QuantizedWanTransformer3DModel(QuantizedDiffusersModel):
    base_class = WanTransformer3DModel

transformer = QuantizedWanTransformer3DModel.from_pretrained(
    "./wan quantro T2V 14B Diffusers/basemodel/wantransformer3dmodel_qint8"
).to(dtype=dtype)
```
 | https://github.com/huggingface/diffusers/issues/11043 | closed | [] | 2025-03-12T12:43:59Z | 2025-03-23T18:17:53Z | 2 | ukaprch |
huggingface/lerobot | 853 | How to customize adding other robot and manipulator? | Thanks for your great work! I have a problem: how do I add a custom robot and manipulator?
I have a 7-DOF bimanual manipulator robot powered by servo motors. I want to add it to lerobot so I can use this fantastic platform to collect data and train, especially with the ACT and diffusion policies.
I have the URDF file and have already set it up in ROS MoveIt and Isaac Sim, using RS-485 to drive the real robot.
I checked the code, and it seems I should create a new YAML file in /configs/robot and some other files for my robot.
Would that be simpler than directly collecting data and training with the ACT repository? Is there any tutorial on how to add a custom robot for a beginner?
Thanks a lot !
 | https://github.com/huggingface/lerobot/issues/853 | closed | [
"question",
"robots"
] | 2025-03-12T11:39:19Z | 2025-10-08T20:16:23Z | null | meijie-jesse |
huggingface/smollm | 65 | How to set video size when fine tuning | Hi,
I've tried a bunch of variants but I can't seem to figure out how to set the video size. Currently, I have:
```py
processor.video_size = { "longest_edge": 128 }
processor.do_image_splitting = False
def sample_indices_fn(metadata, num_frames=None, fps=None, **kwargs):
return np.arange(0, 20, dtype=int)
messages = [
{"role": "user", "content": [
{ "type": "video", "path": example["clip_chunked_path"] },
] },
{
"role": "assistant",
"content": [
{"type": "text", "text": json.dumps(last_player_inputs)},
]
}
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
sample_indices_fn=sample_indices_fn,
video_load_backend="torchvision",
images_kwargs={ "max_image_size": {"longest_edge": 128 } }
).to(model.device, dtype=model.dtype)
print("FRAMES", inputs["pixel_values"].shape)
```
Which gives me a pixel_values shape of `[1, 20, 3, 128, 128]` (which is what I want), but then training crashes:
```
(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [29,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [30,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [31,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
2025-03-12 04:16:13,286 ERROR tune_controller.py:1331 -- Trial task failed for trial TorchTrainer_4b80b_00000
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/air/execution/_internal/event_manager.py", line 110, in resolve_future
result = ray.get(future)
^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/worker.py", line 2772, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/worker.py", line 919, in get_objects
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(RuntimeError): ray::_Inner.train() (pid=308044, ip=172.31.24.115, actor_id=164821b0515a3af42f0d03bc68000000, repr=TorchTrainer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/tune/trainable/trainable.py", line 331, in train
raise skipped from exception_cause(skipped)
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/utils.py", line 57, in check_for_failure
ray.get(object_ref)
^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ray.exceptions.RayTaskError(RuntimeError): ray::_RayTrainWorker__execute.get_next() (pid=308152, ip=172.31.24.115, actor_id=3794a93b2a61f6b6efb8496d68000000, repr=<ray.train._internal.worker_group.RayTrainWorker object at 0x79e43e8d7890>)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/worker_group.py", line 33, in __execute
raise skipped from exception_cause(skipped)
File "/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/utils.py", line 176, in discard_return_wrapper
train_func(*args, **kwargs)
File "/tmp/ray/session_2025-03-04_07-50-04_397300_8643/runtime_resources/working_dir_files/_ray_pkg_77cdef2c25570eb4/agent/train_smol.py", line 214, in train_func
trainer.train()
File "/home/ray/anaconda3/lib/python3.12/site-packages/transformers/trainer.py", line 2243, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/ray/anaconda3/lib/python3.12/site-packages/transformers/trainer.py", line 2554, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_i | https://github.com/huggingface/smollm/issues/65 | open | [
"Video"
] | 2025-03-12T11:20:28Z | 2025-07-29T13:12:05Z | null | FredrikNoren |
huggingface/accelerate | 3,437 | Need help on how to disable enable_model_cpu_offload / enable_sequential_cpu_offload | So during my testing when used individually, I observed that
enable_sequential_cpu_offload require- 11 GB VRAM
enable_model_cpu_offload require - 8 GB VRAM
I am using Diffusers + nunchaku + sd_embed
Problem: sd_embed does not support enable_sequential_cpu_offload but support enable_model_cpu_offload
Requirement:
1. Form pipe
2. Use sd_embed to generate prompt_embeds using enable_model_cpu_offload
3. Disable enable_model_cpu_offload
4. Enable enable_sequential_cpu_offload and do inference
So I tried this code
1. During prompt_embeds VRAM is ~6 GB
2. During inference VRAM is ~8GB
I noticed enable_model_cpu_offload is not disabled after invoking optionally_disable_offloading and enabling enable_sequential_cpu_offload; the VRAM requirement remains the same as with enable_model_cpu_offload.
Is this something that is doable or not supported? Any guidance is appreciated.
```python
import torch
from diffusers import FluxPipeline
import torch.nn as nn
from accelerate.hooks import CpuOffload, AlignDevicesHook, remove_hook_from_module
from nunchaku import NunchakuFluxTransformer2dModel, NunchakuT5EncoderModel
from sd_embed.embedding_funcs import get_weighted_text_embeddings_flux1
def optionally_disable_offloading(_pipeline):
is_model_cpu_offload = False
is_sequential_cpu_offload = False
if _pipeline is not None:
for _, component in _pipeline.components.items():
if isinstance(component, nn.Module) and hasattr(component, "_hf_hook"):
if not is_model_cpu_offload:
is_model_cpu_offload = isinstance(component._hf_hook, CpuOffload)
if not is_sequential_cpu_offload:
is_sequential_cpu_offload = isinstance(component._hf_hook, AlignDevicesHook)
remove_hook_from_module(component, recurse=True)
return (is_model_cpu_offload, is_sequential_cpu_offload)
transformer = NunchakuFluxTransformer2dModel.from_pretrained("mit-han-lab/svdq-int4-flux.1-schnell")
text_encoder_2 = NunchakuT5EncoderModel.from_pretrained("mit-han-lab/svdq-flux.1-t5")
pipeline = FluxPipeline.from_pretrained(
"black-forest-labs/FLUX.1-schnell",
text_encoder_2=text_encoder_2,
transformer=transformer,
torch_dtype=torch.bfloat16,
)
pipeline.enable_model_cpu_offload()
prompt = """\
A dreamy, soft-focus photograph capturing a romantic Jane Austen movie scene,
in the style of Agnes Cecile. Delicate watercolors, misty background,
Regency-era couple, tender embrace, period clothing, flowing dress, dappled sunlight,
ethereal glow, gentle expressions, intricate lace, muted pastels, serene countryside,
timeless romance, poetic atmosphere, wistful mood, look at camera.
"""
prompt_embeds, pooled_prompt_embeds = get_weighted_text_embeddings_flux1(
pipe = pipeline
, prompt = prompt
)
print(">>>>>>>", optionally_disable_offloading(pipeline))
pipeline.enable_sequential_cpu_offload()
image = pipeline(
prompt_embeds=prompt_embeds,
pooled_prompt_embeds=pooled_prompt_embeds,
num_inference_steps=4,
guidance_scale=3.5,
generator=torch.Generator(device="cpu").manual_seed(123456)
).images[0]
image.save("flux.1-schnell_sd-embed1.png")
prompt = """\
A dreamy, soft-focus photograph capturing a romantic Jane Austen movie scene,
in the style of Agnes Cecile. Delicate watercolors, misty background,
Regency-era couple, tender embrace, period clothing, flowing dress, dappled sunlight,
ethereal glow, gentle expressions, intricate lace, muted pastels, serene countryside,
timeless romance, poetic atmosphere, wistful mood, look at camera.
"""
print(">>>>>>>", optionally_disable_offloading(pipeline))
pipeline.enable_model_cpu_offload()
prompt_embeds, pooled_prompt_embeds = get_weighted_text_embeddings_flux1(
pipe = pipeline
, prompt = prompt
)
print(">>>>>>>", optionally_disable_offloading(pipeline))
pipeline.enable_sequential_cpu_offload()
image = pipeline(
prompt_embeds=prompt_embeds,
pooled_prompt_embeds=pooled_prompt_embeds,
num_inference_steps=4,
guidance_scale=3.5,
generator=torch.Generator(device="cpu").manual_seed(12345678)
).images[0]
image.save("flux.1-schnell_sd-embed2.png")
``` | https://github.com/huggingface/accelerate/issues/3437 | closed | [] | 2025-03-12T09:29:08Z | 2025-03-12T10:10:33Z | null | nitinmukesh |
huggingface/diffusers | 11,042 | ZeroDivisionError when performing forward pass with UNet3DConditionModel | ### Describe the bug
# ZeroDivisionError when performing forward pass with UNet3DConditionModel
I'm encountering a ZeroDivisionError when attempting to perform a forward pass with the UNet3DConditionModel. This seems to be related to the num_attention_heads parameter being None, which causes self.inner_dim to be 0.
Here's the code I'm using:
```python
from diffusers import UNet3DConditionModel
import torch
model = UNet3DConditionModel(
down_block_types=(
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"DownBlock3D",
),
up_block_types=(
"UpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
),
block_out_channels=(32, 64, 128, 128),
norm_num_groups=4,
)
data = torch.randn(1, 4, 32, 32, 32)
model(data, timestep=3, encoder_hidden_states=torch.zeros(1, 4, 32, 32, 32))
```
The error traceback indicates that the issue occurs in the attention processing:
```
ZeroDivisionError: integer division or modulo by zero
```
This seems to be because num_attention_heads is None, leading to self.inner_dim = 0 in the transformer configuration.
I noticed that in the UNet3DConditionModel implementation, there's a check that raises an error if num_attention_heads is provided:
```python
if num_attention_heads is not None:
raise NotImplementedError(
"At the moment it is not possible to define the number of attention heads via num_attention_heads because of a naming issue as described in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131 . Passing num_attention_heads will only be supported in diffusers v0.19."
)
```
Given this limitation, I'm unsure how to properly configure the model to avoid this error. Could you provide guidance on:
1. How to correctly perform a forward pass with demo hidden states
2. What parameters I should adjust to ensure the model is properly configured
3. If there's a workaround for this issue in the current version of diffusers
Thank you for your assistance!
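For reference, the failure mode the report describes reduces to a division by the head count; a pure-Python sketch of that arithmetic (a guess at the mechanism based on the description above, not traced diffusers code):

```python
def per_head_dim(inner_dim: int, num_heads: int) -> int:
    # mirrors the `dim // heads` split inside attention; a head count of 0
    # (from num_attention_heads=None collapsing inner_dim to 0) reproduces
    # the ZeroDivisionError from the report
    return inner_dim // num_heads
```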
### Reproduction
```python
from diffusers import UNet3DConditionModel
import torch
model = UNet3DConditionModel(
down_block_types=(
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"CrossAttnDownBlock3D",
"DownBlock3D",
),
up_block_types=(
"UpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
"CrossAttnUpBlock3D",
),
block_out_channels=(32, 64, 128, 128),
norm_num_groups=4,
)
data = torch.randn(1, 4, 32, 32, 32)
model(data, timestep=3, encoder_hidden_states=torch.zeros(1, 4, 32, 32, 32))
```
### Logs
```shell
```
### System Info
Python 3.11.10
diffusers version 0.32.2
ubuntu 24.04
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11042 | closed | [
"bug"
] | 2025-03-12T09:26:01Z | 2025-03-13T02:00:12Z | 2 | txz32102 |
huggingface/lerobot | 851 | Hello, I would like to ask if I can use my ROS2 MoveIt2 robotic arm? | Can it support ROS training? I believe this would be beneficial for ecosystem development. | https://github.com/huggingface/lerobot/issues/851 | open | [
"question"
] | 2025-03-12T07:39:51Z | 2025-08-04T19:29:03Z | null | Gates-456 |
huggingface/open-r1 | 502 | How to use vllm with 2 GPUs? | As stated in GRPO OOM #475, the vLLM KV-cache initialization is so large that a single A100 80GB cannot hold it, while I have 8*A100 in total.
However, only 1 GPU is allowed to be assigned to vLLM, per `vllm_device: auto` and `ib/python3.10/site-packages/trl/trainer/grpo_trainer.py`.
How should I solve this issue? Does anybody know?
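One mitigation, independent of multi-GPU vLLM support, is to shrink vLLM's KV-cache reservation in the GRPO recipe. A hedged yaml fragment; whether your trl version exposes these exact fields needs checking:

```yaml
# Hypothetical recipe fragment: reduce vLLM's memory footprint on the
# single generation GPU (field availability depends on the trl version).
vllm_device: auto
vllm_gpu_memory_utilization: 0.5   # smaller KV-cache reservation
vllm_max_model_len: 4096           # cap context length if your prompts allow it
```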
| https://github.com/huggingface/open-r1/issues/502 | open | [] | 2025-03-12T03:36:18Z | 2025-06-03T11:55:47Z | null | greatxue |
huggingface/diffusers | 11,036 | Why perform the following operations on the latent condition? | In the code: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
lines 395-404:
```
latents_mean = (
torch.tensor(self.vae.config.latents_mean)
.view(1, self.vae.config.z_dim, 1, 1, 1)
.to(latents.device, latents.dtype)
)
latents_std = 1.0 / torch.tensor(self.vae.config.latents_std).view(1, self.vae.config.z_dim, 1, 1, 1).to(
latents.device, latents.dtype
)
latent_condition = (latent_condition - latents_mean) * latents_std
```
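One plausible reading of the "why" (an assumption; the thread does not confirm it) is that the transformer expects latents normalized by these per-channel statistics, and the two codebases simply place, or omit, that step in different layers. Numerically the operation is just a per-channel affine rescale; a pure-Python sketch with made-up statistics:

```python
# Numeric sketch of the per-channel rescale above; the statistics here are
# made up, while the real ones come from vae.config.latents_mean and
# vae.config.latents_std.
latents_mean = [0.5, -0.2]
inv_latents_std = [1.0 / 2.0, 1.0 / 4.0]  # the code precomputes 1/std

def normalize(latent, mean, inv_std):
    # (x - mean) * (1 / std), applied channel-wise
    return [(x - m) * s for x, m, s in zip(latent, mean, inv_std)]
```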
The official inference code of Wan2.1 does not perform similar operations:
https://github.com/Wan-Video/Wan2.1/blob/main/wan/image2video.py#L237 | https://github.com/huggingface/diffusers/issues/11036 | closed | [] | 2025-03-12T02:32:09Z | 2025-03-15T02:40:13Z | 2 | trouble-maker007 |
huggingface/lerobot | 847 | Is there a way Merge | Convert | Edit datasets function or a way how we can train model using different datasets ? | Hey, everyone.
At the moment, we have this problem: we have recorded datasets with around 100 episodes each, but we would like to train our model with 1000 episodes. Unfortunately, we didn't find a way to load multiple datasets into a single policy training job. Is it even possible? If not, is there a way to merge a couple of small datasets into a big one?
If none of that is possible, is there a way to convert to HDF5?
I was referencing https://github.com/huggingface/lerobot/issues/533, but there are no answers as well.
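For what it's worth, the core of any merge is re-indexing episodes so the second dataset's indices continue after the first. A minimal sketch with plain dicts standing in for LeRobotDataset rows (the real format also carries videos and stats that would need the same offset treatment):

```python
# Plain dicts stand in for dataset rows; a real merge would also have to
# offset frame/video paths and recompute dataset statistics.
def merge_episodes(*datasets):
    merged, offset = [], 0
    for rows in datasets:
        episodes_seen = 0
        for row in rows:
            new_row = dict(row)
            new_row["episode_index"] = row["episode_index"] + offset
            merged.append(new_row)
            episodes_seen = max(episodes_seen, row["episode_index"] + 1)
        offset += episodes_seen
    return merged
```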
| https://github.com/huggingface/lerobot/issues/847 | closed | [
"question",
"policies",
"dataset"
] | 2025-03-11T17:25:08Z | 2025-10-17T12:09:32Z | null | runmaget |
huggingface/lerobot | 846 | How to convert my own dataset to LerobotDataset format? | Hi, I am new to Lerobot and have a dataset in my own format. I would like to convert it to the LerobotDataset format.
I referred to `lerobot/scripts/push_dataset_to_hub.py`, but it seems to be deprecated. Could you provide guidance or an updated method for converting custom datasets?
Thanks in advance! | https://github.com/huggingface/lerobot/issues/846 | closed | [
"question",
"dataset"
] | 2025-03-11T09:17:23Z | 2025-04-15T00:59:10Z | null | yilin404 |
huggingface/open-r1 | 498 | How to Enable enforce_eager or Disable CUDA Graph in Evaluation | Evaluation code is currently using lighteval and vLLM for inference, and I would like to disable CUDA Graph by enabling options like ```enforce_eager```. However, I could not find a command-line argument for this in ```$MODEL_ARGS```. Additionally, setting it as an environment variable (e.g., VLLM_ENFORCE_EAGER) does not seem to work.
Is there a way to achieve this? Any guidance would be appreciated. | https://github.com/huggingface/open-r1/issues/498 | closed | [] | 2025-03-11T00:25:49Z | 2025-03-11T04:54:02Z | null | superdocker |
huggingface/diffusers | 11,020 | Multi-gpus Context Parallel training support? | Nowadays, the number of parameters in video generation models is increasing, and the video length is increasing. When training video models, it is difficult to fit a complete video sequence(200k~ tokens) on a single GPU. Some sequence parallel training technologies can solve this problem, such as the [fastvideo](https://github.com/hao-ai-lab/FastVideo) training framework, but the imperfection of this framework makes it difficult to use. Can the diffusers framework support sequence parallel training? | https://github.com/huggingface/diffusers/issues/11020 | open | [] | 2025-03-10T11:45:30Z | 2025-07-18T13:05:08Z | 2 | yinian-lw |
huggingface/blog | 2,728 | Open In "02_how_to_generate", code cell 1 has an outdated version of tensorflow | The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.
If we run that cell, we get the error: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1.
huggingface/blog | 2,727 | Open In "02_how_to_generate", code cell 1 has an outdated version of tensorflow | The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.
If we run that cell, we get the error: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1.
huggingface/datasets | 7,442 | Flexible Loader | ### Feature request
Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset?
It can be something as simple as this one:
```python
import os

from datasets import load_dataset, load_from_disk

def load_hf_dataset(path_or_name):
    if os.path.exists(path_or_name):
        return load_from_disk(path_or_name)
    else:
        return load_dataset(path_or_name)
```
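One wrinkle with the sketch above: a plain `os.path.exists` check cannot tell a `save_to_disk` directory from a folder of raw data files, which `load_dataset` can also take. A hedged refinement, assuming `save_to_disk` outputs carry a `state.json` / `dataset_info.json` marker (true for recent `datasets` releases, but worth re-checking against the version you run):

```python
import os

# Hypothetical marker files written by Dataset.save_to_disk.
_DISK_MARKERS = ("state.json", "dataset_info.json")

def pick_loader(path_or_name: str) -> str:
    """Return which datasets loader a path should go through."""
    if os.path.isdir(path_or_name):
        if set(os.listdir(path_or_name)) & set(_DISK_MARKERS):
            return "load_from_disk"
        # a bare folder of csv/parquet files still goes through load_dataset
    return "load_dataset"
```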
### Motivation
This can be done inside the user codebase, too, but in my experience, it becomes repetitive code.
### Your contribution
I can open a pull request. | https://github.com/huggingface/datasets/issues/7442 | open | [
"enhancement"
] | 2025-03-09T16:55:03Z | 2025-03-27T23:58:17Z | 3 | dipta007 |
huggingface/chat-ui | 1,751 | Analyze uploaded PDF files through OpenAI API | When I upload a PDF file and leverage it, I will get the base64 data. But I didn't find the code to process it in endpoints/openai, while it can handle the image base64 data. Besides, I failed to transfer it back to text. How can I analyze the file through OpenAI API?
 | https://github.com/huggingface/chat-ui/issues/1751 | open | [
"support"
] | 2025-03-09T09:31:13Z | 2025-03-15T18:38:17Z | 2 | zu0feng |
huggingface/hf-hub | 99 | Where is the `0.4.2` commit? | I saw on [crates.io](https://crates.io/crates/hf-hub/versions) that the latest version of hf-hub is 0.4.2, but I can't find the 0.4.2 tag on GitHub. Could you tell me what is the commit ID corresponding to this version?
I sincerely suggest adding a corresponding tag for each version release; this would avoid this kind of inefficient communication and speed up other contributors' work. 🙏
huggingface/transformers | 36,613 | In "02_how_to_generate", code cell 1 has an error message | ### System Info
In "02_how_to_generate", code cell 1 has an error message but the rest works fine: ERROR: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1.
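Since tensorflow==2.1 wheels are no longer published for current Python versions, a hedged fix is to relax the notebook's pin to a release that still exists; the exact floor below is an assumption:

```
tensorflow>=2.12,<3
```

With that pin, pip can resolve one of the versions listed in the error instead of failing.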
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Run code cell 1
### Expected behavior
No error message should appear when running code cell | https://github.com/huggingface/transformers/issues/36613 | closed | [
"bug"
] | 2025-03-08T07:46:39Z | 2025-04-16T08:03:04Z | null | kvutien |
huggingface/diffusers | 11,008 | Support wan2.1 video model? | ### Did you like the remote VAE solution?
Yes.
### What can be improved about the current solution?
Wan2.1 video model support is appreciated!
### What other VAEs you would like to see if the pilot goes well?
Wan2.1 video model support is appreciated!
### Notify the members of the team
@hlky @sayakpaul | https://github.com/huggingface/diffusers/issues/11008 | open | [
"stale"
] | 2025-03-08T04:21:33Z | 2025-05-09T15:03:47Z | 6 | kexul |
huggingface/trl | 3,028 | Distill teacher models where the vocab size of teacher and student is different | I am trying to distill Qwen2.5-7B-Instruct into Qwen2.5-0.5B-Instruct using the sample code below:
```python
from datasets import Dataset
from trl import GKDConfig, GKDTrainer
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
)
NUM_DUMMY_SAMPLES = 100
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
teacher_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
train_dataset = Dataset.from_dict(
{
"messages": [
[
{"role": "user", "content": "Hi, how are you?"},
{"role": "assistant", "content": "I'm great thanks"},
]
]
* NUM_DUMMY_SAMPLES
}
)
eval_dataset = Dataset.from_dict(
{
"messages": [
[
{"role": "user", "content": "What colour is the sky?"},
{"role": "assistant", "content": "The sky is blue"},
]
]
* NUM_DUMMY_SAMPLES
}
)
training_args = GKDConfig(output_dir="gkd-model", per_device_train_batch_size=1)
trainer = GKDTrainer(
model=model,
teacher_model=teacher_model,
args=training_args,
processing_class=tokenizer,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
trainer.train()
```
But this gives me an error because their vocab sizes are different (and their tokenizers might be too). Is there a workaround for these kinds of situations? How are such cases handled? | https://github.com/huggingface/trl/issues/3028 | open | [
"🏋 GKD"
] | 2025-03-08T00:29:01Z | 2025-10-29T04:15:50Z | null | shaunakjoshi12 |
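The vocab-size mismatch above has a common workaround when the two tokenizers share the same underlying vocabulary and differ only in padded size (reportedly the case across Qwen2.5 checkpoints, but verify for your pair): truncate both logit rows to the shared prefix before computing the distillation loss. A sketch with plain lists standing in for tensors:

```python
# Plain lists stand in for logit tensors over the vocab dimension; with
# torch tensors this would be a slice on the last axis instead.
def align_vocab(teacher_logits, student_logits):
    shared = min(len(teacher_logits), len(student_logits))
    return teacher_logits[:shared], student_logits[:shared]
```

If the tokenizers do not share a vocabulary, slicing is not valid and a cross-tokenizer distillation method would be needed instead.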
huggingface/diffusers | 11,005 | pipeline_wan_i2v.py: minor discrepancy between arg default and docstring | ### Describe the bug
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py
Line 447 (arg default):
```output_type: Optional[str] = "np",```
Line 496 (docstring):
```output_type (`str`, *optional*, defaults to `"pil"`):```
### Reproduction
n/a
### Logs
```shell
```
### System Info
n/a
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11005 | closed | [
"bug",
"good first issue",
"help wanted",
"contributions-welcome"
] | 2025-03-07T16:37:48Z | 2025-04-24T18:49:38Z | 2 | rolux |
huggingface/finetrainers | 301 | How to train text-to-video generation model on different generation models using Disney dataset? | The current repository does not explicitly describe how to switch training methods between t2v and i2v.
| https://github.com/huggingface/finetrainers/issues/301 | closed | [] | 2025-03-07T16:02:42Z | 2025-03-07T16:08:06Z | null | kjosh925 |
huggingface/speech-to-speech | 159 | What is `from df.enhance import enhance, init_df` in vad_handler? | https://github.com/huggingface/speech-to-speech/issues/159 | open | [] | 2025-03-07T15:07:53Z | 2025-03-07T15:07:53Z | null | Manukrishna2K | |
huggingface/diffusers | 11,002 | Any chance class members like self._interrupt could be defined in __init__ across pipelines? | ### Describe the bug
I think there is no benefit to late initialization here, and it puts a burden on the library user that could easily be avoided. It also leads to some confusion since it is uncommon, and code inspection tools flag it. Let me know if I'm missing something.
### Reproduction
```python
class WanImageToVideoPipeline:
def __init__(self):
pass
def __call__(self, *args, **kwargs):
self._interrupt = False
return 23
@property
def interrupt(self):
return self._interrupt
pipe = WanImageToVideoPipeline()
def on_async_user_abort_call_me_any_time():
# check if already interrupted but mid step
print(pipe.interrupt)
on_async_user_abort_call_me_any_time()
```
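A minimal sketch of the change being requested, using the reproduction's stand-in class rather than the actual diffusers source: initialize the flag in `__init__` so the property is safe to read before the first `__call__`:

```python
class WanImageToVideoPipeline:
    def __init__(self):
        # eager default: the property is valid from construction onward
        self._interrupt = False

    def __call__(self, *args, **kwargs):
        self._interrupt = False
        return 23

    @property
    def interrupt(self):
        return self._interrupt
```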
### Logs
```shell
AttributeError: 'WanImageToVideoPipeline' object has no attribute '_interrupt'. Did you mean: 'interrupt'?
```
### System Info
Diffusers 0.33.0.dev0, Linux, Python 3.10
### Who can help?
@yiyixuxu @DN6 | https://github.com/huggingface/diffusers/issues/11002 | open | [
"bug",
"help wanted",
"contributions-welcome"
] | 2025-03-07T11:28:27Z | 2025-05-26T07:21:47Z | 9 | spezialspezial |
huggingface/diffusers | 10,993 | f-divergence | Is there a plan to implement the f-divergence scheduler? I would like to contribute that to the library. | https://github.com/huggingface/diffusers/issues/10993 | open | [
"stale"
] | 2025-03-06T22:46:13Z | 2025-04-06T15:02:55Z | 5 | manmeet3591 |
huggingface/smolagents | 902 | How to populate custom variables in prompt template? | I'm trying to configure custom template variables in my system prompt.
**Current Implementation:**
1. I have a system prompt template with custom variables:
```python
CUSTOM_CODE_SYSTEM_PROMPT = """You are {{ bot_name }}, a customer support assistant...
{{ formatting_guidelines }}
```
2. Agent creation and configuration:
```python
from smolagents import CodeAgent, LiteLLMModel
def get_agent(platform: str = "whatsapp", variables: dict = None):
manager_agent = CodeAgent(
tools=[ClinicKnowledgeTool()],
model=model,
max_steps=3,
)
return manager_agent
```
3. Calling the agent:
```python
agent = get_agent(
platform=platform,
variables={
"conversation_history": conversation_history,
"formatting_guidelines ": "test",
},
)
agent.prompt_templates["system_prompt"] = CUSTOM_CODE_SYSTEM_PROMPT
```
**Questions:**
1. What's the correct way to populate template variables like `{{ bot_name }}` and `{{ formatting_guidelines }}` in the system prompt?
2. How do I handle dynamic variables like `conversation_history` that change with each request?
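For question 1, smolagents prompt templates are Jinja templates, so one option (an assumption about your setup) is to render the template yourself, e.g. with jinja2, before assigning it to `prompt_templates["system_prompt"]`. A stdlib stand-in for that substitution step:

```python
import re

def render(template: str, variables: dict) -> str:
    # substitute {{ var }} placeholders; unknown names are left untouched
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )
```

For question 2, dynamic values like `conversation_history` would then simply be re-rendered per request, before each `agent.run` call.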
**Environment:**
- smolagents v1.10.0
- Python 3.10+
- FastAPI integration | https://github.com/huggingface/smolagents/issues/902 | closed | [] | 2025-03-06T20:45:51Z | 2025-03-07T08:54:22Z | null | Luisotee |