| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers.js | 1,339 | Model is cached, but still reloads from network? | ### Question
I have this code in a React project:
```
import { env, pipeline } from "@xenova/transformers";
const model = await pipeline("translation", "Xenova/opus-mt-de-en");
let transText = await model("hallo, ich bin hier");
```
When I inspect the browser cache, I see relevant files in "cache storage". (xenova-opus-mt-de-en...)
But when I reload, the Network tab shows it being re-downloaded each time from cdn.jsdelivr.net.
How can I get it to use the cached version instead of making a network request? | https://github.com/huggingface/transformers.js/issues/1339 | closed | [
"question"
] | 2025-06-11T16:19:26Z | 2025-06-27T06:06:25Z | null | patrickinminneapolis |
huggingface/peft | 2,583 | Lora transfer learning | Hello, I am training a LoRA model with the Flux Fill pipeline using diffusers+peft+accelerate. I already have a general-purpose LoRA model for my application, trained for 5k steps on a large dataset. Now I want to do transfer learning: fine-tune on a very small dataset, but starting from the previous LoRA model instead of training from scratch. How can I do it? My LoRA config is as follows. Currently I am using the `gaussian` method to initialize the LoRA model. Is there any way to initialize from a pretrained LoRA model instead of randomly? Thanks in advance.
```
lora_config:
r: 256
lora_alpha: 256
init_lora_weights: "gaussian"
target_modules: "(.*x_embedder|.*(?<!single_)transformer_blocks\\.[0-9]+\\.norm1\\.linear|.*(?<!single_)transformer_blocks\\.[0-9]+\\.attn\\.to_k|.*(?<!single_)transformer_blocks\\.[0-9]+\\.attn\\.to_q|.*(?<!single_)transformer_blocks\\.[0-9]+\\.attn\\.to_v|.*(?<!single_)transformer_blocks\\.[0-9]+\\.attn\\.to_out\\.0|.*(?<!single_)transformer_blocks\\.[0-9]+\\.ff\\.net\\.2|.*single_transformer_blocks\\.[0-9]+\\.norm\\.linear|.*single_transformer_blocks\\.[0-9]+\\.proj_mlp|.*single_transformer_blocks\\.[0-9]+\\.proj_out|.*single_transformer_blocks\\.[0-9]+\\.attn.to_k|.*single_transformer_blocks\\.[0-9]+\\.attn.to_q|.*single_transformer_blocks\\.[0-9]+\\.attn.to_v|.*single_transformer_blocks\\.[0-9]+\\.attn.to_out)"
``` | https://github.com/huggingface/peft/issues/2583 | closed | [] | 2025-06-11T12:00:25Z | 2025-07-20T15:04:05Z | 4 | hardikdava |
huggingface/transformers | 38,750 | Is it a good choice to error early when `output_attentions=True` and the attn implementation is not `eager`? | ### System Info
Before this PR [38288](https://github.com/huggingface/transformers/pull/38288), the program ran smoothly even when we set `output_attentions=True` and the attn implementation was not `eager`, since it would fall back to eager mode. After this PR, it throws an error directly: [L342](https://github.com/huggingface/transformers/blob/main/src/transformers/configuration_utils.py#L342). I think it would be better to just throw a warning and fall back to `eager` attention. Is it possible to revert this, or make a small direct change based on this PR?
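The behavior I am proposing, warn and fall back instead of raising, can be sketched in isolation (plain Python, not the actual transformers code; the function name is made up for illustration):

```python
import warnings

def resolve_attn_implementation(requested: str, output_attentions: bool) -> str:
    """Pick the attention implementation to actually use.

    If the caller wants attention weights back but the requested backend
    cannot return them, fall back to "eager" with a warning instead of
    raising, which is the behavior this issue proposes.
    """
    if output_attentions and requested != "eager":
        warnings.warn(
            f"`output_attentions=True` is not supported by '{requested}'; "
            "falling back to 'eager'."
        )
        return "eager"
    return requested

print(resolve_attn_implementation("sdpa", output_attentions=True))   # falls back
print(resolve_attn_implementation("sdpa", output_attentions=False))  # unchanged
```

The user still gets attentions back, and the warning documents the silent downgrade.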
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
We want to make sure the program can run without crashing even when we set `output_attentions=True` and the attn implementation is not `eager` | https://github.com/huggingface/transformers/issues/38750 | closed | [
"bug"
] | 2025-06-11T11:05:48Z | 2025-06-25T08:00:06Z | 2 | kaixuanliu |
huggingface/lerobot | 1,262 | use smolVLA, How to know the current task is completed | I use smolVLA for a wiping task, but it keeps doing the task again and again. How can I judge that the task is completed? Thank you | https://github.com/huggingface/lerobot/issues/1262 | open | [
"question",
"policies"
] | 2025-06-11T08:48:03Z | 2025-08-12T10:04:14Z | null | haoyankai |
huggingface/transformers.js | 1,338 | Question about supporting Float16Array | ### Question
I am trying transformers.js with WebGPU. The performance is great, but I found that transformers.js returns a Float32Array even when the model is quantized to `fp16`:
```javascript
const extractor = await pipeline(
"feature-extraction",
"bge-small-zh-v1.5",
{
device: "webgpu",
dtype: "fp16",
local_files_only: true,
},
);
// ...
const embeddings = await extractor(texts, {pooling: "mean", normalize: true});
console.log(embeddings.data);
// -> Float32Array(5120000) [...]
```
Since the model itself has only 16-bit precision, returning a Float32Array (instead of a [Float16Array](https://caniuse.com/mdn-javascript_builtins_float16array), which is supported in the latest browsers) seems like a waste of memory and performance. Is this observation correct, and are there plans to support Float16Array for better performance? Thanks! | https://github.com/huggingface/transformers.js/issues/1338 | open | [
"question"
] | 2025-06-11T07:29:19Z | 2025-07-03T05:50:56Z | null | xmcp |
huggingface/transformers | 38,745 | [Bug][InformerForPredict] The shape will cause a problem | ### System Info
When I set `InformerConfig.input_size = 1`, I found a bug, but I don't know how to fix it.
- Function Name : `create_network_inputs`
```
time_feat = (
torch.cat(
(
past_time_features[:, self._past_length - self.config.context_length :, ...],
future_time_features,
),
dim=1,
)
if future_values is not None
else past_time_features[:, self._past_length - self.config.context_length :, ...]
)
print(self._past_length)
# target
if past_observed_mask is None:
past_observed_mask = torch.ones_like(past_values)
context = past_values[:, -self.config.context_length :]
observed_context = past_observed_mask[:, -self.config.context_length :]
_, loc, scale = self.scaler(context, observed_context)
inputs = (
(torch.cat((past_values, future_values), dim=1) - loc) / scale
if future_values is not None
else (past_values - loc) / scale
)
print(loc.shape, scale.shape, inputs.shape)
# static features
log_abs_loc = loc.abs().log1p() if self.config.input_size == 1 else loc.squeeze(1).abs().log1p()
log_scale = scale.log() if self.config.input_size == 1 else scale.squeeze(1).log()
print(f"log_abs_loc: {log_abs_loc.shape}, {log_scale.shape}")
print(time_feat.shape, self.config.input_size)
static_feat = torch.cat((log_abs_loc, log_scale), dim=1)
print(time_feat.shape, static_feat.shape)
if static_real_features is not None:
static_feat = torch.cat((static_real_features, static_feat), dim=1)
if static_categorical_features is not None:
embedded_cat = self.embedder(static_categorical_features)
static_feat = torch.cat((embedded_cat, static_feat), dim=1)
print(time_feat.shape, static_feat.shape)
expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
# all features
features = torch.cat((expanded_static_feat, time_feat), dim=-1)
# lagged features
subsequences_length = (
self.config.context_length + self.config.prediction_length
if future_values is not None
else self.config.context_length
)
lagged_sequence = self.get_lagged_subsequences(sequence=inputs, subsequences_length=subsequences_length)
lags_shape = lagged_sequence.shape
reshaped_lagged_sequence = lagged_sequence.reshape(lags_shape[0], lags_shape[1], -1)
if reshaped_lagged_sequence.shape[1] != time_feat.shape[1]:
raise ValueError(
f"input length {reshaped_lagged_sequence.shape[1]} and time feature lengths {time_feat.shape[1]} does not match"
)
# transformer inputs
transformer_inputs = torch.cat((reshaped_lagged_sequence, features), dim=-1)
return transformer_inputs, loc, scale, static_feat
```
As shown above, I added some `print` statements in the library to inspect the shapes; the bug is:
```
Traceback (most recent call last):
File "/home/wjt/luck/FinalWork/alert_models/informer_based_model_3_cpu.py", line 820, in <module>
pipline.train_model()
File "/home/wjt/luck/FinalWork/alert_models/informer_based_model_3_cpu.py", line 466, in train_model
outputs = model(
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py", line 1844, in forward
outputs = self.model(
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py", line 1568, in forward
transformer_inputs, loc, scale, static_feat = self.create_network_inputs(
File "/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py", line 1386, in create_network_inputs
expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
RuntimeError: expand(torch.cuda.FloatTensor{[32, 1, 2, 1]}, size=[-1, 27, -1]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)
```
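The failing call can be reproduced in isolation: `Tensor.expand` requires at least as many target sizes as the tensor has dimensions, and when `input_size == 1` the un-squeezed `loc`/`scale` leave `static_feat` with a trailing size-1 dim, so after `unsqueeze(1)` it is 4-D (`[32, 1, 2, 1]`) while only 3 sizes are provided. A minimal sketch of that precondition (plain Python, mimicking the dimension check, not torch itself):

```python
def check_expand(tensor_shape, sizes):
    """Mimic the dimension-count precondition of torch.Tensor.expand:
    the number of target sizes must be >= the tensor's number of dims."""
    if len(sizes) < len(tensor_shape):
        raise RuntimeError(
            f"expand(shape={list(tensor_shape)}, size={list(sizes)}): the number of "
            f"sizes provided ({len(sizes)}) must be greater or equal to the number "
            f"of dimensions in the tensor ({len(tensor_shape)})"
        )

check_expand((32, 1, 2), (-1, 27, -1))         # 3-D static_feat: fine
try:
    check_expand((32, 1, 2, 1), (-1, 27, -1))  # 4-D static_feat: fails as in the traceback
except RuntimeError as e:
    print(e)
```

So the problem seems to be that for `input_size == 1` the extra trailing dim is never squeezed away before the `unsqueeze(1).expand(...)` call.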
- First
```
log_abs_loc = loc.abs().log1p() if self.config.input | https://github.com/huggingface/transformers/issues/38745 | closed | [
"bug"
] | 2025-06-11T07:22:06Z | 2025-07-20T11:41:45Z | 11 | 2004learner |
huggingface/transformers | 38,740 | [DOCS] Add `pruna` as optimization framework | ### Feature request
Have a section on Pruna AI within the documentation. We did [a similar PR for diffusers](https://github.com/huggingface/diffusers/pull/11688) and thought it would be nice to show how to optimize transformers models too.
.
### Motivation
Have a section on Pruna AI within the documentation to show how to optimize LLMs for inference.
### Your contribution
We could do everything for the PR. | https://github.com/huggingface/transformers/issues/38740 | open | [
"Feature request"
] | 2025-06-11T04:52:33Z | 2025-07-16T08:56:52Z | 8 | davidberenstein1957 |
huggingface/sentence-transformers | 3,390 | How to create a customized model architecture that fits sentence-transformer's training framework? | I'd like to train a two tower model that takes categorical features, floats features in one tower, and the other tower just encodes a document using an out of the box embedding. Then the outputs from both towers are feed into sentence transformers loss function. All the training configuration should reuse sentence transformer's setup (loss function implementation, Training Arguments, etc) as much as possible.
Is this even feasible? Skimmed through the document found this page here (https://www.sbert.net/docs/sentence_transformer/usage/custom_models.html#structure-of-sentence-transformer-models), but the example on this page seems to be creating a new module, but only as part of a purely sequential models, each connected to its next.
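Structurally, the forward pass I have in mind looks something like this framework-free sketch (plain Python; the hash-based toy text encoder and feature dims are made up, just to show the data flow; in sentence-transformers terms each tower would be its own module and the existing losses would consume the resulting embedding pairs):

```python
import math

def tower_features(cat_ids, floats, emb_table):
    """Feature tower: look up categorical embeddings and append float features."""
    vec = []
    for cid in cat_ids:
        vec.extend(emb_table[cid])
    vec.extend(floats)
    return vec

def tower_text(text, dim=4):
    """Stand-in for an off-the-shelf text encoder (hash-based toy embedding)."""
    vec = [0.0] * dim
    for i, ch in enumerate(text):
        vec[i % dim] += ord(ch) / 1000.0
    return vec

def cosine(a, b):
    """Similarity between the two tower outputs, as a contrastive loss would use."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

emb_table = {0: [0.1, 0.2], 1: [0.3, 0.4]}
q = tower_features([0, 1], [0.5, 0.5], emb_table)  # feature tower output (dim 6)
d = tower_text("a document", dim=6)                # text tower output (dim 6)
print(cosine(q, d))
```

The two towers only need to agree on the output embedding dimension for the loss to apply.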
Much appreciated! | https://github.com/huggingface/sentence-transformers/issues/3390 | open | [] | 2025-06-11T03:07:42Z | 2025-06-12T05:05:54Z | null | HuangLED |
huggingface/lerobot | 1,258 | Leader Servo Numbering different from script to documentation | First thank you for sharing this amazing work!
I am initializing the servos for the leader arm and I noticed that the numbering for Wrist Roll and Wrist Pitch is different from the documentation when I ran the script:

wrist_roll is set to 5 in the script but set to 4 in the documentation
wrist_flex is set to 4 in the script but set to 5 (assuming it is Wrist Pitch) in the documentation
I guess it's nothing to worry about?
| https://github.com/huggingface/lerobot/issues/1258 | open | [
"documentation",
"question"
] | 2025-06-10T21:03:03Z | 2025-08-12T10:04:29Z | null | FaboNo |
huggingface/transformers | 38,733 | GRPO per_device_eval_batch_size can't be set as 1, when there is only 1 GPU | `eval batch size must be evenly divisible by the number of generations per prompt. ` When I only have one GPU, I cannot set `per_device_eval_batch_size=1` because there will be no reasonable G to choose from. Is it possible to automatically calculate a value similar to the number of gradient accumulation steps to achieve this feature? | https://github.com/huggingface/transformers/issues/38733 | closed | [] | 2025-06-10T14:58:11Z | 2025-06-11T09:45:32Z | 0 | CasanovaLLL |
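The constraint in the error above is just divisibility: the effective eval batch size (`per_device_eval_batch_size` times the number of devices) must be a multiple of the number of generations per prompt. A helper to pick a compatible value could look like this sketch (plain Python; the function name is made up for illustration):

```python
def compatible_eval_batch_size(desired: int, num_generations: int, world_size: int = 1) -> int:
    """Smallest per-device eval batch size >= desired whose effective size
    (per_device * world_size) is divisible by num_generations."""
    bs = max(1, desired)
    while (bs * world_size) % num_generations != 0:
        bs += 1
    return bs

print(compatible_eval_batch_size(1, num_generations=8))  # -> 8 on a single GPU
```

Something like this, computed automatically (similar to how gradient accumulation steps are derived), would avoid the hard error on single-GPU setups.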
huggingface/lerobot | 1,254 | [Feature Proposal] Planning a new user-friendly simulation environment for new tasks and data collection | Hello and bonjour! First and foremost, I really wanted to thank the team and community for making this wonderful repo. It really helps and guides beginners in this field. And I also wanted to contribute to the community.
Reading the issues here, I found a lot of people trying to run without a physical robot. But with the current Aloha and Xarm simulation environments it is hard to configure and train new tasks. So I was thinking of making a new env where we could do that.
Here is the main new feature:
- New sim env we can use as extra like Xarm, Aloha and Pusht in a new repo.
- Make a simple, game-like GUI that enables controlling the manipulator with only a keyboard and mouse. (Thinking of making a mini robot in HTML that can be controlled with the mouse, with the z axis and gripper on the keyboard.)
- Make it compatible to recent official MuJoCo release for further [update](https://playground.mujoco.org/) and [extension](https://github.com/google-deepmind/mujoco_warp). (Planning to use [MJX](https://mujoco.readthedocs.io/en/stable/mjx.html)(RL compatible) model)
- Realtime inference using mujoco view.
I'm a beginner in this field, so it might be a hard task for me. But I thought this project might help quite a few people, and it also sounds fun to do. So I'll try my best.
What are your thoughts on this proposal? (Sorry if there is already similar features.)
If it is okay, I'll start to dig in.
| https://github.com/huggingface/lerobot/issues/1254 | open | [
"question",
"simulation"
] | 2025-06-10T12:36:13Z | 2025-08-12T10:04:42Z | null | Bigenlight |
huggingface/lerobot | 1,252 | Failed to sync read 'Present_Position' on ids=[2,3,4,6]after 1 tries. [TxRxResult] There is no status packet | My arm is a Koch. When I set the motor ids and baudrates, it reports this error:
Failed to sync read 'Present_Position' on ids=[2,3,4,6]after 1 tries. [TxRxResult] There is no status packet | https://github.com/huggingface/lerobot/issues/1252 | open | [
"question",
"robots"
] | 2025-06-10T10:21:05Z | 2025-09-01T02:24:25Z | null | huazai665 |
huggingface/lerobot | 1,251 | where is async inference | Hi, thanks for your SmolVLA.
I have a question: **where is the async inference?**
The eval.py script doesn't seem to be meant for SmolVLA inference.
Hoping for your early reply; thanks in advance | https://github.com/huggingface/lerobot/issues/1251 | closed | [] | 2025-06-10T07:44:38Z | 2025-06-30T11:35:25Z | null | JuilieZ |
huggingface/transformers.js | 1,336 | node.js WebGPU compatibility and WASM performance in web enviornment | ### Question
Hello!
I've been running some performance benchmarks on whisper models and noticed that the web environment (running in react renderer in electron, separate worker with WASM) produced slower transcription results than the python counterpart (e.g. 1400ms vs 400ms per batch) - both utilizing the same number of threads and data types.
node.js environment running with WASM was almost on par with python, but unfortunately it won't let me pick webgpu as device - only cpu and dml are supported.
The onnxruntime-node package does mention webgpu being supported so I was wondering if it will be available for transformers running in node.js environment.
And I'm also wondering if the performance drop using WASM in the web environment is expected, or if I'm doing something wrong. | https://github.com/huggingface/transformers.js/issues/1336 | open | [
"question"
] | 2025-06-10T06:05:36Z | 2025-06-11T06:53:35Z | null | devnarekm |
huggingface/transformers | 38,709 | `get_video_features` in XCLIPModel always returns `pooled_output` | ### System Info
https://github.com/huggingface/transformers/blob/f4fc42216cd56ab6b68270bf80d811614d8d59e4/src/transformers/models/x_clip/modeling_x_clip.py#L1376
Hi
The `get_video_features` function is hardcoded to always return the `pooled_output`. But sometimes, it might be beneficial to get the `last_hidden_state` instead. Can we fix this behavior?
Thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import av
import torch
import numpy as np
from transformers import AutoProcessor, AutoModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
'''
Decode the video with PyAV decoder.
Args:
container (`av.container.input.InputContainer`): PyAV container.
indices (`List[int]`): List of frame indices to decode.
Returns:
result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
'''
frames = []
container.seek(0)
start_index = indices[0]
end_index = indices[-1]
for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices:
frames.append(frame)
return np.stack([x.to_ndarray(format="rgb24") for x in frames])
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
'''
Sample a given number of frame indices from the video.
Args:
clip_len (`int`): Total number of frames to sample.
frame_sample_rate (`int`): Sample every n-th frame.
seg_len (`int`): Maximum allowed index of sample's last frame.
Returns:
indices (`List[int]`): List of sampled frame indices
'''
converted_len = int(clip_len * frame_sample_rate)
end_idx = np.random.randint(converted_len, seg_len)
start_idx = end_idx - converted_len
indices = np.linspace(start_idx, end_idx, num=clip_len)
indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)
# sample 8 frames
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")
inputs = processor(
videos=list(video),
return_tensors="pt",
padding=True,
)
# forward pass
with torch.no_grad():
outputs = model.get_video_features(**inputs)
print(outputs.shape)
```
### Expected behavior
The `get_video_features` function should have the option to output the `last_hidden_state` as well. | https://github.com/huggingface/transformers/issues/38709 | closed | [
"bug"
] | 2025-06-10T00:51:37Z | 2025-07-18T08:02:50Z | 4 | Vishu26 |
huggingface/lerobot | 1,242 | SmolVLA Gym Simulation - Release? | Hello,
I've trained smolvla_base for 200K steps. I'm trying to run inference and visualize it like we do for aloha or pusht. Could anyone guide me on this?
I don't have a robot arm, so a Gym simulation is something I'm looking for. When will it be released? | https://github.com/huggingface/lerobot/issues/1242 | closed | [
"question",
"policies",
"visualization"
] | 2025-06-09T13:05:38Z | 2025-10-17T11:00:57Z | null | Jaykumaran |
huggingface/smollm | 78 | how to continuously pretrain VLM base model | As titled.
How can I continue pretraining the VLM base model?
"Image",
"Video"
] | 2025-06-09T07:04:57Z | 2025-07-29T12:50:50Z | null | allenliuvip |
huggingface/text-generation-inference | 3,259 | Enable passing arguments to chat templates | ### Feature request
I would like to enable passing parameters to a chat template when using the messages API. Something like:
```python
qwen3_model = HuggingFaceModel(...)
predictor = qwen3_model.deploy(...)
predictor.predict({
"messages": [
{"role": "system", "content": "You are a helpful assistant." },
{"role": "user", "content": "What is deep learning?"}
]
"template_args": { "enable_thinking": False }
})
```
### Motivation
There are models with various custom arguments that can be passed to chat templates. For example, Qwen3 comes with `enable_thinking` parameter than can be either True or False, and CohereLabs c4ai-command-r-plus RAG chat template has a `citation_mode` flag that can be `accurate` or `fast`.
### Your contribution
Unfortunately, no. I don't know Rust beyond some basics. | https://github.com/huggingface/text-generation-inference/issues/3259 | open | [] | 2025-06-09T06:04:27Z | 2025-06-09T07:53:17Z | 2 | alexshtf |
huggingface/datasets | 7,600 | `push_to_hub` is not concurrency safe (dataset schema corruption) | ### Describe the bug
Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.
Consider this scenario:
- we have an Arrow dataset
- there are `N` configs of the dataset
- there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`)
- each process calls `push_to_hub` on their particular config when they're done processing
- all calls to `push_to_hub` succeed
- the `README.md` now has some configs with `new_col` added and some with `new_col` missing
Any attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising).
We have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand.
Reading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded.
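The fix amounts to compare-and-swap semantics: each push carries the revision it was computed from, and the server rejects the commit if the branch has since moved. A toy simulation of the difference (plain Python; `force_push` mirrors the current last-writer-wins behavior, `cas_push` the proposed one):

```python
class Branch:
    def __init__(self, card):
        self.card = card          # dataset card (here: config -> column list)
        self.revision = 0

    def force_push(self, card):
        """Current behavior: last writer wins, regardless of what it read."""
        self.card = card
        self.revision += 1

    def cas_push(self, card, parent_revision):
        """Proposed behavior: refuse the commit if the branch moved on."""
        if parent_revision != self.revision:
            raise RuntimeError("stale parent commit; pull and retry")
        self.card = card
        self.revision += 1

branch = Branch({"config_a": ["col"], "config_b": ["col"]})

# Two processes read the same card, then each updates only its own config.
seen_a = dict(branch.card); seen_b = dict(branch.card)
seen_a["config_a"] = ["col", "new_col"]
seen_b["config_b"] = ["col", "new_col"]

branch.force_push(seen_a)
branch.force_push(seen_b)       # silently drops config_a's new_col
print(branch.card["config_a"])  # -> ['col']  (corrupted: schema change lost)
```

With `cas_push`, the second writer's stale push would raise instead of silently dropping the first writer's schema change, so the caller could re-read the card, merge, and retry.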
### Steps to reproduce the bug
See above.
### Expected behavior
Concurrent edits to disjoint configs of a dataset should never corrupt the dataset schema.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.2
- `fsspec` version: 2023.9.0 | https://github.com/huggingface/datasets/issues/7600 | closed | [] | 2025-06-07T17:28:56Z | 2025-07-31T10:00:50Z | 4 | sharvil |
huggingface/lerobot | 1,226 | 404 Not Found | [lerobot](https://github.com/huggingface/lerobot/tree/main)/[examples](https://github.com/huggingface/lerobot/tree/main/examples)
/10_use_so100.md/
This is supposed to be a tutorial, but it cannot be opened:
404 Not Found!
| https://github.com/huggingface/lerobot/issues/1226 | closed | [
"documentation",
"question"
] | 2025-06-07T09:02:37Z | 2025-06-08T21:26:07Z | null | luk-e158 |
huggingface/transformers | 38,656 | Potential Memory Leak or Caching in Fast Image Processor | ### System Info
Hi team,
Thank you for your great work on `transformers`!
While using the `AutoProcessor` with `use_fast=True`, I noticed that there seems to be a memory leak or possibly some form of persistent caching when processing images. Even after deleting the processor and clearing the CUDA cache, approximately 600MB of GPU memory remains occupied.
Here is a minimal reproducible example:
```python
from transformers import AutoProcessor
from PIL import Image
import time
import torch
import requests
from io import BytesIO
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2.5-VL-7B-Instruct",
use_fast=True,
trust_remote_code=False,
revision=None,
)
url = "https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true"
response = requests.get(url)
images = [Image.open(BytesIO(response.content)).convert("RGB")]
result = processor(
text=[
"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
"<|im_start|>user\nWhat’s in this image?<|vision_start|><|image_pad|><|vision_end|><|im_end|>\n"
"<|im_start|>assistant\n"
],
padding=True,
return_tensors="pt",
images=images,
device="cuda"
)
del result
del processor
torch.cuda.empty_cache()
print("You can now use nvidia-smi to observe GPU memory usage, which is around 600MB.")
while True:
time.sleep(60)
```
I’d like to kindly ask:
1. If this is due to caching, is there a way to control or disable the cache?
2. If this is an unintended memory leak, would it be possible to investigate and potentially fix it?
Thanks again for your help and time!
Best regards
### Who can help?
tokenizers: @ArthurZucker and @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
As provided above.
### Expected behavior
It would be great if caching could be made optional, or if there could be an option to avoid any GPU memory usage entirely. | https://github.com/huggingface/transformers/issues/38656 | closed | [
"bug"
] | 2025-06-07T08:46:48Z | 2025-08-12T13:02:37Z | 8 | yhyang201 |
huggingface/transformers | 38,654 | The visualization of image input in Qwen2.5-VL | The image input of Qwen2.5-VL is processed by processor and then saved as tensor in inputs['pixel_values'].
I tried to restore the image, using tensor in inputs['pixel_values'], but I found that the restored image patches were in disorder.
So how to restore the image from inputs['pixel_values'] in a proper way?
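Part of the scrambling is expected: Qwen2-VL-style processors do not store patches in plain row-major order; they group patches into merge-window blocks (e.g. 2x2) for the later spatial merge, so a naive reshape interleaves rows. A toy sketch of undoing that kind of block grouping (pure Python; a simplified stand-in, not the exact Qwen2.5-VL layout, which also folds in temporal patches and channel flattening):

```python
def unscramble(patches, grid_h, grid_w, merge=2):
    """Rebuild a (grid_h x grid_w) grid of patches from a flat list that was
    stored block-wise: the grid is tiled into merge x merge blocks, blocks are
    listed row-major, and patches inside each block are row-major too
    (a simplified stand-in for the processor's layout)."""
    grid = [[None] * grid_w for _ in range(grid_h)]
    i = 0
    for bh in range(grid_h // merge):
        for bw in range(grid_w // merge):
            for r in range(merge):
                for c in range(merge):
                    grid[bh * merge + r][bw * merge + c] = patches[i]
                    i += 1
    return grid

# 4x4 grid stored block-wise; a plain reshape of this list would scramble rows.
flat = [0, 1, 4, 5, 2, 3, 6, 7, 8, 9, 12, 13, 10, 11, 14, 15]
grid = unscramble(flat, 4, 4)
print(grid[0])  # -> [0, 1, 2, 3]
```

So restoring the image should mean reversing the processor's exact reshape/permute order, not just reshaping the flat tensor.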
For example, the original input image is as follows.

And here is the failed restoration from `inputs['pixel_values']`.
 | https://github.com/huggingface/transformers/issues/38654 | closed | [] | 2025-06-07T08:15:44Z | 2025-06-10T09:04:04Z | 2 | Bytes-Lin |
huggingface/lerobot | 1,223 | smolvla introduce an asynchronous inference stack decoupling perception and action prediction? | Why is this not implemented in the code? | https://github.com/huggingface/lerobot/issues/1223 | closed | [
"question",
"policies"
] | 2025-06-07T01:23:24Z | 2025-06-08T21:25:04Z | null | zmf2022 |
huggingface/transformers | 38,650 | Support of Qwen3 GGUF model | Hi, I am getting the following error when I want to use the GGUF model with Qwen3
"ValueError: GGUF model with architecture qwen3 is not supported yet."
I have the latest transformers and gguf-0.17.0
```
self.tokenizer = AutoTokenizer.from_pretrained(model_name, gguf_file= "Qwen3-0.6B-Q2_K_L.gguf",use_fast=True)
if self.tokenizer.pad_token is None:
self.tokenizer.pad_token = "<pad>"
self.tokenizer.add_special_tokens({"pad_token": "<pad>"})
self.tokenizer.padding_side = "left"
self.model = AutoModelForCausalLM.from_pretrained(
model_name,
gguf_file = "Qwen3-0.6B-Q2_K_L.gguf",
pad_token_id=self.tokenizer.pad_token_id,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto",
)
```
How can I use a Qwen3 GGUF model with transformers? Could you please add support for it?
Thanks! | https://github.com/huggingface/transformers/issues/38650 | closed | [] | 2025-06-06T20:11:23Z | 2025-07-15T08:02:59Z | 2 | Auth0rM0rgan |
huggingface/diffusers | 11,675 | Error in loading the pretrained lora weights | Hi, I am using the script https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py to train a lora.
An error is raised on https://github.com/huggingface/diffusers/blob/73a9d5856f2d7ae3637c484d83cd697284ad3962/examples/text_to_image/train_text_to_image_lora_sdxl.py#L1314C9-L1314C52
```
Loading adapter weights from state_dict led to missing keys in the model: down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_A
.default_0.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight, ...
```
The difference between the keys in the saved LoRA weights and the "missing keys" mentioned above is the `default_0` suffix. How can I resolve this problem?
diffusers 0.32.2
peft 0.15.2
| https://github.com/huggingface/diffusers/issues/11675 | closed | [] | 2025-06-06T17:09:45Z | 2025-06-07T07:40:14Z | 1 | garychan22 |
huggingface/text-generation-inference | 3,257 | if use chat.completions, text+image inference return incorrect output because of template issue | ### System Info
Common on all platforms
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
text-generation-launcher --model-id=llava-hf/llava-v1.6-mistral-7b-hf --max-input-tokens 4096 --max-batch-prefill-tokens 16384 --max-total-tokens 8192 --max-batch-size 4
client:
```
from openai import OpenAI
client = OpenAI(base_url="http://localhost:80/v1", api_key="-")
chat_completion = client.chat.completions.create(
model="tgi",
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
},
},
{"type": "text", "text": "Whats in this image?"},
],
},
],
max_tokens=50,
temperature=0.0,
stream=False,
)
print(chat_completion)
```
### Expected behavior
The incorrect output is:
ChatCompletion(id='', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=" I'm sorry, but I'm not sure what you're asking. Can you please provide more context or information about what you're looking for? ", refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1749197214, model='llava-hf/llava-v1.6-mistral-7b-hf', object='chat.completion', service_tier=None, system_fingerprint='3.3.1-dev0-native', usage=CompletionUsage(completion_tokens=35, prompt_tokens=8, total_tokens=43, completion_tokens_details=None, prompt_tokens_details=None))
| https://github.com/huggingface/text-generation-inference/issues/3257 | open | [] | 2025-06-06T13:06:20Z | 2025-06-06T13:11:22Z | 2 | sywangyi |
huggingface/nanotron | 372 | datatrove needs numpy>=2.0.0 but nanotron 0.4 requires numpy<2, how to fix? | https://github.com/huggingface/nanotron/issues/372 | open | [] | 2025-06-06T12:12:39Z | 2025-11-22T14:44:01Z | null | lxyyang | |
huggingface/transformers | 38,613 | MDX Errors | ### System Info
Ubuntu 24.04.2 LTS, CPython 3.11.12, transformers==4.53.0.dev0
@stevhliu I'm trying to contribute to the model cards. I forked the latest transformers and I ran the scripts, from the home page and then I want to the documents page. I'm having issues with the doc builder. I keep receiving the errors "ValueError: There was an error when converting docs/source/en/internal/generation_utils.md to the MDX format.
Unable to find generation.TFGreedySearchEncoderDecoderOutput in transformers. Make sure the path to that object is correct." And Unable to find image_processing_utils_fast.BaseImageProcessorFast in transformers. Make sure the path to that object is correct.
I ran the " pip install -e ".[docs]" and saw this after installing everything: "warning: The package `transformers @ file://s` does not have an extra named `docs`"
I ran the doc builder and that ran as expected until I ran the doc-builder command "doc-builder build transformers docs/source/en/ --build_dir ~/tmp/test-build"
Is there something I'm misunderstanding? In the meantime, is there a workaround that would let me write the markdown for the card I've been assigned without having to run those scripts? Thank you!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Ran install scripts on the Documents folder
### Expected behavior
To generate the docs | https://github.com/huggingface/transformers/issues/38613 | closed | [
"bug"
] | 2025-06-05T14:19:45Z | 2025-06-06T20:12:36Z | 7 | rileyafox |
huggingface/diffusers | 11,661 | [BUG]: Using args.max_train_steps even if it is None in diffusers/examples/flux-control | ### Describe the bug
Under [examples/flux-control](https://github.com/huggingface/diffusers/tree/main/examples/flux-control) there are two files showing how to fine-tune flux-control:
- [train_control_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/flux-control/train_control_flux.py)
- [train_control_lora_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/flux-control/train_control_lora_flux.py)
Both of them have a bug when args.max_train_steps is None:
Starting from [Line 905](https://github.com/huggingface/diffusers/blob/c934720629837257b15fd84d27e8eddaa52b76e6/examples/flux-control/train_control_flux.py#L905) we have following code:
```.py
if args.max_train_steps is None:
len_train_dataloader_after_sharding = math.ceil(len(train_dataloader) / accelerator.num_processes)
num_update_steps_per_epoch = math.ceil(len_train_dataloader_after_sharding / args.gradient_accumulation_steps)
num_training_steps_for_scheduler = (
args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes
)
else:
num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes
lr_scheduler = get_scheduler(
args.lr_scheduler,
optimizer=optimizer,
num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
num_training_steps=args.max_train_steps * accelerator.num_processes,
num_cycles=args.lr_num_cycles,
power=args.lr_power,
)
```
Note how the `if` checks whether `args.max_train_steps` is None and, in that case, prepares `num_training_steps_for_scheduler`. However, [Line 918](https://github.com/huggingface/diffusers/blob/c934720629837257b15fd84d27e8eddaa52b76e6/examples/flux-control/train_control_flux.py#L918) still uses `args.max_train_steps`
```.py
num_training_steps=args.max_train_steps * accelerator.num_processes,
```
instead of the prepared `num_training_steps_for_scheduler`, causing the following error:
```.sh
num_training_steps=args.max_train_steps * accelerator.num_processes,
~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```
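The fix is to pass the computed step count, not the raw `args.max_train_steps`, to `get_scheduler`. A minimal sketch of the intended logic, factored into a helper (the helper name is illustrative, not from the training script):

```python
import math

# Sketch of the logic from the snippet above. The key point: the value
# handed to get_scheduler must be this computed result, never the raw
# args.max_train_steps, which can be None.
def scheduler_training_steps(max_train_steps, len_train_dataloader,
                             num_processes, gradient_accumulation_steps,
                             num_train_epochs):
    if max_train_steps is None:
        len_after_sharding = math.ceil(len_train_dataloader / num_processes)
        updates_per_epoch = math.ceil(len_after_sharding / gradient_accumulation_steps)
        return num_train_epochs * updates_per_epoch * num_processes
    return max_train_steps * num_processes
```

With that, the `get_scheduler` call would read `num_training_steps=num_training_steps_for_scheduler` and the `NoneType` multiplication goes away.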
### Reproduction
Training runs where the max_train_steps are not set, i.e.:
```.sh
accelerate launch train_control_lora_flux.py \
--pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
--dataset_name="raulc0399/open_pose_controlnet" \
--output_dir="pose-control-lora" \
--mixed_precision="bf16" \
--train_batch_size=1 \
--rank=64 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--use_8bit_adam \
--learning_rate=1e-4 \
--report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--num_train_epochs=10 \
--validation_image="openpose.png" \
--validation_prompt="A couple, 4k photo, highly detailed" \
--offload \
--seed="0" \
--push_to_hub
```
### Logs
```shell
```
### System Info
Not relevant for the mentioned Bug.
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11661 | closed | [
"bug"
] | 2025-06-05T07:18:06Z | 2025-06-05T09:26:26Z | 0 | Markus-Pobitzer |
huggingface/lerobot | 1,203 | Could you please upload the config.json file for smolvla? |
Could you please upload the config.json file for smolvla? Thank you very much!
FileNotFoundError: config.json not found on the HuggingFace Hub in lerobot/smolvla_base
| https://github.com/huggingface/lerobot/issues/1203 | closed | [
"question"
] | 2025-06-05T06:59:12Z | 2025-06-11T14:56:56Z | null | Pandapan01 |
huggingface/transformers | 38,601 | Contribute to Transformers on windows natively without WSL | ### System Info
### System info
OS: Windows 11
Python: 3.13.3 and 3.10
Git: 2.49.0
CMake: 4.0.2
Msys64: Pacman v6.1.0 - libalpm v14.0.0
Pip: 25.1.1
Setuptools: 80.9.0
Visual studio C++ build tools
### NOTE: I followed the steps here [Contribute to 🤗 Transformers](https://huggingface.co/docs/transformers/en/contributing) and for sure system info already existed before following but let me walk through again for additional info.
1- Forked the repo.
2- Cloned it
3- cd transformers (so made sure I am in the right path which is the root for the repo)
4- switched to my own branch
5- made a python virtual environment using python 3.10 then activated it
6- made sure transformers isn't installed inside it
7- installed PyTorch
8- Ran this command `pip install -e ".[dev]"`
### NOTE: I tried making requirements.txt and using this command `pip install -r requirements.txt` but I got no output and I tried installing onnx with pip which happened successfully then Ran this command `pip install -e ".[dev]"` but nothing changed
### NOTE 6/6/2025: I tried uv instead of a python venv; nothing worked. I tried deleting everything, including the tools listed under system info, and installing everything from the beginning; still nothing worked. I made a requirements.txt from what is in setup.py, installed it, and tried to run `pip install -e ".[dev]"`, but I hit the same issues again.
```
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [11 lines of output]
...\setup.py:36: DeprecationWarning: Use shutil.which instead of find_executable
CMAKE = find_executable('cmake3') or find_executable('cmake')
...\setup.py:37: DeprecationWarning: Use shutil.which instead of find_executable
MAKE = find_executable('make')
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 35, in <module>
File "...\setup.py", line 318, in <module>
raise FileNotFoundError("Unable to find " + requirements_file)
FileNotFoundError: Unable to find requirements.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`pip install -e ".[dev]"`
### Expected behavior
Being able to install transformers for contributing with no issue | https://github.com/huggingface/transformers/issues/38601 | closed | [
"bug"
] | 2025-06-05T04:14:12Z | 2025-07-27T08:02:54Z | 4 | ghost |
huggingface/diffusers | 11,657 | Custom Wan diffusion Lora runs without error but doesn't apply effect and gives warning: No LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'. | ### Describe the bug
I run the diffusers pipeline using the standard process with a custom diffusers-trained LoRA:

```python
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.scheduler = scheduler
pipe.load_lora_weights("lora/customdiffusers_lora.safetensors")
# etc...
```
it runs without error but the effect was not applied, and I see the following warning:
No LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any WanTransformer3DModel related params. You can also try specifying `prefix=None` to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new
Is there any config file I need to change for this to work? Thanks
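For anyone hitting the same warning: it usually means the keys in the saved file don't start with the `transformer.` prefix the warning says diffusers expects for `WanTransformer3DModel`. One way to narrow this down (a hedged sketch; the helper name is made up) is to inspect the top-level key prefixes in the LoRA state dict:

```python
# Hypothetical helper: list the top-level key prefixes of a LoRA state
# dict, to compare against the 'transformer' prefix from the warning.
def lora_prefixes(state_dict_keys):
    return sorted({k.split(".", 1)[0] for k in state_dict_keys})

# For the real file, one could load the keys with safetensors, e.g.:
#   from safetensors.torch import load_file
#   print(lora_prefixes(load_file("lora/customdiffusers_lora.safetensors")))
```

If the printed prefixes are something other than `transformer` (e.g. a custom training script's own naming), that would explain why no keys are matched and the LoRA silently has no effect.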
### Reproduction
N/A as a custom Lora
### Logs
```shell
```
### System Info
0.33, linux, python 3.10
### Who can help?
_No response_ | https://github.com/huggingface/diffusers/issues/11657 | closed | [
"bug"
] | 2025-06-04T19:50:14Z | 2025-09-12T03:32:17Z | 3 | st-projects-00 |
huggingface/transformers | 38,576 | A local variable 'image_seq_length' leading to UnboundLocalError: cannot access local variable 'image_seq_length' where it is not associated with a value | ### System Info
- `transformers` version: 4.52.3
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
- Python version: 3.12.2
- Huggingface_hub version: 0.32.2
- Safetensors version: 0.5.3
- Accelerate version: 0.26.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The code snippet is as follows:

```python
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("meta-llama/Llama-2-7b-hf")
visualizer("Plants create energy through a process known as")
```

In the class `AttentionMaskVisualizer`, a local variable in the first branch (lines 181-201), `image_seq_length`, is passed to the function (line 232). However, in the text-only case that branch is not executed, which leads to `UnboundLocalError: cannot access local variable 'image_seq_length' where it is not associated with a value`.
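The failure mode is the standard Python pitfall of a name bound only inside a conditional branch. A minimal standalone reproduction (not the transformers code itself), plus the obvious hedged fix of defaulting the variable before the branch:

```python
def build(is_vision_model):
    # Mirrors the reported structure: the name is only bound in one branch.
    if is_vision_model:
        image_seq_length = 16
    return image_seq_length  # UnboundLocalError for text-only models

def build_fixed(is_vision_model):
    image_seq_length = None  # default before the branch
    if is_vision_model:
        image_seq_length = 16
    return image_seq_length
```

A one-line default (or passing the value only in the vision branch) would make the text-only path safe.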
### Expected behavior
None | https://github.com/huggingface/transformers/issues/38576 | closed | [
"bug"
] | 2025-06-04T09:06:04Z | 2025-06-04T12:20:33Z | null | IceGiraffe |
huggingface/lerobot | 1,195 | ros2_control support | Hello,
I was thinking that it would be great to use the robot with ros2_control :
- to test code developped with the ROS2 framework:
- for education purposes : the robot is great, easily and not expensive to build (thank you for the work achieved), transporteable in a case, etc.
Do you have any knowledge of an existing project ?
If not, would you be interested in this kind of implementation ?
Best,
Aline | https://github.com/huggingface/lerobot/issues/1195 | open | [
"enhancement",
"question"
] | 2025-06-03T15:31:53Z | 2025-11-27T16:30:08Z | null | baaluidnrey |
huggingface/diffusers | 11,648 | how to load lora weight with fp8 transfomer model? | Hi, I want to run fluxcontrolpipeline with transformer_fp8 reference the code :
https://huggingface.co/docs/diffusers/api/pipelines/flux#quantization
```
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel, FluxControlPipeline
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel
quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
"black-forest-labs/FLUX.1-dev",
subfolder="text_encoder_2",
quantization_config=quant_config,
torch_dtype=torch.float16,
)
quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = FluxTransformer2DModel.from_pretrained(
"black-forest-labs/FLUX.1-dev",
subfolder="transformer",
quantization_config=quant_config,
torch_dtype=torch.float16,
)
pipeline = FluxControlPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
text_encoder_2=text_encoder_8bit,
transformer=transformer_8bit,
torch_dtype=torch.float16,
device_map="balanced",
)
prompt = "a tiny astronaut hatching from an egg on the moon"
image = pipeline(prompt, guidance_scale=3.5, height=768, width=1360, num_inference_steps=50).images[0]
image.save("flux.png")
```
but when I load a LoRA after building the pipeline
```
pipeline = FluxControlPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
text_encoder_2=text_encoder_8bit,
transformer=transformer_8bit,
torch_dtype=torch.float16,
device_map="balanced",
)
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
```
There is an error saying the fp8 weight is not supported. How can I fix it?
| https://github.com/huggingface/diffusers/issues/11648 | open | [] | 2025-06-03T10:31:23Z | 2025-06-19T12:37:35Z | null | Johnson-yue |
huggingface/candle | 2,986 | How to reset gradient before each batch | In Pytorch, you would call `optimizer.zero_grad` to zero the gradients before every batch. How do you do this in candle? | https://github.com/huggingface/candle/issues/2986 | open | [] | 2025-06-03T10:17:52Z | 2025-06-03T10:17:52Z | null | lokxii |
huggingface/transformers | 38,544 | Paligemma model card needs update | Hi
I found a minor problem with the PaliGemma model card. How can I raise a PR to fix it? I am a first-time contributor. I raised a PR. Whom should I mention to review it?
https://huggingface.co/google/paligemma-3b-pt-896 | https://github.com/huggingface/transformers/issues/38544 | closed | [] | 2025-06-03T06:55:14Z | 2025-07-14T16:23:52Z | 7 | punitvara |
huggingface/transformers | 38,541 | `eager_attention_forward` and `repeat_kv` code duplication | I see the two functions appear in a lot of places in the code base. Shall we unify them into a single place?
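The unification could plausibly follow a registry pattern, with a single shared definition dispatched by name. A rough sketch of the idea (illustrative only; transformers' real registry and attention signatures differ):

```python
# Rough sketch: register "eager" alongside the other attention
# implementations instead of special-casing it per model.
ATTENTION_FUNCTIONS = {}

def register_attention(name):
    def decorator(fn):
        ATTENTION_FUNCTIONS[name] = fn
        return fn
    return decorator

@register_attention("eager")
def eager_attention_forward(query, key, value):
    # single shared definition instead of per-model copies
    ...

def get_attention(name):
    return ATTENTION_FUNCTIONS[name]
```

Each model would then look up its implementation by config name rather than carrying a private copy of the function.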
And can we treat `eager_attention_forward` as another option in [`ALL_ATTENTION_FUNCTIONS`](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L6186)? Any concerns? | https://github.com/huggingface/transformers/issues/38541 | closed | [] | 2025-06-03T00:57:16Z | 2025-06-10T10:27:25Z | 3 | ChengLyu |
huggingface/chat-ui | 1,843 | can you make a release? | The current codebase is far away from the official release in November, maybe you can stabilize and release current code? | https://github.com/huggingface/chat-ui/issues/1843 | open | [
"enhancement"
] | 2025-06-02T21:26:51Z | 2025-07-21T20:44:03Z | 1 | antonkulaga |
huggingface/transformers | 38,527 | Why do you remove sample_indices_fn for processor.apply_chat_template? | As shown in the picture, since 4.52 `processor.apply_chat_template` no longer supports `sample_indices_fn`, but the argument is still documented.
<img width="712" alt="Image" src="https://github.com/user-attachments/assets/e055d5f5-4800-4eb7-8054-0f41a9be5707" /> | https://github.com/huggingface/transformers/issues/38527 | closed | [] | 2025-06-02T12:34:23Z | 2025-06-03T02:44:22Z | 1 | futrime |
huggingface/optimum | 2,284 | Error when exporting DinoV2 with Registers | When trying :
` python -m scripts.convert --quantize --model_id facebook/dinov2-with-registers-small`
I Got :
`ValueError: Trying to export a dinov2-with-registers model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type dinov2-with-registers to be supported natively in the ONNX export.` | https://github.com/huggingface/optimum/issues/2284 | closed | [
"Stale"
] | 2025-06-02T08:53:55Z | 2025-07-04T02:16:54Z | 1 | elkizana |
huggingface/agents-course | 523 | [QUESTION] The final quiz of Unit 1, always crashes with dataset not found | First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord
However, if you prefer you can ask here, please **be specific**.
Dataset 'agents-course/unit_1_quiz' doesn't exist on the Hub or cannot be accessed.
The full log is:
```
Traceback (most recent call last):
File "/home/user/app/app.py", line 28, in <module>
ds = load_dataset(EXAM_DATASET_ID, split="train")
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2129, in load_dataset
builder_instance = load_dataset_builder(
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 1849, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 1719, in dataset_module_factory
raise e1 from None
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 1645, in dataset_module_factory
raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e
datasets.exceptions.DatasetNotFoundError: Dataset 'agents-course/unit_1_quiz' doesn't exist on the Hub or cannot be accessed.
```
Am I missing something trivial?
| https://github.com/huggingface/agents-course/issues/523 | open | [
"question"
] | 2025-06-02T07:58:01Z | 2025-06-02T07:58:01Z | null | abcnishant007 |
huggingface/peft | 2,563 | Integrate Lily | ### Feature request
This request proposes integrating Lily (Low-Rank Interconnected Adaptation across Layers), accepted to ACL 2025 Findings, into the PEFT library.
Paper: https://arxiv.org/pdf/2407.09946
Repo: https://github.com/yibozhong/lily
### Motivation
Lily aims to directly make the rank of each individual adapter bigger under the same parameter budget, as it's shown in many papers that higher ranks are beneficial to PEFT performance. This is achieved by breaking the pair-AB-per-layer constraint of LoRA. That is, we do not give each layer a dedicated pair of A and B. Rather, we decouple all the Bs from the layer, and when adapting at each layer, we use a weighted sum of these Bs as the B for this layer. The weight is calculated by a lightweight trainable router, currently data-dependent.

Several points worth noting:
- The method looks somewhat similar to MosLoRA in structure, but it operates at the model level and the aim is to increase the individual rank of each adapter with dynamic adaptation.
- Currently in the paper, we use a data-dependent router, which makes it tricky to merge the weights. I do not observe notable inference latency, possibly due to small model size, but an option for using a non-data-dependent router can be included and enable easy merging the weights.
- The current As are still positioned at a fixed layer (using layer-wise sharing to reduce params). However, it also can be decoupled, simply by providing two routers for weighting As and Bs respectively, rather than one router for B in the current setup. This is a more elegant design and shares the same principle as Lily. After I run quick experiments demonstrating its effectiveness, I can integrate this setup into my current code as Lily v2.
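For reference, the core mechanism described above can be sketched in a few lines of PyTorch. The shapes, names, and the softmax router below are illustrative assumptions, not the official implementation in the linked repo:

```python
import torch
import torch.nn as nn

class LilyLinear(nn.Module):
    # Hypothetical sketch of Lily's core idea: a per-layer down-projection A
    # and a model-wide pool of up-projections {B_i}, mixed per input by a
    # lightweight data-dependent router.
    def __init__(self, base: nn.Linear, b_pool: nn.ParameterList, r: int):
        super().__init__()
        self.base = base
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.b_pool = b_pool                       # shared across layers
        self.router = nn.Linear(base.in_features, len(b_pool))

    def forward(self, x):
        h = x @ self.A.T                           # (..., r)
        w = torch.softmax(self.router(x), dim=-1)  # (..., n_experts)
        # Weighted sum of the shared Bs, then project up.
        up = sum(w[..., i : i + 1] * (h @ B.T) for i, B in enumerate(self.b_pool))
        return self.base(x) + up
```

Because every layer draws from the same pool of Bs, the effective rank available to a layer can exceed what a dedicated per-layer B of the same parameter budget would allow.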
### Your contribution
Implement Lily, repo: https://github.com/yibozhong/lily. | https://github.com/huggingface/peft/issues/2563 | closed | [] | 2025-06-02T07:23:30Z | 2025-12-18T14:03:32Z | 15 | yibozhong |
huggingface/lerobot | 1,180 | dataset training | How many episodes do you recommend making for each file when learning the dataset? Can I create about 400 episodes by putting different tasks in each episode? Or can I create the same task data for each file and combine multiple files? | https://github.com/huggingface/lerobot/issues/1180 | closed | [
"question",
"dataset"
] | 2025-06-01T15:59:47Z | 2025-10-08T12:54:48Z | null | bruce577 |
huggingface/lerobot | 1,177 | [Question] Why using a kernel device for IP cameras? | I'm wondering why, when we have an IP camera (by using DroidCam on Android for instance), the team decided to plug the IP camera into a loopback device in `/dev/videoX` instead of directly reading the video stream in the code with Opencv `cv2.VideoCapture(url)`. I understand doing this allows controlling FPS & resolution which is not possible when `cv2.VideoCapture(url)` is used directly, however the downside is that you need to map the camera to a kernel device which becomes really cumbersome, especially when you need root access and when the device gets stuck in a weird state.
Why didn't the team simply read the video stream from `cv2.VideoCapture(url)` and then downsized the video stream inside the code loop? (The only downside of doing this I found is that we can't get 30fps if the stream outputs only 25fps but this shouldn't be a problem imo since `OpenCVCamera.read_loop` adds a 0.1 latency which messes up the fps sync anyways). | https://github.com/huggingface/lerobot/issues/1177 | closed | [
"question",
"robots",
"stale"
] | 2025-05-31T05:24:21Z | 2025-12-31T02:35:18Z | null | godardt |
huggingface/transformers | 38,501 | torch.compile fails for gemma-3-1b-it | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-6.15.0-1-MANJARO-x86_64-with-glibc2.41
- Python version: 3.12.8
- Huggingface_hub version: 0.32.3
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA GeForce RTX 3090 Ti
### Who can help?
@ArthurZucker @gante
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Running `TORCHDYNAMO_VERBOSE=1 TORCH_LOGS="+dynamo" uv run main.py` fails:
<details>
<summary>Minimal reproducible example</summary>
```python
import torch
from transformers import GemmaTokenizer, Gemma3ForCausalLM
ckpt = "google/gemma-3-1b-it"
model = Gemma3ForCausalLM.from_pretrained(
ckpt,
device_map="cuda:0",
torch_dtype=torch.bfloat16,
)
processor = GemmaTokenizer.from_pretrained(ckpt)
messages = [{"role": "user", "content": "What is 2^7-2^4??"}]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
input_len = inputs["input_ids"].shape[-1]
# generate_fn = model.generate
generate_fn = torch.compile(model.generate, fullgraph=True)
generation = generate_fn(**inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
</details>
<details>
<summary>Stack trace</summary>
Full paste: https://pastebin.com/V103pCWM
```
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 2111, in call_deepcopy
unimplemented(f"copy.deepcopy {repr(x)}")
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py", line 439, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: copy.deepcopy UserDefinedObjectVariable(GenerationConfig)
from user code:
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/external_utils.py", line 70, in inner
return fn(*args, **kwargs)
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 2354, in generate
generation_config, model_kwargs = self._prepare_generation_config(
File "/tmp/gemma_torch/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 1744, in _prepare_generation_config
generation_config = copy.deepcopy(generation_config)
```
</details>
### Expected behavior
Compilation proceeds | https://github.com/huggingface/transformers/issues/38501 | closed | [
"bug"
] | 2025-05-30T21:01:41Z | 2025-06-02T20:45:54Z | 6 | InCogNiTo124 |
huggingface/transformers | 38,500 | Unable to deploy Gemma 3 on AWS SageMaker due to lack of support in tranfomers release | hi,
it seems when i deploy the model
```
huggingface_model = HuggingFaceModel(
model_data=model_s3_uri,
role=role,
transformers_version="4.49.0",
pytorch_version="2.6.0",
py_version="py312",
)
predictor = huggingface_model.deploy(
instance_type="ml.g5.48xlarge",
initial_instance_count=1,
endpoint_name="gemma-27b-inference",
container_startup_health_check_timeout=900
)
response = predictor.predict({
"inputs": "what can i do?"
})
print(response)
```
```
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400)
from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "The checkpoint you are trying to load has model type gemma3_text but Transformers does not
recognize this architecture. This could be because of an issue with the checkpoint, or because your version of
Transformers is out of date.\n\nYou can update Transformers with the command pip install --upgrade transformers.
```
Now I know `HuggingFaceModel` doesn't support anything above 4.49.0, so if I try 4.50.0 it gives an error saying to use a supported version. The thing is, Gemma 3 is not available in 4.49, so how do I fix this? I have the trained model in my bucket but just can't deploy it due to the transformers version. Is there a way to override the container used by `HuggingFaceModel` so that it uses a newer transformers?
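One escape hatch commonly used for this (hedged; verify against the current SageMaker Hugging Face inference toolkit docs) is shipping a `code/requirements.txt` inside the `model.tar.gz`, which the toolkit installs at container startup, letting you pin a newer `transformers` than the DLC ships with:

```
model.tar.gz
├── config.json, *.safetensors, tokenizer files...   # model artifacts
└── code/
    ├── inference.py        # optional custom handler
    └── requirements.txt    # e.g. transformers>=4.50.0
```

If that doesn't work, the other route people take is passing a custom `image_uri` built on a newer base image instead of the `transformers_version`/`pytorch_version` pair.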
I tried the following, but the issue remains in SageMaker, because I cannot use this version as the `transformers_version` since it isn't supported:
pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3 | https://github.com/huggingface/transformers/issues/38500 | closed | [] | 2025-05-30T17:10:22Z | 2025-07-08T08:02:37Z | 2 | ehrun32 |
huggingface/transformers | 38,499 | ModernBERT for MLM outputs incorrect hidden state shape. | ### System Info
When using `ModernBERTForMaskedLM` with `output_hidden_states=True` the hidden state is not correctly padded when it is returned. A minimal example is included below:
```
import torch
from transformers import AutoTokenizer, ModernBertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = ModernBertForMaskedLM.from_pretrained("answerdotai/ModernBERT-base").to("cuda")
inputs = tokenizer(
[
"The capital of France is <mask>.",
"The name of the first president of the united states is <mask>.",
],
padding=True,
return_tensors="pt",
).to("cuda")
with torch.no_grad():
outputs = model(**inputs, output_hidden_states=True)
print(inputs["attention_mask"].sum())
# >>> 26
print(outputs.hidden_states[-1].shape)
# >>> torch.Size([26, 768])
assert outputs.hidden_states[-1].shape == inputs["input_ids"].shape + (
model.config.hidden_size,
)
```
I'm using the following library versions:
- `transformers==4.48.2`
- `torch==2.6.0`
It appears that what is returned is the flattened version, as the tensor is 2D and its first dimension corresponds to the sum of the attention mask. This issue doesn't happen with the non-MLM version.
I searched modern bert and hidden state and looked at the recent commits and didn't see any mention of this issue, but it might have been fixed in a newer version without it being obvious.
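As a hedged workaround while this is open (assuming the 2D tensor's rows are the unpadded tokens in attention-mask order, which is what flash-attention-style unpadding produces), the flattened states can be scattered back to `(batch, seq, hidden)`:

```python
import torch

def repad_hidden_states(flat, attention_mask, hidden_size):
    # flat: (num_unpadded_tokens, hidden_size); rows assumed to follow the
    # nonzero positions of attention_mask in row-major order.
    batch, seq = attention_mask.shape
    out = flat.new_zeros(batch, seq, hidden_size)
    out[attention_mask.bool()] = flat
    return out
```

Padded positions come back as zeros, which matches what a padded forward pass would typically return for masked tokens.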
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the code provided in the issue with flash attention on a Cuda GPU.
### Expected behavior
The hidden states should have shape [batch size, max sequence length, model dim] but they have shape [unknown dim (I think the number of unpadded tokens), model dim]. | https://github.com/huggingface/transformers/issues/38499 | closed | [
"bug"
] | 2025-05-30T17:02:55Z | 2025-07-08T08:02:39Z | 2 | jfkback |
huggingface/lerobot | 1,174 | [Question] Multi-Rate Sensor and Discrete Event Handling in `lerobot` | Hello `lerobot` Team,
First off, huge thanks for building such an awesome open-source project!
I'm currently exploring `lerobot` for a project and have some critical questions regarding its data handling, specifically for multi-rate sensors and discrete events. My understanding from the README is that `lerobot` records at a fixed `fps`, creating a table with `fps * record_time` rows.
This leads to two primary concerns:
1. **Multi-Rate Sensors:**
Consider a sensor like an IMU operating at 1KHz, while other sensors might be at much lower rates. To capture the IMU data without loss, the `fps` would need to be set extremely high, to match highest-rate-sensor. This implies:
* **Massive Data Redundancy:** A significant portion of rows would contain sparse information from the lower-rate sensors.
* **Recording Performance:** Could such a high `fps` and resulting data volume negatively impact recording performance, potentially making it infeasible to capture this type of data?
* **Storage Load:** This approach would also lead to very large dataset sizes.
Am I correct in this interpretation? If so, how does `lerobot` effectively manage multi-rate sensor data to mitigate these issues?
2. **Discrete Events:**
How are discrete events, such as keyboard presses/releases or joystick button presses, recorded into a `LeRobotDataset`? The current design of `LeRobotDataset`, particularly `__nextitem__` and `delta_timestamps`, seems to implicitly assume continuous data that can be interpolated. How does `lerobot` accommodate and represent these non-continuous, event-driven data points within its framework?
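To put a number on the redundancy concern from point 1 (a back-of-envelope sketch, ignoring interpolation): at a recording rate matching the fastest sensor, the fraction of rows carrying no new reading for a slower sensor is roughly

```python
def redundant_fraction(record_fps, sensor_hz):
    # Fraction of rows that merely repeat the last reading of a sensor
    # sampled at sensor_hz when recording at record_fps.
    return 1 - sensor_hz / record_fps

# e.g. a 30 Hz camera column recorded at 1 kHz repeats itself in ~97% of rows
```

so nearly every row of the low-rate columns would be filler at a 1 kHz `fps`, which is the storage/recording-load worry stated above.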
A quick response addressing these points would be incredibly helpful for our ongoing development.
Thanks for your time and insight! | https://github.com/huggingface/lerobot/issues/1174 | open | [
"question",
"dataset"
] | 2025-05-30T09:04:13Z | 2025-12-17T10:44:46Z | null | MilkClouds |
huggingface/transformers | 38,489 | VLM reverse mapping logic in modeling_utils.py save_pretrained not doing anything? | ### System Info
transformers version: 4.52.3
Platform: Ubuntu 24.04
Python version: 3.11.0
Huggingface_hub version: 0.32.2
Safetensors version: 0.5.3
Accelerate version: 1.7.0
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (GPU?): 2.7.0+cu126 (H100)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?: No
Using GPU in script?: No
GPU type: NVIDIA H100
### Who can help?
@amyeroberts @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
borrowing the reverse key mapping logic in the modeling_utils.py save_pretrained method as shown here:
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L3649
If we also use the qwen2 model mappings for Qwen2ForConditionalGeneration as an example
and a sample of keys as shown below to test the reversal logic:
```
import re
from transformers import Qwen2VLForConditionalGeneration
checkpoint_conversion_mapping = Qwen2VLForConditionalGeneration._checkpoint_conversion_mapping
checkpoint_keys = [
'model.language_model.layers.9.post_attention_layernorm.weight', # Should be remapped
'model.layers.9.self_attn.k_proj.bias', # Should not be remapped
'model.visual.blocks.0.attn.proj.bias', # Should be remapped
'visual.blocks.0.attn.proj.weight', # Should not be remapped
]
reverse_key_mapping = {v: k for k, v in checkpoint_conversion_mapping.items()}
for key in checkpoint_keys:
print(f"\nOperating on sample key: {key}:")
for pattern, replacement in reverse_key_mapping.items():
replacement = replacement.lstrip("^") # strip off un-needed chars and patterns
replacement = re.sub(r"\(.*?\)", "", pattern)
key, n_replace = re.subn(pattern, replacement, key)
print(f"pattern: {pattern}, replacement: {replacement}, resultant key: {key}")
# Early exit of the loop
if n_replace > 0:
print(f"Result: final mapped key is {key}")
break
else:
print(f"Result: no mappings performed")
```
produces the following output, in which no mapping reversal is performed where it should be:
```
Operating on sample key: model.language_model.layers.9.post_attention_layernorm.weight:
pattern: model.visual, replacement: model.visual, resultant key: model.language_model.layers.9.post_attention_layernorm.weight
Result: no mappings performed
pattern: model.language_model, replacement: model.language_model, resultant key: model.language_model.layers.9.post_attention_layernorm.weight
Result: final mapped key is model.language_model.layers.9.post_attention_layernorm.weight
Operating on sample key: model.layers.9.self_attn.k_proj.bias:
pattern: model.visual, replacement: model.visual, resultant key: model.layers.9.self_attn.k_proj.bias
Result: no mappings performed
pattern: model.language_model, replacement: model.language_model, resultant key: model.layers.9.self_attn.k_proj.bias
Result: no mappings performed
Operating on sample key: model.visual.blocks.0.attn.proj.bias:
pattern: model.visual, replacement: model.visual, resultant key: model.visual.blocks.0.attn.proj.bias
Result: final mapped key is model.visual.blocks.0.attn.proj.bias
Operating on sample key: visual.blocks.0.attn.proj.weight:
pattern: model.visual, replacement: model.visual, resultant key: visual.blocks.0.attn.proj.weight
Result: no mappings performed
pattern: model.language_model, replacement: model.language_model, resultant key: visual.blocks.0.attn.proj.weight
Result: no mappings performed
```
### Expected behavior
The expected behavior should be such that we observe the following mapping:
```
model.language_model.layers.9.post_attention_layernorm.weight -> model.layers.9.post_attention_layernorm.weight
model.visual.blocks.0.attn.proj.bias-> visual.blocks.0.attn.proj.bias
model.layers.9.self_attn.k_proj.bias -> model.layers.9.self_attn.k_proj.bias (remains the same)
visual.blocks.0.attn.proj.weight -> visual.blocks.0.attn.proj.weight (remains the same)
```
This could be achieved by changing the reversal code inside the `for pattern, replacement in reverse_key_mapping.items():` loop to:
```
replacement = replacement.lstrip("^") # strip off un-needed chars and patterns
replacement = re.sub(r"\^?([^(?]+).*", r"\1", replacement)
key, n_replace = re.subn(pattern, replacement, key)
print(f"pattern: {pattern}, replacement: {replacement}, resultant key: {key}")
# Early exit of the loop
if n_replace > 0:
break
```
instead.
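As a self-contained sanity check of the proposed fix, here is a stdlib-only sketch with a toy two-entry mapping (the real `_checkpoint_conversion_mapping` entries differ, and the patterns are left unanchored for brevity, so treat this as illustrative only):

```python
import re

# Toy forward mapping (applied at load time): checkpoint pattern -> in-memory prefix.
mapping = {r"^visual": "model.visual", r"^model(?!\.visual)": "model.language_model"}

# Reverse it for saving: the old replacement becomes the pattern, and the old
# pattern, stripped of regex syntax, becomes the replacement (the proposed fix).
reverse = {v: re.sub(r"\^?([^(?]+).*", r"\1", k) for k, v in mapping.items()}

def to_checkpoint_key(key):
    for pattern, replacement in reverse.items():
        key, n = re.subn(pattern, replacement, key)
        if n:  # early exit on the first pattern that actually matched
            break
    return key
```

With this, `model.language_model.*` keys map back to `model.*`, `model.visual.*` keys map back to `visual.*`, and keys that never went through the forward mapping pass through unchanged.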
I could | https://github.com/huggingface/transformers/issues/38489 | closed | [
"bug"
] | 2025-05-30T08:55:57Z | 2025-05-30T13:08:58Z | 6 | rolandtannous |
huggingface/diffusers | 11,637 | How to load lora weight in distribution applications? | If I want to use xDiT with 2 GPU inference FluxControlPipeline, how should I do
I write a xFuserFluxControlPipeline class, but it can not load lora weight with right way
xFuserFluxTransformer in 1GPU have some parameters and another GPU have others.
How should I do ?? | https://github.com/huggingface/diffusers/issues/11637 | open | [] | 2025-05-30T07:14:50Z | 2025-06-03T10:15:51Z | null | Johnson-yue |
huggingface/peft | 2,558 | GraLoRA support? | ### Feature request
will the library support the [GraLoRA](https://arxiv.org/abs/2505.20355) technique?
### Motivation
GraLoRA addresses a fundamental limitation of LoRA: overfitting when the bottleneck is widened.
The technique seems to more closely approximate full fine-tuning; hybrid GraLoRA gets the best of both worlds, with LoRA benefiting from low-rank scenarios (16 or less) and GraLoRA from high-rank scenarios (16 to 128).
The authors have a modified peft library; would be nice to have support in the official library.
### Your contribution
I have limited time for the next two weeks. Then, I will be able to contribute.
But should be very easy for the authors to port the implementation; most of it in the [gralora](https://github.com/SqueezeBits/GraLoRA/tree/8dff8438c80969f5f11f23249fed62aac9d687e8/peft/src/peft/tuners/gralora) sub-package. | https://github.com/huggingface/peft/issues/2558 | closed | [] | 2025-05-29T18:36:27Z | 2025-07-15T15:04:20Z | 10 | DiTo97 |
huggingface/lerobot | 1,171 | sync_read.py | Hi, I am currently testing the functions in the STServo_Python folder to work with my STS3215 motors. When I run the sync_read.py script, I encounter an issue caused by the addParam(self, sts_id) function returning False. I tried several things, but I can't get past the error.
I made sure that the motor IDs are correct and that the motors are connected and powered. I'm using a GroupSyncRead object with a start_address of SCSCL_PRESENT_POSITION_L and data_length of 4. Still, addParam() fails, and the motor ID is not added to the list.
Does anyone know why this is happening or how to fix it?
Thanks in advance! | https://github.com/huggingface/lerobot/issues/1171 | closed | [
"bug",
"question",
"robots",
"stale"
] | 2025-05-29T15:33:16Z | 2025-12-31T02:35:19Z | null | Baptiste-le-Beaudry |
huggingface/candle | 2,974 | Any good first issues a newcomer could tackle? | Hey! I've been using this crate for a while now and would love to start contributing back! I notice that your issues aren't labelled, who should I contact/do you have a list of issues that would be good for me? | https://github.com/huggingface/candle/issues/2974 | open | [] | 2025-05-29T04:19:18Z | 2025-05-30T18:25:37Z | 3 | Heidar-An |
huggingface/xet-core | 358 | How can I have snapshot_download to have continue feature? Errors became very common | Whenever an error happens and I run the same code again, the download starts from 0.
It is a XET-enabled repo and hf_xet is installed.
I really need a resume feature.
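In the meantime, a plain-stdlib retry wrapper can at least re-invoke the download after a failure; on re-invocation, files already completed in the local cache are skipped (assumption: whether a *partially* transferred file resumes mid-byte depends on the backend, so this is damage control rather than true resume).

```python
import time

def with_retries(fn, attempts=5, base_delay=2.0):
    """Call fn(); on failure, wait with exponential backoff and try again."""
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            delay = base_delay * 2 ** i
            print(f"attempt {i + 1} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

# e.g. with_retries(lambda: snapshot_download(repo_id="MonsterMMORPG/Kohya_Train"))
```

It can wrap a download call like the one in the script below.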
My entire code:
```
from huggingface_hub import snapshot_download
import os
import argparse
def download_models(target_dir=None):
"""
Download models from HuggingFace hub to specified directory
Args:
target_dir (str, optional): Target directory for downloads.
If None, uses current working directory
"""
# Set repo ID
repo_id = "MonsterMMORPG/Kohya_Train"
# Use provided target dir or default to current working directory
download_dir = target_dir if target_dir else os.getcwd()
# Create target directory if it doesn't exist
os.makedirs(download_dir, exist_ok=True)
try:
snapshot_download(
local_dir=download_dir,
repo_id=repo_id
)
print(f"\nDOWNLOAD COMPLETED to: {download_dir}")
print("Check folder content for downloaded files")
except Exception as e:
print(f"Error occurred during download: {str(e)}")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Download models from HuggingFace hub')
parser.add_argument('--dir', type=str, help='Target directory for downloads', default=None)
args = parser.parse_args()
download_models(args.dir)
``` | https://github.com/huggingface/xet-core/issues/358 | closed | [
"enhancement"
] | 2025-05-28T22:30:19Z | 2025-11-20T17:08:35Z | null | FurkanGozukara |
huggingface/transformers | 38,452 | Memory saving by upcasting logits for only non-ignored positions | ### Feature request
In [`loss_utils.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/loss/loss_utils.py), logits are upcast to float32 for some losses. This can waste memory in cases where certain labels are `ignore_index`. This is especially true for fine-tuning cases where one chooses to calculate the loss only on the completion: labels are kept as -100 for prompt tokens, so upcasting those logits is unnecessary. We can instead call `logits.float()` after we have our final labels. This would be especially useful for `ForCausalLMLoss`, as that seems to be the most likely use case.
### Motivation
When fine tuning a causal LM, one can choose to calculate loss only on the completion, thus setting labels for prompt tokens to be -100. Upcasting logits at those positions when calculating loss is not needed. Avoiding that can save memory. Most likely use case is `ForCausalLMLoss`.
### Your contribution
An example for `ForCausalLMLoss`:
```
def ForCausalLMLoss(
logits,
labels,
vocab_size: int,
num_items_in_batch: Optional[int] = None,
ignore_index: int = -100,
shift_labels: Optional[torch.Tensor] = None,
**kwargs,
) -> torch.Tensor:
# Don't upcast yet
# logits = logits.float()
if shift_labels is None:
# Shift so that tokens < n predict n
labels = nn.functional.pad(labels, (0, 1), value=ignore_index)
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
logits = logits.view(-1, vocab_size)
shift_labels = shift_labels.view(-1)
# Upcast to float if we need to compute the loss to avoid potential precision issues
# Now that we have our final labels, take only the useful logits and then upcast
logits = logits[shift_labels != ignore_index]
shift_labels = shift_labels[shift_labels != ignore_index]
logits = logits.float()
# Enable model parallelism
shift_labels = shift_labels.to(logits.device)
# Calculate loss on truncated logits and labels
loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
return loss
```
We can do something similar in `ForMaskedLMLoss` on line 83 instead of 77. `ForTokenClassification` does not take `ignore_index` as an argument but we can still do the same here because `fixed_cross_entropy` does take `ignore_index`.
Another alternative was to move the upcasting to inside `fixed_cross_entropy` but a few losses don't do that. So, that might change/break existing things.
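The equivalence the proposal relies on — dropping `ignore_index` rows *before* computing the loss leaves the value unchanged — can be checked with a tiny pure-Python sketch (plain lists standing in for tensors; in the real code the upcast would then apply only to the filtered slice):

```python
import math

IGNORE = -100

def ce_mean(logits, labels):
    """Mean cross-entropy over all rows, skipping ignored labels in the loop."""
    total = count = 0
    for row, y in zip(logits, labels):
        if y == IGNORE:
            continue
        # log-sum-exp minus the target logit, i.e. -log softmax(row)[y]
        total += math.log(sum(math.exp(x) for x in row)) - row[y]
        count += 1
    return total / count

def ce_mean_filtered(logits, labels):
    """Drop ignored rows first (the proposed order), then compute the loss."""
    kept = [(row, y) for row, y in zip(logits, labels) if y != IGNORE]
    return ce_mean([row for row, _ in kept], [y for _, y in kept])
```

Since both variants sum over exactly the same non-ignored rows, the results match, which is why the filtered slice is all that ever needs to live in float32.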
Let me know if this change sounds good. I can submit a PR. | https://github.com/huggingface/transformers/issues/38452 | open | [
"Feature request"
] | 2025-05-28T18:58:52Z | 2025-05-29T12:38:15Z | 1 | harshit2997 |
huggingface/speech-to-speech | 163 | how to use this with Livekit Agent? | how to use this with Livekit Agent? | https://github.com/huggingface/speech-to-speech/issues/163 | open | [] | 2025-05-28T18:27:11Z | 2025-05-28T18:27:11Z | null | Arslan-Mehmood1 |
huggingface/transformers | 38,448 | num_items_in_batch larger than the actual useful token when computing loss | def fixed_cross_entropy(source, target, num_items_in_batch: int = None, ignore_index: int = -100, **kwargs):
I check the shape of the inputs and find follows:
In [1]: logits.shape
Out[1]: torch.Size([4, 896, 152064])
In [2]: labels.shape
Out[2]: torch.Size([4, 896])
In [3]: num_items_in_batch
Out[3]: 4390
Why is 4390>4*896? | https://github.com/huggingface/transformers/issues/38448 | closed | [] | 2025-05-28T15:28:05Z | 2025-05-31T02:30:07Z | 4 | SHIFTTTTTTTT |
huggingface/transformers | 38,435 | [i18n-ro] Translating docs to Romanian | Hi!
Let's bring the documentation to all the Romanian-speaking community 🌐
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) (in progress, [see](https://github.com/zero-point/transformers/tree/add_ro_translation_to_readme))
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
| https://github.com/huggingface/transformers/issues/38435 | open | [
"WIP"
] | 2025-05-28T12:01:48Z | 2025-05-28T15:53:39Z | 2 | zero-point |
huggingface/transformers | 38,428 | [Question] The logic of data sampler in data parallel. | Hi, thanks for your attention.
When reading the source code of transformers, I cannot understand the implementation of `_get_train_sampler` in `trainer.py`. Why is the default data sampler `RandomSampler` rather than `DistributedSampler`? How does the trainer handle the sampler for data parallelism?
reference code: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L975 | https://github.com/huggingface/transformers/issues/38428 | closed | [] | 2025-05-28T08:49:13Z | 2025-07-06T08:02:36Z | 3 | kxzxvbk |
huggingface/transformers | 38,425 | Can not load TencentBAC/Conan-embedding-v2 | ### System Info
Description
When attempting to load the “Conan-embedding-v2” model directly via transformers.AutoModel.from_pretrained, I get a ValueError indicating that the repo’s config.json lacks a model_type key. This prevents the Transformers library from inferring which model class to instantiate.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoModel
model = AutoModel.from_pretrained("TencentBAC/Conan-embedding-v2")
ValueError: Unrecognized model in TencentBAC/Conan-embedding-v2.
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, bart, bert, …, whisper, xlnet, …
### Expected behavior
AutoModel.from_pretrained("TencentBAC/Conan-embedding-v2") should load the model automatically, or at minimum provide guidance on how to set the correct model_type. | https://github.com/huggingface/transformers/issues/38425 | closed | [
"bug"
] | 2025-05-28T08:21:23Z | 2025-05-28T14:58:03Z | 1 | shanekao-sks |
huggingface/accelerate | 3,596 | How to distribute the model into multiple GPUs using accelerate? | I have 4 GPUs. If I only use a single GPU to train the model, there will be an OutOfMemoryError raised. How can I distribute the model into all the 4 GPUs to avoid the OutOfMemoryError using accelerate? | https://github.com/huggingface/accelerate/issues/3596 | closed | [] | 2025-05-28T06:27:08Z | 2025-05-28T14:06:18Z | null | GeorgeCarpenter |
huggingface/candle | 2,971 | Enhance the usability of the tensor struct | Hello,
I’m currently learning Candle by working through the book Dive into Deep Learning and implementing its code in Candle. I noticed that Candle is missing some practical utility functions, such as:
* The Frobenius norm
* dot product (vector or matrix dot product)
* matrix-vector multiplication
While these functions aren’t overly complex to implement manually, having them natively supported by the Tensor struct would significantly improve usability.
I’ve tried adding some of these functions myself to extend Candle’s functionality (to make it more user-friendly). | https://github.com/huggingface/candle/issues/2971 | closed | [] | 2025-05-28T03:41:44Z | 2025-05-29T07:41:02Z | 1 | ssfdust |
huggingface/transformers.js | 1,323 | Cannot get the SAM model running like in example | ### Question
I've found that transformers.js supports SAM as written in 2.14.0 release notes.
https://github.com/huggingface/transformers.js/releases/tag/2.14.0
I'm running the code on a M1 mac in a Brave browser.
But after I've used and adapted the example script, I can actually see in my browser console that the model is loaded and the browser is working.
<img width="1129" alt="Image" src="https://github.com/user-attachments/assets/fd256c77-62f5-4da2-a44c-cbb022333789" />
But then it suddenly crashes with the following error:
```
transformers.js:11821 Uncaught Error: An error occurred during model execution: "Missing the following inputs: input_points, input_labels.
```
**My adapted code looks like this:**
````javascript
// using version 3.5.1
import {AutoProcessor, RawImage, SamModel} from "./node_modules/@huggingface/transformers/dist/transformers.js";
const model = await SamModel.from_pretrained('Xenova/slimsam-77-uniform');
const processor = await AutoProcessor.from_pretrained('Xenova/slimsam-77-uniform');
const img_url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/corgi.jpg';
const raw_image = await RawImage.read(img_url);
const input_points = [[[340, 250]]] // 2D localization of a window
const inputs = await processor(raw_image, input_points);
const outputs = await model(inputs); /// Error happens here
const masks = await processor.post_process_masks(outputs.pred_masks, inputs.original_sizes, inputs.reshaped_input_sizes);
console.log(masks);
// [
// Tensor {
// dims: [ 1, 3, 410, 614 ],
// type: 'bool',
// data: Uint8Array(755220) [ ... ],
// size: 755220
// }
// ]
const scores = outputs.iou_scores;
console.log(scores);
// Tensor {
// dims: [ 1, 1, 3 ],
// type: 'float32',
// data: Float32Array(3) [
// 0.8350210189819336,
// 0.9786665439605713,
// 0.8379436731338501
// ],
// size: 3
// }
````
Markup:
````html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
</head>
<body>
<h1>SAM DEMO</h1>
<script src="main.js" type="module">
</script>
<pre id="pre"></pre>
</body>
</html>
````
Can you maybe give me a hint what's the issue here or what I must e.g. change according to major version changes.
Thanks so much :-) | https://github.com/huggingface/transformers.js/issues/1323 | closed | [
"question"
] | 2025-05-27T20:01:49Z | 2025-11-29T12:32:29Z | null | BernhardBehrendt |
huggingface/chat-ui | 1,836 | Search feature tasks | We implemented a first version of the search chat feature in #1823, there's still some todos if people feel like tackling:
- [ ] Right now we only return the N most relevant snippets, we would need to return all matching conversations and implement infinite loading & pagination. The building blocks already exist in `NavMenu.svelte` they need to be ported over.
- [ ] - It would be nice to show, below the conversation title, a little sample of text which matches the search query, so we can see why it matched, right now we only show the title. | https://github.com/huggingface/chat-ui/issues/1836 | closed | [
"enhancement",
"help wanted",
"front",
"back"
] | 2025-05-27T08:17:44Z | 2025-06-02T14:30:40Z | 7 | nsarrazin |
huggingface/transformers | 38,396 | Can I disable all CI works in my forked version of Transformers? | After I synced the `main` branch of Transformers in my forked version, github keeps running CI works and fails. Can I disable it? Thanks. | https://github.com/huggingface/transformers/issues/38396 | closed | [] | 2025-05-27T04:44:07Z | 2025-05-28T18:06:31Z | 2 | ChengLyu |
huggingface/doc-builder | 564 | How to ignore some line when applying style? | I have this in my code:
```python
expected_output = textwrap.dedent("""\
╭────────────────────── Step 42 ───────────────────────╮
│ ┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┓ │
│ ┃ Prompt ┃ Completion ┃ Correctness ┃ Format ┃ │
│ ┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━┩ │
│ │ The sky is │ blue. │ 0.12 │ 0.79 │ │
│ ├────────────┼──────────────┼─────────────┼────────┤ │
│ │ The sun is │ in the sky. │ 0.46 │ 0.10 │ │
│ └────────────┴──────────────┴─────────────┴────────┘ │
╰──────────────────────────────────────────────────────╯
""")
```
And it gets reformatted into this:
```python
expected_output = textwrap.dedent("""\
╭────────────────────── Step 42 ───────────────────────╮ │ ┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┓
│ │ ┃ Prompt ┃ Completion ┃ Correctness ┃ Format ┃ │ │ ┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━┩ │ │
│ The sky is │ blue. │ 0.12 │ 0.79 │ │ │ ├────────────┼──────────────┼─────────────┼────────┤ │ │ │ The sun is
│ in the sky. │ 0.46 │ 0.10 │ │ │ └────────────┴──────────────┴─────────────┴────────┘ │
╰──────────────────────────────────────────────────────╯
""")
```
is there a way to avoid this? | https://github.com/huggingface/doc-builder/issues/564 | open | [] | 2025-05-26T21:58:08Z | 2025-05-26T21:59:13Z | null | qgallouedec |
huggingface/safetensors | 609 | Properties data | ### Feature request
Please add properties for the content of safetensor files.
(These should be readable without having to load the whole file.)
### Motivation
Rename all your safetensor files to a numeric value from 1.safetensors to n.safetensors, where n is the amount of such files you have.
Now try to find out, what is inside, like:
- Model type (checkpoint, lora, ip-adapter-files, anything else)
- Application type (SD1, SD2, SD3, SDXL, FLUX, Audio, Video and more)
- Original name
- Version
- and more ...
The safetensor file is like a package without any description. There's something inside, but you don't have any possibility to see what it is.
What users are missing is the package label that tells them, what's inside, like anything in the warehouse. If you go shopping, such a label tells you the name, the producers name, the weight and normally something about the ingredients.
It would be very useful, if a safetensor package could do this too.
### Your contribution
I just have the idea.
I don't know how to PR ... | https://github.com/huggingface/safetensors/issues/609 | closed | [] | 2025-05-26T20:06:13Z | 2025-06-16T12:13:08Z | 2 | schoenid |
huggingface/open-r1 | 660 | How to control the number of responses per query for each benchmark? | Hi, thank you for the great work!
In the README, I noticed that you mention the use of different numbers of responses per query for estimating pass@1 across benchmarks. For example:
Benchmark | Number of responses per query
-- | --
AIME 2024 | 64
MATH-500 | 4
GPQA Diamond | 8
LiveCodeBench | 16
However, I'm unable to find where in the code or CLI these values are configured. When running the following example:
```
NUM_GPUS=1
MODEL=deepseek-ai/{model_name}
MODEL_ARGS="model_name=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,data_parallel_size=$NUM_GPUS,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL
lighteval vllm $MODEL_ARGS "lighteval|aime24|0|0" \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
Does this automatically sample 64 responses per query for AIME24, as indicated in the table? Or do I need to explicitly specify the number of responses? If so, how can I pass that parameter through the CLI? | https://github.com/huggingface/open-r1/issues/660 | open | [] | 2025-05-26T14:38:15Z | 2025-05-27T15:32:50Z | null | Zoeyyao27 |
huggingface/transformers | 38,377 | Why are the model classes in unit tests imported directly from the transformer package instead of directly importing the model classes in the file? Is there any special consideration? | ### Feature request
Take qwen3MoE unit test as an example:
if is_torch_available():
    import torch

    from transformers import (
        Qwen3MoeForCausalLM,
        Qwen3MoeForQuestionAnswering,
        Qwen3MoeForSequenceClassification,
        Qwen3MoeForTokenClassification,
        Qwen3MoeModel,
    )
Why not this:
from src.transformers.models.qwen3_moe.modeling_qwen3_moe import (
    Qwen3MoeForCausalLM,
    Qwen3MoeForQuestionAnswering,
    Qwen3MoeForSequenceClassification,
    Qwen3MoeForTokenClassification,
    Qwen3MoeModel,
)
### Motivation
Unit tests should guard their own code files
### Your contribution
No PR has been submitted yet | https://github.com/huggingface/transformers/issues/38377 | open | [
"Feature request"
] | 2025-05-26T11:41:19Z | 2025-05-26T11:41:19Z | 0 | ENg-122 |
huggingface/transformers | 38,375 | Unable to run run_instance_segmentation_no_trainer with HF Accelerate | ### System Info
I am trying to run the [examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py](https://github.com/huggingface/transformers/blob/d1b92369ca193da49f9f7ecd01b08ece45c2c9aa/examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py) with HF Accelerate. I was able to run the other Trainer API example successfully, but the No Trainer (Accelerate) version is facing the following bug.
This is using the `4.52.0.dev0` version. The only change I've made was to set epochs=2.
The following error arose. When I prompted ChatGPT for more information, it suggested the possible causes below, but I have no idea what the root cause is. No other related issues were found and the docs bot was not working. I would appreciate advice on how to run this example script, as I hope to adapt it for my task.
| **Category** | **Potential Issue** | **Explanation** | **Recommended Fix** |
|----------------------------|--------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|
| **Model Config Mismatch** | Mismatch in `num_labels` vs checkpoint (81 vs 3) | Causes some layers (e.g., `class_predictor`) to be randomly initialized, might desync ranks | Set `config.num_labels = 3` **before** loading the model or use a matching checkpoint |
| **DDP Desynchronization** | Different logic across ranks (e.g., `if rank == 0:` doing extra things) | All ranks must call collectives in the same order and time | Ensure logic is **identical** across all ranks |
| **Evaluation in DDP** | Evaluation logic not synchronized | Can cause hanging during collective ops like `all_gather` | Skip evaluation for non-zero ranks or use `if rank == 0:` carefully |
| **GPU Communication** | NCCL timeout or deadlock due to driver/hardware/GIL issues | Long-running or stuck collectives cause watchdog termination | Set env vars: `NCCL_BLOCKING_WAIT=1`, `NCCL_ASYNC_ERROR_HANDLING=1`, and reduce batch size if needed |
| **Distributed Setup** | Improper `accelerate` or `torchrun` configuration | One process might be behaving incorrectly | Test with single GPU first: `CUDA_VISIBLE_DEVICES=0 accelerate launch --num_processes=1 ...` |
| **Deprecated Args** | `_max_size` passed to `Mask2FormerImageProcessor` | Harmless, but messy | Remove `_max_size` from processor initialization |
| **Resource Overload** | GPU memory, bandwidth, or CPU bottleneck | Can indirectly cause slowdowns or crashes | Monitor with `nvidia-smi`, lower batch size, reduce `num_workers` |
Error message below:
```
loading weights file model.safetensors from cache at /home/jiayi/.cache/huggingface/hub/models--facebook--mask2former-swin-tiny-coco-instance/snapshots/22c4a2f15dc88149b8b8d9f4d42c54431fbd66f6/model.safetensors
Instantiating SwinBackbone model under default dtype torch.float32.
All model checkpoint weights were used when initializing Mask2FormerForUniversalSegmentation.
Some weights of Mask2FormerForUniversalSegmentation were not initialized from the model checkpoint at facebook/mask2former-swin-tiny-coco-instance and are newly initialized because the shapes did not match:
- class_predictor.bias: found shape torch.Size([81]) in the checkpoint and torch.Size([3]) in the model instantiated
- class_predictor.weight: found shape torch.Size([81, 256]) in the checkpoint and torch.Size([3, 256]) in the model instantiated
- criterion.empty_weight: found shape torch.Size([81]) in the checkpoint and torch.Size([3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/raid/jiayi/safety_barrier_breach/mask2former_hf/venv/lib/python | https://github.com/huggingface/transformers/issues/38375 | closed | [
"bug"
] | 2025-05-26T10:23:04Z | 2025-07-05T08:03:07Z | 3 | gohjiayi |
huggingface/huggingface_hub | 3,117 | how to download huggingface model files organize the http header and so on in other language | Hi,
I want to use another language like java or scala to download huggging face model and config.json. but meet connnect error , it is not make sense . so I want to know does huggingface have some more setting to download file ?
````
package torch.tr
import java.io.FileOutputStream
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.time.Duration
object HuggingFaceDownloader {
def main(args: Array[String]): Unit = {
val fileUrl = "https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/resolve/main/config.json"
val savePath = "config.json"
val headers = Map(
"Accept-Encoding" -> "identity",
// "user-agent" -> "transformers/0.0.1; java/23.0.2+7-58; hf_hub/null; java/23.0.2; file_type/config; from_autoclass/false; session_id/1AC306C59B944E9EA06A482682BE9584; unknown/None",
"authorization" -> "Bearer hf_XXAdogOLotfVSVFMKrWXSITeByDgRe"
)
try {
downloadFile(fileUrl, savePath, headers)
println(s"File downloaded successfully, saved to: $savePath")
} catch {
case e: Exception =>
System.err.println(s"File download failed: ${e.getMessage}")
e.printStackTrace()
}
}
def downloadFile(fileUrl: String, savePath: String, headers: Map[String, String]): Unit = {
val client = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(10))
.followRedirects(HttpClient.Redirect.NORMAL)
.build()
val requestBuilder = HttpRequest.newBuilder()
.uri(URI.create(fileUrl))
.GET()
headers.foreach { case (key, value) =>
requestBuilder.header(key, value)
}
val request = requestBuilder.build()
val response = client.send(request, HttpResponse.BodyHandlers.ofInputStream())
if (response.statusCode() == 200) {
val inputStream = response.body()
val outputStream = new FileOutputStream(savePath)
try {
val buffer = new Array[Byte](4096)
var bytesRead = inputStream.read(buffer)
while (bytesRead != -1) {
outputStream.write(buffer, 0, bytesRead)
bytesRead = inputStream.read(buffer)
}
} finally {
inputStream.close()
outputStream.close()
}
} else {
      throw new Exception(s"Download failed, status code: ${response.statusCode()}")
}
}
}
```
```java
package dev.transformers4j.transformers;
import java.io.BufferedInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.URL;
public class HuggingFaceDownloader2 {
public static void main(String[] args) {
String fileUrl = "https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/resolve/main/config.json";
        String savePath = "config.json"; // local path to save the file
try {
downloadFile(fileUrl, savePath);
            System.out.println("File downloaded successfully, saved to: " + savePath);
} catch (IOException e) {
            System.err.println("File download failed: " + e.getMessage());
e.printStackTrace();
}
}
    /**
     * Download the file at the given URL and save it to a local path
     * @param fileUrl  URL of the file to download
     * @param savePath local path to save the file to
     * @throws IOException if an I/O error occurs while downloading or saving the file
     */
public static void downloadFile(String fileUrl, String savePath) throws IOException {
URL url = new URL(fileUrl);
try (BufferedInputStream in = new BufferedInputStream(url.openStream());
FileOutputStream fileOutputStream = new FileOutputStream(savePath)) {
            System.out.println("Saving to: " + savePath);
byte[] dataBuffer = new byte[1024];
int bytesRead;
while ((bytesRead = in.read(dataBuffer, 0, 1024)) != -1) {
fileOutputStream.write(dataBuffer, 0, bytesRead);
}
}
}
}
``` | https://github.com/huggingface/huggingface_hub/issues/3117 | open | [] | 2025-05-26T10:00:25Z | 2025-06-15T14:55:48Z | null | mullerhai |
huggingface/agents-course | 510 | anyone can run unit 1 dumm agent notebook???? | <img width="1226" alt="Image" src="https://github.com/user-attachments/assets/1813be3d-0d73-478e-86fa-11304e796614" /> | https://github.com/huggingface/agents-course/issues/510 | closed | [
"question"
] | 2025-05-25T03:00:04Z | 2025-06-25T09:03:52Z | null | chaoshun2025 |
huggingface/transformers | 38,346 | Why is return_assistant_tokens_mask and continue_final_message incompatible? | I'm currently authoring a new chat template, and while debugging I encountered the check for this. However, with the check disabled, the resulting mask and template both still seem to be correct. So I'm curious as to why, or whether, this check is needed at all.
I can see it was introduced in [the original PR](https://github.com/huggingface/transformers/pull/33198), however there doesn't seem to be any justification/explanation for this assertion. | https://github.com/huggingface/transformers/issues/38346 | closed | [] | 2025-05-24T23:44:13Z | 2025-07-02T08:03:11Z | 2 | nyxkrage |
huggingface/candle | 2,967 | Logit Discrepancy Between Candle and PyTorch When Using XLM-RoBERTa Model | When running the same XLM-RoBERTa model (`s-nlp/xlmr_formality_classifier` - [HF](https://huggingface.co/s-nlp/xlmr_formality_classifier) ) in both Candle and PyTorch, I'm observing significant differences in the logits produced by the model's classification head for identical inputs. Is this expected behavior? See [this repository](https://github.com/jpe90/candle-pytorch-parity-testing/tree/master/xlm-roberta-finetuned) for a reproduction.
## Environment/Setup
- Model: `s-nlp/xlmr_formality_classifier`
- Candle version: 0.9.1
- Model SHA256: `66037d963856d6d001f3109d2b3cf95c76bce677947e66f426299c89bc1b58e7`
- OS: macOS
## Observed Behavior
Given identical inputs, the logits produced by Candle and PyTorch differ significantly:
**Candle logits:**
```
[[2.0820313, -1.7548828], [0.7783203, -0.5629883], [1.2871094, -1.0039063], [2.1601563, -1.9277344]]
```
**PyTorch logits:**
```
[[ 2.6433, -2.3445],
[ 1.0379, -0.9621],
[ 1.4154, -1.2704],
[ 3.4423, -3.1726]]
```
## Expected Behavior
I would expect the logits to be extremely close (within floating-point precision differences) when running the same model with identical inputs across different frameworks.
## Steps to Reproduce
1. Clone the repository: https://github.com/jpe90/candle-pytorch-parity-testing
2. Run the PyTorch implementation in `/xlm-roberta-finetuned/pytorch/main.py`
3. Run the Candle implementation in `/xlm-roberta-finetuned/candle/src/main.rs`
4. Compare the logits produced by both implementations
## Additional Context
- The tokenization appears to be identical between both implementations (identical token IDs)
- I checked and made sure model checksums match at runtime
- Config seems to match ([see here](https://github.com/jpe90/candle-pytorch-parity-testing/blob/master/xlm-roberta-finetuned/troubleshooting.md))
## Questions
1. Should I expect identical (or very close) logits between PyTorch and Candle implementations?
2. If differences are expected, what is the acceptable range of variation?
3. Could these differences impact more sensitive applications that rely on logit values rather than just the final classifications?
4. Are there known issues with XLM-RoBERTa models specifically in Candle?
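For reference, a quick tolerance check on the numbers above (plain Python, no framework needed):

```python
candle = [[2.0820313, -1.7548828], [0.7783203, -0.5629883],
          [1.2871094, -1.0039063], [2.1601563, -1.9277344]]
pytorch = [[2.6433, -2.3445], [1.0379, -0.9621],
           [1.4154, -1.2704], [3.4423, -3.1726]]

max_abs_diff = max(abs(a - b) for ra, rb in zip(candle, pytorch)
                   for a, b in zip(ra, rb))
same_argmax = all(ra.index(max(ra)) == rb.index(max(rb))
                  for ra, rb in zip(candle, pytorch))
print(max_abs_diff, same_argmax)  # ~1.28, True
```

So the final classifications agree, but the gap (up to ~1.28) is orders of magnitude larger than float rounding noise. One hedged guess worth ruling out: the Candle values look like half-precision numbers (e.g. 2.0820313), so the Candle run may be executing in f16/bf16 while PyTorch runs f32.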
| https://github.com/huggingface/candle/issues/2967 | closed | [] | 2025-05-24T17:24:33Z | 2025-05-26T10:45:24Z | 2 | jpe90 |
huggingface/diffusers | 11,607 | with a custom attention processor for Flux.dev, inference time changes when manually load and inject the transformer model into a flux pipeline versus let the flux pipeline constructor load the transformer internally. | With a custom attention processor for Flux.dev transformer, the inference time is different between the following two ways:
1. Manually load and inject the transformer into a flux.dev pipeline
2. Let the pipeline constructor load the transformer internally
The inference time of the first way is about 15% slower than the second way.
What is the reason?
I built diffusers from the source code.
Any insights are appreciated! | https://github.com/huggingface/diffusers/issues/11607 | closed | [] | 2025-05-24T06:42:11Z | 2025-05-26T01:27:00Z | 1 | LinchuanXuTheSEAAI |
huggingface/transformers | 38,326 | Allow `MllamaModel` to accept `pixel_values` and `inputs_embeds` | ### Feature request
`MllamaModel` does not allow users to pass `pixel_values` and `inputs_embeds` simultaneously:
https://github.com/huggingface/transformers/blob/54cd86708d2b63a1f696ee1c59384a2f04100f57/src/transformers/models/mllama/modeling_mllama.py#L1702-L1705
However, commenting out those lines and running the following script does generate the same logits:
```python
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
model_id, device_map="auto", torch_dtype=torch.bfloat16
)
processor = AutoProcessor.from_pretrained(model_id)
messages = [
[
{
"role": "user",
"content": [
{
"type": "image",
"url": "https://llava-vl.github.io/static/images/view.jpg",
},
{"type": "text", "text": "What does the image show?"},
],
}
],
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model(**inputs)
# Manually compute inputs_embeds
input_ids = inputs.pop("input_ids")
inputs_embeds = model.get_input_embeddings()(input_ids)
new_outputs = model(inputs_embeds=inputs_embeds, **inputs)
assert torch.allclose(outputs.logits, new_outputs.logits)
```
### Motivation
Being able to pass `inputs_embeds` along with `pixel_values` enables soft embeddings to be passed to the model in addition to images, which is useful for prompt tuning.
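To make the prompt-tuning use case concrete, a toy sketch (dimensions are made up, not Mllama's real hidden size; in practice the soft prompt would be a trained `nn.Parameter` prepended to the output of `get_input_embeddings()`):

```python
n_virtual, seq_len, hidden = 3, 5, 4  # toy sizes for illustration only

soft_prompt = [[0.02] * hidden for _ in range(n_virtual)]   # learnable "virtual tokens"
token_embeds = [[0.0] * hidden for _ in range(seq_len)]     # from get_input_embeddings()(input_ids)
inputs_embeds = soft_prompt + token_embeds                  # what you'd pass as inputs_embeds

print(len(inputs_embeds), len(inputs_embeds[0]))  # (n_virtual + seq_len, hidden) = 8 4
```

With the current check, such a combined `inputs_embeds` cannot be passed together with `pixel_values`, which is exactly what prompt tuning of the vision-conditioned model would need.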
### Your contribution
Could contribute a PR removing the check assuming there isn't something I'm unaware of about the check. | https://github.com/huggingface/transformers/issues/38326 | closed | [
"Feature request"
] | 2025-05-23T15:26:28Z | 2025-05-27T16:33:57Z | 1 | dxoigmn |
huggingface/transformers | 38,323 | `PYTHONOPTIMIZE=2` seems not work with `transformers-`based library | ### System Info
I currently have the latest packages installed.
torch 2.6.0+cu124
transformers 4.51.3
sentence-transformers 4.1.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Error:
```python
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "D:\Dataset\AgentAI\.venv\Lib\site-packages\transformers\modeling_utils.py", line 5494, in <module>
class SQuADHead(nn.Module):
...<113 lines>...
)
File "D:\Dataset\AgentAI\.venv\Lib\site-packages\transformers\modeling_utils.py", line 5513, in SQuADHead
@replace_return_docstrings(output_type=SquadHeadOutput, config_class=PretrainedConfig)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Dataset\AgentAI\.venv\Lib\site-packages\transformers\utils\doc.py", line 1194, in docstring_decorator
lines = func_doc.split("\n")
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'
```
A simple reproduction:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')
embedding = model.encode("What is the capital of France?")
print(embedding.shape)
```
### Expected behavior
This is not actually a bug report so much as a request for a documentation update from the `transformers` maintainers: end users who use or develop a `transformers`-based library should be warned not to strip out docstrings by setting `PYTHONOPTIMIZE=2` to reduce bytecode size, because the function `replace_return_docstrings` at `src/transformers/utils/doc.py` breaks in that case. Using `PYTHONOPTIMIZE=1` is fine.
The reason is that `replace_return_docstrings` is a decorator that does not support the case of a missing docstring. In some cases, such as web hosting on Docker, a production environment, or hosting an LLM without tool calls, docstrings are usually stripped out.
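A minimal None-tolerant sketch of such a decorator (my own illustration, not the actual `transformers` implementation), showing where the real one fails:

```python
def replace_return_docstrings_safe(output_type=None, config_class=None):
    def decorator(fn):
        doc = fn.__doc__
        if doc is None:              # True for all functions under PYTHONOPTIMIZE=2 / python -OO
            return fn                # nothing to rewrite: pass the function through untouched
        lines = doc.split("\n")      # the real decorator calls .split() here and raises
        fn.__doc__ = "\n".join(lines)
        return fn
    return decorator

@replace_return_docstrings_safe()
def undocumented():                  # simulates a function whose docstring was stripped
    return 42

print(undocumented())  # 42, no AttributeError
```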
In the reproduction above (my use case), I just need to run the RAG search and thus don't need the docstring to be there. | https://github.com/huggingface/transformers/issues/38323 | closed | [
"bug"
] | 2025-05-23T14:24:34Z | 2025-05-26T14:29:17Z | 1 | IchiruTake |
huggingface/candle | 2,965 | Are there any support for complex number? | Is there any support for complex numbers? | https://github.com/huggingface/candle/issues/2965 | closed | [] | 2025-05-23T09:33:47Z | 2025-11-23T22:16:54Z | 1 | hndrbrm |
huggingface/accelerate | 3,586 | Where is PartialState._shared_state initialized? | Hi! When I step through the code line by line, before this line ([entering `__init__` of `AcceleratorState`](https://github.com/huggingface/accelerate/blob/v0.34.2/src/accelerate/state.py#L856)), `PartialState._shared_state` returns
```
{}
```
But after entering `__init__` of `AcceleratorState`, `PartialState._shared_state` returns
```
{'_cpu': False, 'backend': 'nccl', 'device': device(type='cuda', index=0), 'debug': False, 'distributed_type': <DistributedType.DEE...EEPSPEED'>, 'num_processes': 1, 'process_index': 0, 'local_process_index': 0, 'fork_launched': False}
```
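A minimal sketch of the shared-state (Borg) pattern that seems to be at play here, where every instance aliases its `__dict__` to the class-level dict (my reconstruction, not accelerate's exact code):

```python
class SharedState:
    _shared_state: dict = {}                 # the empty {} seen before any init

    def __init__(self, **state):
        self.__dict__ = self._shared_state   # every instance aliases the same dict
        if not self.__dict__:                # first fully-initialized instance wins
            self.__dict__.update(state)

a = SharedState()                            # _shared_state is still {}
b = SharedState(backend="nccl")              # first real init populates it for everyone
print(a.backend, SharedState._shared_state)  # nccl {'backend': 'nccl'}
```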
I'm wondering where `PartialState._shared_state` is initialized? | https://github.com/huggingface/accelerate/issues/3586 | closed | [] | 2025-05-23T08:17:44Z | 2025-06-30T15:08:15Z | null | SonicZun |
huggingface/transformers | 38,300 | Will Gemma 3n be added to transformers? | ### Model description
Question: Are there plans from Google or Huggingface to implement Gemma 3n in other frameworks?
I've seen the LiteRT weights and the Android app link on Hugging Face, and was wondering if it would be possible to convert the model architecture in the *.task file to a transformers PyTorch Module?
Personally I'm really interested in the Per-Layer Embeddings and MatFormer implementation they used, but I do not have any experience with TensorFlow Lite.
### Open source status
- [ ] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/google/gemma-3n-E4B-it-litert-preview | https://github.com/huggingface/transformers/issues/38300 | closed | [
"New model"
] | 2025-05-22T15:26:20Z | 2025-06-30T07:07:53Z | 4 | TheMrCodes |
huggingface/transformers | 38,281 | KeyError in Llama-4-Maverick-17B-128E-Instruct-FP8 Inference with Offloading | ### Issue Description
Loading `meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` succeeds with `transformers==4.51.0`, but inference fails with `KeyError: 'model.layers.37.feed_forward.experts.gate_up_proj'` during `model.generate`. This occurs on 4x NVIDIA RTX A6000 (~196GB VRAM, CUDA 12.4, Python 3.12.3, Ubuntu 24.04.2) with offloading, critical for sentiment analysis (~100–150GB/day, ~85–90% accuracy). Disabling MoE (`num_experts=0`) didn’t resolve it.
### Steps to Reproduce
1. Install dependencies:
```bash
pip install torch==2.4.1 accelerate==1.7.0 compressed-tensors==0.9.4 transformers==4.51.0
```
2. Confirm model files (~389GB, 84 .safetensors) at /mnt/data/ai_super_palace/models/llama4/.
3. Run:
```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

os.environ["TORCHVISION_DISABLE_NMS"] = "1"
model = AutoModelForCausalLM.from_pretrained(
    '/mnt/data/ai_super_palace/models/llama4',
    torch_dtype=torch.float16,
    device_map="auto",
    low_cpu_mem_usage=True,
    offload_folder="/mnt/data/ai_super_palace/models/llama4/offload",
    config={"parallel_style": "none"}
)
tokenizer = AutoTokenizer.from_pretrained('/mnt/data/ai_super_palace/models/llama4')
prompt = "What is the sentiment of this text: 'I love this product, it's amazing!'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
4. Error:
```
KeyError: 'model.layers.37.feed_forward.experts.gate_up_proj'
```
**Environment**
- Transformers: 4.51.0
- Python: 3.12.3
- PyTorch: 2.4.1
- CUDA: 12.4
- Accelerate: 1.7.0
- Compressed-tensors: 0.9.4
- OS: Ubuntu 24.04.2 LTS
- Hardware: 4x NVIDIA RTX A6000 (~196GB VRAM)
- Model: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
**Additional Details**
- Model card requires transformers>=4.51.0, supports FP8 via compressed-tensors.
- Warnings: Uninitialized MoE weights (feed_forward.experts.*), offloaded parameters (VRAM limit).
- Prior errors (TypeError: NoneType not iterable) resolved with config={"parallel_style": "none"}.
- Suspect bug in accelerate offloading or MoE weight initialization.
**Request**
- Is this a known llama4 MoE offloading issue?
- Can MoE weights be initialized or offloading fixed?
- Workaround for inference without re-downloading (~389GB)?
Urgent for sentiment analysis.
**Logs**
See traceback above. config.json (40KB) available.
Thank you!
| https://github.com/huggingface/transformers/issues/38281 | closed | [] | 2025-05-22T05:45:30Z | 2025-07-27T08:03:11Z | 4 | pchu2025 |
huggingface/transformers | 38,268 | Group beam search with sampling? | ### Feature request
In the current generation code, group beam search is necessarily greedy. From a theoretical point of view, it is not very clear why that should be the case, since the diversity penalty is applied on the logits anyway, yielding a full distribution from which sampling can still be performed.
### Motivation
I think there is a reasonable use case for such a feature: diversity beam search is very useful in particular for modalities like biological sequences which increasingly use the transformers library, but I could see it be useful as well for natural language or code, to generate diverse paths without falling to the drawbacks of greedy generation. From a more abstract point of view it is also seemingly unjustified to allow sampling for standard beam search and not for diversity beam search.
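To illustrate that the penalized logits still form a perfectly sampleable distribution, a small plain-Python sketch (Hamming-style penalty as in diverse beam search; the function and argument names are mine):

```python
import math
import random

def sample_with_diversity(logits, tokens_used_by_other_groups, penalty=1.5, seed=0):
    # Subtract the diversity penalty from tokens other groups already emitted...
    adjusted = list(logits)
    for t in tokens_used_by_other_groups:
        adjusted[t] -= penalty
    # ...the result is still just a logit vector: softmax it and sample.
    m = max(adjusted)
    weights = [math.exp(a - m) for a in adjusted]
    return random.Random(seed).choices(range(len(weights)), weights=weights, k=1)[0]

token = sample_with_diversity([2.0, 1.9, 0.1], tokens_used_by_other_groups=[0])
```

Nothing about the penalty step makes greedy selection necessary; it only reshapes the distribution, so sampling afterwards is just as well-defined as in standard beam search.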
### Your contribution
I am aware of the work in #30810 so don't want to disrupt but would be happy to look into it. | https://github.com/huggingface/transformers/issues/38268 | open | [
"Feature request"
] | 2025-05-21T18:08:59Z | 2025-06-06T18:11:13Z | 4 | adrian-valente |
huggingface/candle | 2,961 | Shape Mismatch in MatMul During Forward Pass of ModernBertForSequenceClassification | I am fine-tuning a ModernBertForSequenceClassification model (hidden size = 768, sequence length = 128) to categorize text into one of four classes. During the initial training epoch, however, the forward pass fails with a “shape mismatch in matmul” error.
Is there any way to solve this?
# Error log
```
Tokenized shape: [4, 128]
Attention mask shape: [4, 128]
Input IDs shape: [4, 128]
Attention mask shape: [4, 128]
First sample token count: 128
Error in forward pass: shape mismatch in matmul, lhs: [4, 128], rhs: [768, 768]
Input shape: [4, 128], Attention mask shape: [4, 128]
Error: shape mismatch in matmul, lhs: [4, 128], rhs: [768, 768]
```
# Expected Behavior
Input IDs should be a tensor of shape (batch_size, sequence_length) whose values are token indices (integers) and which the embedding layer then projects into the model’s hidden dimension (hidden_size = 768) before any matrix multiplication with weight matrices of shape (768, 768)
The forward pass should succeed without dimension errors, yielding logits of shape (batch_size, num_classes).
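The error message itself suggests what is going wrong: a `[4, 128]` tensor of token ids is reaching a 768×768 dense layer directly, i.e. the embedding lookup that should turn `(batch, seq)` ids into `(batch, seq, hidden)` is being skipped, or the ids tensor has the wrong dtype and is treated as activations. A tiny shape-only sketch of the rule:

```python
def matmul_shape(a, b):
    # Matmul shape rule: the last dim of lhs must equal the first dim of rhs.
    if a[-1] != b[0]:
        raise ValueError(f"shape mismatch in matmul, lhs: {list(a)}, rhs: {list(b)}")
    return a[:-1] + b[1:]

ids = (4, 128)                       # integer token ids straight from the tokenizer
dense = (768, 768)                   # a ModernBERT attention/MLP weight

try:
    matmul_shape(ids, dense)         # what the error says is happening
except ValueError as e:
    print(e)                         # shape mismatch in matmul, lhs: [4, 128], rhs: [768, 768]

hidden = ids + (768,)                # after the embedding lookup: (batch, seq, hidden)
out = matmul_shape(hidden, dense)    # fine: (4, 128, 768)
```

So it seems worth double-checking that the input-ids tensor is created with an integer DType and flows through the embedding layer before any linear layer (hedged: a guess from the error shapes, since the failing forward call isn't shown in full).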
# Code
```
use candle_core::{Device, Tensor, D, DType, Error};
use candle_nn::{ops, loss, VarBuilder, optim::{Optimizer},var_map::VarMap};
use candle_transformers::models::modernbert::{ClassifierConfig, ClassifierPooling, ModernBertForSequenceClassification,Config
};
use hf_hub::{api::sync::Api, Repo, RepoType};
use tokenizers::{PaddingParams, Tokenizer};
use std::collections::HashMap;
use candle_optimisers::adam::{ParamsAdam, Adam};
use rand::{seq::SliceRandom, SeedableRng};
use rand::rngs::StdRng;
// Training settings
const LEARNING_RATE: f64 = 2e-5;
const EPOCHS: usize = 5;
const BATCH_SIZE: usize = 8;
const SEQ_LEN: usize = 128; // Sequence length
const SEED: u64 = 42;
// Data structure for text and label mapping
type LabeledDataset = HashMap<String, usize>;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Device selection (CPU or GPU)
let device = candle_examples::device(true)?;
println!("Using device: {:?}", device);
// HuggingFace API configuration
let revision = "main".to_string();
let api = Api::new()?;
let model_id = "answerdotai/ModernBERT-base".to_string();
let repo = api.repo(Repo::with_revision(
model_id,
RepoType::Model,
revision,
));
// Load tokenizer and model configuration
let tokenizer_filename = repo.get("tokenizer.json")?;
let config_filename = repo.get("config.json")?;
let weights_filename = repo.get("model.safetensors")?;
// Load configuration file
let config = std::fs::read_to_string(config_filename)?;
let mut config: Config = serde_json::from_str(&config)?;
// Output model configuration
println!("Model config:");
println!(" Hidden size: {}", config.hidden_size);
println!(" Intermediate size: {}", config.intermediate_size);
println!(" Max position embeddings: {}", config.max_position_embeddings);
println!(" Num attention heads: {}", config.num_attention_heads);
println!(" Num hidden layers: {}", config.num_hidden_layers);
println!(" Vocab size: {}", config.vocab_size);
// Check configuration compatibility
if config.max_position_embeddings < SEQ_LEN {
println!("Warning: SEQ_LEN ({}) is larger than max_position_embeddings ({}), adjusting SEQ_LEN",
SEQ_LEN, config.max_position_embeddings);
}
// Initialize tokenizer
let mut tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(Error::msg)?;
// Padding and truncation settings
tokenizer
.with_padding(Some(PaddingParams {
strategy: tokenizers::PaddingStrategy::Fixed(SEQ_LEN),
pad_id: config.pad_token_id,
pad_token: "[PAD]".to_string(),
pad_type_id: 0,
pad_to_multiple_of: None,
direction: tokenizers::PaddingDirection::Right,
}))
.with_truncation(Some(tokenizers::TruncationParams {
max_length: SEQ_LEN,
strategy: tokenizers::TruncationStrategy::LongestFirst,
stride: 0,
direction: tokenizers::TruncationDirection::Right,
}))
.map_err(Error::msg)?;
// Configure label mappings
let mut id2label = HashMap::new();
let mut label2id = HashMap::new();
let class_names = vec!["News", "Entertainment", "Sports", "Technology"];
for (i, name) in class_names.iter().enumerate() {
id2label.insert(i.to_string(), name.to_string());
label2id.insert(name.to_string(), i.to_string());
}
// Add classifier configuration
config.classifier_config = Some(ClassifierConfig {
id2label: id2label.clone(),
label2id: label2id.clone(),
classifier_pooling: ClassifierPooling::CLS, // Use [CLS] token for pooling
});
// Create variable map for the model
let mut varmap = VarMap::new();
// Load model weights
varmap.load(weights_filename)?;
let vb = VarBuilder::from_varmap(&varmap | https://github.com/huggingface/candle/issues/2961 | closed | [] | 2025-05-21T14:25:07Z | 2025-06-08T12:11:46Z | 2 | whitebox2 |
huggingface/transformers | 38,243 | <spam> | We are looking for an experienced Machine Learning Engineer for a BTC/USDT prediction project using CNN, LSTM, and Transformers. The goal is to forecast cryptocurrency price movements with a target accuracy of 90%+.
More details here:[ ](https://gist.github.com/DandBman/c76a548b1972da50ffe6bbdd93fdd613) | https://github.com/huggingface/transformers/issues/38243 | closed | [] | 2025-05-20T22:14:11Z | 2025-05-21T13:14:41Z | 0 | DandBman |
huggingface/diffusers | 11,590 | Infinite (not literally) length video creation using LTX-Video? | First of all, thanks to Aryan (0.9.7 integration) and DN6 (adding GGUF). The model is quite good and the output is promising.
I need help creating a continuous video using the last frame. One trick is to generate the video, extract the last frame, and run inference again. Is there an easy way to do this in a loop?
My thought is:
1. Use text encoder to generate prompt embed once and then remove text encoders from memory
2. Loop the inference code, once complete extract the last latent (preferred as I can upscale using LTXLatentUpsamplePipeline) frame or image and again create image1 and condition with that frame...and continue doing this for n iterations.
3. Also need to save the video locally for each inference, otherwise OOM.
Any thoughts / suggestions?
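The chaining logic itself can be sketched framework-free; here `generate_chunk` and the last-frame extraction are stand-ins for the pipeline call and latent handling (all names hypothetical):

```python
def generate_chunk(condition_frame, num_frames):
    # Stand-in for the pipe(...) call; the real one would build
    # LTXVideoCondition(image=condition_frame, frame_index=0) each iteration.
    return [condition_frame + i + 1 for i in range(num_frames)]

def save_chunk(frames, path):
    pass  # stand-in for export_to_video(frames, path): write to disk, keep nothing in RAM

cond, total_frames = 0, 0                  # cond stands in for the initial image1
for i in range(3):                         # n iterations
    frames = generate_chunk(cond, num_frames=4)
    save_chunk(frames, f"chunk_{i}.mp4")   # save every chunk locally to avoid OOM
    cond = frames[-1]                      # chain: last frame conditions the next chunk
    total_frames += len(frames)
```

Step 1 from the list above fits naturally before the loop: if the pipeline exposes a prompt-encoding helper, compute the prompt embeddings once, pass them into each call, and delete the text encoder; also drop each chunk's tensors (and empty the CUDA cache) at the end of every iteration so only the single conditioning frame persists.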
```python
import torch
import gc
from diffusers import GGUFQuantizationConfig
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline, LTXVideoTransformer3DModel
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_video, load_image
transformer_path = f"https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF/blob/main/ltxv-13b-0.9.7-distilled-Q3_K_S.gguf"
# transformer_path = f"https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF/blob/main/ltxv-13b-0.9.7-distilled-Q8_0.gguf"
transformer_gguf = LTXVideoTransformer3DModel.from_single_file(
transformer_path,
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=torch.bfloat16,
)
pipe = LTXConditionPipeline.from_pretrained(
"Lightricks/LTX-Video-0.9.7-distilled",
transformer=transformer_gguf,
torch_dtype=torch.bfloat16
)
# pipe.to("cuda")
# pipe.enable_sequential_cpu_offload()
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()
height, width = 480, 832
num_frames = 151
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
prompt = "hyperrealistic digital artwork of a young woman walking confidently down a garden pathway, wearing white button-up blouse with puffed sleeves and blue denim miniskirt, long flowing light brown hair caught in gentle breeze, carrying a small black handbag, bright sunny day with blue sky and fluffy white clouds, lush green hedges and ornamental plants lining the stone pathway, traditional Asian-inspired architecture in background, photorealistic style with perfect lighting, unreal engine 5, ray tracing, 16K UHD. camera follows subject from front as she walks forward with elegant confidence"
image1 = load_image( "assets/ltx/00039.png" )
condition1 = LTXVideoCondition(
image=image1,
frame_index=0,
)
width=512
height=768
num_frames = 161
# LOOP HERE
latents = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
conditions=[condition1],
width=width,
height=height,
num_frames=num_frames,
guidance_scale=1.0,
num_inference_steps=4,
decode_timestep=0.05,
decode_noise_scale=0.025,
image_cond_noise_scale=0.0,
guidance_rescale=0.7,
generator=torch.Generator().manual_seed(42),
output_type="latent",
).frames
# save video locally
# Update image1 = load_image( latent/image from current inference to be used with next inference)
``` | https://github.com/huggingface/diffusers/issues/11590 | closed | [] | 2025-05-20T13:37:36Z | 2025-05-20T19:51:20Z | 1 | nitinmukesh |
huggingface/agents-course | 501 | [BUG] Notebook on HF Hub is not updated | The "Workflows in LlamaIndex" [course page](https://huggingface.co/learn/agents-course/unit2/llama-index/workflows#creating-workflows) refers to a notebook on the [HF Hub](https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/workflows.ipynb), which is not the updated version from [GitHub](https://github.com/huggingface/agents-course/blob/main/notebooks/unit2/llama-index/workflows.ipynb).
The old version contains a bug in the loop event workflow, so an update is needed. | https://github.com/huggingface/agents-course/issues/501 | closed | [
"question"
] | 2025-05-20T06:45:26Z | 2025-05-29T05:28:46Z | null | karenwky |
huggingface/open-r1 | 649 | how to evaluate use local models and datasets? | I changed the README eval command as follows:
```shell
MODEL=./deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=./data/evals/

# AIME 2024
TASK=aime24
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
    --custom-tasks src/open_r1/evaluate.py \
    --use-chat-template \
    --output-dir $OUTPUT_DIR \
    --cache-dir ./datasets/aime24
```
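One thing worth trying before digging deeper (hedged: not verified against this exact lighteval version) is forcing the whole HF stack into offline mode, so the model and dataset are resolved from the local cache instead of the network:

```shell
# Make huggingface_hub / datasets / transformers use only the local cache.
export HF_HUB_OFFLINE=1
export HF_DATASETS_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
echo "$HF_HUB_OFFLINE $HF_DATASETS_OFFLINE $TRANSFORMERS_OFFLINE"
```

This only works if the dataset is already present in the cache (e.g. downloaded once on a machine with network access via `huggingface-cli download`, then copied over); otherwise the run fails fast with a cache-miss error instead of a network error.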
but it tries to use the network and gets a network error. How can I solve this problem? | https://github.com/huggingface/open-r1/issues/649 | open | [] | 2025-05-20T05:57:29Z | 2025-05-20T05:57:29Z | null | SiqingHe |
huggingface/lerobot | 1,130 | Drive mode reversed on calibration. | I had an issue where, after calibrating, drive_mode was reversed for one of my motors (0 vs. 1); as a result, moving the leader in one direction caused the follower to go the opposite direction.
Saw some suggestions that moving it through the full range of motion resolved this but I wasn't able to get that to work. I could also see cases where this could be problematic during initial setup. @Lemin2 suggested to always set this to 0 across the board, which does seem like a good fix, unless there's a reason want to control reverse mode.
In any case, I would expect the calibration process to be consistent for both arms, or else this issue will be encountered. If reverse mode is needed, maybe have a step in the calibration process to ensure consistency.
FYI in case anyone encounters this the solution is to go into `.cache/calibration/<arm>/<each of your arms>.json`
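For anyone scripting that edit, a hedged sketch (the exact JSON layout is a guess from this thread; inspect your file first) that forces every `drive_mode` entry to 0:

```python
import json
import os
import tempfile

def zero_drive_modes(path):
    with open(path) as f:
        calib = json.load(f)
    # Assumed layout: a top-level "drive_mode" list with one entry per motor.
    if isinstance(calib.get("drive_mode"), list):
        calib["drive_mode"] = [0] * len(calib["drive_mode"])
    with open(path, "w") as f:
        json.dump(calib, f, indent=2)
    return calib

# demo on a throwaway file standing in for one of the arm calibration files
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump({"drive_mode": [0, 1, 0, 1, 0, 0]}, f)
fixed = zero_drive_modes(path)
print(fixed["drive_mode"])  # [0, 0, 0, 0, 0, 0]
os.remove(path)
```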
Seems to be the same cause for #441 and #930 | https://github.com/huggingface/lerobot/issues/1130 | open | [
"bug",
"question",
"robots"
] | 2025-05-20T03:08:06Z | 2025-07-16T06:50:20Z | null | brainwavecoder9 |
huggingface/text-generation-inference | 3,233 | Docker image For llama cpp backend? | Hey,
Is there any reason in particular why Docker images for the llama-cpp backend do not get built along with new versions? The backend seems to have been ready for a while, so I'm just curious why images don't get built as part of the build pipeline.
cc @mfuntowicz | https://github.com/huggingface/text-generation-inference/issues/3233 | open | [] | 2025-05-20T02:07:46Z | 2025-05-20T02:07:46Z | 0 | vrdn-23 |
huggingface/diffusers | 11,580 | Can diffusers support loading and running FLUX with fp8 ? | This is how I use diffusers to load flux model:
```
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained(
"/ckptstorage/repo/pretrained_weights/black-forest-labs/FLUX.1-dev",
torch_dtype=torch.float16,
)
device = torch.device(f"cuda:{device_number}" if torch.cuda.is_available() else "cpu")
pipe = pipe.to(device)
```
Loading takes about 75 seconds on my computer with an A800 GPU.
But I found that ComfyUI only needs about 22 seconds to load the FLUX model, although it loads the fp8 model.
Can diffusers load the FLUX fp8 model?
Or is there any other way to speed this up? | https://github.com/huggingface/diffusers/issues/11580 | open | [] | 2025-05-19T12:18:13Z | 2025-12-12T19:30:33Z | 5 | EmmaThompson123 |
huggingface/lerobot | 1,124 | How to add force data to lerobot and models? | As the title says, I use a force sensor on an SO100 arm and want to record its data in a LeRobot dataset, then train with the force data. How can I do it?
The force data looks like a 15-dimensional list: [x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4, x5, y5, z5].
Thanks! | https://github.com/huggingface/lerobot/issues/1124 | closed | [] | 2025-05-19T07:48:20Z | 2025-05-19T13:36:44Z | null | milong26 |
huggingface/diffusers | 11,575 | Hidream Model loading takes too long — any way to speed it up? | Hi, thanks for this great project.
I'm running Hidream with this library in a serverless environment and facing major delays during model loading. It can be very frustrating, especially for time-sensitive or ephemeral deployments.
I've tried everything I could think of to reduce the loading time, but nothing has worked so far. Does anyone have any tips, tricks, or even sample code to help speed up the model initialization?
Any guidance would be greatly appreciated! | https://github.com/huggingface/diffusers/issues/11575 | open | [] | 2025-05-19T00:49:00Z | 2025-05-23T12:55:05Z | 6 | Me-verner |
huggingface/optimum | 2,275 | ONNX export for ColPali | Hi Optimum,
I have created a small tutorial how to export the ColPali late-interaction VLM in this [notebook](https://gist.github.com/kstavro/9bcdf930f0e69626dd5aa9aa5f09f867), but I think it shouldn't be too difficult to integrate it to Optimum as well.
However, as far as I have seen, there is not much support for late-interaction VLMs at the moment. So, before I get into it just by myself, I thought I could first see if someone could give me a couple of hints about some choices regarding the library, eg what base configs I should use for ColPali or if I should create new ones everywhere, what names, do we need tiny dummy models for tests, etc. | https://github.com/huggingface/optimum/issues/2275 | closed | [] | 2025-05-18T18:56:22Z | 2025-06-11T13:56:43Z | 2 | kstavro |
huggingface/transformers | 38,190 | Gibberish generations with FSDP2 and MixedPrecisionPolicy | ### System Info
```
transformers.__version__='4.51.2'
torch.__version__='2.6.0+cu124'
sys.version='3.10.17 (main, Apr 16 2025, 15:03:57) [GCC 12.1.1 20220628 (Red Hat 12.1.1-3)]'
```
### Who can help?
@SunMarc @zach-huggingface
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I'm sharding `llama-3.1-8b-instruct` on 8 GPUs using FSDP2. The goal is to be able to call `generate` during the training loop. I have noticed that if I use `MixedPrecisionPolicy` with `param_dtype=torch.bfloat16` the generations are gibberish. A hopefully reproducible example below.
```python
import os
import torch
import torch.distributed as dist
from torch.distributed._composable.fsdp import register_fsdp_forward_method
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import (
MixedPrecisionPolicy,
fully_shard,
)
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import LlamaDecoderLayer
def get_local_rank() -> int:
return int(os.environ.get("LOCAL_RANK", "0"))
def get_global_rank() -> int:
return int(os.environ.get("RANK", get_local_rank()))
def barrier():
dist.barrier(device_ids=[get_local_rank()])
def test_generate(model, tokenizer):
prompt = "Concisely answer the following question: "
queries = [
"What is the tallest animal?\n",
"What are 3 fruits larger in size than an apple?\n",
"What's the derivative of e^x?\n",
]
tokens = [tokenizer.encode(prompt + q) for q in queries]
max_len = max(len(t) for t in tokens)
padded = [[tokenizer.eos_token_id] * (max_len - len(t)) + t for t in tokens]
padded_t = torch.tensor(padded).long()
generations = model.generate(padded_t, max_new_tokens=128)
parsed = tokenizer.batch_decode(generations)
for p in parsed:
print(p, flush=True)
def main():
device = torch.device("cuda", get_local_rank())
dist.init_process_group(
backend="nccl",
)
torch.cuda.set_device(device)
LOCAL_MODEL_PATH = "/llama-3.1-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(LOCAL_MODEL_PATH)
model_config = AutoConfig.from_pretrained(LOCAL_MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
LOCAL_MODEL_PATH,
config=model_config,
use_safetensors=True,
torch_dtype=torch.float32,
)
fsdp2_kwargs = {}
fsdp2_kwargs["mesh"] = init_device_mesh(
"cuda", (torch.distributed.get_world_size(),)
)
fsdp2_kwargs["mp_policy"] = MixedPrecisionPolicy(
param_dtype=torch.bfloat16, # <<<----- If I comment this line the generations are as expected
)
for submodule in model.modules():
if isinstance(submodule, LlamaDecoderLayer):
fully_shard(submodule, **fsdp2_kwargs)
fully_shard(model, **fsdp2_kwargs)
register_fsdp_forward_method(model, "generate")
barrier()
test_generate(model, tokenizer)
barrier()
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
The following is an example of the output I get if `param_dtype=torch.bfloat16` is set:
```
<|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What is the tallest animal?
The odense aalborg limburg fetisch odense fetisch<|start_header_id|>OO
<|begin_of_text|>Concisely answer the following question: What are 3 fruits larger in size than an apple?
Here fetisch<|start_header_id|>OOOOOOOOOO
<|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What's the derivative of e^x?
The aalborg salopes<|start_header_id|>OOOOOOOOOOOOAAAAAAAA\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
```
### Expected behavior
The following is an example of the output I get if I comment out `param_dtype=torch.bfloat16` in `MixedPrecisionPolicy`:
```
<|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What is the tallest animal?
The tallest animal is the giraffe, which can grow up to 18 feet (5.5 meters) tall.
The gi | https://github.com/huggingface/transformers/issues/38190 | closed | [
"bug"
] | 2025-05-18T11:56:08Z | 2025-08-29T09:36:57Z | 17 | dlvp |
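One observation independent of the dtype question (and not a confirmed cause of the gibberish): `test_generate` above left-pads with `eos_token_id` but never passes an `attention_mask` to `generate`, which can itself degrade generations. A framework-agnostic sketch of building the mask alongside the padding — in the repro, both would be converted to tensors and passed as `generate(input_ids, attention_mask=...)`:

```python
def left_pad_with_mask(token_lists, pad_id):
    """Left-pad variable-length token lists and build the matching 0/1 mask."""
    max_len = max(len(t) for t in token_lists)
    padded = [[pad_id] * (max_len - len(t)) + t for t in token_lists]
    mask = [[0] * (max_len - len(t)) + [1] * len(t) for t in token_lists]
    return padded, mask

padded, mask = left_pad_with_mask([[1, 2, 3], [4, 5]], pad_id=0)
print(padded)  # → [[1, 2, 3], [0, 4, 5]]
print(mask)    # → [[1, 1, 1], [0, 1, 1]]
```

With eos doubling as the pad token, the mask is the only way the model can distinguish padding from a real end-of-turn token.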
huggingface/transformers | 38,181 | Add a way for `callbacks` to get a `trainer` handle | When I want to implement differential privacy for the model, I customize the gradient clipping before `optimizer.step()`, then add custom noise to the model after `optimizer.step()`. I cannot get `Trainer.optimizer` in the `callback` function; it shows as `None`. Is it possible to get a reference to the `Trainer` directly in a `callback`? | https://github.com/huggingface/transformers/issues/38181 | closed | [] | 2025-05-16T16:01:35Z | 2025-05-19T12:17:06Z | 1 | MinzhiYoyo |
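For what it's worth, the Trainer's callback handler forwards the live `optimizer` (along with the model and dataloaders) through `**kwargs` on each event, and a back-reference to the trainer can be wired up after construction. A minimal sketch of the pattern — the class below only stands in for a real `transformers.TrainerCallback` subclass, and the DP logic itself is elided:

```python
class DPCallback:
    """Stands in for a transformers.TrainerCallback subclass."""

    def __init__(self, noise_std=0.01):
        self.noise_std = noise_std
        self.trainer = None          # back-reference, set after the Trainer is built
        self.last_optimizer = None

    def on_step_end(self, args, state, control, **kwargs):
        # The CallbackHandler passes the live optimizer via **kwargs,
        # so it is reachable here even without the back-reference.
        self.last_optimizer = kwargs.get("optimizer")
        # ... clip gradients / add noise to model parameters here ...

cb = DPCallback()
cb.on_step_end(args=None, state=None, control=None, optimizer="adamw")
print(cb.last_optimizer)  # → adamw
```

In real code the wiring would look like `cb = DPCallback(); trainer = Trainer(..., callbacks=[cb]); cb.trainer = trainer`.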
huggingface/open-r1 | 645 | How to set vllm max-model-len? | I use qwen2.5-7b-Instruct to run grpo, and open yarn, to accommodate a longer window(greater than 32768). But fowllowing error exists:
```
  0%|          | 0/187 [00:00<?, ?it/s]WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
WARNING 05-16 10:48:52 scheduler.py:947] Input prompt (48173 tokens) is too long and exceeds limit of 32768
[rank2]: Traceback (most recent call last):
[rank2]:   File "/cto_studio/huyongquan/python_project/open-r1/src/open_r1/grpo.py", line 358, in <module>
[rank2]:     main(script_args, training_args, model_args)
[rank2]:   File "/cto_studio/huyongquan/python_project/open-r1/src/open_r1/grpo.py", line 309, in main
[rank2]:     train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
 | https://github.com/huggingface/open-r1/issues/645 | closed | [] | 2025-05-16T03:28:50Z | 2025-06-12T08:45:15Z | null | huyongquan |
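For reference, enabling YaRN on the model side is not enough: vLLM's window also has to be raised explicitly, or it stays at the config's 32768. A hypothetical recipe fragment — the field names (`vllm_max_model_len`, `max_prompt_length`) are assumptions here and should be checked against the `GRPOConfig` of the installed trl version:

```yaml
use_vllm: true
vllm_max_model_len: 65536   # raise vLLM's window past the 32768 default
max_prompt_length: 49152    # keep prompts below the new window
```

Whatever the exact field names, the prompt-length cap must stay below the vLLM window, otherwise the scheduler warnings above reappear.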
huggingface/transformers | 38,165 | Gemma 3 Pipeline does not accept dictionary with no images | ### System Info
System info is not really relevant, as the root cause of the bug is identified in my description below.
- `transformers` version: 4.51.3
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.11.9
- Huggingface_hub version: 0.31.2
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script: Yes
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This issue can be reproduced with the following snippet, copied from the Gemma 3 docs, on transformers versions up to and including 4.51.3.
```
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="google/gemma-3-12b-it",
device="cuda", # Or "cpu" if you don't have a compatible GPU
torch_dtype=torch.bfloat16 # Or torch.float16 or torch.float32 based on your hardware/needs
)
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
# Removed the image link from the example
{"type": "text", "text": "What is the capital of France?"} # Keep only the text part
]
}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
which will result in the error:
```
Traceback (most recent call last):
File "D:\experiments\personal\gemma_editor\gemma_editor.py", line 78, in <module>
run_gemma(SENTENCES)
File "D:\experiments\personal\gemma_editor\gemma_editor.py", line 41, in run_gemma
output = pipe(text=messages)
^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\pipelines\image_text_to_text.py", line 311, in __call__
return super().__call__(Chat(text, images), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\pipelines\base.py", line 1379, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\pipelines\base.py", line 1385, in run_single
model_inputs = self.preprocess(inputs, **preprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\pipelines\image_text_to_text.py", line 365, in preprocess
model_inputs = self.processor(images=images, text=text, return_tensors=self.framework, **processing_kwargs).to(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\models\gemma3\processing_gemma3.py", line 106, in __call__
image_inputs = self.image_processor(batched_images, **output_kwargs["images_kwargs"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\image_processing_utils.py", line 42, in __call__
return self.preprocess(images, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\utils\generic.py", line 866, in wrapper
return func(*args, **valid_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\experiments\personal\gemma_editor\venv\Lib\site-packages\transformers\models\gemma3\image_processing_gemma3.py", line 361, in preprocess
if do_rescale and is_scaled_image(images[0]):
~~~~~~^^^
IndexError: list index out of range
```
### Expected behavior
The problem here is that within `image_text_to_text`, the input dictionary is wrapped in a `Chat` object. [By default, `Chat` makes `images` an empty list](https://github.com/huggingface/transformers/blame/v4.51.3/src/transformers/pipelines/image_text_to_text.py#L114). This empty list is then propagated as [images](https://github.com/huggingface/transformers/blame/v4.51.3/src/transformers/pipelines/image_text_to_text.py#L353C16-L353C39) and ultimately lands in `processing_gemma3.py`, where the [if condition only checks whether the images are None](https://github.com/huggingface/transformers/blob/v4.51.3/src/transformers/models/gemma3/ | https://github.com/huggingface/transformers/issues/38165 | closed | [
"bug"
] | 2025-05-16T01:34:15Z | 2025-06-23T08:03:03Z | 6 | sheldonlai |
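A guard of the following shape would avoid the crash. This is only an illustration of the missing check, not the actual patch merged upstream:

```python
def has_images(images):
    # Both None and an empty list (what Chat produces for text-only
    # conversations) should skip the image processor, whose images[0]
    # access is what raises the IndexError in the traceback above.
    return images is not None and len(images) > 0

print(has_images(None), has_images([]), has_images(["img.png"]))  # → False False True
```

With such a guard in place, a text-only chat would never reach `image_processing_gemma3.py` at all.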
huggingface/lerobot | 1,114 | How to collect data and train a LeRobot policy entirely without a leader arm, purely by learning from demonstration using a main arm such as an xArm or UR-series robot | https://github.com/huggingface/lerobot/issues/1114 | closed | [
"question",
"robots",
"stale"
] | 2025-05-15T15:31:13Z | 2025-12-31T02:35:25Z | null | David-Kingsman | |
huggingface/transformers | 38,147 | How to check the number of tokens processed or the load of each expert in the Qwen3 MoE model during inference? | https://github.com/huggingface/transformers/issues/38147 | closed | [] | 2025-05-15T09:21:29Z | 2025-05-15T13:36:53Z | null | wumaotegan |
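One way to answer this question, sketched framework-agnostically: capture each layer's router logits during the forward pass (in practice via a forward hook on the gate/router module — its exact name depends on the Qwen3 MoE modeling code) and tally the top-k expert indices per token. The counting logic itself is simple:

```python
from collections import Counter

def topk_experts(router_logits, k):
    # indices of the k largest router logits for a single token
    return sorted(range(len(router_logits)),
                  key=lambda i: router_logits[i], reverse=True)[:k]

def expert_load(per_token_logits, k=2):
    counts = Counter()
    for logits in per_token_logits:
        counts.update(topk_experts(logits, k))
    return counts

# toy routing: 3 tokens over 4 experts, top-2 selection
logits = [[0.9, 0.1, 0.5, 0.2],
          [0.1, 0.8, 0.7, 0.0],
          [0.6, 0.2, 0.1, 0.9]]
print(expert_load(logits))  # experts 0 and 2 each receive 2 tokens here
```

The resulting counter directly gives the number of tokens routed to each expert, i.e. the per-expert load during inference.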