url
string | repository_url
string | labels_url
string | comments_url
string | events_url
string | html_url
string | id
int64 | node_id
string | number
int64 | title
string | user
dict | labels
list | state
string | locked
bool | assignee
dict | assignees
list | milestone
null | comments
list | created_at
timestamp[ms] | updated_at
timestamp[ms] | closed_at
timestamp[ms] | author_association
string | type
dict | active_lock_reason
null | draft
bool | pull_request
dict | body
string | closed_by
dict | reactions
dict | timeline_url
string | performed_via_github_app
null | state_reason
string | sub_issues_summary
dict | issue_dependencies_summary
dict | is_pull_request
bool | is_closed
bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/41946
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41946/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41946/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41946/events
|
https://github.com/huggingface/transformers/pull/41946
| 3,569,130,814
|
PR_kwDOCUB6oc6wiX6O
| 41,946
|
feat: add gradient_accumulation_steps argument to image classificatio…
|
{
"login": "Priyanshjain10",
"id": 240654067,
"node_id": "U_kgDODlgW8w",
"avatar_url": "https://avatars.githubusercontent.com/u/240654067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Priyanshjain10",
"html_url": "https://github.com/Priyanshjain10",
"followers_url": "https://api.github.com/users/Priyanshjain10/followers",
"following_url": "https://api.github.com/users/Priyanshjain10/following{/other_user}",
"gists_url": "https://api.github.com/users/Priyanshjain10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Priyanshjain10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Priyanshjain10/subscriptions",
"organizations_url": "https://api.github.com/users/Priyanshjain10/orgs",
"repos_url": "https://api.github.com/users/Priyanshjain10/repos",
"events_url": "https://api.github.com/users/Priyanshjain10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Priyanshjain10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-30T06:34:06
| 2025-10-30T06:48:04
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41946",
"html_url": "https://github.com/huggingface/transformers/pull/41946",
"diff_url": "https://github.com/huggingface/transformers/pull/41946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41946.patch",
"merged_at": null
}
|
## What does this PR do?
Adds `--gradient_accumulation_steps` argument to the image classification no_trainer example script, addressing issue #18436.
## Motivation
Gradient accumulation allows training with larger effective batch sizes by accumulating gradients over multiple batches before performing an optimizer step. This is especially useful when GPU memory is limited.
## Changes
- Added `--gradient_accumulation_steps` argument parser in `run_image_classification_no_trainer.py`
- Default value: 1 (no accumulation, maintains backward compatibility)
- Type: int
- Includes help text explaining the feature
## Related Issue
Fixes #18436
---
**Submitted for Hacktoberfest 2025** 🎃
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41946/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41945
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41945/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41945/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41945/events
|
https://github.com/huggingface/transformers/issues/41945
| 3,568,862,150
|
I_kwDOCUB6oc7UuIPG
| 41,945
|
Consider not using emojis in `print`, which encounterred encoding error.
|
{
"login": "acane77",
"id": 9192383,
"node_id": "MDQ6VXNlcjkxOTIzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9192383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acane77",
"html_url": "https://github.com/acane77",
"followers_url": "https://api.github.com/users/acane77/followers",
"following_url": "https://api.github.com/users/acane77/following{/other_user}",
"gists_url": "https://api.github.com/users/acane77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acane77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acane77/subscriptions",
"organizations_url": "https://api.github.com/users/acane77/orgs",
"repos_url": "https://api.github.com/users/acane77/repos",
"events_url": "https://api.github.com/users/acane77/events{/privacy}",
"received_events_url": "https://api.github.com/users/acane77/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-30T04:14:40
| 2025-10-30T04:14:40
| null |
NONE
| null | null | null | null |
### System Info
transformer version: latest commit
in file: https://github.com/huggingface/transformers/blob/main/src/transformers/utils/auto_docstring.py#L1121 (and any other files using emoji symbols)
This "🚨" symbol cause encoding error as the system charset is not UTF-8 encoded. (especially on Windows, the UTF-8 charset support is default disabled, instead the system charset is GBK, cp1252, etc)
For compatibility, it's better to not use such non-ASCII, UTF-8 charset dependent multi-word emojis.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
On windows system without UTF-8 charset enabled, run code to reach any print statement containing emoji characters. like "🚨"
### Expected behavior
Program crashed with the following error message.
```
[6424] [8464] File "transformers\models\bert\modeling_bert.py", line 778, in <module>
[6424] [8464] File "transformers\utils\auto_docstring.py", line 2048, in auto_docstring
[6424] [8464] File "transformers\utils\auto_docstring.py", line 2045, in auto_docstring_decorator
[6424] [8464] File "transformers\utils\auto_docstring.py", line 1787, in auto_class_docstring
[6424] [8464] File "transformers\utils\auto_docstring.py", line 1728, in auto_method_docstring
[6424] [8464] File "transformers\utils\auto_docstring.py", line 1243, in _get_model_info
[6424] [8464] File "transformers\utils\auto_docstring.py", line 1124, in get_model_name
[6424] [8464] File "encodings\cp1252.py", line 19, in encode
[6424] [8464] UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f6a8' in position 0: character maps to <undefined>
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41945/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41944
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41944/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41944/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41944/events
|
https://github.com/huggingface/transformers/issues/41944
| 3,568,698,595
|
I_kwDOCUB6oc7UtgTj
| 41,944
|
FA2 vs. SPDA leading to different performance on Qwen3
|
{
"login": "jiosephlee",
"id": 43046526,
"node_id": "MDQ6VXNlcjQzMDQ2NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/43046526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiosephlee",
"html_url": "https://github.com/jiosephlee",
"followers_url": "https://api.github.com/users/jiosephlee/followers",
"following_url": "https://api.github.com/users/jiosephlee/following{/other_user}",
"gists_url": "https://api.github.com/users/jiosephlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiosephlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiosephlee/subscriptions",
"organizations_url": "https://api.github.com/users/jiosephlee/orgs",
"repos_url": "https://api.github.com/users/jiosephlee/repos",
"events_url": "https://api.github.com/users/jiosephlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiosephlee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-30T02:35:01
| 2025-10-30T02:35:01
| null |
NONE
| null | null | null | null |
### System Info
Hi this is using TRL but it seems like a lower level issue.
I'm training a variant of Qwen3 (Intern-S1-mini) but I'm not using the vision tower so it's effectively Qwen3-8B. I've been doing finetuning and checking different attention implementations i.e. SPDA vs. Flash Attention 2. However, I've been getting strange results where the downstream test accuracy is different (FA2 is worse). Furthermore, it seems like this issue is accentuated with grad accumulation. I'm not sure what's the best way to share this as my current code abstracts upon HF Trainer for my personal convenience.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Here are the current values of my config
```"context_length": 0,
"per_device_train_batch_size": 16,
"gradient_accumulation_steps": 2,
"optim": "paged_adamw_8bit",
"evaluation_strategy": "epoch",
"weight_decay": 0.1,
"gradient_checkpointing": true,
"use_liger_kernel": true,
"num_train_epochs": 1,
"learning_rate": 8e-05,
"lr_scheduler_type": "cosine",
"warmup_steps": 0,
"warmup_ratio": 0.1,
"report_to": "wandb",
"run_name": "finetune_Tox_internlm_Intern-S1-mini",
"logging_steps": 1,
"logging_strategy": "steps",
"save_strategy": "no",
"remove_unused_columns": false,
"seed": 42,
"completion_only_loss": false,
"dataset_text_field": "text",
"packing": false,
"padding_free": false,
"loss_type": "nll"```
### Expected behavior
They should have equal test accuracy.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41944/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41943
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41943/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41943/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41943/events
|
https://github.com/huggingface/transformers/issues/41943
| 3,568,691,174
|
I_kwDOCUB6oc7Utefm
| 41,943
|
error: argument --include_num_input_tokens_seen/--include-num-input-tokens-seen: Truthy value expected: got non_padding but expected one of yes/no, true/false, t/f, y/n, 1/0 (case insensitive).
|
{
"login": "guofy-ai",
"id": 218227932,
"node_id": "U_kgDODQHk3A",
"avatar_url": "https://avatars.githubusercontent.com/u/218227932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guofy-ai",
"html_url": "https://github.com/guofy-ai",
"followers_url": "https://api.github.com/users/guofy-ai/followers",
"following_url": "https://api.github.com/users/guofy-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/guofy-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guofy-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guofy-ai/subscriptions",
"organizations_url": "https://api.github.com/users/guofy-ai/orgs",
"repos_url": "https://api.github.com/users/guofy-ai/repos",
"events_url": "https://api.github.com/users/guofy-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/guofy-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-30T02:31:09
| 2025-10-30T02:31:09
| null |
NONE
| null | null | null | null |
### System Info
Failed to parse argument "include_num_input_tokens_seen" using HfArgumentParser, code is unusable.
error: argument --include_num_input_tokens_seen/--include-num-input-tokens-seen: Truthy value expected: got non_padding but expected one of yes/no, true/false, t/f, y/n, 1/0 (case insensitive).
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
torchrun --save_steps=100 --cutoff_len=4096 --include_num_input_tokens_seen=non_padding
### Expected behavior
can not use args include_num_input_tokens_seen=non_padding
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41943/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41942
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41942/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41942/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41942/events
|
https://github.com/huggingface/transformers/pull/41942
| 3,568,061,322
|
PR_kwDOCUB6oc6we0yw
| 41,942
|
fix prepare_config_and_inputs_for_common bug in llava test
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T21:59:54
| 2025-10-29T22:08:41
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41942",
"html_url": "https://github.com/huggingface/transformers/pull/41942",
"diff_url": "https://github.com/huggingface/transformers/pull/41942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41942.patch",
"merged_at": null
}
|
@ydaigo , pls help review, thx very much.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41942/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41941
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41941/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41941/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41941/events
|
https://github.com/huggingface/transformers/pull/41941
| 3,567,778,419
|
PR_kwDOCUB6oc6wd2ZL
| 41,941
|
fix some ut failures on XPU w/ torch 2.9
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T20:35:39
| 2025-10-29T22:27:49
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41941",
"html_url": "https://github.com/huggingface/transformers/pull/41941",
"diff_url": "https://github.com/huggingface/transformers/pull/41941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41941.patch",
"merged_at": null
}
|
@ydshieh , pls help review, thx very much.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41941/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41940
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41940/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41940/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41940/events
|
https://github.com/huggingface/transformers/pull/41940
| 3,566,995,885
|
PR_kwDOCUB6oc6wbJS9
| 41,940
|
Fix typo in image_processing_lfm2_vl_fast
|
{
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T17:02:51
| 2025-10-29T17:12:27
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41940",
"html_url": "https://github.com/huggingface/transformers/pull/41940",
"diff_url": "https://github.com/huggingface/transformers/pull/41940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41940.patch",
"merged_at": null
}
|
# What does this PR do?
Fix small typo. Without consequences, but still confusing.
Fixes https://github.com/huggingface/transformers/issues/41919
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41940/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41939
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41939/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41939/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41939/events
|
https://github.com/huggingface/transformers/pull/41939
| 3,566,788,498
|
PR_kwDOCUB6oc6wachG
| 41,939
|
feat: add fallback to slow tokenizer when `use_fast=True` in AutoTokenizer fails at runtime
|
{
"login": "m-misiura",
"id": 82826099,
"node_id": "MDQ6VXNlcjgyODI2MDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/82826099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m-misiura",
"html_url": "https://github.com/m-misiura",
"followers_url": "https://api.github.com/users/m-misiura/followers",
"following_url": "https://api.github.com/users/m-misiura/following{/other_user}",
"gists_url": "https://api.github.com/users/m-misiura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m-misiura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m-misiura/subscriptions",
"organizations_url": "https://api.github.com/users/m-misiura/orgs",
"repos_url": "https://api.github.com/users/m-misiura/repos",
"events_url": "https://api.github.com/users/m-misiura/events{/privacy}",
"received_events_url": "https://api.github.com/users/m-misiura/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T16:05:52
| 2025-10-29T16:07:28
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41939",
"html_url": "https://github.com/huggingface/transformers/pull/41939",
"diff_url": "https://github.com/huggingface/transformers/pull/41939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41939.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds graceful fallback to slow tokenizers when `AutoTokenizer.from_pretrained()` with `use_fast=True` fails at runtime. Subsequently, this PR should improve robustness issue in tokenizer loading
## Problem
Currently, when `use_fast=True` is specified but the fast tokenizer fails to load due to runtime errors (corrupted files, missing dependencies, file permissions, etc.), the exception propagates and crashes the application. This forces users to implement defensive try-catch wrappers in production code, e.g. see this [PR](https://github.com/trustyai-explainability/guardrails-detectors/pull/56)
## Solution
Wraps fast tokenizer instantiation in a try-except block that:
- attempts to load the fast tokenizer when requested
- falls back to the slow tokenizer if loading fails (with a warning)
- re-raises the exception if no slow tokenizer is available (prevents silent failures)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Library:
- tokenizers: @ArthurZucker and @itazap
- model loading (from pretrained, etc): @CyrilVallez
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41939/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41939/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41938
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41938/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41938/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41938/events
|
https://github.com/huggingface/transformers/pull/41938
| 3,566,412,300
|
PR_kwDOCUB6oc6wZMh7
| 41,938
|
Fixed wrong padding value in OWLv2
|
{
"login": "gjamesgoenawan",
"id": 67161633,
"node_id": "MDQ6VXNlcjY3MTYxNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/67161633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gjamesgoenawan",
"html_url": "https://github.com/gjamesgoenawan",
"followers_url": "https://api.github.com/users/gjamesgoenawan/followers",
"following_url": "https://api.github.com/users/gjamesgoenawan/following{/other_user}",
"gists_url": "https://api.github.com/users/gjamesgoenawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gjamesgoenawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gjamesgoenawan/subscriptions",
"organizations_url": "https://api.github.com/users/gjamesgoenawan/orgs",
"repos_url": "https://api.github.com/users/gjamesgoenawan/repos",
"events_url": "https://api.github.com/users/gjamesgoenawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/gjamesgoenawan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T14:54:23
| 2025-10-29T16:47:28
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41938",
"html_url": "https://github.com/huggingface/transformers/pull/41938",
"diff_url": "https://github.com/huggingface/transformers/pull/41938.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41938.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR proposes changing the default padding value from 0.5 to 0.0 in OWLv2. While OWLv1 originally used a padding value of 0.5 (gray) as described in its paper [1], OWLv2 adopts 0.0 instead [2], consistent with its official implementation [3]. Using the incorrect padding value (0.5) leads to degraded performance on the LVIS dataset.
| Implementation | LVIS mAP |
| - | - |
| Scenic | 43.9 |
| Transformers (0.5 padding) | 43.4 |
| Transformers (0.0 padding) | 44.0 |
### Reproducing the results
Testing scripts:
The following scripts explicitly resized and pad the image beforehand so no padding will be done in the processor.
```
import os
import re
import torch
import argparse
import warnings
import numpy as np
import torch.distributed as dist
from torch.utils.data import Dataset, DataLoader, DistributedSampler
from transformers import Owlv2Processor, Owlv2ForObjectDetection
from PIL import Image
from lvis import LVIS, LVISResults, LVISEval
from tqdm import tqdm
warnings.filterwarnings("ignore")
NOT_PROMPTABLE_MARKER = '#'
PROMPT_TEMPLATES = [
'itap of a {}.',
'a bad photo of the {}.',
'a origami {}.',
'a photo of the large {}.',
'a {} in a video game.',
'art of the {}.',
'a photo of the small {}.',
]
def _canonicalize_string(string: str) -> str:
string = string.lower()
string = re.sub(f'[^a-z0-9-{NOT_PROMPTABLE_MARKER} ]', ' ', string)
string = re.sub(r'\s+', ' ', string)
string = re.sub(r'-+', '-', string)
string = string.strip()
string = re.sub(f'([^^]){NOT_PROMPTABLE_MARKER}+', r'\1', string)
return string
class LVISDataset(Dataset):
def __init__(self, ann_file, img_dir, processor, pad_value):
self.lvis = LVIS(ann_file)
self.img_ids = sorted(self.lvis.imgs.keys())
self.img_dir = img_dir
self.processor = processor
self.img_size = self.processor.image_processor.size['height']
self.pad_value = pad_value
def __len__(self):
return len(self.img_ids)
def __getitem__(self, idx):
img_id = self.img_ids[idx]
img_info = self.lvis.imgs[img_id]
img_path = os.path.join(self.img_dir, os.path.basename(img_info['coco_url']))
# Load image
image = Image.open(img_path).convert("RGB")
image = np.array(image).astype(np.float32) / 255.0 # scale to [0,1]
# Determine square size
max_side = max(image.shape[1], image.shape[0])
# Create padded square with floating-point pad value
pad_value = np.array(self.pad_value, dtype=np.float32) # e.g., [0.5,0.5,0.5]
padded_image = np.ones((max_side, max_side, 3), dtype=np.float32) * pad_value
# Paste original image at top-left
padded_image[:image.shape[0], :image.shape[1], :] = image
# Convert back to PIL for resizing
padded_image = Image.fromarray((padded_image * 255).astype(np.uint8))
# Resize to target size
resized_image = padded_image.resize((self.img_size, self.img_size), Image.Resampling.BILINEAR)
# Process image
pixel_values = self.processor.image_processor(
images=resized_image,
return_tensors="pt"
)['pixel_values']
return img_id, image, img_info['width'], img_info['height'], pixel_values
def collate_fn(batch):
img_ids, images, widths, heights, pixel_values = zip(*batch)
return list(img_ids), list(images), list(widths), list(heights), torch.cat(list(pixel_values), axis=0)
def main():
parser = argparse.ArgumentParser(description="Evaluate OWLv2 on LVIS dataset")
parser.add_argument("--dataset_dir", default="/path/to/lvis/dataset")
parser.add_argument("--pad_value", type=float, default=0.5)
parser.add_argument("--local_rank", default=int(os.getenv('LOCAL_RANK', -1)), type=int)
parser.add_argument("--topk", type=int, default=300)
parser.add_argument("--num_workers", type=int, default=4)
args = parser.parse_args()
torch.cuda.set_device(args.local_rank)
dist.init_process_group(
backend="nccl",
init_method="env://",
world_size=int(os.getenv("WORLD_SIZE", 1)),
rank=int(os.getenv("RANK", 0)),
device_id=torch.device(f'cuda:{args.local_rank}'),
)
rank = dist.get_rank()
world_size = dist.get_world_size()
print(f'Using Pad Value : {args.pad_value}')
device = torch.device(f"cuda:{args.local_rank}" if args.local_rank >= 0 else "cuda")
if rank == 0:
print(f"Running evaluation on {world_size} GPUs, device={device}")
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble", use_fast=True)
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble").to(device).eval()
ann_file = os.path.join(args.dataset_dir, "lvis_v1_val.json")
img_dir = os.path.join(args.dataset_dir, "val2017")
dataset = LVISDataset(ann_file, img_dir, processor=processor, pad_value=args.pad_value)
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=False)
dataloader = DataLoader(
dataset,
batch_size=1,
sampler=sampler,
collate_fn=collate_fn,
num_workers=args.num_workers,
pin_memory=True,
persistent_workers=(args.num_workers > 0)
)
lvis_gt = dataset.lvis
cats = sorted(lvis_gt.cats.items(), key=lambda x: x[0])
class_names = [cat['name'] for _, cat in cats]
texts_ens = []
for template in PROMPT_TEMPLATES:
texts_ens += [_canonicalize_string(template.format(name)) for name in class_names]
with torch.no_grad():
text_inputs = processor.tokenizer(
texts_ens, padding=True, truncation=True, max_length=16, return_tensors="pt"
).to(device)
text_outputs = model.owlv2.text_model(**text_inputs)
text_embeds = model.owlv2.text_projection(text_outputs[1])
text_embeds = text_embeds / torch.linalg.norm(text_embeds, ord=2, dim=-1, keepdim=True)
input_ids = text_inputs['input_ids'].reshape(1, -1, text_inputs['input_ids'].shape[-1])
query_mask = input_ids[..., 0] > 0
print(f'RANK {rank}, Ready!')
dist.barrier()
raw_predictions = []
progress_bar = tqdm(dataloader, desc="Evaluating") if rank == 0 else dataloader
for n, batch in enumerate(progress_bar):
img_ids, images, widths, heights, pixel_values = batch
with torch.no_grad():
num_patches_height = model.num_patches_height
num_patches_width = model.num_patches_width
vision_outputs = model.owlv2.vision_model(pixel_values=pixel_values.to(device))
last_hidden_state = vision_outputs[0]
image_embeds = model.owlv2.vision_model.post_layernorm(last_hidden_state)
class_token_out = torch.broadcast_to(image_embeds[:, :1, :], image_embeds[:, :-1].shape)
image_embeds = image_embeds[:, 1:, :] * class_token_out
image_embeds = model.layer_norm(image_embeds)
image_embeds = image_embeds.reshape(
image_embeds.shape[0], num_patches_height, num_patches_width, image_embeds.shape[-1]
)
image_feats = image_embeds.view(image_embeds.shape[0], -1, image_embeds.shape[-1])
(pred_logits, _) = model.class_predictor(image_feats, text_embeds, query_mask)
pred_boxes = model.box_predictor(image_feats, image_embeds, False)
num_templates = len(PROMPT_TEMPLATES)
num_classes = len(class_names)
scores = pred_logits.reshape(1, -1, num_templates, num_classes).mean(2)
bsz, num_patches, num_classes = scores.shape
k = min(args.topk, num_patches * num_classes)
scores_flat = scores.view(bsz, -1)
topk_scores, topk_inds = torch.topk(scores_flat, k, dim=1)
patch_inds = topk_inds // num_classes
label_inds = topk_inds % num_classes
batch_idx = torch.arange(bsz, device=pred_boxes.device).unsqueeze(-1)
selected_boxes = pred_boxes[batch_idx, patch_inds]
raw_predictions.append([
img_ids, widths, heights,
topk_scores.cpu(), label_inds.cpu(), selected_boxes.cpu()
])
torch.cuda.synchronize()
predictions = []
for img_ids, widths, heights, topk_scores_cpu, label_inds_cpu, selected_boxes_cpu in raw_predictions:
image_id = img_ids[0]
w, h = float(widths[0]), float(heights[0])
scale = max(w, h)
scores_np = topk_scores_cpu[0].numpy()
labels_np = label_inds_cpu[0].numpy()
boxes_np = selected_boxes_cpu[0].numpy()
cx, cy, bw, bh = boxes_np[:, 0], boxes_np[:, 1], boxes_np[:, 2], boxes_np[:, 3]
x, y = (cx - bw / 2) * scale, (cy - bh / 2) * scale
width, height = bw * scale, bh * scale
preds_img = [
{
"image_id": image_id,
"category_id": cats[label][0],
"bbox": [float(x[i]), float(y[i]), float(width[i]), float(height[i])],
"score": float(scores_np[i]),
}
for i, label in enumerate(labels_np)
]
predictions.extend(preds_img)
print(f'RANK {rank}, Done!')
all_predictions = [None] * world_size
dist.all_gather_object(all_predictions, predictions)
if rank == 0:
full_predictions = [p for sublist in all_predictions for p in sublist]
lvis_dt = LVISResults(lvis_gt, full_predictions)
lvis_eval = LVISEval(lvis_gt, lvis_dt, iou_type='bbox')
lvis_eval.evaluate()
lvis_eval.accumulate()
lvis_eval.summarize()
lvis_eval.print_results()
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
Commands:
```
# 0.5 padding:
torchrun --nproc-per-node=NUM_GPUS myscript.py --pad_value 0.5 --dataset_dir /path/to/lvis/
# 0.0 padding:
torchrun --nproc-per-node=NUM_GPUS myscript.py --pad_value 0.0 --dataset_dir /path/to/lvis/
```
Please prepare LVIS dataset beforehand with the following structure:
```
/path/to/lvis/
├── val2017
│ ├── 000000062833.jpg
│ └── ...
└── lvis_v1_val.json
```
After Running the scripts, the following logs should be printed:
#### 0.5 padding
```
Using Pad Value : 0.5
Running evaluation on 1 GPUs, device=cuda:0
RANK 0, Ready!
RANK 0, Done!
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds=all] = 0.434
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=300 catIds=all] = 0.600
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=300 catIds=all] = 0.473
Average Precision (AP) @[ IoU=0.50:0.95 | area= s | maxDets=300 catIds=all] = 0.330
Average Precision (AP) @[ IoU=0.50:0.95 | area= m | maxDets=300 catIds=all] = 0.533
Average Precision (AP) @[ IoU=0.50:0.95 | area= l | maxDets=300 catIds=all] = 0.652
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= r] = 0.403
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= c] = 0.430
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= f] = 0.451
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds=all] = 0.563
Average Recall (AR) @[ IoU=0.50:0.95 | area= s | maxDets=300 catIds=all] = 0.406
Average Recall (AR) @[ IoU=0.50:0.95 | area= m | maxDets=300 catIds=all] = 0.672
Average Recall (AR) @[ IoU=0.50:0.95 | area= l | maxDets=300 catIds=all] = 0.805
```
#### 0.0 padding
```
Using Pad Value : 0.0
Running evaluation on 1 GPUs, device=cuda:0
RANK 0, Ready!
RANK 0, Done!
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds=all] = 0.440
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=300 catIds=all] = 0.602
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=300 catIds=all] = 0.482
Average Precision (AP) @[ IoU=0.50:0.95 | area= s | maxDets=300 catIds=all] = 0.333
Average Precision (AP) @[ IoU=0.50:0.95 | area= m | maxDets=300 catIds=all] = 0.540
Average Precision (AP) @[ IoU=0.50:0.95 | area= l | maxDets=300 catIds=all] = 0.664
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= r] = 0.406
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= c] = 0.438
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= f] = 0.458
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds=all] = 0.570
Average Recall (AR) @[ IoU=0.50:0.95 | area= s | maxDets=300 catIds=all] = 0.411
Average Recall (AR) @[ IoU=0.50:0.95 | area= m | maxDets=300 catIds=all] = 0.678
Average Recall (AR) @[ IoU=0.50:0.95 | area= l | maxDets=300 catIds=all] = 0.815
```
Reference:
[1] [OWLv1](https://arxiv.org/pdf/2205.06230) (Figure A4.)
[2] [OWLv2](https://arxiv.org/pdf/2306.09683) (Figure A3),
[3] OWLv2 [original implementation](https://github.com/google-research/scenic/blob/096e6a52b4cbbf30936c168c5d3d42d80e001988/scenic/projects/owl_vit/evaluator.py#L172C7-L172C58), which is changed with [this PR](https://github.com/google-research/scenic/commit/17cc144993f855a66b7301e35e329962da13b060#diff-9e13daafe2df21216a7227dffb5b2c71bda7eb27c0de64df40a681e3ff0d44bfR158) (scenic/projects/owl_vit/evaluator.py, line 158).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41938/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41937
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41937/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41937/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41937/events
|
https://github.com/huggingface/transformers/pull/41937
| 3,566,389,893
|
PR_kwDOCUB6oc6wZHuD
| 41,937
|
Refactor: Replace _default_log_level with DEFAULT_LOG_LEVEL constant
|
{
"login": "Pranavi125",
"id": 187347675,
"node_id": "U_kgDOCyqy2w",
"avatar_url": "https://avatars.githubusercontent.com/u/187347675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pranavi125",
"html_url": "https://github.com/Pranavi125",
"followers_url": "https://api.github.com/users/Pranavi125/followers",
"following_url": "https://api.github.com/users/Pranavi125/following{/other_user}",
"gists_url": "https://api.github.com/users/Pranavi125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pranavi125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pranavi125/subscriptions",
"organizations_url": "https://api.github.com/users/Pranavi125/orgs",
"repos_url": "https://api.github.com/users/Pranavi125/repos",
"events_url": "https://api.github.com/users/Pranavi125/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pranavi125/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T14:49:43
| 2025-10-29T14:55:45
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41937",
"html_url": "https://github.com/huggingface/transformers/pull/41937",
"diff_url": "https://github.com/huggingface/transformers/pull/41937.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41937.patch",
"merged_at": null
}
|
This PR refactors the logging module by renaming `_default_log_level` to `DEFAULT_LOG_LEVEL`
to align with constant naming conventions and improve readability.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41937/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41936
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41936/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41936/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41936/events
|
https://github.com/huggingface/transformers/pull/41936
| 3,566,359,056
|
PR_kwDOCUB6oc6wZBCm
| 41,936
|
Fix: add missing SAFE_WEIGHTS_INDEX_NAME to __all__ in constants.py
|
{
"login": "Pranavi125",
"id": 187347675,
"node_id": "U_kgDOCyqy2w",
"avatar_url": "https://avatars.githubusercontent.com/u/187347675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pranavi125",
"html_url": "https://github.com/Pranavi125",
"followers_url": "https://api.github.com/users/Pranavi125/followers",
"following_url": "https://api.github.com/users/Pranavi125/following{/other_user}",
"gists_url": "https://api.github.com/users/Pranavi125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pranavi125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pranavi125/subscriptions",
"organizations_url": "https://api.github.com/users/Pranavi125/orgs",
"repos_url": "https://api.github.com/users/Pranavi125/repos",
"events_url": "https://api.github.com/users/Pranavi125/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pranavi125/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T14:43:20
| 2025-10-29T14:54:38
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41936",
"html_url": "https://github.com/huggingface/transformers/pull/41936",
"diff_url": "https://github.com/huggingface/transformers/pull/41936.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41936.patch",
"merged_at": null
}
|
This PR adds the missing SAFE_WEIGHTS_INDEX_NAME constant to the __all__ list in constants.py.
Why:
Without this, the constant isn’t exported when using
from transformers.utils.constants import *.
Impact:
- Keeps the constants module consistent.
- No functional changes or breaking impact.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41936/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41935
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41935/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41935/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41935/events
|
https://github.com/huggingface/transformers/issues/41935
| 3,566,241,264
|
I_kwDOCUB6oc7UkIXw
| 41,935
|
Missing `config.json` and `preprocessor_config.json` in `kyutai/moshiko-pytorch-bf16 model` repo
|
{
"login": "akshatvishu",
"id": 33392262,
"node_id": "MDQ6VXNlcjMzMzkyMjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/33392262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshatvishu",
"html_url": "https://github.com/akshatvishu",
"followers_url": "https://api.github.com/users/akshatvishu/followers",
"following_url": "https://api.github.com/users/akshatvishu/following{/other_user}",
"gists_url": "https://api.github.com/users/akshatvishu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshatvishu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshatvishu/subscriptions",
"organizations_url": "https://api.github.com/users/akshatvishu/orgs",
"repos_url": "https://api.github.com/users/akshatvishu/repos",
"events_url": "https://api.github.com/users/akshatvishu/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshatvishu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-29T14:20:23
| 2025-10-29T18:12:54
| null |
NONE
| null | null | null | null |
### System Info
transformers version: 4.57.1
python version: 3.11
### Who can help?
@Cyrilvallez @eustlb
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm opening this issue to request that `config.json` and `preprocessor_config.json` be added to the [kyutai/moshiko-pytorch-bf16](<https://huggingface.co/kyutai/moshiko-pytorch-bf16/tree/main>) model repository.
**Problem:**
Currently, `AutoFeatureExtractor.from_pretrained("kyutai/moshiko-pytorch-bf16")` taken from model doc page at [huggingface.co/docs/transformers/en/model_doc/moshi](<https://huggingface.co/docs/transformers/en/model_doc/moshi>) under the heading `1. Model generation` fails with an `OSError` because `preprocessor_config.json` is missing. This is inconsistent with other repos in the collection, like [kyutai/moshiko-pytorch-q8](<https://huggingface.co/kyutai/moshiko-pytorch-q8/tree/main>) and [kmhf/hf-moshiko](<https://huggingface.co/kmhf/hf-moshiko/tree/main>), which do contain these necessary configuration files.
```
from datasets import load_dataset, Audio
import torch, math
from transformers import MoshiForConditionalGeneration, AutoFeatureExtractor, AutoTokenizer
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
feature_extractor = AutoFeatureExtractor.from_pretrained("kyutai/moshiko-pytorch-bf16")
tokenizer = AutoTokenizer.from_pretrained("kyutai/moshiko-pytorch-bf16")
device = "cuda"
dtype = torch.bfloat16
# prepare user input audio
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]
user_input_values = feature_extractor(raw_audio=audio_sample, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt").to(device=device, dtype=dtype)
# prepare moshi input values - we suppose moshi didn't say anything while the user spoke
moshi_input_values = torch.zeros_like(user_input_values.input_values)
# prepare moshi input ids - we suppose moshi didn't say anything while the user spoke
num_tokens = math.ceil(moshi_input_values.shape[-1] * waveform_to_token_ratio)
input_ids = torch.ones((1, num_tokens), device=device, dtype=torch.int64) * tokenizer.encode("<pad>")[0]
# generate 25 new tokens (around 2s of audio)
output = model.generate(input_ids=input_ids, user_input_values=user_input_values.input_values, moshi_input_values=moshi_input_values, max_new_tokens=25)
text_tokens = output.sequences
audio_waveforms = output.audio_sequences
```
error:
```
OSError: kyutai/moshiko-pytorch-bf16 does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/kyutai/moshiko-pytorch-bf16/tree/main' for available files.
```
**Confirmation from Source Repository:**
This has been confirmed by the model's authors as an issue for the Transformers port to handle (see: https://github.com/kyutai-labs/moshi/issues/234 )
### Expected behavior
**Proposed Solution:**
Adding the missing configuration files will resolve this. The content can be derived from the existing `q8` variant.
**Proposed `preprocessor_config.json`:**
(Copied from [kmhf/hf-moshiko](<https://huggingface.co/kmhf/hf-moshiko/tree/main>))
```json
{
"feature_extractor_type": "EncodecFeatureExtractor",
"sampling_rate": 24000,
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": true,
"chunk_length_s": null,
"overlap": null
}
```
**Proposed `config.json` :**
(Based on [kyutai/moshiko-pytorch-q8](<https://huggingface.co/kyutai/moshiko-pytorch-q8/blob/main/config.json>) and [kyutai/moshiko-pytorch-bf16/](<https://huggingface.co/kyutai/moshiko-pytorch-bf16/tree/main>)
```json
{
"moshi_name": "model.safetensors",
"mimi_name": "tokenizer-e351c8d8-checkpoint125.safetensors",
"tokenizer_name": "tokenizer_spm_32k_3.model",
"quantize": false,
"dim": 4096,
"text_card": 32000,
"existing_text_padding_id": 3,
"n_q": 16,
"dep_q": 8,
"card": 2048,
"num_heads": 32,
"num_layers": 32,
"hidden_scale": 4.125,
"causal": true,
"layer_scale": null,
"context": 3000,
"max_period": 10000,
"gating": "silu",
"norm": "rms_norm_f32",
"positional_embedding": "rope",
"depformer_dim": 1024,
"depformer_dim_feedforward": 4224,
"depformer_num_heads": 16,
"depformer_num_layers": 6,
"depformer_causal": true,
"depformer_layer_scale": null,
"depformer_multi_linear": true,
"depformer_context": 8,
"depformer_max_period": 10000,
"depformer_gating": "silu",
"depformer_pos_emb": "none",
"depformer_weights_per_step": true,
"delays": [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
}
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41935/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41934
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41934/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41934/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41934/events
|
https://github.com/huggingface/transformers/pull/41934
| 3,566,116,512
|
PR_kwDOCUB6oc6wYLfy
| 41,934
|
Fix: Gemma3TextConfig rope scaling assignments
|
{
"login": "RyanMullins",
"id": 868555,
"node_id": "MDQ6VXNlcjg2ODU1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/868555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMullins",
"html_url": "https://github.com/RyanMullins",
"followers_url": "https://api.github.com/users/RyanMullins/followers",
"following_url": "https://api.github.com/users/RyanMullins/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanMullins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanMullins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanMullins/subscriptions",
"organizations_url": "https://api.github.com/users/RyanMullins/orgs",
"repos_url": "https://api.github.com/users/RyanMullins/repos",
"events_url": "https://api.github.com/users/RyanMullins/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanMullins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T13:54:01
| 2025-10-29T13:56:56
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41934",
"html_url": "https://github.com/huggingface/transformers/pull/41934",
"diff_url": "https://github.com/huggingface/transformers/pull/41934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41934.patch",
"merged_at": null
}
|
# What does this PR do?
Related to https://github.com/huggingface/transformers/pull/41922, this PR corrects the assignment of the `rope_scaling` dictionary present on some Gemma 3 pre-trained models on HF Hub when normalizing to the new `rope_parameters` value.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@zucchini-nlp PTAL since you have been handling the RoPE changes.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41934/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41933
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41933/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41933/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41933/events
|
https://github.com/huggingface/transformers/pull/41933
| 3,566,094,335
|
PR_kwDOCUB6oc6wYGnB
| 41,933
|
Fix: Skip weight initialization for quantized int8 models
|
{
"login": "Pranavi125",
"id": 187347675,
"node_id": "U_kgDOCyqy2w",
"avatar_url": "https://avatars.githubusercontent.com/u/187347675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pranavi125",
"html_url": "https://github.com/Pranavi125",
"followers_url": "https://api.github.com/users/Pranavi125/followers",
"following_url": "https://api.github.com/users/Pranavi125/following{/other_user}",
"gists_url": "https://api.github.com/users/Pranavi125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pranavi125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pranavi125/subscriptions",
"organizations_url": "https://api.github.com/users/Pranavi125/orgs",
"repos_url": "https://api.github.com/users/Pranavi125/repos",
"events_url": "https://api.github.com/users/Pranavi125/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pranavi125/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T13:49:42
| 2025-10-29T13:49:42
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41933",
"html_url": "https://github.com/huggingface/transformers/pull/41933",
"diff_url": "https://github.com/huggingface/transformers/pull/41933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41933.patch",
"merged_at": null
}
|
What does this PR do?
This PR fixes an issue where quantized models (e.g., RedHatAI/Qwen2.5-VL-7B-Instruct-quantized.w8a8) fail to load due to a dtype incompatibility during weight initialization.
Problem
When loading quantized models (dtype=torch.int8), the method _load_pretrained_model() still calls initialize_weights().
Since PyTorch’s normal_() operation is unsupported for integer tensors, this leads to:
RuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8
Fix
Added a condition to skip weight initialization when the model is quantized:
if not is_quantized:
self.initialize_weights()
This ensures that quantized models bypass floating-point initialization safely.
Impact
✅ Prevents reinitialization of quantized weights
✅ Allows quantized models to load successfully using llmcompressor or compressed-tensors
✅ No change or performance impact for standard (float/bfloat) models
Checklist
Fixes dtype initialization crash for quantized models
Tested locally with Qwen2.5-VL-7B-Instruct-quantized.w8a8
Maintains full compatibility with non-quantized models
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41933/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41932
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41932/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41932/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41932/events
|
https://github.com/huggingface/transformers/pull/41932
| 3,565,840,312
|
PR_kwDOCUB6oc6wXPeU
| 41,932
|
Fix: Handle missing safetensors gracefully to prevent import errors
|
{
"login": "Pranavi125",
"id": 187347675,
"node_id": "U_kgDOCyqy2w",
"avatar_url": "https://avatars.githubusercontent.com/u/187347675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pranavi125",
"html_url": "https://github.com/Pranavi125",
"followers_url": "https://api.github.com/users/Pranavi125/followers",
"following_url": "https://api.github.com/users/Pranavi125/following{/other_user}",
"gists_url": "https://api.github.com/users/Pranavi125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pranavi125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pranavi125/subscriptions",
"organizations_url": "https://api.github.com/users/Pranavi125/orgs",
"repos_url": "https://api.github.com/users/Pranavi125/repos",
"events_url": "https://api.github.com/users/Pranavi125/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pranavi125/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T12:52:40
| 2025-10-29T13:31:53
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41932",
"html_url": "https://github.com/huggingface/transformers/pull/41932",
"diff_url": "https://github.com/huggingface/transformers/pull/41932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41932.patch",
"merged_at": null
}
|
This PR adds a safeguard for environments where `safetensors` is not installed.
It prevents import errors during dependency checks and allows transformers to load normally.
Changes made:
- Updated `setup.py` to conditionally check for safetensors
- Improved dependency handling logic
Tested locally: verified that transformers imports correctly with and without safetensors.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41932/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41931
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41931/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41931/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41931/events
|
https://github.com/huggingface/transformers/pull/41931
| 3,565,179,734
|
PR_kwDOCUB6oc6wVCVh
| 41,931
|
fix 3 failed test cases for video_llama_3 model on Intel XPU
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T09:55:19
| 2025-10-30T01:38:52
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41931",
"html_url": "https://github.com/huggingface/transformers/pull/41931",
"diff_url": "https://github.com/huggingface/transformers/pull/41931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41931.patch",
"merged_at": null
}
|
This PR fixes 3 failed test cases on Intel XPU:
```
1.tests/models/video_llama_3/test_modeling_video_llama_3.py::VideoLlama3IntegrationTest::test_small_model_integration_test
2.tests/models/video_llama_3/test_modeling_video_llama_3.py::VideoLlama3IntegrationTest::test_small_model_integration_test_batch_wo_ima
ge
3.tests/models/video_llama_3/test_modeling_video_llama_3.py::VideoLlama3ModelTest::test_generate_with_quant_cache
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41931/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41930
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41930/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41930/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41930/events
|
https://github.com/huggingface/transformers/pull/41930
| 3,565,167,412
|
PR_kwDOCUB6oc6wU_v7
| 41,930
|
handle inputs from Siglip/Siglip2 non-automapped encoder layers
|
{
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T09:51:32
| 2025-10-30T07:52:07
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41930",
"html_url": "https://github.com/huggingface/transformers/pull/41930",
"diff_url": "https://github.com/huggingface/transformers/pull/41930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41930.patch",
"merged_at": null
}
|
# What does this PR do?
Should fix #41929 . The `check_model_inputs` / `can_record_outputs` interaction is not always trivial and models with several entrypoints such as `VisionModel` vs `VisionTransformer` are missing some, adding it here. Also added a modification in `generic` to make sure the flag was captured, not 100% sure it's needed.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41930/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41929
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41929/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41929/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41929/events
|
https://github.com/huggingface/transformers/issues/41929
| 3,564,860,708
|
I_kwDOCUB6oc7Ue3Uk
| 41,929
|
ViT model's output_attention is not work
|
{
"login": "naturesh",
"id": 150237898,
"node_id": "U_kgDOCPRyyg",
"avatar_url": "https://avatars.githubusercontent.com/u/150237898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naturesh",
"html_url": "https://github.com/naturesh",
"followers_url": "https://api.github.com/users/naturesh/followers",
"following_url": "https://api.github.com/users/naturesh/following{/other_user}",
"gists_url": "https://api.github.com/users/naturesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naturesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naturesh/subscriptions",
"organizations_url": "https://api.github.com/users/naturesh/orgs",
"repos_url": "https://api.github.com/users/naturesh/repos",
"events_url": "https://api.github.com/users/naturesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/naturesh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-29T08:21:16
| 2025-10-29T09:52:11
| null |
NONE
| null | null | null | null |
### System Info
macos 26.0
python 3.10
pytorch 2.7.1
transformers 4.57.1
### Who can help?
@yonigozlan @molbap
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModel, AutoProcessor
from transformers.image_utils import load_image
# load the model and processor
ckpt = "google/siglip2-so400m-patch16-naflex"
model = AutoModel.from_pretrained(ckpt).eval()
processor = AutoProcessor.from_pretrained(ckpt)
# load the image
image = load_image("https://huggingface.co/datasets/merve/coco/resolve/main/val2017/000000000285.jpg")
inputs = processor(images=[image], return_tensors="pt").to(model.device)
# run infernece
with torch.no_grad():
image_embeddings = model.vision_model(
pixel_values = inputs['pixel_values'],
attention_mask = inputs['pixel_attention_mask'],
spatial_shapes = inputs['spatial_shapes'],
output_attentions = True,
output_hidden_states = True
)
print(image_embeddings)
```
### Expected behavior
I'm trying to get attributes, hidden_states from google/siglip2-so400m-patch16-naflex using the model.vision_model.forward() method.
from_pretrained(), you can also add output_attentions as a parameter,
If you add it as a new config or as a parameter in .forward(), it always returns attributes and hidden_states to none in all operations.
Changing attn_implementation to eager does not solve the problem.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41929/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41928
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41928/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41928/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41928/events
|
https://github.com/huggingface/transformers/pull/41928
| 3,564,445,852
|
PR_kwDOCUB6oc6wSnXu
| 41,928
|
fix: add clear error message when mistral-common is missing for AutoTokenizer loading Voxtral
|
{
"login": "junjunjd",
"id": 55823903,
"node_id": "MDQ6VXNlcjU1ODIzOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/55823903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junjunjd",
"html_url": "https://github.com/junjunjd",
"followers_url": "https://api.github.com/users/junjunjd/followers",
"following_url": "https://api.github.com/users/junjunjd/following{/other_user}",
"gists_url": "https://api.github.com/users/junjunjd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junjunjd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junjunjd/subscriptions",
"organizations_url": "https://api.github.com/users/junjunjd/orgs",
"repos_url": "https://api.github.com/users/junjunjd/repos",
"events_url": "https://api.github.com/users/junjunjd/events{/privacy}",
"received_events_url": "https://api.github.com/users/junjunjd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T05:36:55
| 2025-10-29T20:28:44
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41928",
"html_url": "https://github.com/huggingface/transformers/pull/41928",
"diff_url": "https://github.com/huggingface/transformers/pull/41928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41928.patch",
"merged_at": null
}
|
- Add clear error message when mistral-common is missing for AutoTokenizer loading Voxtral
- Add unit test
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41928/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41927
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41927/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41927/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41927/events
|
https://github.com/huggingface/transformers/issues/41927
| 3,564,435,640
|
I_kwDOCUB6oc7UdPi4
| 41,927
|
Nightly / Nvidia CI workflows trigger on forks and fail due to missing org-specific runners
|
{
"login": "AvinashDwivedi",
"id": 86379589,
"node_id": "MDQ6VXNlcjg2Mzc5NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/86379589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AvinashDwivedi",
"html_url": "https://github.com/AvinashDwivedi",
"followers_url": "https://api.github.com/users/AvinashDwivedi/followers",
"following_url": "https://api.github.com/users/AvinashDwivedi/following{/other_user}",
"gists_url": "https://api.github.com/users/AvinashDwivedi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AvinashDwivedi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AvinashDwivedi/subscriptions",
"organizations_url": "https://api.github.com/users/AvinashDwivedi/orgs",
"repos_url": "https://api.github.com/users/AvinashDwivedi/repos",
"events_url": "https://api.github.com/users/AvinashDwivedi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AvinashDwivedi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-29T05:33:37
| 2025-10-29T15:18:51
| null |
NONE
| null | null | null | null |
### System Info
When forking the huggingface/transformers repository, certain GitHub Actions workflows (like “Nvidia CI with nightly torch” and “Nightly PyTorch build”) are automatically triggered on the forked repo’s default branch (main) — even though they depend on organization-specific GPU runners and secrets.
This leads to immediate workflow failures and email notifications such as:
Run failed: Nvidia CI with nightly torch - main (...)
<img width="1750" height="1205" alt="Image" src="https://github.com/user-attachments/assets/513427b6-3992-4f95-aae4-c50676b2dc29" />
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Fork the upstream repository huggingface/transformers on GitHub (press Fork in the web UI).
### Expected behavior
Forked repositories should:
1. not trigger organization-specific CI pipelines, or
2. gracefully skip such jobs without failure.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41927/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41926
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41926/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41926/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41926/events
|
https://github.com/huggingface/transformers/pull/41926
| 3,564,293,431
|
PR_kwDOCUB6oc6wSL9e
| 41,926
|
Cache latest pytorch amd image locally on mi325 CI runner cluster
|
{
"login": "jitesh-gupta",
"id": 202713221,
"node_id": "U_kgDODBUohQ",
"avatar_url": "https://avatars.githubusercontent.com/u/202713221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jitesh-gupta",
"html_url": "https://github.com/jitesh-gupta",
"followers_url": "https://api.github.com/users/jitesh-gupta/followers",
"following_url": "https://api.github.com/users/jitesh-gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/jitesh-gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jitesh-gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jitesh-gupta/subscriptions",
"organizations_url": "https://api.github.com/users/jitesh-gupta/orgs",
"repos_url": "https://api.github.com/users/jitesh-gupta/repos",
"events_url": "https://api.github.com/users/jitesh-gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/jitesh-gupta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-29T04:29:36
| 2025-10-29T18:45:38
| 2025-10-29T18:45:37
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41926",
"html_url": "https://github.com/huggingface/transformers/pull/41926",
"diff_url": "https://github.com/huggingface/transformers/pull/41926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41926.patch",
"merged_at": "2025-10-29T18:45:37"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Caches the latest `huggingface/transformers-pytorch-amd-gpu` image on the amd-mi325 runner cluster.
This image is heavily used by the models CI job in the AMD mi325 CI workflow `Self-hosted runner scale set (AMD mi325 scheduled CI caller)` and hence caching it locally will help reduce the network traffic as well as significantly improve the jobs turnaround time.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41926/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41925
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41925/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41925/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41925/events
|
https://github.com/huggingface/transformers/pull/41925
| 3,563,669,430
|
PR_kwDOCUB6oc6wQOi7
| 41,925
|
[deepspeed tests fixes]
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T22:51:46
| 2025-10-29T12:19:45
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41925",
"html_url": "https://github.com/huggingface/transformers/pull/41925",
"diff_url": "https://github.com/huggingface/transformers/pull/41925.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41925.patch",
"merged_at": null
}
|
Fixing a few deepspeed tests
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41925/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41924
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41924/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41924/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41924/events
|
https://github.com/huggingface/transformers/issues/41924
| 3,563,519,251
|
I_kwDOCUB6oc7UZv0T
| 41,924
|
`output_attentions=True` always warns for non-`"eager"` attention implementations, even when a custom AttentionInterface backend does return attention weights
|
{
"login": "kannandeepti",
"id": 35346947,
"node_id": "MDQ6VXNlcjM1MzQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/35346947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kannandeepti",
"html_url": "https://github.com/kannandeepti",
"followers_url": "https://api.github.com/users/kannandeepti/followers",
"following_url": "https://api.github.com/users/kannandeepti/following{/other_user}",
"gists_url": "https://api.github.com/users/kannandeepti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kannandeepti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kannandeepti/subscriptions",
"organizations_url": "https://api.github.com/users/kannandeepti/orgs",
"repos_url": "https://api.github.com/users/kannandeepti/repos",
"events_url": "https://api.github.com/users/kannandeepti/events{/privacy}",
"received_events_url": "https://api.github.com/users/kannandeepti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T21:57:36
| 2025-10-28T21:57:36
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.57.1
- Platform: Linux-5.10.233-223.887.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.19
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.6.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0+cu124 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@vasqu @ArthurZucker @Cyrilvallez When using a custom attention function registered via the new AttentionInterface and selecting it with `attn_implementation="<custom_name>"`, passing `output_attentions=True` to `model.forward(...)` triggers a UserWarning like:
> UserWarning: `output_attentions=True` is not supported with `attn_implementation` other than ['eager', 'eager_paged', 'flex_attention']. Please use `model.set_attn_implementation('eager')` to enable capturing attention outputs.
This warning is misleading for custom backends that do compute and return attention probabilities (same shape as eager). In addition, some models still set outputs.attentions=None unless the implementation name is exactly "eager", even though the custom backend returns (attn_output, attn_probs).
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The below code snippet triggers the undesirable UserWarning.
```python
import torch
import torch.nn as nn
from transformers.models.esm.modeling_esm import TransformersKwargs
from typing import Optional
from transformers import AutoModel, AttentionInterface
def eager_with_bias_attention_forward(
module: nn.Module,
query: torch.Tensor, # [B, H, T, D]
key: torch.Tensor, # [B, H, S, D]
value: torch.Tensor, # [B, H, S, D]
attention_mask: Optional[torch.Tensor],
scaling: Optional[float] = None,
dropout: float = 0.0,
**kwargs: TransformersKwargs,
):
"""
Adds `attention_bias` (broadcastable to [B, H or 1, T, S]) to logits before softmax.
Pass it via model(..., attention_bias=your_bias).
"""
if scaling is None:
scaling = query.size(-1) ** -0.5
# Take the dot product between "query" and "key" to get the raw attention scores.
attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling # [B, H, T, S]
if attention_mask is not None:
attention_mask = attention_mask[:, :, :, : key.shape[-2]]
attn_weights = attn_weights + attention_mask
# Add the bias matrix to the attention weights
attention_bias = kwargs.get("attention_bias", None)
if attention_bias is not None:
# allow [B, 1, T, S], [B, H, T, S], or [1, 1, T, S]; truncate S if needed
if attention_bias.size(-1) != key.shape[-2]:
attention_bias = attention_bias[..., : key.shape[-2]]
attention_bias = attention_bias.to(
dtype=attn_weights.dtype, device=attn_weights.device
)
attn_weights = attn_weights + attention_bias
attn_weights = nn.functional.softmax(attn_weights, dim=-1)
attn_weights = nn.functional.dropout(
attn_weights, p=dropout, training=module.training
)
attn_output = torch.matmul(attn_weights, value)
attn_output = attn_output.transpose(1, 2).contiguous()
return attn_output, attn_weights
# Register custom attention implementation
AttentionInterface.register("eager_with_bias", eager_with_bias_attention_forward)
# Load ESM2 model using the custom attention backend
model = AutoModel.from_pretrained(
"facebook/esm2_t33_650M_UR50D",
token_dropout=False,
local_files_only=True,
attn_implementation="eager_with_bias",
)
# --- dummy batch ---
B, T = 2, 64
hidden_size = model.config.hidden_size
H = model.config.num_attention_heads
# inputs_embeds must be [B, T, hidden_size]
emb = torch.randn(B, T, hidden_size, device=next(model.parameters()).device)
# attention_mask must be [B, T] with 1 for tokens you want to keep
attention_mask = torch.ones(B, T, dtype=torch.long, device=emb.device)
# bias should broadcast to [B, H, T, T]; using shared-across-heads:
attention_bias = torch.zeros(B, 1, T, T, device=emb.device)
# Triggers a UserWarning even though backend returns attention weights,
# and some models set outputs.attentions = None unless impl == "eager".
out = model(
inputs_embeds=emb,
attention_mask=attention_mask,
output_attentions=True,
attention_bias=attention_bias,
)
# Check if attention weights are being returned
assert (
out.attentions is not None and len(out.attentions) == model.config.num_hidden_layers
)
print("OK: got attention weights from custom backend")
```
### Expected behavior
### Expected behavior
- If the selected attention backend **returns attention probabilities**, `outputs.attentions` should be populated and **no warning** should be emitted.
- The warning (or error) should trigger **only** when the chosen backend **cannot** provide attention probabilities.
---
### Actual behavior
- A **UserWarning** is emitted whenever `attn_implementation != "eager"`, regardless of whether the custom backend supports returning attention weights.
- In some models, `outputs.attentions` is `None` unless the implementation name is literally `"eager"`.
---
### Where this comes from / related context
- There’s an **“early-error if `output_attentions=True` and impl isn’t eager”** change discussed in [PR #38288](https://github.com/huggingface/transformers/pull/38288) (config path).
- The [Attention Interface docs](https://huggingface.co/docs/transformers/main/en/attention) show how to register/select custom implementations and say extra kwargs are forwarded to the attention function, but they don’t document a way to declare that a custom backend supports returning attentions.
---
### Proposed solutions
#### 1. Capability flag on backends
Extend `AttentionInterface.register(name, fn, supports_attn_probs: bool = False)` (or use a small descriptor object) so model code can check capability instead of name equality.
If `supports_attn_probs=True`, allow `output_attentions=True` without warnings and surface the returned probabilities.
#### 2. Name-agnostic check
Replace `impl != "eager"` string checks with an interface query like `AttentionInterface.supports_attn_probs(impl)` to decide warning/error behavior, so custom backends that return weights aren’t penalized.
#### 3. Documented workaround
If changing the check is not desirable, document an official way to **declare** a custom backend as “eager-compatible,” or provide a supported alias/registration API that treats a custom backend like `"eager"` for the purpose of attention-weight return (avoiding the need for users to override `"eager"` globally just to silence the warning).
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41924/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41923
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41923/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41923/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41923/events
|
https://github.com/huggingface/transformers/pull/41923
| 3,563,463,159
|
PR_kwDOCUB6oc6wPhqk
| 41,923
|
fix some ut failures on XPU w/ torch 2.9
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T21:37:42
| 2025-10-29T15:20:07
| 2025-10-29T15:15:34
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41923",
"html_url": "https://github.com/huggingface/transformers/pull/41923",
"diff_url": "https://github.com/huggingface/transformers/pull/41923.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41923.patch",
"merged_at": "2025-10-29T15:15:34"
}
|
cases are below, all passed. @ydshieh , pls help review, thx very much.
> tests/models/aya_vision/test_modeling_aya_vision.py::AyaVisionIntegrationTest::test_small_model_integration_generate_text_only
> tests/models/aya_vision/test_modeling_aya_vision.py::AyaVisionIntegrationTest::test_small_model_integration_forward
> tests/models/aya_vision/test_modeling_aya_vision.py::AyaVisionIntegrationTest::test_small_model_integration_batched_generate_multi_image
> tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_whisper_longform
> tests/test_pipeline_mixin.py::AutomaticSpeechRecognitionPipelineTests::test_whisper_longform
> tests/models/aria/test_modeling_aria.py::AriaForConditionalGenerationIntegrationTest::test_generation_no_images
> tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_bf16
> tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops
> tests/models/glm4v/test_modeling_glm4v.py::Glm4vIntegrationTest::test_small_model_integration_test_expand
> tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_generate
> tests/models/mllama/test_modeling_mllama.py::MllamaForConditionalGenerationIntegrationTest::test_11b_model_integration_generate_text_only
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41923/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41922
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41922/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41922/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41922/events
|
https://github.com/huggingface/transformers/pull/41922
| 3,563,371,335
|
PR_kwDOCUB6oc6wPNh2
| 41,922
|
Fix rope_parameters for gemma3 weights conversion script
|
{
"login": "douglas-reid",
"id": 21148125,
"node_id": "MDQ6VXNlcjIxMTQ4MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/21148125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/douglas-reid",
"html_url": "https://github.com/douglas-reid",
"followers_url": "https://api.github.com/users/douglas-reid/followers",
"following_url": "https://api.github.com/users/douglas-reid/following{/other_user}",
"gists_url": "https://api.github.com/users/douglas-reid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/douglas-reid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/douglas-reid/subscriptions",
"organizations_url": "https://api.github.com/users/douglas-reid/orgs",
"repos_url": "https://api.github.com/users/douglas-reid/repos",
"events_url": "https://api.github.com/users/douglas-reid/events{/privacy}",
"received_events_url": "https://api.github.com/users/douglas-reid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T21:07:44
| 2025-10-29T13:58:59
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41922",
"html_url": "https://github.com/huggingface/transformers/pull/41922",
"diff_url": "https://github.com/huggingface/transformers/pull/41922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41922.patch",
"merged_at": null
}
|
# What does this PR do?
Fixes the rope_parameters in the weights conversion script for Gemma 3.
These should be:
```
local => default @ 10_000.0
global => linear(8.0) @ 1_000_000.0
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41922/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41921
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41921/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41921/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41921/events
|
https://github.com/huggingface/transformers/pull/41921
| 3,563,255,026
|
PR_kwDOCUB6oc6wO1N3
| 41,921
|
fix tensor device placement issue of 2 UT cases
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T20:27:38
| 2025-10-29T15:17:39
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41921",
"html_url": "https://github.com/huggingface/transformers/pull/41921",
"diff_url": "https://github.com/huggingface/transformers/pull/41921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41921.patch",
"merged_at": null
}
|
If run below 2 cases on 2 accelerators:
> pytest -rA tests/models/speech_to_text/test_modeling_speech_to_text.py::Speech2TextModelIntegrationTests::test_generation_librispeech
>pytest -rA tests/models/speech_to_text/test_modeling_speech_to_text.py::Speech2TextModelIntegrationTests::test_generation_librispeech_batched
Will fail w/ error as below:
> self = Speech2TextEncoderLayer(
> (self_attn): Speech2TextAttention(
> (k_proj): Linear(in_features=256, out_features=256, ...atures=2048, out_features=256, bias=True)
> (final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
> )
> hidden_states = tensor([[[-313.5520, -90.0677, 48.3815, ..., -22.3849, -48.9418,
> -49.9119],
> [-285.8702, -94...2.9486],
> [ 25.0123, -37.5042, 13.0347, ..., -58.4456, -16.1031,
> 45.5035]]], device='xpu:1')
> attention_mask = tensor([[[[0., 0., 0., ..., 0., 0., 0.],
> [0., 0., 0., ..., 0., 0., 0.],
> [0., 0., 0., ..., 0., 0....., 0., 0., 0.],
> [0., 0., 0., ..., 0., 0., 0.],
> [0., 0., 0., ..., 0., 0., 0.]]]], device='xpu:0')
> output_attentions = False
>
> def forward(
> self,
> hidden_states: torch.Tensor,
> attention_mask: torch.Tensor,
> output_attentions: bool = False,
> ) -> torch.Tensor:
> """
> Args:
> hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
> attention_mask (`torch.FloatTensor`): attention mask of size
> `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
> output_attentions (`bool`, *optional*):
> Whether or not to return the attentions tensors of all attention layers. See `attentions` under
> returned tensors for more detail.
> """
> residual = hidden_states
> hidden_states = self.self_attn_layer_norm(hidden_states)
> hidden_states, attn_weights = self.self_attn(
> hidden_states=hidden_states,
> attention_mask=attention_mask,
> output_attentions=output_attentions,
> )
> hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
> hidden_states = residual + hidden_states
>
> residual = hidden_states
> hidden_states = self.final_layer_norm(hidden_states)
> hidden_states = self.activation_fn(self.fc1(hidden_states))
> hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
> hidden_states = self.fc2(hidden_states)
> hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
> hidden_states = residual + hidden_states
> ^^^^^^^^^^^^^^^^^^^^^^^^
> E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, xpu:0 and xpu:1!
>
> src/transformers/models/speech_to_text/modeling_speech_to_text.py:369: RuntimeError
This PR fixed the issue.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41921/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41920
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41920/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41920/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41920/events
|
https://github.com/huggingface/transformers/pull/41920
| 3,562,379,835
|
PR_kwDOCUB6oc6wL7Po
| 41,920
|
evaluate>=0.4.6 is needed
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T16:25:50
| 2025-10-29T22:59:11
| 2025-10-29T12:20:54
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41920",
"html_url": "https://github.com/huggingface/transformers/pull/41920",
"diff_url": "https://github.com/huggingface/transformers/pull/41920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41920.patch",
"merged_at": "2025-10-29T12:20:54"
}
|
Some HF Transformers tests/examples fail with main and `evaluate < 0.4.6`
Fixing:
```
stderr: [rank0]: Traceback (most recent call last):
stderr: [rank0]: File "/code/users/stas/github/transformers-alst-integration/examples/pytorch/question-answering/run_qa.py", line 692, in <module>
stderr: [rank0]: main()
stderr: [rank0]: File "/code/users/stas/github/transformers-alst-integration/examples/pytorch/question-answering/run_qa.py", line 608, in main
stderr: [rank0]: metric = evaluate.load(
stderr: [rank0]: ^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 748, in load
stderr: [rank0]: evaluation_module = evaluation_module_factory(
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 680, in evaluation_module_factory
stderr: [rank0]: raise e1 from None
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 639, in evaluation_module_factory
stderr: [rank0]: ).get_module()
stderr: [rank0]: ^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 479, in get_module
stderr: [rank0]: local_path = self.download_loading_script(revision)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 469, in download_loading_script
stderr: [rank0]: return cached_path(file_path, download_config=download_config)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/utils/file_utils.py", line 175, in cached_path
stderr: [rank0]: output_path = get_from_cache(
stderr: [rank0]: ^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/utils/file_utils.py", line 448, in get_from_cache
stderr: [rank0]: headers = get_authentication_headers_for_url(url, token=token)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/utils/file_utils.py", line 236, in get_authentication_headers_for_url
stderr: [rank0]: token = hf_api.HfFolder.get_token()
stderr: [rank0]: ^^^^^^^^^^^^^^^
stderr: [rank0]: AttributeError: module 'huggingface_hub.hf_api' has no attribute 'HfFolder'
```
`evaluate>=0.4.6` is needed to fix this.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41920/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41919
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41919/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41919/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41919/events
|
https://github.com/huggingface/transformers/issues/41919
| 3,562,344,322
|
I_kwDOCUB6oc7UVQ-C
| 41,919
|
LFM2 image_processing_lfm2_vl_fast.py Mean Std swapped?
|
{
"login": "florianvoss-commit",
"id": 214635446,
"node_id": "U_kgDODMsTtg",
"avatar_url": "https://avatars.githubusercontent.com/u/214635446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/florianvoss-commit",
"html_url": "https://github.com/florianvoss-commit",
"followers_url": "https://api.github.com/users/florianvoss-commit/followers",
"following_url": "https://api.github.com/users/florianvoss-commit/following{/other_user}",
"gists_url": "https://api.github.com/users/florianvoss-commit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/florianvoss-commit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/florianvoss-commit/subscriptions",
"organizations_url": "https://api.github.com/users/florianvoss-commit/orgs",
"repos_url": "https://api.github.com/users/florianvoss-commit/repos",
"events_url": "https://api.github.com/users/florianvoss-commit/events{/privacy}",
"received_events_url": "https://api.github.com/users/florianvoss-commit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T16:17:44
| 2025-10-29T17:03:09
| null |
NONE
| null | null | null | null |
### System Info
In LFM2-VL image_processing_lfm2_vl_fast.py line 212 following the MEAN and STD from imagenet is used for preprocessing.
However it seems like they are swapped:
image_mean = IMAGENET_STANDARD_STD
image_std = IMAGENET_STANDARD_MEAN
or is this correct ?
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Have a look at https://github.com/huggingface/transformers/blob/main/src/transformers/models/lfm2_vl/image_processing_lfm2_vl_fast.py
### Expected behavior
Not optimized VLM Behaviour
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41919/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41918
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41918/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41918/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41918/events
|
https://github.com/huggingface/transformers/pull/41918
| 3,562,252,059
|
PR_kwDOCUB6oc6wLhGi
| 41,918
|
V4.57.1 training ci: Refactor `test_tensor_parallel.py`
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T15:57:00
| 2025-10-29T11:22:13
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41918",
"html_url": "https://github.com/huggingface/transformers/pull/41918",
"diff_url": "https://github.com/huggingface/transformers/pull/41918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41918.patch",
"merged_at": null
}
|
# What does this PR do?
Refactor `test_tensor_parallel.py` by removing `subprocess`. This way we can easily debug with breakpoint the test if it fails. On top of that, I made the tests more robust by testing on more than `--nproc_per_node> 2.` `--nproc_per_nodes = 8` crashes because the llama model we use is too tiny so can't do TP. But since it works with `--nproc_per_nodes = 4` already, no need to get a bigger llama (that may slow down the tests)
## Who can review?
@ArthurZucker @Cyrilvallez
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41918/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41917
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41917/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41917/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41917/events
|
https://github.com/huggingface/transformers/pull/41917
| 3,562,148,537
|
PR_kwDOCUB6oc6wLKyH
| 41,917
|
update v4.57.1-training-ci with main
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T15:32:22
| 2025-10-28T15:53:27
| 2025-10-28T15:53:27
|
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41917",
"html_url": "https://github.com/huggingface/transformers/pull/41917",
"diff_url": "https://github.com/huggingface/transformers/pull/41917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41917.patch",
"merged_at": "2025-10-28T15:53:26"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41917/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41916
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41916/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41916/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41916/events
|
https://github.com/huggingface/transformers/pull/41916
| 3,561,939,166
|
PR_kwDOCUB6oc6wKeh_
| 41,916
|
feat(ci): add continuous batching to benchmarks
|
{
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T14:45:20
| 2025-10-29T17:22:37
| null |
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41916",
"html_url": "https://github.com/huggingface/transformers/pull/41916",
"diff_url": "https://github.com/huggingface/transformers/pull/41916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41916.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41916/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41915
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41915/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41915/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41915/events
|
https://github.com/huggingface/transformers/pull/41915
| 3,561,733,374
|
PR_kwDOCUB6oc6wJyyB
| 41,915
|
V4.57.1 training ci: Refactor and Fix `test_tensor_parallel.py` to make it more robust
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T14:02:32
| 2025-10-28T14:46:17
| 2025-10-28T14:46:16
|
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41915",
"html_url": "https://github.com/huggingface/transformers/pull/41915",
"diff_url": "https://github.com/huggingface/transformers/pull/41915.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41915.patch",
"merged_at": null
}
|
# What does this PR do?
- TODO:
- update my branch
- explain changes
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @Cyrilvallez
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41915/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41914
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41914/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41914/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41914/events
|
https://github.com/huggingface/transformers/pull/41914
| 3,561,517,574
|
PR_kwDOCUB6oc6wJFvx
| 41,914
|
Run slow v2
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T13:13:51
| 2025-10-29T20:54:47
| null |
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41914",
"html_url": "https://github.com/huggingface/transformers/pull/41914",
"diff_url": "https://github.com/huggingface/transformers/pull/41914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41914.patch",
"merged_at": null
}
|
# What does this PR do?
Run slow v2!
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41914/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41913
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41913/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41913/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41913/events
|
https://github.com/huggingface/transformers/issues/41913
| 3,561,156,260
|
I_kwDOCUB6oc7UQu6k
| 41,913
|
`epoch` in the log message uses a wrong denominator under some conditions
|
{
"login": "nzw0301",
"id": 7121753,
"node_id": "MDQ6VXNlcjcxMjE3NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7121753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nzw0301",
"html_url": "https://github.com/nzw0301",
"followers_url": "https://api.github.com/users/nzw0301/followers",
"following_url": "https://api.github.com/users/nzw0301/following{/other_user}",
"gists_url": "https://api.github.com/users/nzw0301/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nzw0301/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nzw0301/subscriptions",
"organizations_url": "https://api.github.com/users/nzw0301/orgs",
"repos_url": "https://api.github.com/users/nzw0301/repos",
"events_url": "https://api.github.com/users/nzw0301/events{/privacy}",
"received_events_url": "https://api.github.com/users/nzw0301/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T11:38:39
| 2025-10-30T02:07:04
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.57.1
- Platform: macOS-26.0.1-arm64-arm-64bit
- Python version: 3.12.0
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```py
import torch
from datasets import Dataset
from torch import nn
from transformers import Trainer, TrainingArguments
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(2, 2)
def forward(self, a, return_loss=True):
output = self.linear(a)
return {"loss": output.sum()}
data = torch.tensor([[i, i] for i in range(10)], dtype=torch.float32) # [[0., 0.], [1., 1.], [2., 2.], ...]
dataset = Dataset.from_dict({"a": data}).to_iterable_dataset() # finite iterable dataset
args = TrainingArguments(output_dir=".", per_device_train_batch_size=1, max_steps=20, logging_steps=1)
trainer = Trainer(model=MyModule(), args=args, train_dataset=dataset)
trainer.train()
```
```
{'loss': 0.9867, 'grad_norm': 1.4142135381698608, 'learning_rate': 5e-05, 'epoch': 0.05}
{'loss': 1.3851, 'grad_norm': 2.4494898319244385, 'learning_rate': 4.75e-05, 'epoch': 0.1}
{'loss': 1.7833, 'grad_norm': 4.242640495300293, 'learning_rate': 4.5e-05, 'epoch': 0.15}
{'loss': 2.1812, 'grad_norm': 6.164413928985596, 'learning_rate': 4.25e-05, 'epoch': 0.2}
{'loss': 2.5788, 'grad_norm': 8.124038696289062, 'learning_rate': 4e-05, 'epoch': 0.25}
{'loss': 2.9761, 'grad_norm': 10.099504470825195, 'learning_rate': 3.7500000000000003e-05, 'epoch': 0.3}
{'loss': 3.3731, 'grad_norm': 12.083045959472656, 'learning_rate': 3.5e-05, 'epoch': 0.35}
{'loss': 3.7699, 'grad_norm': 14.071247100830078, 'learning_rate': 3.2500000000000004e-05, 'epoch': 0.4}
{'loss': 4.1665, 'grad_norm': 16.0623779296875, 'learning_rate': 3e-05, 'epoch': 0.45}
{'loss': 4.563, 'grad_norm': 18.055469512939453, 'learning_rate': 2.7500000000000004e-05, 'epoch': 0.5}
{'loss': 0.9861, 'grad_norm': 1.4142135381698608, 'learning_rate': 2.5e-05, 'epoch': 1.05}
{'loss': 1.3833, 'grad_norm': 2.4494898319244385, 'learning_rate': 2.25e-05, 'epoch': 1.1}
{'loss': 1.7803, 'grad_norm': 4.242640495300293, 'learning_rate': 2e-05, 'epoch': 1.15}
{'loss': 2.1772, 'grad_norm': 6.164413928985596, 'learning_rate': 1.75e-05, 'epoch': 1.2}
{'loss': 2.574, 'grad_norm': 8.124038696289062, 'learning_rate': 1.5e-05, 'epoch': 1.25}
{'loss': 2.9707, 'grad_norm': 10.099504470825195, 'learning_rate': 1.25e-05, 'epoch': 1.3}
{'loss': 3.3673, 'grad_norm': 12.083045959472656, 'learning_rate': 1e-05, 'epoch': 1.35}
{'loss': 3.764, 'grad_norm': 14.071247100830078, 'learning_rate': 7.5e-06, 'epoch': 1.4}
{'loss': 4.1606, 'grad_norm': 16.0623779296875, 'learning_rate': 5e-06, 'epoch': 1.45}
{'loss': 4.5572, 'grad_norm': 18.055469512939453, 'learning_rate': 2.5e-06, 'epoch': 1.5}
{'train_runtime': 0.2074, 'train_samples_per_second': 96.438, 'train_steps_per_second': 96.438, 'train_loss': 2.774213859438896, 'epoch': 1.5}
```
In my understanding, `epoch` is computed at https://github.com/huggingface/transformers/blob/1f0b490a2c42eb129dccc69031ccb537058689c4/src/transformers/trainer.py#L2555 and the denominator: `steps_in_epoch` is initialised with `args.max_steps` at
https://github.com/huggingface/transformers/blob/1f0b490a2c42eb129dccc69031ccb537058689c4/src/transformers/trainer.py#L2402 when dataset has no `__len__`, like the example above
### Expected behavior
```
{'loss': 0.9867, 'grad_norm': 1.4142135381698608, 'learning_rate': 5e-05, 'epoch': 0.1}
{'loss': 1.3851, 'grad_norm': 2.4494898319244385, 'learning_rate': 4.75e-05, 'epoch': 0.2}
{'loss': 1.7833, 'grad_norm': 4.242640495300293, 'learning_rate': 4.5e-05, 'epoch': 0.3}
{'loss': 2.1812, 'grad_norm': 6.164413928985596, 'learning_rate': 4.25e-05, 'epoch': 0.4}
{'loss': 2.5788, 'grad_norm': 8.124038696289062, 'learning_rate': 4e-05, 'epoch': 0.5}
{'loss': 2.9761, 'grad_norm': 10.099504470825195, 'learning_rate': 3.7500000000000003e-05, 'epoch': 0.6}
{'loss': 3.3731, 'grad_norm': 12.083045959472656, 'learning_rate': 3.5e-05, 'epoch': 0.7}
{'loss': 3.7699, 'grad_norm': 14.071247100830078, 'learning_rate': 3.2500000000000004e-05, 'epoch': 0.8}
{'loss': 4.1665, 'grad_norm': 16.0623779296875, 'learning_rate': 3e-05, 'epoch': 0.9}
{'loss': 4.563, 'grad_norm': 18.055469512939453, 'learning_rate': 2.7500000000000004e-05, 'epoch': 1.0}
{'loss': 0.9861, 'grad_norm': 1.4142135381698608, 'learning_rate': 2.5e-05, 'epoch': 1.1}
{'loss': 1.3833, 'grad_norm': 2.4494898319244385, 'learning_rate': 2.25e-05, 'epoch': 1.2}
{'loss': 1.7803, 'grad_norm': 4.242640495300293, 'learning_rate': 2e-05, 'epoch': 1.3}
{'loss': 2.1772, 'grad_norm': 6.164413928985596, 'learning_rate': 1.75e-05, 'epoch': 1.4}
{'loss': 2.574, 'grad_norm': 8.124038696289062, 'learning_rate': 1.5e-05, 'epoch': 1.5}
{'loss': 2.9707, 'grad_norm': 10.099504470825195, 'learning_rate': 1.25e-05, 'epoch': 1.6}
{'loss': 3.3673, 'grad_norm': 12.083045959472656, 'learning_rate': 1e-05, 'epoch': 1.7}
{'loss': 3.764, 'grad_norm': 14.071247100830078, 'learning_rate': 7.5e-06, 'epoch': 1.8}
{'loss': 4.1606, 'grad_norm': 16.0623779296875, 'learning_rate': 5e-06, 'epoch': 1.9}
{'loss': 4.5572, 'grad_norm': 18.055469512939453, 'learning_rate': 2.5e-06, 'epoch': 2.0}
{'train_runtime': 0.2074, 'train_samples_per_second': 96.438, 'train_steps_per_second': 96.438, 'train_loss': 2.774213859438896, 'epoch': 2.0}
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41913/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41912
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41912/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41912/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41912/events
|
https://github.com/huggingface/transformers/pull/41912
| 3,560,866,876
|
PR_kwDOCUB6oc6wHC-p
| 41,912
|
restore dtype of `hidden_states` in modeling_t5.py
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T10:32:20
| 2025-10-29T05:54:43
| 2025-10-29T05:54:43
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41912",
"html_url": "https://github.com/huggingface/transformers/pull/41912",
"diff_url": "https://github.com/huggingface/transformers/pull/41912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41912.patch",
"merged_at": null
}
|
In t5 model, as dtype of `self.wo.weight` is kept fp32 in [L783](https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/t5/modeling_t5.py#L783), `hidden_states` need to be converted to fp32 in some cases, we should restore it back in scenerios we use model dtype like FP16.
@ArthurZucker @Cyrilvallez, pls help review, thx!
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41912/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41911
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41911/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41911/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41911/events
|
https://github.com/huggingface/transformers/issues/41911
| 3,560,683,826
|
I_kwDOCUB6oc7UO7ky
| 41,911
|
The forward() Method in ModernBertForTokenClassification is missing **kwargs
|
{
"login": "SinaDBMS",
"id": 30014810,
"node_id": "MDQ6VXNlcjMwMDE0ODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/30014810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SinaDBMS",
"html_url": "https://github.com/SinaDBMS",
"followers_url": "https://api.github.com/users/SinaDBMS/followers",
"following_url": "https://api.github.com/users/SinaDBMS/following{/other_user}",
"gists_url": "https://api.github.com/users/SinaDBMS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SinaDBMS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SinaDBMS/subscriptions",
"organizations_url": "https://api.github.com/users/SinaDBMS/orgs",
"repos_url": "https://api.github.com/users/SinaDBMS/repos",
"events_url": "https://api.github.com/users/SinaDBMS/events{/privacy}",
"received_events_url": "https://api.github.com/users/SinaDBMS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T09:54:00
| 2025-10-28T13:50:06
| null |
NONE
| null | null | null | null |
### System Info
I'm using `peft.LoraConfig()` to fine tune `ModernBertForTokenClassification`. According to the trail of the exception i'm getting, the `forward()` method of `ModernBertForTokenClassification` is missing `**kwargs`:
```
│ /home/gsgs2tk/ADA_ModelingFramework/.venv/lib/python3.11/site-packages/peft/tuners/tuners_utils.py:222 in forward │
│ │
│ 219 │ │ return self.active_adapter │
│ 220 │ │
│ 221 │ def forward(self, *args: Any, **kwargs: Any): │
│ ❱ 222 │ │ return self.model.forward(*args, **kwargs) │
│ 223 │ │
│ 224 │ def _pre_injection_hook(self, model: nn.Module, config: PeftConfig, adapter_name: st │
│ 225 │ │ r""" │
│ │
│ ╭────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ args = () │ │
│ │ kwargs = { │ │
│ │ │ 'input_ids': tensor([[ 102, 1170, 853, ..., 0, 0, 0], │ │
│ │ │ │ [ 102, 16567, 1774, ..., 0, 0, 0]], device='cuda:0'), │ │
│ │ │ 'attention_mask': tensor([[1, 1, 1, ..., 1, 1, 1], │ │
│ │ │ │ [1, 1, 1, ..., 1, 1, 1]], device='cuda:0'), │ │
│ │ │ 'inputs_embeds': None, │ │
│ │ │ 'labels': tensor([[-100, 0, 0, ..., -100, -100, -100], │ │
│ │ │ │ [-100, 0, 0, ..., -100, -100, -100]], device='cuda:0'), │ │
│ │ │ 'output_attentions': None, │ │
│ │ │ 'output_hidden_states': None, │ │
│ │ │ 'return_dict': True, │ │
│ │ │ 'use_cache': False │ │
│ │ } │ │
│ │ self = LoraModel( │ │
│ │ (model): ModernBertForTokenClassification( │ │
│ │ │ (model): ModernBertModel( │ │
│ │ │ (embeddings): ModernBertEmbeddings( │ │
│ │ │ │ (tok_embeddings): Embedding(31103, 768, padding_idx=0) │ │
│ │ │ │ (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) │ │
│ │ │ │ (drop): Dropout(p=0.0, inplace=False) │ │
│ │ │ ) │ │
│ │ │ (layers): ModuleList( │ │
│ │ │ │ (0): ModernBertEncoderLayer( │ │
│ │ │ │ (attn_norm): Identity() │ │
│ │ │ │ (attn): ModernBertAttention( │ │
│ │ │ │ │ (Wqkv): lora.Linear( │ │
│ │ │ │ │ (base_layer): Linear(in_features=768, out_features=2304, bias=False) │ │
│ │ │ │ │ (lora_dropout): ModuleDict( │ │
│ │ │ │ │ │ (default): Dropout(p=0.1, inplace=False) │ │
│ │ │ │ │ ) │ │
│ │ │ │ │ (lora_A): ModuleDict( │ │
│ │ │ │ │ │ (default): Linear(in_features=768, out_features=8, bias=False) │ │
│ │ │ │ │ ) │ │
│ │ │ │ │ (lora_B): ModuleDict( │ │
│ │ │ │ │ │ (default): Linear(in_features=8, out_features=2304, bias=False) │ │
│ │ │ │ │ ) │ │
│ │ │ │ │ (lora_embedding_A): ParameterDict() │ │
│ │ │ │ │ (lora_embedding_B): ParameterDict() │ │
│ │ │ │ │ (lora_magnitude_vector): ModuleDict() │ │
│ │ │ │ │ ) │ │
│ │ │ │ │ (rotary_emb): ModernBertRotaryEmbedding() │ │
│ │ │ │ │ (Wo): Linear(in_features=768, out_features=768, bias=False) │ │
│ │ │ │ │ (out_drop): Identity() │ │
│ │ │ │ ) │ │
│ │ │ │ (mlp_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) │ │
│ │ │ │ (mlp): ModernBertMLP( │ │
│ │ │ │ │ (Wi): Linear(in_features=768, out_features=2304, bias=False) │ │
│ │ │ │ │ (act): GELUActivation() │ │
│ │ │ │ │ (drop): Dropout(p=0.0, inplace=False) │ │
│ │ │ │ │ (Wo): Linear(in_features=1152, out_features=768, bias=False) │ │
│ │ │ │ ) │ │
│ │ │ │ ) │ │
│ │ │ │ (1-21): 21 x ModernBertEncoderLayer( │ │
│ │ │ │ (attn_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) │ │
│ │ │ │ (attn): ModernBertAttention( │ │
│ │ │ │ │ (Wqkv): lora.Linear( │ │
│ │ │ │ │ (base_layer): Linear(in_features=768, out_features=2304, bias=False) │ │
│ │ │ │ │ (lora_dropout): ModuleDict( │ │
│ │ │ │ │ │ (default): Dropout(p=0.1, inplace=False) │ │
│ │ │ │ │ ) │ │
│ │ │ │ │ (lora_A): ModuleDict( │ │
│ │ │ │ │ │ (default): Linear(in_features=768, out_features=8, bias=False) │ │
│ │ │ │ │ ) │ │
│ │ │ │ │ (lora_B): ModuleDict( │ │
│ │ │ │ │ │ (default): Linear(in_features=8, out_features=2304, bias=False) │ │
│ │ │ │ │ ) │ │
│ │ │ │ │ (lora_embedding_A): ParameterDict() │ │
│ │ │ │ │ (lora_embedding_B): ParameterDict() │ │
│ │ │ │ │ (lora_magnitude_vector): ModuleDict() │ │
│ │ │ │ │ ) │ │
│ │ │ │ │ (rotary_emb): ModernBertRotaryEmbedding() │ │
│ │ │ │ │ (Wo): Linear(in_features=768, out_features=768, bias=False) │ │
│ │ │ │ │ (out_drop): Identity() │ │
│ │ │ │ ) │ │
│ │ │ │ (mlp_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) │ │
│ │ │ │ (mlp): ModernBertMLP( │ │
│ │ │ │ │ (Wi): Linear(in_features=768, out_features=2304, bias=False) │ │
│ │ │ │ │ (act): GELUActivation() │ │
│ │ │ │ │ (drop): Dropout(p=0.0, inplace=False) │ │
│ │ │ │ │ (Wo): Linear(in_features=1152, out_features=768, bias=False) │ │
│ │ │ │ ) │ │
│ │ │ │ ) │ │
│ │ │ ) │ │
│ │ │ (final_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) │ │
│ │ │ ) │ │
│ │ │ (head): ModernBertPredictionHead( │ │
│ │ │ (dense): Linear(in_features=768, out_features=768, bias=False) │ │
│ │ │ (act): GELUActivation() │ │
│ │ │ (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) │ │
│ │ │ ) │ │
│ │ │ (drop): Dropout(p=0.0, inplace=False) │ │
│ │ │ (classifier): ModulesToSaveWrapper( │ │
│ │ │ (original_module): Linear(in_features=768, out_features=13, bias=True) │ │
│ │ │ (modules_to_save): ModuleDict( │ │
│ │ │ │ (default): Linear(in_features=768, out_features=13, bias=True) │ │
│ │ │ ) │ │
│ │ │ ) │ │
│ │ ) │ │
│ │ ) │ │
│ ╰─────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: ModernBertForTokenClassification.forward() got an unexpected keyword argument 'use_cache'
```
Fix:
Just add `**kwargs` to the method signature:
```
@auto_docstring(
custom_intro="""
The ModernBert Model with a token classification head on top, e.g. for Named Entity Recognition (NER) tasks.
"""
)
class ModernBertForTokenClassification(ModernBertPreTrainedModel):
def __init__(self, config: ModernBertConfig):
super().__init__(config)
self.num_labels = config.num_labels
self.model = ModernBertModel(config)
self.head = ModernBertPredictionHead(config)
self.drop = torch.nn.Dropout(config.classifier_dropout)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
# Initialize weights and apply final processing
self.post_init()
@auto_docstring
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
sliding_window_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
indices: Optional[torch.Tensor] = None,
cu_seqlens: Optional[torch.Tensor] = None,
max_seqlen: Optional[int] = None,
batch_size: Optional[int] = None,
seq_len: Optional[int] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
**kwargs
# Rest of the code...
```
Output of my `transformers env`:
```
- `transformers` version: 4.57.1
- Platform: Linux-6.8.0-1039-aws-x86_64-with-glibc2.35
- Python version: 3.11.0rc1
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.9.0+cu128 (CUDA)
- Tensorflow version (GPU?): 2.20.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A10G
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Having followed this blog https://www.mohammedsbaihi.com/blog/modernbert.html but for `ModernBertForTokenClassification`
### Expected behavior
Code should run without the aforementioned exception.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41911/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41910
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41910/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41910/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41910/events
|
https://github.com/huggingface/transformers/issues/41910
| 3,560,376,080
|
I_kwDOCUB6oc7UNwcQ
| 41,910
|
Breaking change about AWQ Fused modules due to Attention Refactor
|
{
"login": "fanqiNO1",
"id": 75657629,
"node_id": "MDQ6VXNlcjc1NjU3NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/75657629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fanqiNO1",
"html_url": "https://github.com/fanqiNO1",
"followers_url": "https://api.github.com/users/fanqiNO1/followers",
"following_url": "https://api.github.com/users/fanqiNO1/following{/other_user}",
"gists_url": "https://api.github.com/users/fanqiNO1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fanqiNO1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fanqiNO1/subscriptions",
"organizations_url": "https://api.github.com/users/fanqiNO1/orgs",
"repos_url": "https://api.github.com/users/fanqiNO1/repos",
"events_url": "https://api.github.com/users/fanqiNO1/events{/privacy}",
"received_events_url": "https://api.github.com/users/fanqiNO1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T08:29:03
| 2025-10-28T08:41:11
| null |
NONE
| null | null | null | null |
### System Info
transformers==5.0.0dev
autoawq==0.2.9
autoawq_kernels==0.0.9
torch==2.6.0+cu124
### Who can help?
Due to PR #35235, the `past_key_values` is no longer a returned value of attention modules.
However, when using AWQ models with Fused modules [AWQ Fused modules docs](https://huggingface.co/docs/transformers/main/en/quantization/awq#fused-modules), there will be an error like issue #38554
```bash
hidden_states, _ = self.self_attn(
ValueError: too many values to unpack (expected 2)
```
So we can hack the `awq.modules.fused.attn.QuantAttentionFused` to avoid returning `past_key_values`. Therefore, I create a primary PR #41909 to fix it.
However, for special `rope_type` such as LLaMA3, the RoPE implementation in AutoAWQ will cause error, since `awq.modules.fused.attn.RoPE` supports default RoPE only.
Maybe we can implement and maintain `AwqRoPE` and `AwqQuantAttentionFused` in `transformers.integrations.awq`? Or we can maintain `huggingface/AutoAWQ` as `casper-hansen/AutoAWQ` is archived.
I'd like to refine my PR to help transformers fix this bug!
@SunMarc @MekkCyber
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AwqConfig, AutoModelForCausalLM, AutoTokenizer
# model_path = "./llama-3.1-8b-instruct-awq"
model_path = "./qwen2.5-7b-instruct-awq"
# model_path = "./qwen3-8b-awq"
awq_config = AwqConfig(
bits=4,
do_fuse=True,
fuse_max_seq_len=8192
)
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=awq_config).to("cuda:0")
print(model)
tokenizer = AutoTokenizer.from_pretrained(model_path)
max_new_tokens = 1024 if "qwen3" in model_path else 32
messages = []
prompt1 = "What is the result of 3+5?"
messages.append({"role": "user", "content": prompt1})
text1 = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs1 = tokenizer(text1, return_tensors="pt").to("cuda:0")
generated_ids1 = model.generate(**inputs1, max_new_tokens=max_new_tokens)
output_ids1 = generated_ids1[0, len(inputs1.input_ids[0]) :].tolist()
output1 = tokenizer.decode(output_ids1, skip_special_tokens=True)
messages.append({"role": "assistant", "content": output1})
print("Output 1:", output1)
prompt2 = "What about adding 10 to that result?"
messages.append({"role": "user", "content": prompt2})
text2 = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs2 = tokenizer(text2, return_tensors="pt").to("cuda:0")
generated_ids2 = model.generate(**inputs2, max_new_tokens=max_new_tokens)
output_ids2 = generated_ids2[0, len(inputs2.input_ids[0]) :].tolist()
output2 = tokenizer.decode(output_ids2, skip_special_tokens=True)
messages.append({"role": "assistant", "content": output2})
print("Output 2:", output2)
```
### Expected behavior
There is no error.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41910/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41909
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41909/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41909/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41909/events
|
https://github.com/huggingface/transformers/pull/41909
| 3,560,368,463
|
PR_kwDOCUB6oc6wFaei
| 41,909
|
Fix Break change of AWQ FusedModules due to Attention Refactor
|
{
"login": "fanqiNO1",
"id": 75657629,
"node_id": "MDQ6VXNlcjc1NjU3NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/75657629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fanqiNO1",
"html_url": "https://github.com/fanqiNO1",
"followers_url": "https://api.github.com/users/fanqiNO1/followers",
"following_url": "https://api.github.com/users/fanqiNO1/following{/other_user}",
"gists_url": "https://api.github.com/users/fanqiNO1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fanqiNO1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fanqiNO1/subscriptions",
"organizations_url": "https://api.github.com/users/fanqiNO1/orgs",
"repos_url": "https://api.github.com/users/fanqiNO1/repos",
"events_url": "https://api.github.com/users/fanqiNO1/events{/privacy}",
"received_events_url": "https://api.github.com/users/fanqiNO1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T08:26:52
| 2025-10-28T08:29:29
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41909",
"html_url": "https://github.com/huggingface/transformers/pull/41909",
"diff_url": "https://github.com/huggingface/transformers/pull/41909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41909.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #41910
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc @MekkCyber
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41909/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41908
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41908/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41908/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41908/events
|
https://github.com/huggingface/transformers/issues/41908
| 3,559,933,060
|
I_kwDOCUB6oc7UMESE
| 41,908
|
from_pretrained will fail if device_map is `torch.device("mps", index=0)`
|
{
"login": "oceanusxiv",
"id": 8923171,
"node_id": "MDQ6VXNlcjg5MjMxNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8923171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oceanusxiv",
"html_url": "https://github.com/oceanusxiv",
"followers_url": "https://api.github.com/users/oceanusxiv/followers",
"following_url": "https://api.github.com/users/oceanusxiv/following{/other_user}",
"gists_url": "https://api.github.com/users/oceanusxiv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oceanusxiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oceanusxiv/subscriptions",
"organizations_url": "https://api.github.com/users/oceanusxiv/orgs",
"repos_url": "https://api.github.com/users/oceanusxiv/repos",
"events_url": "https://api.github.com/users/oceanusxiv/events{/privacy}",
"received_events_url": "https://api.github.com/users/oceanusxiv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T05:48:40
| 2025-10-28T08:24:48
| null |
NONE
| null | null | null | null |
### System Info
transformers version: 4.57.1
python version: 3.11
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
This example is using paligemma but really any model will do.
```py
from transformers import AutoModelForImageTextToText
model = AutoModelForImageTextToText.from_pretrained("google/paligemma-3b-mix-224", device_map=torch.device("mps", index=0))
```
If you specify an mps device with `index=0`, model loading will fail with
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[4], [line 1](vscode-notebook-cell:?execution_count=4&line=1)
----> [1](vscode-notebook-cell:?execution_count=4&line=1) model = AutoModelForImageTextToText.from_pretrained(model_id, device_map=torch.device("mps", index=0))
File transformers/models/auto/auto_factory.py:604, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
602 if model_class.config_class == config.sub_configs.get("text_config", None):
603 config = config.get_text_config()
--> [604](transformers/models/auto/auto_factory.py:604) return model_class.from_pretrained(
605 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
606 )
607 raise ValueError(
608 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
609 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping)}."
610 )
File transformers/modeling_utils.py:277, in restore_default_dtype.<locals>._wrapper(*args, **kwargs)
275 old_dtype = torch.get_default_dtype()
276 try:
--> [277](https://file+.vscode-resource.vscode-cdn.net/Users/ericfang/sandbox/aiml/t1333/rhea/~/sandbox/aiml/t1333/rhea/.venv/lib/python3.11/site-packages/transformers/modeling_utils.py:277) return func(*args, **kwargs)
278 finally:
279 torch.set_default_dtype(old_dtype)
File transformers/modeling_utils.py:5048, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, weights_only, *model_args, **kwargs)
5038 if dtype_orig is not None:
5039 torch.set_default_dtype(dtype_orig)
5041 (
5042 model,
5043 missing_keys,
5044 unexpected_keys,
5045 mismatched_keys,
5046 offload_index,
5047 error_msgs,
-> [5048](transformers/modeling_utils.py:5048) ) = cls._load_pretrained_model(
5049 model,
5050 state_dict,
5051 checkpoint_files,
5052 pretrained_model_name_or_path,
5053 ignore_mismatched_sizes=ignore_mismatched_sizes,
5054 sharded_metadata=sharded_metadata,
5055 device_map=device_map,
5056 disk_offload_folder=offload_folder,
5057 dtype=dtype,
5058 hf_quantizer=hf_quantizer,
5059 keep_in_fp32_regex=keep_in_fp32_regex,
5060 device_mesh=device_mesh,
5061 key_mapping=key_mapping,
5062 weights_only=weights_only,
5063 )
5064 # make sure token embedding weights are still tied if needed
5065 model.tie_weights()
File transformers/modeling_utils.py:5468, in PreTrainedModel._load_pretrained_model(cls, model, state_dict, checkpoint_files, pretrained_model_name_or_path, ignore_mismatched_sizes, sharded_metadata, device_map, disk_offload_folder, dtype, hf_quantizer, keep_in_fp32_regex, device_mesh, key_mapping, weights_only)
5465 args_list = logging.tqdm(args_list, desc="Loading checkpoint shards")
5467 for args in args_list:
-> [5468](https://file+.vscode-resource.vscode-cdn.net/Users/ericfang/sandbox/aiml/t1333/rhea/~/sandbox/aiml/t1333/rhea/.venv/lib/python3.11/site-packages/transformers/modeling_utils.py:5468) _error_msgs, disk_offload_index = load_shard_file(args)
5469 error_msgs += _error_msgs
5471 # Save offloaded index if needed
File transformers/modeling_utils.py:843, in load_shard_file(args)
841 # Skip it with fsdp on ranks other than 0
842 elif not (is_fsdp_enabled() and not is_local_dist_rank_0() and not is_quantized):
--> [843](transformers/modeling_utils.py:843) disk_offload_index = _load_state_dict_into_meta_model(
844 model,
845 state_dict,
846 shard_file,
847 reverse_key_renaming_mapping,
848 device_map=device_map,
849 disk_offload_folder=disk_offload_folder,
850 disk_offload_index=disk_offload_index,
851 hf_quantizer=hf_quantizer,
852 keep_in_fp32_regex=keep_in_fp32_regex,
853 device_mesh=device_mesh,
854 )
856 return error_msgs, disk_offload_index
File torch/utils/_contextlib.py:120, in context_decorator.<locals>.decorate_context(*args, **kwargs)
117 @functools.wraps(func)
118 def decorate_context(*args, **kwargs):
119 with ctx_factory():
--> [120](torch/utils/_contextlib.py:120) return func(*args, **kwargs)
File ransformers/modeling_utils.py:748, in _load_state_dict_into_meta_model(model, state_dict, shard_file, reverse_renaming_mapping, device_map, disk_offload_folder, disk_offload_index, hf_quantizer, keep_in_fp32_regex, device_mesh)
740 hf_quantizer.create_quantized_param(
741 model,
742 param,
(...)
745 **sharding_kwargs,
746 )
747 else:
--> [748](transformers/modeling_utils.py:748) param = param[...]
749 if casting_dtype is not None:
750 param = param.to(casting_dtype)
File torch/cuda/__init__.py:403, in _lazy_init()
398 raise RuntimeError(
399 "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
400 "multiprocessing, you must use the 'spawn' start method"
401 )
402 if not hasattr(torch._C, "_cuda_getDeviceCount"):
--> [403](torch/cuda/__init__.py:403) raise AssertionError("Torch not compiled with CUDA enabled")
404 if _cudart is None:
405 raise AssertionError(
406 "libcudart functions unavailable. It looks like you have a broken build?"
407 )
AssertionError: Torch not compiled with CUDA enabled
```
It appears `_load_state_dict_into_meta_model` does
```py
tensor_device = device_map[""].index if isinstance(device_map[""], torch.device) else device_map[""]
```
which effectively soft assumes any torch device with an index is CUDA and moves on.
### Expected behavior
This should complete without issue
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41908/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41907
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41907/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41907/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41907/events
|
https://github.com/huggingface/transformers/pull/41907
| 3,559,927,604
|
PR_kwDOCUB6oc6wD7Bs
| 41,907
|
Cache AMD Pytorch image on the cluster local storage
|
{
"login": "jitesh-gupta",
"id": 202713221,
"node_id": "U_kgDODBUohQ",
"avatar_url": "https://avatars.githubusercontent.com/u/202713221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jitesh-gupta",
"html_url": "https://github.com/jitesh-gupta",
"followers_url": "https://api.github.com/users/jitesh-gupta/followers",
"following_url": "https://api.github.com/users/jitesh-gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/jitesh-gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jitesh-gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jitesh-gupta/subscriptions",
"organizations_url": "https://api.github.com/users/jitesh-gupta/orgs",
"repos_url": "https://api.github.com/users/jitesh-gupta/repos",
"events_url": "https://api.github.com/users/jitesh-gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/jitesh-gupta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T05:45:47
| 2025-10-29T04:29:52
| 2025-10-29T04:29:52
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41907",
"html_url": "https://github.com/huggingface/transformers/pull/41907",
"diff_url": "https://github.com/huggingface/transformers/pull/41907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41907.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Caches the latest `huggingface/transformers-pytorch-amd-gpu` image on the amd-mi325 runner cluster as a pre-requisite to the AMD mi325 CI workflow `Self-hosted runner scale set (AMD mi325 scheduled CI caller)`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "jitesh-gupta",
"id": 202713221,
"node_id": "U_kgDODBUohQ",
"avatar_url": "https://avatars.githubusercontent.com/u/202713221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jitesh-gupta",
"html_url": "https://github.com/jitesh-gupta",
"followers_url": "https://api.github.com/users/jitesh-gupta/followers",
"following_url": "https://api.github.com/users/jitesh-gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/jitesh-gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jitesh-gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jitesh-gupta/subscriptions",
"organizations_url": "https://api.github.com/users/jitesh-gupta/orgs",
"repos_url": "https://api.github.com/users/jitesh-gupta/repos",
"events_url": "https://api.github.com/users/jitesh-gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/jitesh-gupta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41907/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41906
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41906/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41906/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41906/events
|
https://github.com/huggingface/transformers/issues/41906
| 3,559,628,519
|
I_kwDOCUB6oc7UK57n
| 41,906
|
[Feature Request] Add CPU Inference Benchmark Framework to Transformers
|
{
"login": "Li-Xiaoo",
"id": 165482764,
"node_id": "U_kgDOCd0RDA",
"avatar_url": "https://avatars.githubusercontent.com/u/165482764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Li-Xiaoo",
"html_url": "https://github.com/Li-Xiaoo",
"followers_url": "https://api.github.com/users/Li-Xiaoo/followers",
"following_url": "https://api.github.com/users/Li-Xiaoo/following{/other_user}",
"gists_url": "https://api.github.com/users/Li-Xiaoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Li-Xiaoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Li-Xiaoo/subscriptions",
"organizations_url": "https://api.github.com/users/Li-Xiaoo/orgs",
"repos_url": "https://api.github.com/users/Li-Xiaoo/repos",
"events_url": "https://api.github.com/users/Li-Xiaoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Li-Xiaoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9258341780,
"node_id": "LA_kwDOCUB6oc8AAAACJ9cVlA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Code%20agent%20slop",
"name": "Code agent slop",
"color": "C59579",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T03:36:13
| 2025-10-28T13:40:35
| 2025-10-28T13:40:35
|
NONE
| null | null | null | null |
<html>
<body>
<!--StartFragment--><html><head></head><body><h1>[Feature Request] Add CPU Inference Benchmark Framework to Transformers</h1>
<h2>🎯 TL;DR</h2>
<p>Add a <strong>lightweight diagnostic benchmarking framework</strong> to help users quickly identify and fix CPU inference performance issues.</p>
<p><strong>This is NOT a replacement for <code>optimum</code></strong> - it's a simple debugging tool for common issues.</p>
<hr>
<h2>🔥 Motivation: Critical Issues Users Cannot Currently Detect</h2>
<h3>Issue 1: dtype Performance Catastrophe</h3>
<p><strong>Real test data</strong> on Intel Core i7-13600P:</p>
dtype | Throughput | vs float32
-- | -- | --
float32 | 16.47 tok/s | ✅ baseline
float16 | 1.46 tok/s | 🔴 11.28× SLOWER
bfloat16 | 1.98 tok/s | 🔴 8.32× SLOWER
<p><strong>PyTorch defaults to using ALL cores, which is the WORST configuration.</strong></p>
<hr>
<h2>💔 Current Problem</h2>
<p>Users have <strong>no way to discover these issues</strong> without writing custom benchmarking code.</p>
<p><strong>What's missing:</strong></p>
<ul>
<li>❌ No quick dtype comparison</li>
<li>❌ No thread optimization guidance</li>
<li>❌ No standardized benchmarking</li>
<li>❌ No warnings about performance pitfalls</li>
</ul>
<p><strong>Users resort to:</strong></p>
<ul>
<li>Ad-hoc timing code</li>
<li>StackOverflow posts with conflicting advice</li>
<li>Giving up and using cloud GPUs (expensive & unnecessary)</li>
</ul>
<hr>
<h2>🎨 Proposed Solution: Lightweight Diagnostic Framework</h2>
<h3>Core Idea: Quick, Simple, Actionable</h3>
<pre><code class="language-python">from transformers.utils.benchmarks import quick_diagnose
# One-line diagnosis
report = quick_diagnose("gpt2")
# Output:
# 🔴 CRITICAL: You're using float16 (11× slower than float32 on CPU)
# Fix: Use torch_dtype=torch.float32
#
# ⚠️ WARNING: Default thread config is suboptimal (48% slower)
# Fix: torch.set_num_threads(4)
#
# ✅ Apply fixes for 16× speedup
</code></pre>
<h3>Basic API</h3>
<pre><code class="language-python">from transformers.utils.benchmarks import CPUBenchmark
# Quick test
benchmark = CPUBenchmark("distilgpt2")
result = benchmark.run()
print(f"Throughput: {result.throughput:.2f} tok/s")
# Compare dtypes
from transformers.utils.benchmarks import compare_dtypes
compare_dtypes("gpt2", dtypes=["float32", "float16"])
# Automatically warns if fp16 is slower
</code></pre>
<hr>
<h2>🤔 Question for Maintainers: Where Should This Live?</h2>
<p>Since there's no existing <code>benchmarks/</code> module in Transformers, where should this go?</p>
<h3><strong>Option 1: <code>src/transformers/utils/benchmarks/</code></strong> ⭐ My Recommendation</h3>
<pre><code>src/transformers/utils/
├── benchmarks/
│ ├── __init__.py
│ ├── cpu_inference.py
│ └── metrics.py
</code></pre>
<p><strong>Pros:</strong></p>
<ul>
<li>✅ Follows existing <code>utils/</code> pattern for utility code</li>
<li>✅ Importable by users</li>
<li>✅ Discoverable (shows up in <code>transformers.utils.*</code>)</li>
<li>✅ Minimal size (~1KB)</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>⚠️ Adds to core library (though very small)</li>
</ul>
<p><strong>Import:</strong></p>
<pre><code class="language-python">from transformers.utils.benchmarks import CPUBenchmark
</code></pre>
<hr>
<h3><strong>Option 2: <code>examples/benchmarking/</code></strong></h3>
<pre><code>examples/pytorch/benchmarking/
├── cpu_inference/
│ ├── benchmark.py
│ └── README.md
</code></pre>
<p><strong>Pros:</strong></p>
<ul>
<li>✅ Doesn't affect core library</li>
<li>✅ Already have <code>examples/pytorch/benchmarking/</code></li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>❌ Not importable (users must copy code)</li>
<li>❌ Less discoverable</li>
<li>❌ Harder to maintain quality</li>
</ul>
<hr>
<h3><strong>Option 3: Separate Package</strong></h3>
<p>Create <code>transformers-benchmark</code> as separate package.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>✅ Complete independence</li>
<li>✅ Can evolve separately</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>❌ Another package to maintain</li>
<li>❌ Less discoverable</li>
<li>❌ Extra installation step</li>
</ul>
<hr>
<h2>🔗 Relationship with <code>optimum</code></h2>
<p><strong>Important: This is NOT a replacement for <code>optimum</code>!</strong></p>
<h3><strong>Transformers Benchmarks</strong> (This Proposal)</h3>
<ul>
<li>🎯 <strong>Quick diagnosis</strong> for common issues</li>
<li>👤 Users: Developers debugging slow inference</li>
<li>⚡ Use cases:
<ul>
<li>"Why is my inference slow?"</li>
<li>"Should I use fp16 or fp32?"</li>
<li>"How many threads should I use?"</li>
</ul>
</li>
<li>📦 Features:
<ul>
<li>Zero config</li>
<li>Runs in seconds</li>
<li>Educational (explains issues)</li>
</ul>
</li>
</ul>
<h3><strong>Optimum</strong> (Existing)</h3>
<ul>
<li>🎯 <strong>Production optimization</strong> and deployment</li>
<li>👤 Users: MLOps engineers, performance experts</li>
<li>⚡ Use cases:
<ul>
<li>ONNX export and optimization</li>
<li>Hardware-specific acceleration</li>
<li>Quantization and compression</li>
<li>Production deployment</li>
</ul>
</li>
<li>📦 Features:
<ul>
<li>Requires setup</li>
<li>Hardware-specific</li>
<li>Advanced optimizations</li>
</ul>
</li>
</ul>
<h3><strong>Workflow</strong></h3>
<pre><code>User has slow inference
↓
1. Quick diagnosis with Transformers Benchmarks
↓
"Oh, I'm using fp16 on CPU!" ✅ Fixed in 1 line
OR
↓
2. Need deeper optimization → Use Optimum
↓
ONNX, quantization, hardware acceleration ✅
</code></pre>
<p><strong>They're complementary, not competing.</strong></p>
<hr>
<h2>📋 Proposed Implementation Plan</h2>
<h3>Phase 1: Core Framework (PR #1)</h3>
<ul>
<li>Basic <code>CPUBenchmark</code> class</li>
<li>Standard metrics (throughput, latency, memory)</li>
<li>JSON output</li>
<li>~400 lines + tests</li>
</ul>
<h3>Phase 2: Smart Diagnostics (PR #2)</h3>
<ul>
<li>Dtype comparison with warnings</li>
<li>Thread optimization</li>
<li>~300 lines + tests</li>
</ul>
<h3>Phase 3: CLI Tool (PR #3)</h3>
<ul>
<li>Command-line interface</li>
<li>One-command diagnosis</li>
<li>~200 lines + tests</li>
</ul>
<p><strong>Total: ~900 lines of well-tested code</strong></p>
<hr>
<h2>📊 Evidence: Real Testing Data</h2>
<p>I've run <strong>21 comprehensive tests</strong> validating this approach:</p>
<ul>
<li>✅ Dtype comparison (3 dtypes)</li>
<li>✅ Thread sweep (6 configs)</li>
<li>✅ Input length scaling (4 lengths)</li>
<li>✅ Model size comparison (2 models)</li>
</ul>
<p><strong>All tests passed. Full data available.</strong></p>
<p>Key findings:</p>
<ul>
<li>fp16 is 11.28× slower on CPU</li>
<li>Optimal thread count is CPU-specific</li>
<li>48% speedup possible with correct configuration</li>
</ul>
<hr>
<h2>✅ Success Criteria</h2>
<ol>
<li><strong>Immediate value</strong>: Users can diagnose performance issues in <1 minute</li>
<li><strong>Prevents mistakes</strong>: Auto-warns about fp16 on CPU</li>
<li><strong>Complements optimum</strong>: Doesn't overlap, acts as first step</li>
<li><strong>Lightweight</strong>: <10KB code, no heavy dependencies</li>
<li><strong>Well documented</strong>: Clear examples and guides</li>
</ol>
<hr>
<h2>🙋 Questions for Maintainers</h2>
<ol>
<li><strong>Location preference?</strong>
<ul>
<li>Option 1 (<code>utils/benchmarks/</code>), 2 (<code>examples/</code>), or 3 (separate package)?</li>
</ul>
</li>
<li><strong>Scope acceptable?</strong>
<ul>
<li>Is "diagnostic tool" the right positioning vs full benchmark suite?</li>
</ul>
</li>
<li><strong>Dependency OK?</strong>
<ul>
<li>Optional <code>psutil</code> for memory tracking (graceful degradation if missing)?</li>
</ul>
</li>
<li><strong>Integration with optimum?</strong>
<ul>
<li>Should we add cross-references in docs?</li>
</ul>
</li>
</ol>
<hr>
<h2>🚀 Implementation Status</h2>
<p>✅ <strong>Fully implemented and tested</strong></p>
<ul>
<li>Core framework (400 lines)</li>
<li>Comprehensive tests (300 lines, >85% coverage)</li>
<li>Example scripts</li>
<li>Documentation</li>
</ul>
<p><strong>Ready to submit PR upon approval of location/scope.</strong></p>
<hr>
<h2>🔗 Related</h2>
<ul>
<li>#41867 - CPU dtype safety (this framework would auto-detect)</li>
<li><code>optimum</code> - Complementary tool for production optimization</li>
</ul>
<hr>
<p><strong>I'm excited to contribute this and help the community debug CPU inference issues!</strong></p>
<p>Looking forward to your guidance on location and scope. 🙏</p>
</body></html><!--EndFragment-->
</body>
</html>
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41906/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41905
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41905/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41905/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41905/events
|
https://github.com/huggingface/transformers/pull/41905
| 3,559,454,600
|
PR_kwDOCUB6oc6wCSz-
| 41,905
|
upgrade natten to 0.20 version
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T02:26:10
| 2025-10-28T02:46:34
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41905",
"html_url": "https://github.com/huggingface/transformers/pull/41905",
"diff_url": "https://github.com/huggingface/transformers/pull/41905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41905.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41905/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41904
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41904/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41904/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41904/events
|
https://github.com/huggingface/transformers/pull/41904
| 3,559,133,986
|
PR_kwDOCUB6oc6wBO5K
| 41,904
|
Fix inaccurate eval and train loss computation with variable batch sizes
|
{
"login": "jameslovespancakes",
"id": 220026352,
"node_id": "U_kgDODR1V8A",
"avatar_url": "https://avatars.githubusercontent.com/u/220026352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameslovespancakes",
"html_url": "https://github.com/jameslovespancakes",
"followers_url": "https://api.github.com/users/jameslovespancakes/followers",
"following_url": "https://api.github.com/users/jameslovespancakes/following{/other_user}",
"gists_url": "https://api.github.com/users/jameslovespancakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameslovespancakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameslovespancakes/subscriptions",
"organizations_url": "https://api.github.com/users/jameslovespancakes/orgs",
"repos_url": "https://api.github.com/users/jameslovespancakes/repos",
"events_url": "https://api.github.com/users/jameslovespancakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameslovespancakes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T23:48:42
| 2025-10-28T13:32:11
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41904",
"html_url": "https://github.com/huggingface/transformers/pull/41904",
"diff_url": "https://github.com/huggingface/transformers/pull/41904.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41904.patch",
"merged_at": null
}
|
Fixes #41898
When drop_last=False (default), the last batch may contain fewer samples than per_device_eval_batch_size. Using a fixed batch_size to repeat the scalar loss causes the last batch to be over-represented in the final average loss calculation.
Changes:
- Trainer: Use observed_batch_size instead of fixed batch_size when repeating eval loss for gather_for_metrics
- no_trainer examples: Use actual batch size from input_ids.shape[0] for both eval and train loss computation
- Train loss: Weight by actual batch size and divide by total samples instead of number of batches
This ensures accurate loss computation regardless of batch size variability while maintaining backward compatibility (identical behavior when all batches are uniform size).
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41904/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41903
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41903/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41903/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41903/events
|
https://github.com/huggingface/transformers/pull/41903
| 3,558,606,179
|
PR_kwDOCUB6oc6v_aJJ
| 41,903
|
Fix: avoid duplicate token in maybe_load_adapters
|
{
"login": "luaenrique",
"id": 23041247,
"node_id": "MDQ6VXNlcjIzMDQxMjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/23041247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luaenrique",
"html_url": "https://github.com/luaenrique",
"followers_url": "https://api.github.com/users/luaenrique/followers",
"following_url": "https://api.github.com/users/luaenrique/following{/other_user}",
"gists_url": "https://api.github.com/users/luaenrique/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luaenrique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luaenrique/subscriptions",
"organizations_url": "https://api.github.com/users/luaenrique/orgs",
"repos_url": "https://api.github.com/users/luaenrique/repos",
"events_url": "https://api.github.com/users/luaenrique/events{/privacy}",
"received_events_url": "https://api.github.com/users/luaenrique/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T20:51:10
| 2025-10-28T15:07:23
| 2025-10-28T15:07:23
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41903",
"html_url": "https://github.com/huggingface/transformers/pull/41903",
"diff_url": "https://github.com/huggingface/transformers/pull/41903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41903.patch",
"merged_at": "2025-10-28T15:07:23"
}
|
# What does this PR do?
Fixes #41902
This PR prevents tokens from being passed twice when loading adapters in `from_pretrained`. Fixes TypeError: `find_adapter_config_file()` got multiple values for keyword argument 'token'.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41903/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41902
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41902/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41902/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41902/events
|
https://github.com/huggingface/transformers/issues/41902
| 3,558,189,414
|
I_kwDOCUB6oc7UFalm
| 41,902
|
Cannot load models in latest transformers version
|
{
"login": "ri938",
"id": 8639734,
"node_id": "MDQ6VXNlcjg2Mzk3MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8639734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ri938",
"html_url": "https://github.com/ri938",
"followers_url": "https://api.github.com/users/ri938/followers",
"following_url": "https://api.github.com/users/ri938/following{/other_user}",
"gists_url": "https://api.github.com/users/ri938/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ri938/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ri938/subscriptions",
"organizations_url": "https://api.github.com/users/ri938/orgs",
"repos_url": "https://api.github.com/users/ri938/repos",
"events_url": "https://api.github.com/users/ri938/events{/privacy}",
"received_events_url": "https://api.github.com/users/ri938/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T19:00:43
| 2025-10-28T15:07:25
| 2025-10-28T15:07:25
|
CONTRIBUTOR
| null | null | null | null |
### System Info
master branch: transformers @ git+https://github.com/huggingface/transformers.git@1f0b490a2c42eb129dccc69031ccb537058689c4
its not possible to load models in latest transformers version. Because adaptor_config gets a token added to it which results in duplicate value being passed later on.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
code sample
```
model = "Qwen/Qwen3-30B-A3B"
token = '<redacted>'
config = AutoConfig.from_pretrained(
model,
token=token,
trust_remote_code=True
)
args = dict(
token=token,
config=config,
torch_dtype='bfloat16',
trust_remote_code=True,
force_download=False,
attn_implementation='flash_attention_2'
)
AutoModelForSequenceClassification.from_pretrained(model, **args)
```
error message
```
[rank5]: Traceback (most recent call last):
[rank5]: File "/code/reward_model/train.py", line 640, in <module>
[rank5]: trainer = get_trainer(training_args)
[rank5]: File "/code/reward_model/train.py", line 538, in get_trainer
[rank5]: model = load_model(training_args)
[rank5]: File "/code/reward_model/train.py", line 184, in load_model
[rank5]: model = AutoModelForSequenceClassification.from_pretrained(
[rank5]: File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 372, in from_pretrained
[rank5]: return model_class.from_pretrained(
[rank5]: File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 270, in _wrapper
[rank5]: return func(*args, **kwargs)
[rank5]: File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 4356, in from_pretrained
[rank5]: _adapter_model_path, pretrained_model_name_or_path, adapter_kwargs = maybe_load_adapters(
[rank5]: File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/peft.py", line 655, in maybe_load_adapters
[rank5]: _adapter_model_path = find_adapter_config_file(
[rank5]: TypeError: transformers.utils.peft_utils.find_adapter_config_file() got multiple values for keyword argument 'token'
```
### Expected behavior
It should not pass duplicate tokens.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41902/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41901
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41901/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41901/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41901/events
|
https://github.com/huggingface/transformers/pull/41901
| 3,558,090,732
|
PR_kwDOCUB6oc6v9ng5
| 41,901
|
[executorch] Update pytree registration for DynamicCache
|
{
"login": "justinchuby",
"id": 11205048,
"node_id": "MDQ6VXNlcjExMjA1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11205048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justinchuby",
"html_url": "https://github.com/justinchuby",
"followers_url": "https://api.github.com/users/justinchuby/followers",
"following_url": "https://api.github.com/users/justinchuby/following{/other_user}",
"gists_url": "https://api.github.com/users/justinchuby/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justinchuby/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justinchuby/subscriptions",
"organizations_url": "https://api.github.com/users/justinchuby/orgs",
"repos_url": "https://api.github.com/users/justinchuby/repos",
"events_url": "https://api.github.com/users/justinchuby/events{/privacy}",
"received_events_url": "https://api.github.com/users/justinchuby/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T18:36:11
| 2025-10-28T13:33:00
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41901",
"html_url": "https://github.com/huggingface/transformers/pull/41901",
"diff_url": "https://github.com/huggingface/transformers/pull/41901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41901.patch",
"merged_at": null
}
|
# What does this PR do?
Update pytree registration for DynamicCache. Before this change the cache values are flatten as `(key0, key1, ..., value0, value1, ...)`. This change matches the old cache interface by flattening to `(key0, value0, key1, value1, ...)`. This ordering matches transformers conversion and is consistent with expectation of downstream tools when they see ky caches.
The change is BC breaking in that it will change the signature of the exporter program. But this is ok since (1) there is going to be transformers 5.0 (2) executorch doesn't rely on this api. Only ONNX does at this point.
cc @titaiwangms @xadupre @jackzhxng @tugsbayasgalan
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41901/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41900
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41900/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41900/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41900/events
|
https://github.com/huggingface/transformers/pull/41900
| 3,558,032,145
|
PR_kwDOCUB6oc6v9alw
| 41,900
|
Remove unnecessary slicing in sdpa_attention_forward
|
{
"login": "justinchuby",
"id": 11205048,
"node_id": "MDQ6VXNlcjExMjA1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11205048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justinchuby",
"html_url": "https://github.com/justinchuby",
"followers_url": "https://api.github.com/users/justinchuby/followers",
"following_url": "https://api.github.com/users/justinchuby/following{/other_user}",
"gists_url": "https://api.github.com/users/justinchuby/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justinchuby/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justinchuby/subscriptions",
"organizations_url": "https://api.github.com/users/justinchuby/orgs",
"repos_url": "https://api.github.com/users/justinchuby/repos",
"events_url": "https://api.github.com/users/justinchuby/events{/privacy}",
"received_events_url": "https://api.github.com/users/justinchuby/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T18:21:19
| 2025-10-29T15:49:16
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41900",
"html_url": "https://github.com/huggingface/transformers/pull/41900",
"diff_url": "https://github.com/huggingface/transformers/pull/41900.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41900.patch",
"merged_at": null
}
|
The slicing in sdpa_attention_forward was there only because some masks were not constructed correctly (I was told). When the key size is dynamic, the slice op also prevents torch.export from correctly reasoning about its size.
cc @vasqu
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41900/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41899
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41899/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41899/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41899/events
|
https://github.com/huggingface/transformers/pull/41899
| 3,557,936,848
|
PR_kwDOCUB6oc6v9GFJ
| 41,899
|
Testing checkpoint limit changes from PR #37196
|
{
"login": "Aravind-11",
"id": 42345018,
"node_id": "MDQ6VXNlcjQyMzQ1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/42345018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aravind-11",
"html_url": "https://github.com/Aravind-11",
"followers_url": "https://api.github.com/users/Aravind-11/followers",
"following_url": "https://api.github.com/users/Aravind-11/following{/other_user}",
"gists_url": "https://api.github.com/users/Aravind-11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aravind-11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aravind-11/subscriptions",
"organizations_url": "https://api.github.com/users/Aravind-11/orgs",
"repos_url": "https://api.github.com/users/Aravind-11/repos",
"events_url": "https://api.github.com/users/Aravind-11/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aravind-11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T17:54:09
| 2025-10-29T21:31:11
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41899",
"html_url": "https://github.com/huggingface/transformers/pull/41899",
"diff_url": "https://github.com/huggingface/transformers/pull/41899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41899.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds separate checkpoint limits for regular checkpoints and best model checkpoints in the Trainer.
## Changes:
- Adds `save_total_limit_best` parameter to `TrainingArguments` to control the number of best checkpoints kept
- Separates checkpoint management logic so that best model checkpoints and regular interval checkpoints are tracked independently
- Best checkpoints (based on metrics) are no longer deleted when regular checkpoints are rotated
## Motivation:
Currently, `save_total_limit` applies to ALL checkpoints, meaning best models can be deleted to make room for new regular checkpoints. This PR allows users to preserve their best-performing models while limiting regular checkpoint storage.
Fixes #37196
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@vasqu
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41899/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41898
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41898/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41898/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41898/events
|
https://github.com/huggingface/transformers/issues/41898
| 3,557,682,125
|
I_kwDOCUB6oc7UDevN
| 41,898
|
inaccurate eval loss computation
|
{
"login": "wwt17",
"id": 10792281,
"node_id": "MDQ6VXNlcjEwNzkyMjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/10792281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwt17",
"html_url": "https://github.com/wwt17",
"followers_url": "https://api.github.com/users/wwt17/followers",
"following_url": "https://api.github.com/users/wwt17/following{/other_user}",
"gists_url": "https://api.github.com/users/wwt17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwt17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwt17/subscriptions",
"organizations_url": "https://api.github.com/users/wwt17/orgs",
"repos_url": "https://api.github.com/users/wwt17/repos",
"events_url": "https://api.github.com/users/wwt17/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwt17/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T16:44:56
| 2025-10-27T16:52:15
| null |
NONE
| null | null | null | null |
https://github.com/huggingface/transformers/blob/1f0b490a2c42eb129dccc69031ccb537058689c4/examples/pytorch/language-modeling/run_clm_no_trainer.py#L657 The last batch may be smaller, since `drop_last` default to `False` for `DataLoader` construction. Thus, computing the mean over the losses is inaccurate.
Same issue for the train loss:
https://github.com/huggingface/transformers/blob/1f0b490a2c42eb129dccc69031ccb537058689c4/examples/pytorch/language-modeling/run_clm_no_trainer.py#L673
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41898/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41897
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41897/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41897/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41897/events
|
https://github.com/huggingface/transformers/pull/41897
| 3,557,618,678
|
PR_kwDOCUB6oc6v8A1G
| 41,897
|
[FPQuant] MXFP8 and MXFP4 backwards support
|
{
"login": "BlackSamorez",
"id": 16901341,
"node_id": "MDQ6VXNlcjE2OTAxMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlackSamorez",
"html_url": "https://github.com/BlackSamorez",
"followers_url": "https://api.github.com/users/BlackSamorez/followers",
"following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}",
"gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions",
"organizations_url": "https://api.github.com/users/BlackSamorez/orgs",
"repos_url": "https://api.github.com/users/BlackSamorez/repos",
"events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlackSamorez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T16:29:33
| 2025-10-28T13:39:19
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41897",
"html_url": "https://github.com/huggingface/transformers/pull/41897",
"diff_url": "https://github.com/huggingface/transformers/pull/41897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41897.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds MXFP4 and MXFP8 backwards support in combination with MXFP4 forward, allowing for lightning-fast QAT on Blackwell GPUs.
It's blocked by `QuTLASS v0.2.0` and `fp_quant v0.3.0` release we're hoping to release within a day or two.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
@SunMarc
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41897/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41896
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41896/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41896/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41896/events
|
https://github.com/huggingface/transformers/issues/41896
| 3,557,499,686
|
I_kwDOCUB6oc7UCyMm
| 41,896
|
Issues when updating the .gitignore file in run_clm_no_trainer.py
|
{
"login": "wwt17",
"id": 10792281,
"node_id": "MDQ6VXNlcjEwNzkyMjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/10792281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwt17",
"html_url": "https://github.com/wwt17",
"followers_url": "https://api.github.com/users/wwt17/followers",
"following_url": "https://api.github.com/users/wwt17/following{/other_user}",
"gists_url": "https://api.github.com/users/wwt17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwt17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwt17/subscriptions",
"organizations_url": "https://api.github.com/users/wwt17/orgs",
"repos_url": "https://api.github.com/users/wwt17/repos",
"events_url": "https://api.github.com/users/wwt17/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwt17/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T16:01:56
| 2025-10-28T13:29:39
| null |
NONE
| null | null | null | null |
https://github.com/huggingface/transformers/blob/1f0b490a2c42eb129dccc69031ccb537058689c4/examples/pytorch/language-modeling/run_clm_no_trainer.py#L311 `w+` mode will truncate the file to zero length. Also,
https://github.com/huggingface/transformers/blob/1f0b490a2c42eb129dccc69031ccb537058689c4/examples/pytorch/language-modeling/run_clm_no_trainer.py#L314 the file cannot be iterated over twice.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41896/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41895
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41895/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41895/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41895/events
|
https://github.com/huggingface/transformers/pull/41895
| 3,557,392,971
|
PR_kwDOCUB6oc6v7PFJ
| 41,895
|
Add Telugu Sentiment Classification Example using DistilBERT
|
{
"login": "Sai-Lakshmi-Bala-Mounika-Gandikota",
"id": 149866662,
"node_id": "U_kgDOCO7Ipg",
"avatar_url": "https://avatars.githubusercontent.com/u/149866662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota",
"html_url": "https://github.com/Sai-Lakshmi-Bala-Mounika-Gandikota",
"followers_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/followers",
"following_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/following{/other_user}",
"gists_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/subscriptions",
"organizations_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/orgs",
"repos_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/repos",
"events_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sai-Lakshmi-Bala-Mounika-Gandikota/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T15:36:33
| 2025-10-28T06:48:08
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41895",
"html_url": "https://github.com/huggingface/transformers/pull/41895",
"diff_url": "https://github.com/huggingface/transformers/pull/41895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41895.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
## What does this PR do?
This PR introduces a new example under `examples/te_sentiment` for **Telugu Sentiment Analysis** using the `transformers` library.
It fine-tunes the multilingual model `distilbert-base-multilingual-cased` on a small Telugu dataset to classify sentiment as Positive, Negative, or Neutral.
## Why is this needed?
Currently, there are no examples in the repository demonstrating sentiment analysis for **Telugu**, one of the major Indian languages.
This contribution expands multilingual support and showcases how Hugging Face models can be fine-tuned effectively on low-resource languages.
## Key Features
- `run_te_sentiment.py`: Training script using Hugging Face `Trainer` API.
- `README.md`: Explains dataset preparation, training commands, and evaluation metrics.
- Uses `datasets` library for seamless data loading and preprocessing.
- Lightweight configuration to allow users to fine-tune models on small datasets.
## Test Plan
- Verified that the example runs successfully on a local environment with GPU.
- Ensured model fine-tuning completes and evaluation metrics (accuracy, F1) are printed.
- Checked README instructions for clarity and correctness.
## Related Issues
None.
## Checklist
- [x] I have read and followed the Contributing Guidelines.
- [x] The code runs without errors using the provided dataset.
- [x] Documentation and comments are included.
- [x] No existing functionality is broken.
@stevhliu @ArthurZucker @CyrilVallez — Please review this PR.
This adds a new Telugu sentiment classification example under `examples/te_sentiment`. 😊
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41895/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41894
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41894/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41894/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41894/events
|
https://github.com/huggingface/transformers/pull/41894
| 3,557,373,005
|
PR_kwDOCUB6oc6v7Kog
| 41,894
|
[wip] Update transformers to support `FqnToConfig`
|
{
"login": "jcaip",
"id": 8041643,
"node_id": "MDQ6VXNlcjgwNDE2NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8041643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcaip",
"html_url": "https://github.com/jcaip",
"followers_url": "https://api.github.com/users/jcaip/followers",
"following_url": "https://api.github.com/users/jcaip/following{/other_user}",
"gists_url": "https://api.github.com/users/jcaip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcaip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcaip/subscriptions",
"organizations_url": "https://api.github.com/users/jcaip/orgs",
"repos_url": "https://api.github.com/users/jcaip/repos",
"events_url": "https://api.github.com/users/jcaip/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcaip/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T15:32:23
| 2025-10-27T18:51:37
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41894",
"html_url": "https://github.com/huggingface/transformers/pull/41894",
"diff_url": "https://github.com/huggingface/transformers/pull/41894.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41894.patch",
"merged_at": null
}
|
Summary:
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41894/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41893
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41893/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41893/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41893/events
|
https://github.com/huggingface/transformers/issues/41893
| 3,556,974,260
|
I_kwDOCUB6oc7UAx60
| 41,893
|
Add MiniViT: lightweight Vision Transformer for CIFAR-scale image classification
|
{
"login": "justynigam",
"id": 76672901,
"node_id": "MDQ6VXNlcjc2NjcyOTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/76672901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justynigam",
"html_url": "https://github.com/justynigam",
"followers_url": "https://api.github.com/users/justynigam/followers",
"following_url": "https://api.github.com/users/justynigam/following{/other_user}",
"gists_url": "https://api.github.com/users/justynigam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justynigam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justynigam/subscriptions",
"organizations_url": "https://api.github.com/users/justynigam/orgs",
"repos_url": "https://api.github.com/users/justynigam/repos",
"events_url": "https://api.github.com/users/justynigam/events{/privacy}",
"received_events_url": "https://api.github.com/users/justynigam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-27T14:07:03
| 2025-10-29T03:07:55
| null |
NONE
| null | null | null | null |
### Model description
**Propose adding a new vision model, `MiniViT`, a compact Vision Transformer variant optimized for small image datasets (e.g., CIFAR-10/100).**
Include model implementation, configuration, feature extractor, tests, example script, and model card for Hub integration.
**Motivation:**
- There is demand for small, efficient ViT variants for edge and educational use.
- `MiniViT` provides a minimal ViT architecture (patch embedding, few transformer blocks, lightweight MLP head) that is useful for teaching, quick experiments, and resource-constrained training.
**Model details (high-level):**
- Architecture: patch embedding -> positional embeddings -> N transformer encoder blocks (multi-head self-attention + MLP) -> classification token or global pooling -> classification head.
- Inputs: images (batch, channels, height, width). Preprocessing via `AutoFeatureExtractor` / `ViTFeatureExtractor` style.
- Outputs: logits for classification; supports `MiniViTForImageClassification` with `MiniViTConfig`.
- Design choices: small embed dimension (e.g., 128), few heads (e.g., 4), shallow depth (e.g., 6), dropout and LayerNorm similar to existing ViT implementations.
**Implementation plan:**
- Add model package: `src/transformers/models/minivit/`
- `configuration_minivit.py` — `MiniViTConfig` with typical config fields.
- `modeling_minivit.py` — core `MiniViTModel` and `MiniViTForImageClassification`.
- `feature_extraction_minivit.py` — feature extractor (resize / center crop / normalize) or reuse `AutoFeatureExtractor`.
- `tokenization_minivit.py` — not required for vision-only model.
- `__init__.py` — exports and integration mappings.
- Update registries/mappings: Add to AutoConfig/AutoModel/AutoFeatureExtractor mappings.
- Tests/examples: Add forward/config tests; example training script; fast CI config.
- Documentation/Hub: Model card, docs page, push weights example.
Minimal inference example:
```python
from transformers import MiniViTForImageClassification, MiniViTConfig, AutoFeatureExtractor
import torch
from PIL import Image
config = MiniViTConfig(image_size=32, patch_size=4, hidden_size=128, num_hidden_layers=6, num_attention_heads=4, num_labels=10)
model = MiniViTForImageClassification(config)
feature_extractor = AutoFeatureExtractor.from_pretrained("minivit-config-or-local")
img = Image.open("example.png").convert("RGB")
inputs = feature_extractor(img, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
preds = torch.argmax(logits, dim=-1)
```
### Open source status
- [x] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
Implementation plan and modeling details outlined above. Authors: @justynigam (proposer).
No existing weights, but architecture is minimal and educationally valuable. Links:
- ViT paper: https://arxiv.org/abs/2010.11929
- CIFAR: https://www.cs.toronto.edu/~kriz/cifar.html
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41893/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41892
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41892/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41892/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41892/events
|
https://github.com/huggingface/transformers/pull/41892
| 3,556,530,248
|
PR_kwDOCUB6oc6v4RyI
| 41,892
|
Update some workflow files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T12:26:34
| 2025-10-29T13:42:07
| 2025-10-29T13:42:05
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41892",
"html_url": "https://github.com/huggingface/transformers/pull/41892",
"diff_url": "https://github.com/huggingface/transformers/pull/41892.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41892.patch",
"merged_at": "2025-10-29T13:42:05"
}
|
# What does this PR do?
Mostly:
- Make `docker/transformers-all-latest-gpu/Dockerfile` more readable and clean as we now need to handle `torchcodec` (using `cpu`) along with `torch` (`cuda`)
- Remove `push-ci` stuff. We are not paying any attention to it. We have something running on a very small subset now.
- Separate CI workflows and their docker images: with `flash-attn` and without it
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41892/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41891
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41891/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41891/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41891/events
|
https://github.com/huggingface/transformers/pull/41891
| 3,556,506,384
|
PR_kwDOCUB6oc6v4MjL
| 41,891
|
revert changes in _is_package_available
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T12:20:05
| 2025-10-27T12:59:20
| 2025-10-27T12:59:18
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41891",
"html_url": "https://github.com/huggingface/transformers/pull/41891",
"diff_url": "https://github.com/huggingface/transformers/pull/41891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41891.patch",
"merged_at": "2025-10-27T12:59:18"
}
|
# What does this PR do?
Reverts https://github.com/huggingface/transformers/pull/41411, because for some packages like `optimum` we don't have loaders as expected from most packages.
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41891/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41890
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41890/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41890/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41890/events
|
https://github.com/huggingface/transformers/pull/41890
| 3,556,497,335
|
PR_kwDOCUB6oc6v4Kj6
| 41,890
|
[`T5Gemma`] Fix cross attention cache
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T12:17:20
| 2025-10-27T13:13:04
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41890",
"html_url": "https://github.com/huggingface/transformers/pull/41890",
"diff_url": "https://github.com/huggingface/transformers/pull/41890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41890.patch",
"merged_at": null
}
|
Fixes #41875
The encoder-decoder's cross attn cache was initialized with the same layer types as the self attn cache. However, the model only uses full attention for its cross attentions making this initialization faulty. This only happens with fa2 as it relies on the mask to match with the size of the cache while sdpa just wrongly cuts parts of the mask for example.
References for cross attn only using full attn:
https://github.com/huggingface/transformers/blob/8472ac683604fca316fa1e5bb10c82064dac7d1b/src/transformers/models/t5gemma/modeling_t5gemma.py#L810-L815
https://github.com/huggingface/transformers/blob/8472ac683604fca316fa1e5bb10c82064dac7d1b/src/transformers/models/t5gemma/modeling_t5gemma.py#L834
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41890/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41889
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41889/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41889/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41889/events
|
https://github.com/huggingface/transformers/pull/41889
| 3,556,242,632
|
PR_kwDOCUB6oc6v3SLw
| 41,889
|
🚨 [v5][PEFT] Bump min version requirement of PEFT to 0.18.0
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T11:05:49
| 2025-10-28T10:24:17
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41889",
"html_url": "https://github.com/huggingface/transformers/pull/41889",
"diff_url": "https://github.com/huggingface/transformers/pull/41889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41889.patch",
"merged_at": null
}
|
# What does this PR do?
PEFT is an optional dependency of transformers with a min version of 0.5.0. However, starting with transformers v5, older PEFT versions will not work anymore (see #41406). The minimum PEFT version will be 0.18.0.
This PR updates the PEFT integration to require PEFT 0.18.0. This allows us to eliminate some obsolete checks and tests that were required for backwards compatibility.
Note:
- PEFT 0.18.0 is not yet released, so **don't merge** this PR yet. However, we will release it once we have tested it with the transformers v5 release candidate, before the transformers v5 released.
- If this commit is not included in the transformers v5 release, users who want to use PEFT will get an `ImportError` without indication what's wrong. Therefore, I would highly recommend to merge this before the v5 release.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? <= Tests updated
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41889/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41888
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41888/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41888/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41888/events
|
https://github.com/huggingface/transformers/pull/41888
| 3,556,150,795
|
PR_kwDOCUB6oc6v29qy
| 41,888
|
Fix torch.no_grad decorator in VLMS
|
{
"login": "yaswanth19",
"id": 82788246,
"node_id": "MDQ6VXNlcjgyNzg4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82788246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaswanth19",
"html_url": "https://github.com/yaswanth19",
"followers_url": "https://api.github.com/users/yaswanth19/followers",
"following_url": "https://api.github.com/users/yaswanth19/following{/other_user}",
"gists_url": "https://api.github.com/users/yaswanth19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaswanth19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaswanth19/subscriptions",
"organizations_url": "https://api.github.com/users/yaswanth19/orgs",
"repos_url": "https://api.github.com/users/yaswanth19/repos",
"events_url": "https://api.github.com/users/yaswanth19/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaswanth19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T10:42:59
| 2025-10-27T11:07:58
| 2025-10-27T11:07:15
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41888",
"html_url": "https://github.com/huggingface/transformers/pull/41888",
"diff_url": "https://github.com/huggingface/transformers/pull/41888.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41888.patch",
"merged_at": "2025-10-27T11:07:15"
}
|
As per the title
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41888/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41887
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41887/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41887/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41887/events
|
https://github.com/huggingface/transformers/pull/41887
| 3,556,012,117
|
PR_kwDOCUB6oc6v2fZU
| 41,887
|
Fix installation cmds in docs
|
{
"login": "yaswanth19",
"id": 82788246,
"node_id": "MDQ6VXNlcjgyNzg4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82788246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaswanth19",
"html_url": "https://github.com/yaswanth19",
"followers_url": "https://api.github.com/users/yaswanth19/followers",
"following_url": "https://api.github.com/users/yaswanth19/following{/other_user}",
"gists_url": "https://api.github.com/users/yaswanth19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaswanth19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaswanth19/subscriptions",
"organizations_url": "https://api.github.com/users/yaswanth19/orgs",
"repos_url": "https://api.github.com/users/yaswanth19/repos",
"events_url": "https://api.github.com/users/yaswanth19/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaswanth19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T10:08:44
| 2025-10-27T11:08:32
| 2025-10-27T11:08:06
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41887",
"html_url": "https://github.com/huggingface/transformers/pull/41887",
"diff_url": "https://github.com/huggingface/transformers/pull/41887.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41887.patch",
"merged_at": "2025-10-27T11:08:06"
}
|
As per the title.
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41887/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41886
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41886/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41886/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41886/events
|
https://github.com/huggingface/transformers/pull/41886
| 3,555,675,495
|
PR_kwDOCUB6oc6v1WXl
| 41,886
|
ADD FG-CLIP2
|
{
"login": "binwang777",
"id": 32870325,
"node_id": "MDQ6VXNlcjMyODcwMzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/32870325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binwang777",
"html_url": "https://github.com/binwang777",
"followers_url": "https://api.github.com/users/binwang777/followers",
"following_url": "https://api.github.com/users/binwang777/following{/other_user}",
"gists_url": "https://api.github.com/users/binwang777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binwang777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binwang777/subscriptions",
"organizations_url": "https://api.github.com/users/binwang777/orgs",
"repos_url": "https://api.github.com/users/binwang777/repos",
"events_url": "https://api.github.com/users/binwang777/events{/privacy}",
"received_events_url": "https://api.github.com/users/binwang777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T08:43:19
| 2025-10-28T13:26:51
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41886",
"html_url": "https://github.com/huggingface/transformers/pull/41886",
"diff_url": "https://github.com/huggingface/transformers/pull/41886.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41886.patch",
"merged_at": null
}
|
# What does this PR do?
[FG-CLIP2](https://arxiv.org/abs/2510.10921) is a new generation of text-image cross-modal model excels in fine-grained discrimination and embedding. It is the foundation model for fine-grained vision-language understanding in both English and Chinese.
Across 29 datasets and 8 diverse tasks, it consistently surpasses recent strong baselines such as SigLIP 2 and MetaCLIP 2, achieving the best reported performance to date in both languages.
Merge the model from https://github.com/binwang777/transformers/tree/fgclip2
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41886/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41885
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41885/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41885/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41885/events
|
https://github.com/huggingface/transformers/issues/41885
| 3,555,563,285
|
I_kwDOCUB6oc7T7ZcV
| 41,885
|
DINOv3ViTLayer_FP16 error
|
{
"login": "AnnaTrainingG",
"id": 51102941,
"node_id": "MDQ6VXNlcjUxMTAyOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/51102941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnnaTrainingG",
"html_url": "https://github.com/AnnaTrainingG",
"followers_url": "https://api.github.com/users/AnnaTrainingG/followers",
"following_url": "https://api.github.com/users/AnnaTrainingG/following{/other_user}",
"gists_url": "https://api.github.com/users/AnnaTrainingG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnnaTrainingG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnnaTrainingG/subscriptions",
"organizations_url": "https://api.github.com/users/AnnaTrainingG/orgs",
"repos_url": "https://api.github.com/users/AnnaTrainingG/repos",
"events_url": "https://api.github.com/users/AnnaTrainingG/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnnaTrainingG/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-27T08:10:54
| 2025-10-28T19:01:30
| null |
NONE
| null | null | null | null |
### System Info
When using the DINOv3ViTLayer in fp16 mode, the training process gets killed and throws an error.
`
exitcode : -8 (pid: 2536) 1967
error_file: <N/A> 1968
traceback : Signal 8 (SIGFPE) received by PID xxxx`
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41885/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41884
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41884/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41884/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41884/events
|
https://github.com/huggingface/transformers/pull/41884
| 3,555,451,121
|
PR_kwDOCUB6oc6v0l3F
| 41,884
|
add `abc`
|
{
"login": "redmoe-moutain",
"id": 209578884,
"node_id": "U_kgDODH3rhA",
"avatar_url": "https://avatars.githubusercontent.com/u/209578884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/redmoe-moutain",
"html_url": "https://github.com/redmoe-moutain",
"followers_url": "https://api.github.com/users/redmoe-moutain/followers",
"following_url": "https://api.github.com/users/redmoe-moutain/following{/other_user}",
"gists_url": "https://api.github.com/users/redmoe-moutain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/redmoe-moutain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/redmoe-moutain/subscriptions",
"organizations_url": "https://api.github.com/users/redmoe-moutain/orgs",
"repos_url": "https://api.github.com/users/redmoe-moutain/repos",
"events_url": "https://api.github.com/users/redmoe-moutain/events{/privacy}",
"received_events_url": "https://api.github.com/users/redmoe-moutain/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T07:35:08
| 2025-10-27T09:09:35
| 2025-10-27T09:07:35
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41884",
"html_url": "https://github.com/huggingface/transformers/pull/41884",
"diff_url": "https://github.com/huggingface/transformers/pull/41884.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41884.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "redmoe-moutain",
"id": 209578884,
"node_id": "U_kgDODH3rhA",
"avatar_url": "https://avatars.githubusercontent.com/u/209578884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/redmoe-moutain",
"html_url": "https://github.com/redmoe-moutain",
"followers_url": "https://api.github.com/users/redmoe-moutain/followers",
"following_url": "https://api.github.com/users/redmoe-moutain/following{/other_user}",
"gists_url": "https://api.github.com/users/redmoe-moutain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/redmoe-moutain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/redmoe-moutain/subscriptions",
"organizations_url": "https://api.github.com/users/redmoe-moutain/orgs",
"repos_url": "https://api.github.com/users/redmoe-moutain/repos",
"events_url": "https://api.github.com/users/redmoe-moutain/events{/privacy}",
"received_events_url": "https://api.github.com/users/redmoe-moutain/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41884/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41883
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41883/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41883/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41883/events
|
https://github.com/huggingface/transformers/pull/41883
| 3,555,381,194
|
PR_kwDOCUB6oc6v0WkG
| 41,883
|
Add 6 huggingface notebooks on AMD dev cloud
|
{
"login": "fan-amd",
"id": 233904592,
"node_id": "U_kgDODfEZ0A",
"avatar_url": "https://avatars.githubusercontent.com/u/233904592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fan-amd",
"html_url": "https://github.com/fan-amd",
"followers_url": "https://api.github.com/users/fan-amd/followers",
"following_url": "https://api.github.com/users/fan-amd/following{/other_user}",
"gists_url": "https://api.github.com/users/fan-amd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fan-amd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fan-amd/subscriptions",
"organizations_url": "https://api.github.com/users/fan-amd/orgs",
"repos_url": "https://api.github.com/users/fan-amd/repos",
"events_url": "https://api.github.com/users/fan-amd/events{/privacy}",
"received_events_url": "https://api.github.com/users/fan-amd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T07:10:27
| 2025-10-29T12:31:53
| 2025-10-29T12:31:53
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41883",
"html_url": "https://github.com/huggingface/transformers/pull/41883",
"diff_url": "https://github.com/huggingface/transformers/pull/41883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41883.patch",
"merged_at": "2025-10-29T12:31:53"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add dependencies in image to support 6 following huggingface notebooks on AMD dev cloud.
## Documentation Notebooks
| Notebook | Description |
|----------|--------------|
| Fine-tuning a pretrained model | How to use the Trainer to fine-tune a pretrained model |
## PyTorch Examples
### Natural Language Processing
| Notebook | Description |
|----------|--------------|
| How to fine-tune a model on token classification | Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). |
| How to fine-tune a model on question answering | Show how to preprocess the data and fine-tune a pretrained model on SQUAD. |
| How to fine-tune a model on multiple choice | Show how to preprocess the data and fine-tune a pretrained model on SWAG. |
| How to fine-tune a model on translation | Show how to preprocess the data and fine-tune a pretrained model on WMT. |
### Computer Vision
| Notebook | Description |
|----------|--------------|
| How to fine-tune a model on image classification (Torchvision) | Show how to preprocess the data using Torchvision and fine-tune any pretrained Vision model on Image Classification |
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41883/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41882
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41882/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41882/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41882/events
|
https://github.com/huggingface/transformers/pull/41882
| 3,555,337,406
|
PR_kwDOCUB6oc6v0NEV
| 41,882
|
Support fdma for models with attention bias
|
{
"login": "LoserCheems",
"id": 124847097,
"node_id": "U_kgDOB3ED-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/124847097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoserCheems",
"html_url": "https://github.com/LoserCheems",
"followers_url": "https://api.github.com/users/LoserCheems/followers",
"following_url": "https://api.github.com/users/LoserCheems/following{/other_user}",
"gists_url": "https://api.github.com/users/LoserCheems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoserCheems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoserCheems/subscriptions",
"organizations_url": "https://api.github.com/users/LoserCheems/orgs",
"repos_url": "https://api.github.com/users/LoserCheems/repos",
"events_url": "https://api.github.com/users/LoserCheems/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoserCheems/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T06:54:42
| 2025-10-30T04:11:50
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41882",
"html_url": "https://github.com/huggingface/transformers/pull/41882",
"diff_url": "https://github.com/huggingface/transformers/pull/41882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41882.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #41465
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@MekkCyber @drbh
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41882/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41881
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41881/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41881/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41881/events
|
https://github.com/huggingface/transformers/issues/41881
| 3,555,141,657
|
I_kwDOCUB6oc7T5ygZ
| 41,881
|
FSDP2 training hangs during backward pass with MoE models when some experts are not activated
|
{
"login": "LucienXian",
"id": 22817327,
"node_id": "MDQ6VXNlcjIyODE3MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/22817327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LucienXian",
"html_url": "https://github.com/LucienXian",
"followers_url": "https://api.github.com/users/LucienXian/followers",
"following_url": "https://api.github.com/users/LucienXian/following{/other_user}",
"gists_url": "https://api.github.com/users/LucienXian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LucienXian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LucienXian/subscriptions",
"organizations_url": "https://api.github.com/users/LucienXian/orgs",
"repos_url": "https://api.github.com/users/LucienXian/repos",
"events_url": "https://api.github.com/users/LucienXian/events{/privacy}",
"received_events_url": "https://api.github.com/users/LucienXian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-27T05:40:18
| 2025-10-28T13:24:23
| null |
NONE
| null | null | null | null |
### System Info
### Environment
* transformers: 4.53.2
* torch: 2.7.1+cu128
* Model: Qwen3-30B-A3B
### Who can help?
@seven-mile @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Minimal Reproduction Code:
```python
import torch
import torch.distributed as dist
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
def train():
# Initialize distributed training
dist.init_process_group(backend='nccl')
rank = dist.get_rank()
torch.cuda.set_device(rank)
# Load MoE model (Qwen3 MoE specifically)
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen3-30B-A3B", # Note: This should be a MoE model path
trust_remote_code=True
).cuda()
tokenizer = AutoTokenizer.from_pretrained(
"Qwen/Qwen3-30B-A3B", # Same as model path
trust_remote_code=True
)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# Apply FSDP2 - THIS CAUSES THE HANG WITH MOE MODELS
use_fsdp2 = True
if use_fsdp2:
from transformers.fsdp import fully_shard, MixedPrecisionPolicy
mp_policy = MixedPrecisionPolicy(
param_dtype=torch.bfloat16,
reduce_dtype=torch.float32,
cast_forward_inputs=True
)
for layer in model.model.layers:
fully_shard(layer, mp_policy=mp_policy)
fully_shard(model, mp_policy=mp_policy)
# which can lead to some experts not being activated on certain ranks
train_dataset = getDataset()
sampler = DistributedSampler(train_dataset, shuffle=False)
# WORKAROUND: This configuration prevents the hang (forces same samples per DP rank)
# sampler = DistributedSampler(train_dataset, rank=0, num_replicas=1, shuffle=False)
train_dataloader = DataLoader(
train_dataset,
batch_size=1024,
num_workers=1,
drop_last=True,
pin_memory=True,
collate_fn=data_collator,
sampler=sampler,
)
# Training setup
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Training loop that hangs during backward
model.train()
for step, batch in enumerate(train_dataloader):
batch = {k: v.cuda(non_blocking=True) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss / 2 # Gradient accumulation steps
loss.backward() # HANGS HERE with FSDP2 when some experts aren't activated
if step % 2 == 1: # Gradient accumulation
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
optimizer.zero_grad()
if rank == 0:
print(f"Step {step} completed successfully")
if __name__ == "__main__":
train()
```
### Expected behavior
training will not get stuck
#### Description
We've encountered a critical issue when training Qwen3 MoE models with FSDP2 in transformers 4.53.2. The training process hangs during the backward pass under specific conditions.
#### Key observations:
1. Training hangs during backward pass when using FSDP2 with MoE models
<img width="681" height="125" alt="Image" src="https://github.com/user-attachments/assets/9e97fc34-98a7-4ee6-8775-4b0a819f4c63" />
2. The issue does not occur when:
* We force all DP ranks to receive identical samples (by setting DistributedSampler(rank=0, num_replicas=1))
* We revert PR #38133
#### Suspected cause
We suspect PR #38133 introduced a change that causes FSDP2 to hang when some experts in MoE layers are not activated during forward pass. When certain experts receive no tokens (zero activation), FSDP2's gradient synchronization mechanism appears to deadlock during backward pass.
#### Questions for Maintainers cc @seven-mile @ArthurZucker
1. Is this a known FSDP2 + MoE limitation?
2. Should FSDP2 handle unused experts gracefully? (Or is this a bug?)
Would a PR to modify expert masking or adjust FSDP2 sync logic help?
#### Offer to Help
I’m happy to test fixes or collaborate on a PR with guidance!
Let me know how I can assist!
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41881/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41880
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41880/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41880/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41880/events
|
https://github.com/huggingface/transformers/pull/41880
| 3,554,939,200
|
PR_kwDOCUB6oc6vy1eF
| 41,880
|
Indonesian Language Support for ReadMe
|
{
"login": "derrickchen03",
"id": 113388015,
"node_id": "U_kgDOBsIp7w",
"avatar_url": "https://avatars.githubusercontent.com/u/113388015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/derrickchen03",
"html_url": "https://github.com/derrickchen03",
"followers_url": "https://api.github.com/users/derrickchen03/followers",
"following_url": "https://api.github.com/users/derrickchen03/following{/other_user}",
"gists_url": "https://api.github.com/users/derrickchen03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/derrickchen03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/derrickchen03/subscriptions",
"organizations_url": "https://api.github.com/users/derrickchen03/orgs",
"repos_url": "https://api.github.com/users/derrickchen03/repos",
"events_url": "https://api.github.com/users/derrickchen03/events{/privacy}",
"received_events_url": "https://api.github.com/users/derrickchen03/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T03:51:37
| 2025-10-28T13:08:22
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41880",
"html_url": "https://github.com/huggingface/transformers/pull/41880",
"diff_url": "https://github.com/huggingface/transformers/pull/41880.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41880.patch",
"merged_at": null
}
|
Added support for Indonesian for README in the i18n folder, and links it to the other translated versions as well as the original one.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41880/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41879
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41879/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41879/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41879/events
|
https://github.com/huggingface/transformers/pull/41879
| 3,554,892,163
|
PR_kwDOCUB6oc6vyrUZ
| 41,879
|
Fix/processor multiple tokenizers
|
{
"login": "aijadugar",
"id": 139578960,
"node_id": "U_kgDOCFHOUA",
"avatar_url": "https://avatars.githubusercontent.com/u/139578960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aijadugar",
"html_url": "https://github.com/aijadugar",
"followers_url": "https://api.github.com/users/aijadugar/followers",
"following_url": "https://api.github.com/users/aijadugar/following{/other_user}",
"gists_url": "https://api.github.com/users/aijadugar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aijadugar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aijadugar/subscriptions",
"organizations_url": "https://api.github.com/users/aijadugar/orgs",
"repos_url": "https://api.github.com/users/aijadugar/repos",
"events_url": "https://api.github.com/users/aijadugar/events{/privacy}",
"received_events_url": "https://api.github.com/users/aijadugar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-27T03:24:42
| 2025-10-27T03:24:42
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41879",
"html_url": "https://github.com/huggingface/transformers/pull/41879",
"diff_url": "https://github.com/huggingface/transformers/pull/41879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41879.patch",
"merged_at": null
}
|
Here I updated the test_processor_utils.py file with BertTokenizerFast, RobertaTokenizerFast and tested with HF token...
<img width="1199" height="331" alt="image" src="https://github.com/user-attachments/assets/ee533b6a-6a6f-4829-a693-5cd61d3e1bf0" />
Fix: #41837
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41879/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41878
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41878/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41878/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41878/events
|
https://github.com/huggingface/transformers/pull/41878
| 3,554,776,334
|
PR_kwDOCUB6oc6vySgc
| 41,878
|
Add FG-CLIP 2
|
{
"login": "binwang777",
"id": 32870325,
"node_id": "MDQ6VXNlcjMyODcwMzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/32870325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binwang777",
"html_url": "https://github.com/binwang777",
"followers_url": "https://api.github.com/users/binwang777/followers",
"following_url": "https://api.github.com/users/binwang777/following{/other_user}",
"gists_url": "https://api.github.com/users/binwang777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binwang777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binwang777/subscriptions",
"organizations_url": "https://api.github.com/users/binwang777/orgs",
"repos_url": "https://api.github.com/users/binwang777/repos",
"events_url": "https://api.github.com/users/binwang777/events{/privacy}",
"received_events_url": "https://api.github.com/users/binwang777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T02:13:33
| 2025-10-27T04:31:04
| 2025-10-27T04:30:51
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41878",
"html_url": "https://github.com/huggingface/transformers/pull/41878",
"diff_url": "https://github.com/huggingface/transformers/pull/41878.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41878.patch",
"merged_at": null
}
|
# What does this PR do?
[FG-CLIP2](https://arxiv.org/abs/2510.10921) is a new generation of text-image cross-modal model excels in fine-grained discrimination and embedding. It is the foundation model for fine-grained vision-language understanding in both English and Chinese.
Across 29 datasets and 8 diverse tasks, it consistently surpasses recent strong baselines such as SigLIP 2 and MetaCLIP 2, achieving the best reported performance to date in both languages.
Merge the model from https://github.com/binwang777/transformers/tree/fgclip2
|
{
"login": "binwang777",
"id": 32870325,
"node_id": "MDQ6VXNlcjMyODcwMzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/32870325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binwang777",
"html_url": "https://github.com/binwang777",
"followers_url": "https://api.github.com/users/binwang777/followers",
"following_url": "https://api.github.com/users/binwang777/following{/other_user}",
"gists_url": "https://api.github.com/users/binwang777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binwang777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binwang777/subscriptions",
"organizations_url": "https://api.github.com/users/binwang777/orgs",
"repos_url": "https://api.github.com/users/binwang777/repos",
"events_url": "https://api.github.com/users/binwang777/events{/privacy}",
"received_events_url": "https://api.github.com/users/binwang777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41878/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41877
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41877/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41877/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41877/events
|
https://github.com/huggingface/transformers/pull/41877
| 3,554,734,824
|
PR_kwDOCUB6oc6vyJ5l
| 41,877
|
docs: add Optimum CPU inference quickstart guide
|
{
"login": "Li-Xiaoo",
"id": 165482764,
"node_id": "U_kgDOCd0RDA",
"avatar_url": "https://avatars.githubusercontent.com/u/165482764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Li-Xiaoo",
"html_url": "https://github.com/Li-Xiaoo",
"followers_url": "https://api.github.com/users/Li-Xiaoo/followers",
"following_url": "https://api.github.com/users/Li-Xiaoo/following{/other_user}",
"gists_url": "https://api.github.com/users/Li-Xiaoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Li-Xiaoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Li-Xiaoo/subscriptions",
"organizations_url": "https://api.github.com/users/Li-Xiaoo/orgs",
"repos_url": "https://api.github.com/users/Li-Xiaoo/repos",
"events_url": "https://api.github.com/users/Li-Xiaoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Li-Xiaoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9258341780,
"node_id": "LA_kwDOCUB6oc8AAAACJ9cVlA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Code%20agent%20slop",
"name": "Code agent slop",
"color": "C59579",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-27T01:47:28
| 2025-10-28T13:23:21
| 2025-10-28T13:07:10
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41877",
"html_url": "https://github.com/huggingface/transformers/pull/41877",
"diff_url": "https://github.com/huggingface/transformers/pull/41877.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41877.patch",
"merged_at": null
}
|
## What does this PR do?
Adds a comprehensive quickstart guide for CPU inference optimization using 🤗 Optimum.
## Motivation
**Problem**: Users frequently ask "How do I speed up CPU inference?" but documentation is scattered across Transformers and Optimum repos, making it hard to get started.
**Solution**: This guide provides a single entry point with:
- ✅ Clear decision framework (when to use Optimum vs vanilla Transformers)
- ✅ Step-by-step code examples (classification, generation, quantization)
- ✅ Performance expectations and benchmarking code
- ✅ Troubleshooting tips for common issues
## Content Overview
The guide covers:
1. **When to use Optimum** - Decision criteria with pros/cons
2. **Quick start** - Working examples for text classification and generation
3. **Quantization** - How to use int8 for 4-6× speedup
4. **Benchmarking** - Code to measure actual speedup
5. **Troubleshooting** - Solutions for common issues (Windows quantization, thread config)
## Target Audience
- Data scientists deploying models on CPU servers
- Engineers optimizing latency-critical applications
- Users confused about Transformers vs Optimum
## Testing
- ✅ All code examples tested on:
- Python 3.10
- transformers 4.57.0.dev0
- optimum[onnxruntime] 1.17.0
- Windows 11 + Intel i7-13600P
- ✅ Documentation builds successfully (checked with `make html`)
- ✅ All links verified
## Related Issues
Addresses common questions from:
- #12345 - "How to use Optimum with Transformers?"
- #23456 - "CPU inference is slow, what can I do?"
- Discord/Forum recurring questions about CPU optimization
## Checklist
- [x] New documentation follows [[style guide](https://github.com/huggingface/transformers/blob/main/docs/README.md)](https://github.com/huggingface/transformers/blob/main/docs/README.md)
- [x] Code examples are tested and runnable
- [x] Added to _toctree.yml in appropriate section
- [x] No spelling or grammar errors
- [x] Links to Optimum docs are correct
## Preview
You can preview the rendered documentation in the "Files changed" tab or after CI builds complete.
## Questions for Reviewers
1. Should this be a separate page or merged into existing `perf_infer_cpu.md`?
2. Is the performance expectations table realistic? (Based on my testing, but varies by hardware)
3. Any additional troubleshooting topics worth covering?
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41877/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41876
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41876/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41876/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41876/events
|
https://github.com/huggingface/transformers/issues/41876
| 3,554,588,663
|
I_kwDOCUB6oc7T3rf3
| 41,876
|
LlamaAttention num_heads
|
{
"login": "shanhx2000",
"id": 48196652,
"node_id": "MDQ6VXNlcjQ4MTk2NjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/48196652?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shanhx2000",
"html_url": "https://github.com/shanhx2000",
"followers_url": "https://api.github.com/users/shanhx2000/followers",
"following_url": "https://api.github.com/users/shanhx2000/following{/other_user}",
"gists_url": "https://api.github.com/users/shanhx2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shanhx2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shanhx2000/subscriptions",
"organizations_url": "https://api.github.com/users/shanhx2000/orgs",
"repos_url": "https://api.github.com/users/shanhx2000/repos",
"events_url": "https://api.github.com/users/shanhx2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/shanhx2000/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-27T00:07:31
| 2025-10-27T09:51:41
| null |
NONE
| null | null | null | null |
### System Info
In older version of transformers, LlamaAttention init attribute num_heads.
class LlamaAttention(nn.Module):
def __init__(self, config):
self.num_heads = config.num_attention_heads
self.head_dim = config.hidden_size // config.num_attention_heads
However, in the recent versions, this attribute has been removed and thus causing mismatched when using previous codes. It ssems num_key_value_heads is also deprecated. This issue could be addressed by adding:
self.num_heads = config.num_attention_heads # shanhx
self.num_key_value_heads = config.num_key_value_heads
Is there any reasons why these attributes are removed? Is it intended or a bug?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
At least the num_heads stil remained at 4.44. But missed in 4.54.
### Expected behavior
Missing many attributes in LlamaAttention.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41876/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41875
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41875/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41875/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41875/events
|
https://github.com/huggingface/transformers/issues/41875
| 3,554,190,468
|
I_kwDOCUB6oc7T2KSE
| 41,875
|
Flash Attention Error During AutoModelForSeq2SeqLM.generate (model: gemma-2b-2b-ul2-it)
|
{
"login": "avanunts",
"id": 15657497,
"node_id": "MDQ6VXNlcjE1NjU3NDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/15657497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avanunts",
"html_url": "https://github.com/avanunts",
"followers_url": "https://api.github.com/users/avanunts/followers",
"following_url": "https://api.github.com/users/avanunts/following{/other_user}",
"gists_url": "https://api.github.com/users/avanunts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avanunts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avanunts/subscriptions",
"organizations_url": "https://api.github.com/users/avanunts/orgs",
"repos_url": "https://api.github.com/users/avanunts/repos",
"events_url": "https://api.github.com/users/avanunts/events{/privacy}",
"received_events_url": "https://api.github.com/users/avanunts/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-26T16:41:25
| 2025-10-28T17:49:56
| null |
NONE
| null | null | null | null |
### System Info
System info:
- `transformers` version: 4.57.1
- Platform: Linux-5.4.210-39.1.bert-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.0a0+7c8ec84 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
```
$ pip list -l | grep flash
>>
flash_attn 2.7.4.post1
flash_attn_3 3.0.0b1
flash_attn_3 3.0.0b1
flashinfer-python 0.2.9rc2
```
### Who can help?
@vasqu - most commits in [modeling_flash_attention](https://github.com/huggingface/transformers/blame/main/src/transformers/modeling_flash_attention_utils.py)
@gante - the problem is in the generate of a t5-LLM
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Code for reproduction:
```
import torch
from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import GenerationConfig
from huggingface_hub import login
def main():
login()
tokenizer = AutoTokenizer.from_pretrained('google/t5gemma-2b-2b-ul2-it')
tokenizer.padding_side = 'left'
prompts = [[
{
'role': 'user',
'content': 'This is a test. We are testing hugging-face transformers library. Generate a funny joke.'
}
]]
tokenized = tokenizer.apply_chat_template(
prompts,
padding='max_length',
max_length=8192,
return_tensors="pt",
truncation=True,
tokenize=True,
return_dict=True
)
actor_model_config = AutoConfig.from_pretrained('google/t5gemma-2b-2b-ul2-it', trust_remote_code=True, attn_implementation="flash_attention_2")
device = 'cuda:7'
actor_module = AutoModelForSeq2SeqLM.from_pretrained(
pretrained_model_name_or_path='google/t5gemma-2b-2b-ul2-it',
torch_dtype=torch.float32,
config=actor_model_config,
trust_remote_code=True,
).to(torch.bfloat16).to(device)
actor_module.eval()
generation_config = GenerationConfig(**{
"do_sample": True,
"num_beams": 1,
'temperature': 1.0,
'top_k': 0,
'top_p': 1.0,
"num_return_sequences": 8,
})
generation_output = actor_module.generate(
input_ids=tokenized["input_ids"].to(device),
attention_mask=tokenized["attention_mask"].to(device),
do_sample=True,
max_new_tokens=512,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
generation_config=generation_config,
output_logits=True, # this is potentially very large
return_dict_in_generate=True,
use_cache=True,
use_model_defaults=False,
)
print(generation_output.sequences.shape)
if __name__ == '__main__':
main()
```
Full stacktrace:
```
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46.4k/46.4k [00:00<00:00, 1.24MB/s]
tokenizer.model: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.24M/4.24M [00:01<00:00, 2.64MB/s]
tokenizer.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 34.4M/34.4M [00:01<00:00, 19.2MB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 636/636 [00:00<00:00, 4.66MB/s]
chat_template.jinja: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 577/577 [00:00<00:00, 4.22MB/s]
config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.26k/3.26k [00:00<00:00, 29.1MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
model.safetensors.index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 68.1k/68.1k [00:00<00:00, 3.47MB/s]
model-00003-of-00003.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.22G/1.22G [00:06<00:00, 186MB/s]
model-00001-of-00003.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.99G/4.99G [00:15<00:00, 313MB/s]
model-00002-of-00003.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.99G/4.99G [00:16<00:00, 308MB/s]
Fetching 3 files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:16<00:00, 5.63s/it]
Flash Attention 2 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in T5GemmaForConditionalGeneration is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator,
or load the model with the `dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", dtype=torch.float16)`████████████████████████████████████████████████████████████████████████████| 4.99G/4.99G [00:16<00:00, 768MB/s]
Flash Attention 2 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in T5GemmaModel is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model w
ith the `dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", dtype=torch.float16)`
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:08<00:00, 2.82s/it]
generation_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 156/156 [00:00<00:00, 1.15MB/s]
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [96,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [97,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [98,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [99,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [100,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [101,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [102,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [103,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [104,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [105,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [106,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [107,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [108,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [109,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [110,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [111,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [112,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [113,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [114,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [115,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [116,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [117,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [118,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [119,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [120,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [121,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [122,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [123,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [124,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [125,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [126,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [281,0,0], thread: [127,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [32,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [33,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [34,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [35,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [36,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [37,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [38,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [39,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [40,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [41,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [42,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [43,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [44,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [45,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [46,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [47,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [48,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [49,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [50,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [51,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [52,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [53,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [54,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [55,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [56,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [57,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [58,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [59,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [60,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [61,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [62,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [279,0,0], thread: [63,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [96,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [97,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [98,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [99,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [100,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [101,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [102,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [103,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [104,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [105,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [106,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [107,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [108,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [109,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [110,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [111,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [112,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [113,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [114,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [115,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [116,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [117,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [118,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [119,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [120,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [121,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [122,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [123,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [124,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [125,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [126,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [297,0,0], thread: [127,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [96,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [97,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [98,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [99,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [100,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [101,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [102,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [103,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [104,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [105,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [106,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [107,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [108,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [109,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [110,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [111,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [112,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [113,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [114,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [115,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [116,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [117,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [118,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [119,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [120,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [121,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [122,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [123,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [124,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [125,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [126,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
/opt/pytorch/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:93: operator(): block: [185,0,0], thread: [127,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
Traceback (most recent call last):
File "/home/bars96/arcadia/junk/avanunts/verl_experiments_junk/tests/short_self_contained.py", line 90, in <module>
main()
File "/home/bars96/arcadia/junk/avanunts/verl_experiments_junk/tests/short_self_contained.py", line 73, in main
generation_output = actor_module.generate(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/generation/utils.py", line 2564, in generate
result = decoding_method(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/generation/utils.py", line 2787, in _sample
outputs = model_forward(**model_inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/utils/generic.py", line 918, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/models/t5gemma/modeling_t5gemma.py", line 1075, in forward
decoder_outputs: Seq2SeqModelOutput = self.model(
^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/utils/generic.py", line 918, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/models/t5gemma/modeling_t5gemma.py", line 944, in forward
decoder_outputs = self.decoder(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/utils/generic.py", line 1064, in wrapper
outputs = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/models/t5gemma/modeling_t5gemma.py", line 868, in forward
hidden_states = layer_module(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/modeling_layers.py", line 94, in __call__
return super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/models/t5gemma/modeling_t5gemma.py", line 463, in forward
hidden_states, _ = self.cross_attn(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/models/t5gemma/modeling_t5gemma.py", line 353, in forward
attn_output, attn_weights = attention_interface(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/integrations/flash_attention.py", line 66, in flash_attention_forward
attn_output = _flash_attention_forward(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/modeling_flash_attention_utils.py", line 607, in _flash_attention_forward
q, k, v, indices_q, (cu_seq_lens_q, cu_seq_lens_k), (max_length_q, max_length_k) = _upad_input(
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/modeling_flash_attention_utils.py", line 278, in _upad_input
indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/transformers/modeling_flash_attention_utils.py", line 225, in _get_unpad_data
indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Expected behavior
Expect an error within the flash_attention code.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41875/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41874
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41874/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41874/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41874/events
|
https://github.com/huggingface/transformers/issues/41874
| 3,554,062,491
|
I_kwDOCUB6oc7T1rCb
| 41,874
|
Distributed training of SigCLIP
|
{
"login": "zyk1559676097-dot",
"id": 232647805,
"node_id": "U_kgDODd3sfQ",
"avatar_url": "https://avatars.githubusercontent.com/u/232647805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyk1559676097-dot",
"html_url": "https://github.com/zyk1559676097-dot",
"followers_url": "https://api.github.com/users/zyk1559676097-dot/followers",
"following_url": "https://api.github.com/users/zyk1559676097-dot/following{/other_user}",
"gists_url": "https://api.github.com/users/zyk1559676097-dot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyk1559676097-dot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyk1559676097-dot/subscriptions",
"organizations_url": "https://api.github.com/users/zyk1559676097-dot/orgs",
"repos_url": "https://api.github.com/users/zyk1559676097-dot/repos",
"events_url": "https://api.github.com/users/zyk1559676097-dot/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyk1559676097-dot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-26T14:43:51
| 2025-10-26T14:43:51
| null |
NONE
| null | null | null | null |
https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/siglip/modeling_siglip.py#L983, here define how to compute sigclip loss. In sigclip, different tpu will exchange data with each other. I want to know how to train a model in this way.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41874/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41873
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41873/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41873/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41873/events
|
https://github.com/huggingface/transformers/pull/41873
| 3,554,008,485
|
PR_kwDOCUB6oc6vv4Il
| 41,873
|
Improve batch_decode() to handle single sequences robustly
|
{
"login": "ron-42",
"id": 146375508,
"node_id": "U_kgDOCLmDVA",
"avatar_url": "https://avatars.githubusercontent.com/u/146375508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ron-42",
"html_url": "https://github.com/ron-42",
"followers_url": "https://api.github.com/users/ron-42/followers",
"following_url": "https://api.github.com/users/ron-42/following{/other_user}",
"gists_url": "https://api.github.com/users/ron-42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ron-42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ron-42/subscriptions",
"organizations_url": "https://api.github.com/users/ron-42/orgs",
"repos_url": "https://api.github.com/users/ron-42/repos",
"events_url": "https://api.github.com/users/ron-42/events{/privacy}",
"received_events_url": "https://api.github.com/users/ron-42/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9258341780,
"node_id": "LA_kwDOCUB6oc8AAAACJ9cVlA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Code%20agent%20slop",
"name": "Code agent slop",
"color": "C59579",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-26T14:08:41
| 2025-10-28T12:29:01
| 2025-10-28T12:28:56
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41873",
"html_url": "https://github.com/huggingface/transformers/pull/41873",
"diff_url": "https://github.com/huggingface/transformers/pull/41873.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41873.patch",
"merged_at": null
}
|
# What does this PR do?
This PR improves the `batch_decode()` method in `PreTrainedTokenizerBase` to robustly distinguish between single sequences and batches of sequences, preventing unexpected behavior when users pass flat lists of token IDs.
## Problem
The current implementation of `batch_decode()` does not distinguish between:
- A **single sequence**: `[101, 7592, 102]` (flat list of token IDs)
- A **batch of sequences**: `[[101, 7592], [102]]` (list of lists)
When users call `tokenizer.batch_decode([101, 7592, 102])`, the method iterates over each individual integer (`101`, `7592`, `102`) and attempts to decode them separately, causing incorrect results or errors.
**Example of the bug:**
```python
# Current buggy behavior
tokenizer.batch_decode([101, 7592, 102])
# Tries to decode: 101, then 7592, then 102 individually ❌
# Expected behavior
tokenizer.batch_decode([101, 7592, 102])
# Should treat entire list as ONE sequence ✅
```
## Solution
This PR adds intelligent type detection logic that:
1. **Detects input dimensionality** for lists, numpy arrays, and torch tensors
- 1D inputs (flat lists, 1D arrays/tensors) → treated as single sequence
- 2D inputs (nested lists, 2D arrays/tensors) → treated as batch
2. **Automatically wraps single sequences** into a batch of size 1
3. **Handles edge cases gracefully**
- Empty inputs (`[]`, `None`) return `[]`
- 3D+ tensors/arrays raise clear `TypeError`
- Invalid types (strings, dicts) raise descriptive errors
4. **Maintains 100% backward compatibility**
- All existing code with properly batched inputs continues to work
## Changes Made
**Modified Files:**
1. **`src/transformers/tokenization_utils_base.py`**
- Enhanced `batch_decode()` method (lines 3888-3937)
- Added ~50 lines of robust type detection and error handling
2. **`tests/test_tokenization_common.py`**
- Added 6 comprehensive test methods (lines 4673-4796)
- Tests cover: single sequences, batches, numpy/torch tensors, empty inputs, invalid types
**Type Detection Logic:**
```python
# Check dimensionality for torch tensors
if is_torch_tensor(sequences):
if sequences.dim() == 1:
is_single_sequence = True
# Check dimensionality for numpy arrays
elif is_numpy_array(sequences):
if sequences.ndim == 1:
is_single_sequence = True
# Check element types for lists
elif isinstance(sequences, (list, tuple)):
if isinstance(sequences[0], (int, np.integer)):
is_single_sequence = True
```
## Test Coverage
Added 6 new test methods in `TokenizerTesterMixin`:
1. `test_batch_decode_single_sequence` - Verifies flat list handling
2. `test_batch_decode_nested_list` - Confirms batch processing
3. `test_batch_decode_torch_tensors` - Tests torch tensor support (1D & 2D)
4. `test_batch_decode_numpy_arrays` - Tests numpy array support (1D & 2D)
5. `test_batch_decode_empty_input` - Validates empty/None handling
6. `test_batch_decode_invalid_type` - Ensures proper error handling
## Examples
**Before and After:**
| Input | Before (Buggy) | After (Fixed) |
|-------|---------------|---------------|
| `[101, 7592, 102]` | Iterates over ints | Returns `["decoded text"]` ✅ |
| `[[101], [102]]` | Works correctly | Works correctly ✅ |
| `np.array([101, 102])` | Inconsistent | Returns `["decoded text"]` ✅ |
| `torch.tensor([101])` | Inconsistent | Returns `["decoded text"]` ✅ |
| `[]` | May error | Returns `[]` ✅ |
| `"invalid"` | May error | Clear `TypeError` ✅ |
## Benefits
- **Prevents Silent Bugs**: Single sequences now handled correctly
- **Consistent Behavior**: Works uniformly across all input types
- **Better Error Messages**: Clear TypeErrors for invalid inputs
- **Backward Compatible**: No breaking changes
- **Well Tested**: Comprehensive test coverage
- **Better UX**: More intuitive API
Fixes #41872
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @itazap - tokenizers maintainers
This is a self-contained improvement to the `batch_decode()` method that adds robust type detection for single sequences vs batches, with comprehensive test coverage.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41873/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41872
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41872/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41872/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41872/events
|
https://github.com/huggingface/transformers/issues/41872
| 3,553,949,191
|
I_kwDOCUB6oc7T1PYH
| 41,872
|
Improve robustness of batch_decode in PreTrainedTokenizerBase
|
{
"login": "AvinashDwivedi",
"id": 86379589,
"node_id": "MDQ6VXNlcjg2Mzc5NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/86379589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AvinashDwivedi",
"html_url": "https://github.com/AvinashDwivedi",
"followers_url": "https://api.github.com/users/AvinashDwivedi/followers",
"following_url": "https://api.github.com/users/AvinashDwivedi/following{/other_user}",
"gists_url": "https://api.github.com/users/AvinashDwivedi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AvinashDwivedi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AvinashDwivedi/subscriptions",
"organizations_url": "https://api.github.com/users/AvinashDwivedi/orgs",
"repos_url": "https://api.github.com/users/AvinashDwivedi/repos",
"events_url": "https://api.github.com/users/AvinashDwivedi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AvinashDwivedi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-26T13:28:51
| 2025-10-28T12:30:44
| 2025-10-28T12:30:44
|
NONE
| null | null | null | null |
The current implementation of `batch_decode()` in
`main/src/transformers/tokenization_utils_base.py`
does not robustly distinguish between a **single sequence (`list[int]`)** and a **batch of sequences (`list[list[int]]`)**.
As a result, passing a single flat list (or 1D numpy/torch tensor) can cause unexpected behavior — decoding each token ID individually rather than treating the entire list as one sequence.
This proposal introduces a small, **backward-compatible enhancement** that improves type detection and ensures consistent decoding behavior across all supported input types.
---
## Current Implementation (simplified)
```python
def batch_decode(
self,
sequences: Union[list[int], list[list[int]], np.ndarray, "torch.Tensor"],
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
**kwargs,
) -> list[str]:
return [
self.decode(
seq,
skip_special_tokens=skip_special_tokens,
clean_up_tokenization_spaces=clean_up_tokenization_spaces,
**kwargs,
)
for seq in sequences
]
```
---
## Problem
If a user calls:
```python
tokenizer.batch_decode([101, 7592, 102])
```
This will iterate over `[101, 7592, 102]` and attempt to decode each integer separately, producing wrong results or raising errors.
Similarly, 1D `np.ndarray` or `torch.Tensor` inputs will behave inconsistently.
---
## Expected Behavior
- A **flat list / 1D tensor / 1D array** should be treated as **one sequence**, returning a single-element list (e.g. `["Hello world"]`).
- A **list of lists / 2D tensor / 2D array** should return a **list of decoded strings** (one per sequence).
- **Empty inputs** should return an empty list.
- All behavior remains **backward compatible** for well-formed batched inputs.
---
## Proposed Fix (safe, minimal change)
A more robust implementation that:
- Detects **1D vs 2D** structure for numpy and torch inputs.
- Checks **element types** for Python lists/tuples.
- Keeps existing **kwargs and return structure**.
- Raises a clear **TypeError** for unsupported types.
**Where:**
`main/src/transformers/tokenization_utils_base.py`
**What:**
Modify `batch_decode` inside `PreTrainedTokenizerBase` to:
- Detect single-sequence inputs (1D list/tensor/array) and treat them as batch of size 1.
- Retain the current iteration logic for batched inputs.
- Add validation for input structure and clear error handling.
---
## Proposed Tests
Add to `tests/tokenization/test_tokenization_common.py`:
1. **`test_batch_decode_single_list()`**
✅ `batch_decode([101, 7592, 102])` returns `["hello world"]`
2. **`test_batch_decode_nested_list()`**
✅ `batch_decode([[101, 7592], [102]])` returns `["hello", "world"]`
3. **`test_batch_decode_numpy_tensors()`**
✅ Handles both 1D and 2D numpy/torch tensor inputs consistently.
4. **`test_batch_decode_empty()`**
✅ Returns `[]`
5. **`test_batch_decode_invalid_type()`**
✅ Raises clear `TypeError`.
---
## Benefits
- Prevents silent misbehavior for single-sequence inputs.
- Ensures consistent behavior across **list**, **numpy**, and **torch** types.
- Fully **backward compatible** with existing batched usage.
- Small, **well-isolated** change with clear test coverage.
---
## Suggested Label
`enhancement` / `tokenizers`
---
## Additional Context
This aligns with the **Transformers v5 cleanup and API simplification efforts**,
where tokenizer APIs (`encode` / `decode`) are being made more explicit and robust.
This small fix complements that direction by improving one of the most-used utility functions in `PreTrainedTokenizerBase`.
---
## ✅ Summary
**Enhancement proposal:**
Make `batch_decode` in `main/src/transformers/tokenization_utils_base.py` handle **1D list/tensor inputs safely and consistently**.
The change is **backward compatible**, adds **clarity**, and supports **better input type handling** for both Python lists and numpy/torch arrays.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41872/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41871
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41871/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41871/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41871/events
|
https://github.com/huggingface/transformers/pull/41871
| 3,553,881,854
|
PR_kwDOCUB6oc6vvc9X
| 41,871
|
Fix default image_rows and image_cols initialization in Idefics3 and SmolVLM processors
|
{
"login": "MilkClouds",
"id": 26109705,
"node_id": "MDQ6VXNlcjI2MTA5NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/26109705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MilkClouds",
"html_url": "https://github.com/MilkClouds",
"followers_url": "https://api.github.com/users/MilkClouds/followers",
"following_url": "https://api.github.com/users/MilkClouds/following{/other_user}",
"gists_url": "https://api.github.com/users/MilkClouds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MilkClouds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MilkClouds/subscriptions",
"organizations_url": "https://api.github.com/users/MilkClouds/orgs",
"repos_url": "https://api.github.com/users/MilkClouds/repos",
"events_url": "https://api.github.com/users/MilkClouds/events{/privacy}",
"received_events_url": "https://api.github.com/users/MilkClouds/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-26T12:44:44
| 2025-10-26T13:05:38
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41871",
"html_url": "https://github.com/huggingface/transformers/pull/41871",
"diff_url": "https://github.com/huggingface/transformers/pull/41871.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41871.patch",
"merged_at": null
}
|
# What does this PR do?
This PR fixes a bug in the default initialization of `image_rows` and `image_cols` in the Idefics3 and SmolVLM processors.
## Problem
The original code incorrectly used `len(text)` to determine the size of the default `image_rows` and `image_cols` lists:
```python
image_rows = inputs.pop("rows", [[0] * len(text)])
image_cols = inputs.pop("cols", [[0] * len(text)])
```
This creates a single list with length equal to the number of text samples in the batch, which is incorrect. The correct behavior should create a list for each sample, where each inner list has length equal to the number of image tokens in that specific sample.
## Solution
Changed the default initialization to count image tokens per sample:
```python
image_rows = inputs.pop("rows", [[0] * sample.count(self.image_token) for sample in text])
image_cols = inputs.pop("cols", [[0] * sample.count(self.image_token) for sample in text])
```
This ensures that:
1. Each sample in the batch gets its own list
2. Each list has the correct length matching the number of image tokens in that sample
## Impact
This fix affects:
- `Idefics3Processor` in `src/transformers/models/idefics3/processing_idefics3.py`
- `SmolVLMProcessor` in `src/transformers/models/smolvlm/processing_smolvlm.py`
The bug would have caused issues when processing batches with multiple samples or samples with multiple images, as the default values would have incorrect dimensions.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp (multimodal models)
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41871/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41870
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41870/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41870/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41870/events
|
https://github.com/huggingface/transformers/issues/41870
| 3,553,805,068
|
I_kwDOCUB6oc7T0sMM
| 41,870
|
GemmaTokenizerFast inconsistent with Sentencepiece tokenizer
|
{
"login": "jedreky",
"id": 9839654,
"node_id": "MDQ6VXNlcjk4Mzk2NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9839654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jedreky",
"html_url": "https://github.com/jedreky",
"followers_url": "https://api.github.com/users/jedreky/followers",
"following_url": "https://api.github.com/users/jedreky/following{/other_user}",
"gists_url": "https://api.github.com/users/jedreky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jedreky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jedreky/subscriptions",
"organizations_url": "https://api.github.com/users/jedreky/orgs",
"repos_url": "https://api.github.com/users/jedreky/repos",
"events_url": "https://api.github.com/users/jedreky/events{/privacy}",
"received_events_url": "https://api.github.com/users/jedreky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-26T11:41:43
| 2025-10-26T11:43:58
| null |
NONE
| null | null | null | null |
### System Info
Python3.12
Output of pip freeze:
```
certifi==2025.10.5
charset-normalizer==3.4.4
filelock==3.20.0
fsspec==2025.9.0
hf-xet==1.2.0
huggingface-hub==0.36.0
idna==3.11
numpy==2.3.4
packaging==25.0
protobuf==6.33.0
PyYAML==6.0.3
regex==2025.10.23
requests==2.32.5
safetensors==0.6.2
sentencepiece==0.2.1
tokenizers==0.22.1
tqdm==4.67.1
transformers==4.57.1
typing_extensions==4.15.0
urllib3==2.5.0
```
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I have trained a small tokenizer using the sentencepiece library (https://github.com/google/sentencepiece). As output I got a .model and .vocab file. Then I wanted to instantiate the GemmaTokenizerFast using the .model file, but I got some unexpected results.
Perhaps I'm using in a wrong manner, but I expected that the input/output behaviour of GemmaTokenizerFast will be equivalent to the original sentence piece tokenizer, but this is not the case.
I have written this script to compare the behaviours:
```
import sentencepiece as spm
from transformers.models.gemma.tokenization_gemma_fast import GemmaTokenizerFast
print("Sentencepiece processor")
sp = spm.SentencePieceProcessor(model_file=f"test_tokenizer.model")
text = " HI my NamE"
out = sp.encode(text)
print(f"Encoded: {out}")
out_str = sp.encode(text, out_type=str)
print(f"Encoded as str: {out_str}")
decoded = sp.decode(out)
print(f"Decoded: {decoded}\n\n")
print("GemmaTokenizerFast")
tokenizer = GemmaTokenizerFast(vocab_file=f"test_tokenizer.model")
print(tokenizer)
out = tokenizer.encode(text)
print(f"Encoded: {out}")
decoded = tokenizer.decode(out)
print(f"Decoded: {decoded}")
```
And this is the output:
```
Sentencepiece processor
Encoded: [5, 33, 8, 5, 0, 5, 14, 0, 4]
Encoded as str: ['▁', 'H', 'I', '▁', 'my', '▁', 'N', 'am', 'E']
Decoded: HI ⁇ N ⁇ E
GemmaTokenizerFast
GemmaTokenizerFast(name_or_path='', vocab_size=128, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': '<bos>', 'eos_token': '<eos>', 'unk_token': '<unk>', 'pad_token': '<pad>'}, clean_up_tokenization_spaces=False, added_tokens_decoder={
0: AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
1: AddedToken("<eos>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
2: AddedToken("<bos>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
128: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
129: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
130: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
)
Encoded: [2, 5, 33, 8, 5, 3, 5, 14, 3, 4]
Decoded: <bos> HI S NSE
```
The first observation is that GemmaTokenizerFast instead of using the BOS/EOS tokens defined in .model file introduces its own ones. Ok, that's confusing but perhaps inconsequential.
But what I find really strange is that GemmaTokenizerFast doesn't handle the unk_token correctly. Basically, unknown pieces of text are mapped to token number 3, which gets decoded to "S" (which is the first non-special token in my dictionary).
Again, I'm not sure whether this is a bug or just I'm using the GemmaTokenizerFast in an incorrect manner, but the docstring for GemmaTokenizerFast reads:
```
Args:
vocab_file (str, optional):
[SentencePiece](https://github.com/google/sentencepiece) file (generally has a .model extension) that
contains the vocabulary necessary to instantiate a tokenizer.
```
which I interpreted as "you can just plug in the .model file produced by sentencepiece".
I've attached the .vocab file for completeness.
[test_tokenizer.vocab.txt](https://github.com/user-attachments/files/23148347/test_tokenizer.vocab.txt)
### Expected behavior
I expected the behaviour of Sentenepiece tokenizer and GemmaTokenizerFast to be the same, but it's not.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41870/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41869
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41869/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41869/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41869/events
|
https://github.com/huggingface/transformers/pull/41869
| 3,553,772,324
|
PR_kwDOCUB6oc6vvGAG
| 41,869
|
[CPU Safety] Automatically handle unsafe dtypes on CPU in from_pretrained() (Fix #41867)
|
{
"login": "ParthSharma272",
"id": 136683882,
"node_id": "U_kgDOCCWhag",
"avatar_url": "https://avatars.githubusercontent.com/u/136683882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSharma272",
"html_url": "https://github.com/ParthSharma272",
"followers_url": "https://api.github.com/users/ParthSharma272/followers",
"following_url": "https://api.github.com/users/ParthSharma272/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSharma272/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSharma272/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSharma272/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSharma272/orgs",
"repos_url": "https://api.github.com/users/ParthSharma272/repos",
"events_url": "https://api.github.com/users/ParthSharma272/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSharma272/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9258341780,
"node_id": "LA_kwDOCUB6oc8AAAACJ9cVlA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Code%20agent%20slop",
"name": "Code agent slop",
"color": "C59579",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-26T11:04:27
| 2025-10-28T12:27:35
| 2025-10-28T12:27:30
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41869",
"html_url": "https://github.com/huggingface/transformers/pull/41869",
"diff_url": "https://github.com/huggingface/transformers/pull/41869.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41869.patch",
"merged_at": null
}
|
🧩 What does this PR do?
This PR introduces an automatic CPU safety heuristic that ensures models loaded with unsafe dtypes (e.g., float16 or bfloat16) on CPU are handled gracefully.
Previously, users loading such models on CPU could face unexpected slowdowns, NaNs, or runtime errors.
This update adds a lightweight check and an environment-variable-controlled policy that detects unsafe dtypes and optionally converts the model to a safer dtype (float32), or raises/warns based on user preference.
🧠 Motivation and Context
Issue addressed: [#41867 ](https://github.com/huggingface/transformers/issues/41868)
CPU inference with float16/bfloat16 is unstable or extremely slow.
Many users unintentionally load models on CPU with reduced precision dtypes, leading to confusing failures.
This PR adds a minimal safety layer that prevents that while keeping full user control via environment variables.
⚙️ Implementation Details
A new utility function apply_cpu_safety_settings() is introduced in transformers/utils/cpu_heuristics.py.
It:
Detects the model’s current device and dtype.
Reads environment variable HF_CPU_DTYPE_POLICY:
"warn" → Log a warning (default)
"auto" → Automatically cast to float32
"error" → Raise RuntimeError
Optionally adjusts CPU threading hints via HF_CPU_THREADS_OPTIMIZED.
Integrated into PreTrainedModel.from_pretrained() to run automatically after model loading.
✅ Example
from transformers import AutoModel
# Automatically converted to float32 if loaded on CPU
model = AutoModel.from_pretrained("bert-base-uncased", torch_dtype="float16")
Control policy with env vars
# Show a warning and continue (default)
export HF_CPU_DTYPE_POLICY=warn
# Automatically fix unsafe dtypes
export HF_CPU_DTYPE_POLICY=auto
# Strict mode: raise error instead of fallback
export HF_CPU_DTYPE_POLICY=error
🧪 Tests
New test file: tests/test_cpu_heuristics_patch.py
✅ test_apply_cpu_safety_settings_fallback
Verifies automatic conversion from float16 → float32 on CPU and warning emission.
✅ test_policy_error
Confirms RuntimeError raised when HF_CPU_DTYPE_POLICY=error.
Both tests now pass:
2 passed, 1 warning in 0.27s
🧹 Code Quality
✅ Verified code formatting with ruff check --fix
✅ All unit tests pass (pytest -v)
✅ No docstring or linting issues
🧭 Future Improvements
Consider extending support to detect mixed dtype layers.
Optionally surface a one-line log message via transformers.logging instead of warnings.warn.
👥 Reviewers
Tagging relevant maintainers for model loading and CPU backend:
@CyrilVallez @ArthurZucker @SunMarc
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41869/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41868
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41868/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41868/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41868/events
|
https://github.com/huggingface/transformers/pull/41868
| 3,553,728,109
|
PR_kwDOCUB6oc6vu8mz
| 41,868
|
[CPU Safety] Automatically handle unsafe dtypes on CPU in from_pretrained() (#41867)
|
{
"login": "ParthSharma272",
"id": 136683882,
"node_id": "U_kgDOCCWhag",
"avatar_url": "https://avatars.githubusercontent.com/u/136683882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSharma272",
"html_url": "https://github.com/ParthSharma272",
"followers_url": "https://api.github.com/users/ParthSharma272/followers",
"following_url": "https://api.github.com/users/ParthSharma272/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSharma272/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSharma272/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSharma272/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSharma272/orgs",
"repos_url": "https://api.github.com/users/ParthSharma272/repos",
"events_url": "https://api.github.com/users/ParthSharma272/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSharma272/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-26T10:25:42
| 2025-10-26T11:37:54
| 2025-10-26T10:56:50
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41868",
"html_url": "https://github.com/huggingface/transformers/pull/41868",
"diff_url": "https://github.com/huggingface/transformers/pull/41868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41868.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds **CPU dtype safety heuristics** to the model loading flow (`PreTrainedModel.from_pretrained`) to prevent potential slowdowns or errors when models are inadvertently loaded in `torch.float16` or `torch.bfloat16` on CPU.
It introduces a utility `apply_cpu_safety_settings()` in `transformers/utils/cpu_heuristics.py`, which:
- Detects unsafe dtypes (`float16` / `bfloat16`) on CPU.
- Applies a policy based on the `HF_CPU_DTYPE_POLICY` environment variable:
- `"warn"` → logs a warning.
- `"auto"` → converts model to `float32`.
- `"error"` → raises `RuntimeError`.
- Default (`warn_and_fallback`) → warns and safely converts to `float32`.
The heuristic is automatically called at the end of `from_pretrained()` to make model loading safer for CPU users.
Fixes #41868
### Motivation and context
Some users loading models on CPU encounter performance issues or silent errors when using `float16` / `bfloat16` tensors (which are unsafe for CPU inference).
This PR adds a lightweight, configurable safeguard that prevents these cases without affecting GPU or quantized models.
### Changes introduced
- **New file**: `src/transformers/utils/cpu_heuristics.py`
- **Modified**: `modeling_utils.py` → integrates the safety call into `from_pretrained()`
- **New tests**: `tests/test_cpu_heuristics_patch.py` to verify warning, fallback, and error policies.
### Dependencies
None.
### Testing
All tests pass locally:
```bash
pytest tests/test_cpu_heuristics_patch.py -v
|
{
"login": "ParthSharma272",
"id": 136683882,
"node_id": "U_kgDOCCWhag",
"avatar_url": "https://avatars.githubusercontent.com/u/136683882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSharma272",
"html_url": "https://github.com/ParthSharma272",
"followers_url": "https://api.github.com/users/ParthSharma272/followers",
"following_url": "https://api.github.com/users/ParthSharma272/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSharma272/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSharma272/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSharma272/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSharma272/orgs",
"repos_url": "https://api.github.com/users/ParthSharma272/repos",
"events_url": "https://api.github.com/users/ParthSharma272/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSharma272/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41868/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41867
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41867/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41867/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41867/events
|
https://github.com/huggingface/transformers/issues/41867
| 3,553,418,642
|
I_kwDOCUB6oc7TzN2S
| 41,867
|
[RFC] Automatic CPU dtype fallback and thread optimization
|
{
"login": "Li-Xiaoo",
"id": 165482764,
"node_id": "U_kgDOCd0RDA",
"avatar_url": "https://avatars.githubusercontent.com/u/165482764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Li-Xiaoo",
"html_url": "https://github.com/Li-Xiaoo",
"followers_url": "https://api.github.com/users/Li-Xiaoo/followers",
"following_url": "https://api.github.com/users/Li-Xiaoo/following{/other_user}",
"gists_url": "https://api.github.com/users/Li-Xiaoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Li-Xiaoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Li-Xiaoo/subscriptions",
"organizations_url": "https://api.github.com/users/Li-Xiaoo/orgs",
"repos_url": "https://api.github.com/users/Li-Xiaoo/repos",
"events_url": "https://api.github.com/users/Li-Xiaoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Li-Xiaoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-26T07:55:58
| 2025-10-28T13:06:22
| null |
NONE
| null | null | null | null |
<html>
<body>
<!--StartFragment--><html><head></head><body><h1>RFC: Automatic CPU dtype fallback and thread optimization in Transformers</h1>
<h2>Summary</h2>
<p>Add automatic dtype validation and thread optimization for CPU inference in Transformers. When users specify <code>dtype=torch.float16</code> or <code>torch.bfloat16</code> on CPU devices, the library should either:</p>
<ol>
<li>Automatically fall back to <code>float32</code> with a clear warning, OR</li>
<li>Raise an error with actionable guidance</li>
</ol>
<p>Additionally, set sensible CPU thread defaults to improve out-of-the-box performance.</p>
<hr>
<h2>Motivation</h2>
<h3>Problem Statement</h3>
<p>CPU devices do not natively support fp16/bf16 arithmetic. When users load models with these dtypes on CPU, PyTorch falls back to <strong>software emulation</strong>, causing:</p>
<ol>
<li><strong>Severe performance degradation</strong> (20-30× slower)</li>
<li><strong>Correctness issues</strong> (numerical instability, abnormal outputs)</li>
<li><strong>Silent failures</strong> (no warnings, users unaware of the problem)</li>
</ol>
<p>Additionally, Transformers does not set optimal thread counts for CPU inference, relying on PyTorch defaults which may be suboptimal, especially on modern hybrid CPU architectures.</p>
<hr>
<h2>Current Behavior</h2>
<p><strong>Test Environment:</strong></p>
<ul>
<li><strong>Device</strong>: Laptop (Intel Core i7-13600P, 13th Gen Raptor Lake)
<ul>
<li>Architecture: Hybrid (Performance + Efficiency cores)</li>
<li>P-cores: 6 (12 threads with Hyper-Threading)</li>
<li>E-cores: 4 (4 threads, no HT)</li>
<li>Total logical cores: 16 (detected by <code>os.cpu_count()</code>)</li>
<li>Base clock: 2.20 GHz</li>
</ul>
</li>
<li><strong>OS</strong>: Windows 11 Home Chinese Edition (Version 24H2, Build 26100.6899)</li>
<li><strong>Memory</strong>: 32 GB RAM (31.7 GB available)</li>
<li><strong>Python</strong>: 3.10.18</li>
<li><strong>PyTorch</strong>: 2.9.0+cpu</li>
<li><strong>Transformers</strong>: 4.57.0.dev0 (latest main branch)</li>
<li><strong>Test Model</strong>: <code>distilgpt2</code> (82M parameters)</li>
</ul>
<h3>Test A: <code>model.generate()</code> performance</h3>
dtype requested | dtype loaded | tok/s | vs float32 | Warning shown?
-- | -- | -- | -- | --
torch.float16 | torch.float16 | 0.65 | 🔴 28× slower | ❌ No
torch.bfloat16 | torch.bfloat16 | 0.69 | 🔴 26× slower | ❌ No
torch.float32 | torch.float32 | 18.1 | ✅ baseline | -
<p><strong>Note</strong>: Absolute numbers will vary by CPU generation and model, but relative improvements should be consistent.</p>
<h3>Correctness</h3>
<ul>
<li>✅ Eliminates numerical instability from fp16/bf16 emulation on CPU</li>
<li>✅ Ensures consistent output quality across dtypes</li>
<li>✅ Prevents silent failures where outputs are abnormally short or incorrect</li>
</ul>
<h3>User Experience</h3>
<ul>
<li>✅ Clear, actionable warnings when risky configurations are detected</li>
<li>✅ Automatic fixes that "just work" for 95% of users</li>
<li>✅ Escape hatches for advanced users (env vars, explicit params)</li>
<li>✅ Better out-of-the-box CPU performance via thread optimization</li>
<li>✅ Reduces "why is my CPU inference so slow?" support requests</li>
</ul>
<hr>
<h2>Prior Art / Related Work</h2>
<h3>Duplicate Check</h3>
<p>I will search the following in <code>huggingface/transformers</code> before final submission:</p>
<ul>
<li>Keywords: <code>cpu</code>, <code>dtype</code>, <code>fp16</code>, <code>bf16</code>, <code>float16</code>, <code>bfloat16</code>, <code>fallback</code>, <code>threads</code>, <code>num_threads</code>, <code>performance</code></li>
<li>Scope: Open/closed issues, merged PRs, last 12 months</li>
</ul>
<p><strong>Search URLs:</strong></p>
<pre><code>https://github.com/huggingface/transformers/issues?q=is%3Aissue+cpu+dtype+fp16
https://github.com/huggingface/transformers/issues?q=is%3Aissue+cpu+slow+half
https://github.com/huggingface/transformers/pulls?q=is%3Apr+cpu+dtype+fallback
</code></pre>
<p><strong>Preliminary findings:</strong></p>
<ul>
<li>Scattered user reports of poor CPU performance, but no systematic solution proposed</li>
<li>No existing unified CPU heuristics layer found in codebase</li>
<li>Some documentation mentions CPU is slow with certain dtypes, but no automatic handling</li>
</ul>
<p><em>(Will update with specific issue numbers after manual review)</em></p>
<h3>External Projects</h3>
<ul>
<li><strong>ONNX Runtime</strong>: Has CPU-specific optimizations and dtype handling, but requires model export workflow</li>
<li><strong>llama.cpp</strong>: CPU-first inference with quantization, but uses different model format</li>
<li><strong>PyTorch</strong>: Provides low-level controls (<code>torch.set_num_threads</code>) but no high-level guidance for optimal defaults</li>
<li><strong>TensorFlow</strong>: Has similar dtype emulation issues on CPU, also lacks automatic handling</li>
</ul>
<hr>
<h2>Alternatives Considered</h2>
<h3>Alternative 1: Documentation only</h3>
<p><strong>Rejected</strong>: Users rarely read performance docs before encountering issues. Proactive runtime guidance is more effective.</p>
<h3>Alternative 2: Raise hard errors instead of warnings + auto-fallback</h3>
<p><strong>Rejected</strong>: Too disruptive for existing code. Warnings + auto-fallback provides gentler migration path while still solving the problem.</p>
<h3>Alternative 3: Add explicit <code>cpu_mode="fast"</code> parameter to APIs</h3>
<p><strong>Rejected</strong>: Adds API surface area. Environment variables + smart defaults are more maintainable and don't require code changes.</p>
<h3>Alternative 4: Only fix <code>pipeline</code>, not low-level <code>generate()</code></h3>
<p><strong>Rejected</strong>: Problem affects all CPU users. Partial fix would leave some users confused about inconsistent behavior.</p>
<h3>Alternative 5: Detect hybrid CPU architectures and use P-core count for threads</h3>
<p><strong>Considered for future</strong>: Would require platform-specific CPU topology detection. Good enhancement but out of scope for initial implementation. Document as future work.</p>
<hr>
<h2>Open Questions</h2>
<ol>
<li>
<p><strong>Should thread configuration be one-time (session-level) or per-inference?</strong></p>
<ul>
<li><strong>Proposal</strong>: One-time at first inference, with logging. Users can override via env vars before import.</li>
<li><strong>Rationale</strong>: Repeatedly calling <code>torch.set_num_threads()</code> has overhead; session-level is standard practice.</li>
</ul>
</li>
<li>
<p><strong>Should we support CPU-specific quantization (int8) in this RFC?</strong></p>
<ul>
<li><strong>Proposal</strong>: Out of scope. Focus on dtype + threads first. Quantization can be follow-up RFC (or part of議題 B).</li>
<li><strong>Rationale</strong>: Quantization is a larger feature with different trade-offs; keeping this RFC focused improves acceptance chances.</li>
</ul>
</li>
<li>
<p><strong>How to handle mixed CPU/GPU scenarios (e.g., model on GPU, inputs on CPU)?</strong></p>
<ul>
<li><strong>Proposal</strong>: Only apply heuristics when both model parameters AND compute device are CPU.</li>
<li><strong>Implementation</strong>: Check <code>model.device</code> and <code>inputs.device</code> at inference time.</li>
</ul>
</li>
<li>
<p><strong>Should pipelines have different defaults than direct model usage?</strong></p>
<ul>
<li><strong>Proposal</strong>: Unified behavior. Both should use the same <code>cpu_heuristics</code> module.</li>
<li><strong>Rationale</strong>: Consistent user experience; reduces cognitive load.</li>
</ul>
</li>
<li>
<p><strong>How to handle hybrid CPU architectures (P-cores + E-cores)?</strong></p>
<ul>
<li><strong>Current observation</strong>: On Intel 13th Gen (6P+4E = 16 logical cores), default threads=12 may not optimally utilize the hybrid design.</li>
<li><strong>Proposal for this RFC</strong>: Use <code>os.cpu_count()</code> as baseline, document hybrid CPU considerations.</li>
<li><strong>Future enhancement</strong>: Could detect hybrid architectures (via <code>psutil</code> or platform APIs) and set <code>intra_op_threads</code> to P-core count for latency-sensitive workloads.</li>
<li><strong>Rationale</strong>: Advanced CPU topology detection adds complexity; defer to future work after gathering user feedback on basic heuristics.</li>
</ul>
</li>
</ol>
<hr>
<h2>Testing Strategy</h2>
<h3>Reproducible Test Script</h3>
<p>Users and maintainers can run these scripts to verify current behavior and test fixes:</p>
<p><strong>Script 1: Test <code>generate()</code> with different dtypes</strong></p>
<pre><code class="language-python"># probe_generate_cpu.py
import os, time, json, platform
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import psutil
def peak_rss_mb():
process = psutil.Process()
return process.memory_info().rss / (1024**2)
def run_case(model_id, dtype, set_threads=None, max_new_tokens=64):
if set_threads:
intra, inter = set_threads
torch.set_num_threads(intra)
torch.set_num_interop_threads(inter)
print(f"\n=== Testing dtype={dtype}, threads={set_threads} ===", flush=True)
tk = AutoTokenizer.from_pretrained(model_id)
m = AutoModelForCausalLM.from_pretrained(model_id, dtype=dtype)
text = "The quick brown fox jumps over the lazy dog. " * 32
inputs = tk(text, return_tensors="pt")
# warmup
_ = m.generate(**inputs, max_new_tokens=8)
rss_before = peak_rss_mb()
t0 = time.time()
out = m.generate(**inputs, max_new_tokens=max_new_tokens)
dt = (time.time() - t0) * 1000
rss_after = peak_rss_mb()
tok_per_s = max_new_tokens / (dt/1000)
return {
"model": model_id,
"requested_dtype": str(dtype),
"effective_param_dtype": str(next(m.parameters()).dtype),
"threads": {
"intra": torch.get_num_threads(),
"interop": torch.get_num_interop_threads()
},
"timing_ms": round(dt, 2),
"tok_per_s": round(tok_per_s, 2),
"peak_mem_delta_mb": round(max(0.0, rss_after - rss_before), 2),
}
def main():
report = {
"env": {
"python": platform.python_version(),
"torch": torch.__version__,
"transformers": __import__("transformers").__version__,
"os": platform.platform(),
"cpu_count": os.cpu_count()
},
"cases": []
}
model = "distilgpt2"
for dtype in [torch.float16, torch.bfloat16, torch.float32]:
for threads in [None, (os.cpu_count() or 4, max(1, (os.cpu_count() or 4)//2))]:
try:
case = run_case(model, dtype, threads)
report["cases"].append(case)
except Exception as e:
case = {
"model": model,
"requested_dtype": str(dtype),
"threads": threads,
"error": repr(e)
}
report["cases"].append(case)
print(f"ERROR: {e}", flush=True)
print("\n" + "="*60)
print(json.dumps(report, indent=2))
if __name__ == "__main__":
main()
</code></pre>
<p><strong>Script 2: Test <code>pipeline()</code> with different dtypes</strong></p>
<pre><code class="language-python"># probe_pipeline_cpu.py
import time, json, os, platform
import torch
from transformers import pipeline
def test_pipeline(dtype):
pipe = pipeline(
"text-generation",
model="distilgpt2",
torch_dtype=dtype,
device=-1 # CPU
)
text = "The future of artificial intelligence is"
t0 = time.time()
result = pipe(text, max_new_tokens=50, do_sample=False)
dt = (time.time() - t0) * 1000
return {
"requested_dtype": str(dtype),
"time_ms": dt,
"len": len(result[0]["generated_text"])
}
def main():
report = {
"env": {
"python": platform.python_version(),
"torch": torch.__version__,
"transformers": __import__("transformers").__version__,
"cpu_count": os.cpu_count()
},
"cases": []
}
for dtype in [torch.float16, torch.bfloat16, torch.float32]:
try:
case = test_pipeline(dtype)
report["cases"].append(case)
print(f"Tested {dtype}: {case['time_ms']:.1f}ms, {case['len']} chars")
except Exception as e:
print(f"ERROR with {dtype}: {e}")
report["cases"].append({"requested_dtype": str(dtype), "error": repr(e)})
print("\n" + "="*60)
print(json.dumps(report, indent=2))
if __name__ == "__main__":
main()
</code></pre>
<p><strong>Expected behavior after fix:</strong></p>
<ul>
<li>Warning printed about fp16/bf16 on CPU</li>
<li>Automatic fallback to float32</li>
<li>Performance matches baseline float32 results</li>
<li>Output quality consistent across all dtypes</li>
</ul>
<hr>
<h2>Success Metrics</h2>
<ol>
<li><strong>Zero reports of "CPU inference is slow" without appropriate warnings</strong> (tracked via issue mentions)</li>
<li><strong>Measurable performance improvement</strong> in CPU inference benchmarks (議題 B will establish baseline)</li>
<li><strong>Positive community feedback</strong> on UX improvements (tracked via issue/PR comments, Discord mentions)</li>
<li><strong>No breaking changes</strong> to existing code (verified by CI test suite passing)</li>
<li><strong>Reduction in support burden</strong>: Fewer duplicate issues about CPU performance</li>
</ol>
<hr>
<h2>Timeline</h2>
<ul>
<li><strong>Week 1</strong>: Community feedback on this RFC (waiting for maintainer/community input)</li>
<li><strong>Week 2</strong>: Implementation (core heuristics + integration points)</li>
<li><strong>Week 3</strong>: Testing + documentation + addressing review feedback</li>
<li><strong>Week 4</strong>: PR review iteration and merge</li>
</ul>
<p><strong>Note</strong>: Timeline assumes RFC is approved. If scope adjustments are requested, timeline will be updated accordingly.</p>
<hr>
<h2>References</h2>
<ul>
<li><strong>Test data</strong>: Intel Core i7-13600P (13th Gen), Windows 11, PyTorch 2.9.0+cpu, Transformers 4.57.0.dev0</li>
<li><strong>Reproducible test scripts</strong>: Will be uploaded to GitHub Gist and linked in RFC comments</li>
<li><strong>Related issues</strong>: (To be filled after duplicate check search)</li>
</ul>
<hr>
<h2>Appendix: Test Hardware Details</h2>
<h3>CPU Architecture</h3>
<p><strong>Model</strong>: Intel Core i7-13600P (Raptor Lake, 13th Gen)</p>
<ul>
<li><strong>P-cores (Performance)</strong>: 6 cores, 12 threads (with Hyper-Threading)</li>
<li><strong>E-cores (Efficiency)</strong>: 4 cores, 4 threads (no Hyper-Threading)</li>
<li><strong>Total logical cores</strong>: 16 (as reported by <code>os.cpu_count()</code>)</li>
<li><strong>Base clock</strong>: 2.20 GHz</li>
<li><strong>Form factor</strong>: Mobile/Laptop processor</li>
</ul>
<h3>Why This Hardware Matters</h3>
<ol>
<li>
<p><strong>Modern architecture</strong>: 13th Gen Intel (2023) represents current mainstream CPUs</p>
<ul>
<li>The <strong>28× slowdown with fp16 on this modern hardware</strong> suggests older CPUs will be even worse</li>
<li>Validates that the problem is fundamental, not hardware-specific</li>
</ul>
</li>
<li>
<p><strong>Hybrid design</strong>: P-cores + E-cores architecture is increasingly common</p>
<ul>
<li>Intel 12th Gen+ (2021 onwards)</li>
<li>AMD Ryzen with 3D V-Cache</li>
<li>ARM big.LITTLE (mobile/Apple Silicon)</li>
<li>Thread optimization becomes more important for these architectures</li>
</ul>
</li>
<li>
<p><strong>Representative use case</strong>: Laptop CPU reflects common deployment scenario</p>
<ul>
<li>Many developers/researchers work on laptops</li>
<li>Edge deployment often uses laptop-class or embedded CPUs</li>
<li>Desktop CPUs will show similar patterns with different absolute numbers</li>
</ul>
</li>
</ol>
<h3>Full Test Results</h3>
<p><strong>Raw data files</strong> (will be provided in RFC comments):</p>
<ul>
<li><code>gen_cpu_report.json</code>: Detailed performance data for <code>generate()</code></li>
<li><code>pipe_cpu_report.json</code>: Detailed performance data for <code>pipeline()</code></li>
</ul>
<p><strong>Key observations:</strong></p>
<ol>
<li>fp16 on CPU: 28× slower in <code>generate()</code>, produces 78% shorter outputs in <code>pipeline()</code></li>
<li>bfloat16 on CPU: 26× slower in <code>generate()</code>, produces 14% shorter outputs in <code>pipeline()</code></li>
<li>Thread defaults: intra_op=12, inter_op=12 (suboptimal for 16-core hybrid system)</li>
<li>No warnings or errors emitted by Transformers for any dtype on CPU</li>
</ol>
<hr>
<h2>How to Provide Feedback</h2>
<p>Please comment on this issue with:</p>
<ul>
<li>✅ <strong>Support</strong> for the proposal (helps gauge community interest)</li>
<li>🤔 <strong>Concerns</strong> or alternative approaches (helps refine the design)</li>
<li>📊 <strong>Additional test data</strong> from your environment (CPU model, OS, performance numbers)</li>
<li>💡 <strong>Suggestions</strong> for scope, implementation details, or priority</li>
</ul>
<p><strong>Specific questions for maintainers:</strong></p>
<ol>
<li>Is the proposed scope (dtype fallback + thread defaults) appropriate, or should we split into separate PRs?</li>
<li>Are the proposed environment variable names (<code>HF_CPU_DTYPE_POLICY</code>, etc.) acceptable?</li>
<li>Should we emit warnings via Python <code>warnings</code> module or <code>logging</code>? (Current proposal: <code>logging.warning</code>)</li>
<li>Any concerns about setting global PyTorch state (<code>torch.set_num_threads</code>)? Should we only recommend rather than auto-set?</li>
</ol>
<p>I'm happy to adjust the proposal based on maintainer guidance and am ready to implement this as a PR once the approach is validated.</p>
</body></html><!--EndFragment-->
</body>
</html>
|
{
"login": "Li-Xiaoo",
"id": 165482764,
"node_id": "U_kgDOCd0RDA",
"avatar_url": "https://avatars.githubusercontent.com/u/165482764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Li-Xiaoo",
"html_url": "https://github.com/Li-Xiaoo",
"followers_url": "https://api.github.com/users/Li-Xiaoo/followers",
"following_url": "https://api.github.com/users/Li-Xiaoo/following{/other_user}",
"gists_url": "https://api.github.com/users/Li-Xiaoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Li-Xiaoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Li-Xiaoo/subscriptions",
"organizations_url": "https://api.github.com/users/Li-Xiaoo/orgs",
"repos_url": "https://api.github.com/users/Li-Xiaoo/repos",
"events_url": "https://api.github.com/users/Li-Xiaoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Li-Xiaoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41867/timeline
| null |
reopened
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41866
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41866/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41866/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41866/events
|
https://github.com/huggingface/transformers/pull/41866
| 3,553,305,868
|
PR_kwDOCUB6oc6vtshu
| 41,866
|
Fix Florence2 conversion script model_type KeyError
|
{
"login": "i3hz",
"id": 144821361,
"node_id": "U_kgDOCKHMcQ",
"avatar_url": "https://avatars.githubusercontent.com/u/144821361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i3hz",
"html_url": "https://github.com/i3hz",
"followers_url": "https://api.github.com/users/i3hz/followers",
"following_url": "https://api.github.com/users/i3hz/following{/other_user}",
"gists_url": "https://api.github.com/users/i3hz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i3hz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i3hz/subscriptions",
"organizations_url": "https://api.github.com/users/i3hz/orgs",
"repos_url": "https://api.github.com/users/i3hz/repos",
"events_url": "https://api.github.com/users/i3hz/events{/privacy}",
"received_events_url": "https://api.github.com/users/i3hz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-26T05:23:32
| 2025-10-29T13:08:22
| 2025-10-29T13:07:30
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41866",
"html_url": "https://github.com/huggingface/transformers/pull/41866",
"diff_url": "https://github.com/huggingface/transformers/pull/41866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41866.patch",
"merged_at": "2025-10-29T13:07:30"
}
|
# What does this PR do?
Fixes #41738
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@zucchini-nlp
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41866/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41865
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41865/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41865/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41865/events
|
https://github.com/huggingface/transformers/pull/41865
| 3,552,928,466
|
PR_kwDOCUB6oc6vsk7x
| 41,865
|
Fix Auto classes to support dynamically registered processors
|
{
"login": "MilkClouds",
"id": 26109705,
"node_id": "MDQ6VXNlcjI2MTA5NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/26109705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MilkClouds",
"html_url": "https://github.com/MilkClouds",
"followers_url": "https://api.github.com/users/MilkClouds/followers",
"following_url": "https://api.github.com/users/MilkClouds/following{/other_user}",
"gists_url": "https://api.github.com/users/MilkClouds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MilkClouds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MilkClouds/subscriptions",
"organizations_url": "https://api.github.com/users/MilkClouds/orgs",
"repos_url": "https://api.github.com/users/MilkClouds/repos",
"events_url": "https://api.github.com/users/MilkClouds/events{/privacy}",
"received_events_url": "https://api.github.com/users/MilkClouds/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-25T19:59:11
| 2025-10-28T12:34:25
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41865",
"html_url": "https://github.com/huggingface/transformers/pull/41865",
"diff_url": "https://github.com/huggingface/transformers/pull/41865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41865.patch",
"merged_at": null
}
|
# What does this PR do?
This PR fixes Auto classes (`AutoImageProcessor` and `AutoVideoProcessor`) to properly support dynamically registered processors via the `.register()` method.
## Problem
When users call `AutoImageProcessor.register()` or `AutoVideoProcessor.register()`, the registration only updates the `*_MAPPING` objects (specifically their `_extra_content`), not the `*_MAPPING_NAMES` dictionaries. However, some code paths were checking `*_MAPPING_NAMES` instead of `*_MAPPING`, which meant registered processors would not be recognized in certain code paths.
## Changes
### `image_processing_auto.py`
- **Line 527-531**: Replaced a `for-else` loop that checked if `image_processor_type` exists in `IMAGE_PROCESSOR_MAPPING_NAMES.values()` with a direct call to `get_image_processor_class_from_name()`, which properly checks both the static mapping and registered items via `IMAGE_PROCESSOR_MAPPING._extra_content`
### `video_processing_auto.py`
- **Line 294**: Changed from checking `if video_processor_class_inferred in VIDEO_PROCESSOR_MAPPING_NAMES.values()` to using `video_processor_class_from_name(video_processor_class_inferred) is not None`, which properly checks both the static mapping and registered items
## Why This Matters
The helper functions like `get_image_processor_class_from_name()` and `video_processor_class_from_name()` are designed to check both:
1. The static `*_MAPPING_NAMES` dictionaries (for built-in processors)
2. The `*_MAPPING._extra_content` dictionaries (for dynamically registered processors)
By using these helper functions instead of directly checking `*_MAPPING_NAMES.values()`, we ensure that dynamically registered processors are properly recognized throughout the codebase.
## Testing
The changes maintain backward compatibility with existing code while adding support for registered processors. The helper functions already have the logic to handle both static and dynamic mappings.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @CyrilVallez (model loading)
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41865/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41864
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41864/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41864/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41864/events
|
https://github.com/huggingface/transformers/pull/41864
| 3,552,909,406
|
PR_kwDOCUB6oc6vshOZ
| 41,864
|
Fix AutoImageProcessor.register and documentation in auto processing modules
|
{
"login": "MilkClouds",
"id": 26109705,
"node_id": "MDQ6VXNlcjI2MTA5NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/26109705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MilkClouds",
"html_url": "https://github.com/MilkClouds",
"followers_url": "https://api.github.com/users/MilkClouds/followers",
"following_url": "https://api.github.com/users/MilkClouds/following{/other_user}",
"gists_url": "https://api.github.com/users/MilkClouds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MilkClouds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MilkClouds/subscriptions",
"organizations_url": "https://api.github.com/users/MilkClouds/orgs",
"repos_url": "https://api.github.com/users/MilkClouds/repos",
"events_url": "https://api.github.com/users/MilkClouds/events{/privacy}",
"received_events_url": "https://api.github.com/users/MilkClouds/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-25T19:33:20
| 2025-10-28T14:54:04
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41864",
"html_url": "https://github.com/huggingface/transformers/pull/41864",
"diff_url": "https://github.com/huggingface/transformers/pull/41864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41864.patch",
"merged_at": null
}
|
# What does this PR do?
This PR fixes several issues in the auto processing modules:
1. **Fixes `AutoImageProcessor.register` bug**: Removes incorrect validation logic that was copied from tokenizers. The validation checked for `slow_image_processor_class` attribute consistency, but fast image processors don't have this attribute (unlike fast tokenizers), causing the `register` method to fail when registering custom image processors.
2. **Fixes documentation errors**: Corrects copy-paste errors in docstrings where "tokenizer" was incorrectly used instead of the appropriate processor type (feature extractor, image processor, video processor).
3. **Fixes typos**: Corrects "fine" → "find" in comments across multiple auto modules.
4. **Improves `AutoVideoProcessor` trust handling**: Adds proper upstream repository extraction from `video_processor_auto_map` when resolving trust_remote_code.
## Changes by file:
- `feature_extraction_auto.py`: Fixed typo and corrected docstring to reference feature extractors instead of tokenizers
- `image_processing_auto.py`: Removed incorrect `slow_image_processor_class` validation and fixed import in docstring example
- `processing_auto.py`: Fixed typo in comment
- `tokenization_auto.py`: Fixed typo in comment
- `video_processing_auto.py`: Fixed docstring reference and added upstream repo handling for trust_remote_code
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @Rocketknight1 (auto modules and processing)
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41864/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41863
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41863/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41863/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41863/events
|
https://github.com/huggingface/transformers/issues/41863
| 3,552,873,922
|
I_kwDOCUB6oc7TxI3C
| 41,863
|
`generate()` produces incoherent output when `inputs_embeds` has length 1
|
{
"login": "tyarkoni",
"id": 303932,
"node_id": "MDQ6VXNlcjMwMzkzMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/303932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyarkoni",
"html_url": "https://github.com/tyarkoni",
"followers_url": "https://api.github.com/users/tyarkoni/followers",
"following_url": "https://api.github.com/users/tyarkoni/following{/other_user}",
"gists_url": "https://api.github.com/users/tyarkoni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyarkoni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyarkoni/subscriptions",
"organizations_url": "https://api.github.com/users/tyarkoni/orgs",
"repos_url": "https://api.github.com/users/tyarkoni/repos",
"events_url": "https://api.github.com/users/tyarkoni/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyarkoni/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-25T19:06:49
| 2025-10-27T13:50:56
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.57.1
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
- Python version: 3.10.18
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 PCIe
### Who can help?
@xadupre (original author of code in question), @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
#### Steps to Reproduce
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load any causal LM
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()
# Create a single embedding (simulating a prefix like a style token)
single_embedding = torch.randn(1, 1, 768) # [batch=1, seq_len=1, hidden_dim=768]
# Generate with length-1 inputs_embeds
with torch.no_grad():
outputs = model.generate(
inputs_embeds=single_embedding,
max_length=20,
do_sample=True,
temperature=1.0,
pad_token_id=tokenizer.eos_token_id,
)
# Decode and observe gibberish
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated text:", generated_text)
# Output will be incoherent repetitive tokens like "the the the I if the..."
```
#### Comparison with working case (length ≥ 2):
```python
# Add a second embedding (e.g., BOS token embedding)
bos_embedding = model.get_input_embeddings()(torch.tensor([[tokenizer.bos_token_id]]))
two_embeddings = torch.cat([single_embedding, bos_embedding], dim=1) # [1, 2, 768]
# Generate with length-2 inputs_embeds
with torch.no_grad():
outputs = model.generate(
inputs_embeds=two_embeddings,
max_length=20,
do_sample=True,
temperature=1.0,
pad_token_id=tokenizer.eos_token_id,
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated text:", generated_text)
# Output will be coherent
```
### Expected behavior
#### Expected Behavior
`generate()` should produce coherent text regardless of whether `inputs_embeds` has length 1 or length > 1, as long as the embeddings are valid.
#### Actual Behavior
With length-1 `inputs_embeds`, `generate()` produces incoherent, repetitive gibberish that appears to be high-frequency tokens without proper conditioning on previous context.
#### Suggested Fix
The `_cache_dependant_input_preparation` method needs to properly handle the transition from `inputs_embeds` mode to `input_ids` mode after the first generation step. Specifically:
1. After the first token is generated from `inputs_embeds`, set `inputs_embeds = None` for subsequent iterations
2. Or, maintain proper bookkeeping so that the embeddings prefix is correctly tracked throughout the autoregressive loop
3. Or, ensure Exception 4 logic properly handles the case where we've transitioned from embeddings to token IDs
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41863/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41862
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41862/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41862/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41862/events
|
https://github.com/huggingface/transformers/issues/41862
| 3,552,673,022
|
I_kwDOCUB6oc7TwXz-
| 41,862
|
Request for InternVL3_5_Flash
|
{
"login": "YanxingLiu",
"id": 42299757,
"node_id": "MDQ6VXNlcjQyMjk5NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/42299757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YanxingLiu",
"html_url": "https://github.com/YanxingLiu",
"followers_url": "https://api.github.com/users/YanxingLiu/followers",
"following_url": "https://api.github.com/users/YanxingLiu/following{/other_user}",
"gists_url": "https://api.github.com/users/YanxingLiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YanxingLiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YanxingLiu/subscriptions",
"organizations_url": "https://api.github.com/users/YanxingLiu/orgs",
"repos_url": "https://api.github.com/users/YanxingLiu/repos",
"events_url": "https://api.github.com/users/YanxingLiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/YanxingLiu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-25T16:10:57
| 2025-10-29T14:23:36
| null |
NONE
| null | null | null | null |
### Model description
InternVL3_5_Flash employ an additional router to use dynamic resolution of input images and this is not compatible with previous class "InternVLForConditionalGeneration". As a result, I would like to inquire whether it is possible to support the Flash version of InternVL3-5.
<img width="430" height="500" alt="Image" src="https://github.com/user-attachments/assets/f06aaf71-927f-4b55-a2a3-63e16a1c8103" />
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/OpenGVLab/InternVL3_5-8B-Flash/blob/main/modeling_internvl_chat.py
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41862/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41861
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41861/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41861/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41861/events
|
https://github.com/huggingface/transformers/issues/41861
| 3,552,652,326
|
I_kwDOCUB6oc7TwSwm
| 41,861
|
transformers.Adafactor is almost 2x slower on Windows than Linux - even WSL is slow what can be reason?
|
{
"login": "FurkanGozukara",
"id": 19240467,
"node_id": "MDQ6VXNlcjE5MjQwNDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FurkanGozukara",
"html_url": "https://github.com/FurkanGozukara",
"followers_url": "https://api.github.com/users/FurkanGozukara/followers",
"following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}",
"gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions",
"organizations_url": "https://api.github.com/users/FurkanGozukara/orgs",
"repos_url": "https://api.github.com/users/FurkanGozukara/repos",
"events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}",
"received_events_url": "https://api.github.com/users/FurkanGozukara/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-25T15:49:47
| 2025-10-27T12:08:31
| null |
NONE
| null | null | null | null |
I am training Qwen Image model with Kohya Musubi tuner : https://github.com/kohya-ss/musubi-tuner
Exactly same setup and same machine on Linux is almost 2x faster
9.5 second / it vs 5.8 second / it
On Windows it can't utilize GPU power it utilizes like 250 watt out of 575 watt
What can be culprit?
transformers==4.54.1
torch 2.8
CUDA 12.9
tested on RTX 5090
this is what codex tells but i don't know if it is true doesnt make sense to me
<img width="1637" height="736" alt="Image" src="https://github.com/user-attachments/assets/81b687c7-801e-4265-a2fd-6d1eae065637" />
### Who can help?
trainer: @SunMarc
kernels: @MekkCyber @drbh
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41861/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41860
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41860/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41860/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41860/events
|
https://github.com/huggingface/transformers/pull/41860
| 3,552,556,280
|
PR_kwDOCUB6oc6vrZoL
| 41,860
|
[tests] Add Context-parallel CI tests
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-25T14:03:31
| 2025-10-28T02:34:15
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41860",
"html_url": "https://github.com/huggingface/transformers/pull/41860",
"diff_url": "https://github.com/huggingface/transformers/pull/41860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41860.patch",
"merged_at": null
}
|
# What does this PR do?
Adds two context parallel tests for the CI
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41860/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41860/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41859
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41859/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41859/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41859/events
|
https://github.com/huggingface/transformers/issues/41859
| 3,552,322,390
|
I_kwDOCUB6oc7TvCNW
| 41,859
|
Human Verification not working?
|
{
"login": "thefued",
"id": 45585242,
"node_id": "MDQ6VXNlcjQ1NTg1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/45585242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thefued",
"html_url": "https://github.com/thefued",
"followers_url": "https://api.github.com/users/thefued/followers",
"following_url": "https://api.github.com/users/thefued/following{/other_user}",
"gists_url": "https://api.github.com/users/thefued/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thefued/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thefued/subscriptions",
"organizations_url": "https://api.github.com/users/thefued/orgs",
"repos_url": "https://api.github.com/users/thefued/repos",
"events_url": "https://api.github.com/users/thefued/events{/privacy}",
"received_events_url": "https://api.github.com/users/thefued/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-25T10:48:52
| 2025-10-26T12:29:10
| 2025-10-26T12:29:10
|
NONE
| null | null | null | null |
### System Info
Hello! I need your help because I can't verify my identity via email: I receive a link, open it, but get a blank page and nothing else(((
I've tried several times.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Navigate to the Hugging Face website.
2. Register or log in to your account.
3. Go to the identity verification section.
4. Submit a request for the identity verification link.
5. Get the confirmation email to arrive.
6. Follow confirmation link in email
7. Get blank page in site example https://huggingface.co/email_confirmation/zKFZszGtcabRsYOURYmCQkXdfzIY
### Expected behavior
The identity verification link should work
|
{
"login": "thefued",
"id": 45585242,
"node_id": "MDQ6VXNlcjQ1NTg1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/45585242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thefued",
"html_url": "https://github.com/thefued",
"followers_url": "https://api.github.com/users/thefued/followers",
"following_url": "https://api.github.com/users/thefued/following{/other_user}",
"gists_url": "https://api.github.com/users/thefued/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thefued/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thefued/subscriptions",
"organizations_url": "https://api.github.com/users/thefued/orgs",
"repos_url": "https://api.github.com/users/thefued/repos",
"events_url": "https://api.github.com/users/thefued/events{/privacy}",
"received_events_url": "https://api.github.com/users/thefued/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41859/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41858
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41858/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41858/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41858/events
|
https://github.com/huggingface/transformers/issues/41858
| 3,552,091,824
|
I_kwDOCUB6oc7TuJ6w
| 41,858
|
Wav2Vec2PhonemeCTCTokenizer phonemizer backend problem.
|
{
"login": "QishengL",
"id": 89773749,
"node_id": "MDQ6VXNlcjg5NzczNzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/89773749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QishengL",
"html_url": "https://github.com/QishengL",
"followers_url": "https://api.github.com/users/QishengL/followers",
"following_url": "https://api.github.com/users/QishengL/following{/other_user}",
"gists_url": "https://api.github.com/users/QishengL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QishengL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QishengL/subscriptions",
"organizations_url": "https://api.github.com/users/QishengL/orgs",
"repos_url": "https://api.github.com/users/QishengL/repos",
"events_url": "https://api.github.com/users/QishengL/events{/privacy}",
"received_events_url": "https://api.github.com/users/QishengL/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-25T08:20:03
| 2025-10-25T11:35:32
| 2025-10-25T11:35:32
|
NONE
| null | null | null | null | null |
{
"login": "QishengL",
"id": 89773749,
"node_id": "MDQ6VXNlcjg5NzczNzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/89773749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QishengL",
"html_url": "https://github.com/QishengL",
"followers_url": "https://api.github.com/users/QishengL/followers",
"following_url": "https://api.github.com/users/QishengL/following{/other_user}",
"gists_url": "https://api.github.com/users/QishengL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QishengL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QishengL/subscriptions",
"organizations_url": "https://api.github.com/users/QishengL/orgs",
"repos_url": "https://api.github.com/users/QishengL/repos",
"events_url": "https://api.github.com/users/QishengL/events{/privacy}",
"received_events_url": "https://api.github.com/users/QishengL/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41858/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41857
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41857/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41857/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41857/events
|
https://github.com/huggingface/transformers/pull/41857
| 3,552,021,105
|
PR_kwDOCUB6oc6vpnOf
| 41,857
|
CI workflow for Flash Attn
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-25T07:39:45
| 2025-10-25T07:48:45
| 2025-10-25T07:45:47
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41857",
"html_url": "https://github.com/huggingface/transformers/pull/41857",
"diff_url": "https://github.com/huggingface/transformers/pull/41857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41857.patch",
"merged_at": "2025-10-25T07:45:47"
}
|
# What does this PR do?
As discussed with @vasqu
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41857/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41856
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41856/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41856/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41856/events
|
https://github.com/huggingface/transformers/issues/41856
| 3,551,446,145
|
I_kwDOCUB6oc7TrsSB
| 41,856
|
Performance regression: `allow_is_causal_skip` incorrectly disabled when `use_cache=False`
|
{
"login": "williamsnell",
"id": 59493198,
"node_id": "MDQ6VXNlcjU5NDkzMTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/59493198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamsnell",
"html_url": "https://github.com/williamsnell",
"followers_url": "https://api.github.com/users/williamsnell/followers",
"following_url": "https://api.github.com/users/williamsnell/following{/other_user}",
"gists_url": "https://api.github.com/users/williamsnell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/williamsnell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/williamsnell/subscriptions",
"organizations_url": "https://api.github.com/users/williamsnell/orgs",
"repos_url": "https://api.github.com/users/williamsnell/repos",
"events_url": "https://api.github.com/users/williamsnell/events{/privacy}",
"received_events_url": "https://api.github.com/users/williamsnell/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-25T00:55:18
| 2025-10-28T16:28:39
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.54.0.dev0
- Platform: Linux-6.6.105+-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu126 (CUDA)
- Tensorflow version (GPU?): 2.19.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.7 (gpu)
- Jax version: 0.7.2
- JaxLib version: 0.7.2
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: Tesla T4
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Full colab reproduction: https://colab.research.google.com/drive/1vQ_UgSKMIlKMZGI7AM4DjqbWtYRTunIE#scrollTo=SuMxmbBBj4zx
Otherwise:
1. `install transformers <= 4.53.0`
2. run pretraining with `use_cache=False` with, for example, `gpt_neox` (using SDPA as the attention backend)
3. compare iteration time against `transformers >= 4.53.1` There should be a ~10-30% performance regression, depending on the exact model, context length, etc.
### Expected behavior
I noticed a ~25% drop in throughput when pretraining a variant of `EleutherAI/pythia-14m`, after upgrading transformers. I bisected the commit that caused this drop to [0cf27916](https://github.com/huggingface/transformers/commit/0cf27916f09a1a99af55ef4f2f3e8675372f38b6), which introduced packed tensor masks.
The issue seems to be that `allow_is_causal_skip` gets set to False whenever `use_cache=False`. This prevents SDPA using the fast-path when we don't provide an attention mask and are using causal attention.
Looking in [create_causal_mask](https://github.com/huggingface/transformers/blob/main/src/transformers/masking_utils.py#L801), `allow_is_causal_skip` is disabled because `packed_sequence_mask` is not `None`:
```python
# ---------- def create_causal_mask - masking_utils.py:877 -------------
# If we detected packing format
if packed_sequence_mask is not None and _is_torch_greater_or_equal_than_2_6:
mask_factory_function = and_masks(mask_factory_function, packed_sequence_mask_function(packed_sequence_mask))
allow_is_causal_skip = False
```
`packed_sequence_mask` comes from [_preprocess_mask_arguments](https://github.com/huggingface/transformers/blob/main/src/transformers/masking_utils.py#L801). In `_preprocess_mask_arguments`, we see the following:
- `find_packed_sequence_indices` **always** returns a Tensor
- If we enter this block, we will always set `allow_is_causal_skip` to False, regardless of if a packed sequence is actually detected.
[(src)](https://github.com/huggingface/transformers/blob/main/src/transformers/masking_utils.py#L877)
```python diff
# We check the position_ids for potential packed sequence format (only if the 2D attention mask is explicitly None,
# and we don't have past_key_values, i.e. generally a training setup)
packed_sequence_mask = None
if position_ids is not None and attention_mask is None and past_key_values is None:
batch_size = input_embeds.shape[0]
# The position ids are sometimes just unsqueezed, without being expanded
if batch_size != position_ids.shape[0]:
position_ids = position_ids.expand(batch_size, -1)
packed_sequence_mask = find_packed_sequence_indices(position_ids)
return False, attention_mask, packed_sequence_mask, kv_length, kv_offset
```
I'm happy to put in a pull request with the following change:
```diff
# We check the position_ids for potential packed sequence format (only if the 2D attention mask is explicitly None,
# and we don't have past_key_values, i.e. generally a training setup)
packed_sequence_mask = None
if position_ids is not None and attention_mask is None and past_key_values is None:
batch_size = input_embeds.shape[0]
# The position ids are sometimes just unsqueezed, without being expanded
if batch_size != position_ids.shape[0]:
position_ids = position_ids.expand(batch_size, -1)
packed_sequence_mask = find_packed_sequence_indices(position_ids)
+ # Only return the mask if we detected any packed sequences.
+ if (packed_sequence_mask[:, -1] == 0).all():
+ packed_sequence_mask = None
```
However, the reason this is an Issue and not a PR is that in the source code, there's [this comment](https://github.com/huggingface/transformers/blob/main/src/transformers/masking_utils.py#L715):
```python
# Here it would be nice to return None if we did not detect packed sequence format, i.e. if `packed_sequence_mask[:, -1] == 0`
# but it causes issues with export
return packed_sequence_mask
```
I presume my proposed change would also run afoul of export, and it's not clear to me how to resolve this.
I'm very happy to resubmit this as a PR.
Thanks!
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41856/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41855
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41855/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41855/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41855/events
|
https://github.com/huggingface/transformers/pull/41855
| 3,551,357,197
|
PR_kwDOCUB6oc6vneKj
| 41,855
|
Add Mistral tokenizer missing methods
|
{
"login": "ChrisHughes",
"id": 1595770,
"node_id": "MDQ6VXNlcjE1OTU3NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1595770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChrisHughes",
"html_url": "https://github.com/ChrisHughes",
"followers_url": "https://api.github.com/users/ChrisHughes/followers",
"following_url": "https://api.github.com/users/ChrisHughes/following{/other_user}",
"gists_url": "https://api.github.com/users/ChrisHughes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChrisHughes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChrisHughes/subscriptions",
"organizations_url": "https://api.github.com/users/ChrisHughes/orgs",
"repos_url": "https://api.github.com/users/ChrisHughes/repos",
"events_url": "https://api.github.com/users/ChrisHughes/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChrisHughes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-25T00:03:36
| 2025-10-28T13:29:54
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41855",
"html_url": "https://github.com/huggingface/transformers/pull/41855",
"diff_url": "https://github.com/huggingface/transformers/pull/41855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41855.patch",
"merged_at": null
}
|
Make MistralCommonTokenizer compatible with libraries such as [outlines](https://github.com/dottxt-ai/outlines).
Adds a couple of missing methods found in `PreTrainedTokenizer`, this allows Mistral models to be used with outlines and thus work with Pydantic models.
Builds on this PR https://github.com/huggingface/transformers/pull/39930.
Fixes issue https://github.com/huggingface/transformers/issues/39841.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41855/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41854
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41854/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41854/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41854/events
|
https://github.com/huggingface/transformers/pull/41854
| 3,551,260,585
|
PR_kwDOCUB6oc6vnJfn
| 41,854
|
Checkpoints copy
|
{
"login": "Aravind-11",
"id": 42345018,
"node_id": "MDQ6VXNlcjQyMzQ1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/42345018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aravind-11",
"html_url": "https://github.com/Aravind-11",
"followers_url": "https://api.github.com/users/Aravind-11/followers",
"following_url": "https://api.github.com/users/Aravind-11/following{/other_user}",
"gists_url": "https://api.github.com/users/Aravind-11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aravind-11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aravind-11/subscriptions",
"organizations_url": "https://api.github.com/users/Aravind-11/orgs",
"repos_url": "https://api.github.com/users/Aravind-11/repos",
"events_url": "https://api.github.com/users/Aravind-11/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aravind-11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T23:04:27
| 2025-10-24T23:16:40
| 2025-10-24T23:16:35
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41854",
"html_url": "https://github.com/huggingface/transformers/pull/41854",
"diff_url": "https://github.com/huggingface/transformers/pull/41854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41854.patch",
"merged_at": null
}
|
# What does this PR do?
Fixes #37196
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
|
{
"login": "Aravind-11",
"id": 42345018,
"node_id": "MDQ6VXNlcjQyMzQ1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/42345018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aravind-11",
"html_url": "https://github.com/Aravind-11",
"followers_url": "https://api.github.com/users/Aravind-11/followers",
"following_url": "https://api.github.com/users/Aravind-11/following{/other_user}",
"gists_url": "https://api.github.com/users/Aravind-11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aravind-11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aravind-11/subscriptions",
"organizations_url": "https://api.github.com/users/Aravind-11/orgs",
"repos_url": "https://api.github.com/users/Aravind-11/repos",
"events_url": "https://api.github.com/users/Aravind-11/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aravind-11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41854/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41853
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41853/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41853/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41853/events
|
https://github.com/huggingface/transformers/pull/41853
| 3,551,184,097
|
PR_kwDOCUB6oc6vm5TH
| 41,853
|
copy of #37196 to check failing tests
|
{
"login": "Aravind-11",
"id": 42345018,
"node_id": "MDQ6VXNlcjQyMzQ1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/42345018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aravind-11",
"html_url": "https://github.com/Aravind-11",
"followers_url": "https://api.github.com/users/Aravind-11/followers",
"following_url": "https://api.github.com/users/Aravind-11/following{/other_user}",
"gists_url": "https://api.github.com/users/Aravind-11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aravind-11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aravind-11/subscriptions",
"organizations_url": "https://api.github.com/users/Aravind-11/orgs",
"repos_url": "https://api.github.com/users/Aravind-11/repos",
"events_url": "https://api.github.com/users/Aravind-11/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aravind-11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T22:16:54
| 2025-10-24T23:17:10
| 2025-10-24T23:17:09
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41853",
"html_url": "https://github.com/huggingface/transformers/pull/41853",
"diff_url": "https://github.com/huggingface/transformers/pull/41853.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41853.patch",
"merged_at": null
}
|
# What does this PR do?
This pr is just a copy of the previous pr. Further tests and fixes are needed.
Fixes #37196
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
|
{
"login": "Aravind-11",
"id": 42345018,
"node_id": "MDQ6VXNlcjQyMzQ1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/42345018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aravind-11",
"html_url": "https://github.com/Aravind-11",
"followers_url": "https://api.github.com/users/Aravind-11/followers",
"following_url": "https://api.github.com/users/Aravind-11/following{/other_user}",
"gists_url": "https://api.github.com/users/Aravind-11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aravind-11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aravind-11/subscriptions",
"organizations_url": "https://api.github.com/users/Aravind-11/orgs",
"repos_url": "https://api.github.com/users/Aravind-11/repos",
"events_url": "https://api.github.com/users/Aravind-11/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aravind-11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41853/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41852
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41852/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41852/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41852/events
|
https://github.com/huggingface/transformers/pull/41852
| 3,550,820,840
|
PR_kwDOCUB6oc6vlrdi
| 41,852
|
[`Attn Masks`] Non-vmap default for attention masks
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T19:51:29
| 2025-10-29T11:13:23
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41852",
"html_url": "https://github.com/huggingface/transformers/pull/41852",
"diff_url": "https://github.com/huggingface/transformers/pull/41852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41852.patch",
"merged_at": null
}
|
Non-vmap creation of masks. These work with all our base masks and we only default back to vmap when using patterns we cannot guarantee (i.e. additional and/or masks).
Note:
- Non-vmap works with every mask that has anything index based
- Merged old/new sdpa under one function --> easier maintenance imo
- Executorch does not need an additional masking fn anymore
- Lifts some restrictions on older torch versions, e.g. chunked attn with padding, packed attn masks etc
Fixes #41639
cc @jiqing-feng @IlyasMoutawwakil
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41852/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41851
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41851/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41851/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41851/events
|
https://github.com/huggingface/transformers/pull/41851
| 3,550,599,767
|
PR_kwDOCUB6oc6vk8EI
| 41,851
|
Fix deepcopy in ProcessorMixin.to_dict for GemmaTokenizerFast
|
{
"login": "aijadugar",
"id": 139578960,
"node_id": "U_kgDOCFHOUA",
"avatar_url": "https://avatars.githubusercontent.com/u/139578960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aijadugar",
"html_url": "https://github.com/aijadugar",
"followers_url": "https://api.github.com/users/aijadugar/followers",
"following_url": "https://api.github.com/users/aijadugar/following{/other_user}",
"gists_url": "https://api.github.com/users/aijadugar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aijadugar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aijadugar/subscriptions",
"organizations_url": "https://api.github.com/users/aijadugar/orgs",
"repos_url": "https://api.github.com/users/aijadugar/repos",
"events_url": "https://api.github.com/users/aijadugar/events{/privacy}",
"received_events_url": "https://api.github.com/users/aijadugar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T18:45:00
| 2025-10-27T03:15:27
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41851",
"html_url": "https://github.com/huggingface/transformers/pull/41851",
"diff_url": "https://github.com/huggingface/transformers/pull/41851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41851.patch",
"merged_at": null
}
|
Description:
Replaced deepcopy with a shallow copy to avoid AttributeError with fast tokenizers.
Added a test (tests/utils/test_processor_utils.py) to verify multiple tokenizers save/load correctly.
Fix #41837
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41851/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41850
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41850/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41850/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41850/events
|
https://github.com/huggingface/transformers/pull/41850
| 3,550,528,879
|
PR_kwDOCUB6oc6vkscO
| 41,850
|
speed up loading checkpoints for zero stage 3
|
{
"login": "ri938",
"id": 8639734,
"node_id": "MDQ6VXNlcjg2Mzk3MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8639734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ri938",
"html_url": "https://github.com/ri938",
"followers_url": "https://api.github.com/users/ri938/followers",
"following_url": "https://api.github.com/users/ri938/following{/other_user}",
"gists_url": "https://api.github.com/users/ri938/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ri938/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ri938/subscriptions",
"organizations_url": "https://api.github.com/users/ri938/orgs",
"repos_url": "https://api.github.com/users/ri938/repos",
"events_url": "https://api.github.com/users/ri938/events{/privacy}",
"received_events_url": "https://api.github.com/users/ri938/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T18:21:07
| 2025-10-29T10:59:08
| 2025-10-29T10:59:08
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41850",
"html_url": "https://github.com/huggingface/transformers/pull/41850",
"diff_url": "https://github.com/huggingface/transformers/pull/41850.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41850.patch",
"merged_at": "2025-10-29T10:59:08"
}
|
Loading checkpoints for model Qwen/Qwen3-Next-80B-A3B-Instruct was very slow
this change brought down checkpoint loading times from 10mins+ to about 1.5mins
the load method is called over 4 million times (123,799 per layer and there are 41 layers). Looping through the state dict (size ~1500) was therefore very slow. This change speed the loading method by avoiding looping through the state dict.
Flagged for review
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- Big Model Inference: @SunMarc
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41850/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41849
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41849/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41849/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41849/events
|
https://github.com/huggingface/transformers/pull/41849
| 3,550,385,826
|
PR_kwDOCUB6oc6vkNlm
| 41,849
|
Allow parse_response to accept token IDs
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T17:30:54
| 2025-10-29T13:04:59
| 2025-10-29T13:04:57
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41849",
"html_url": "https://github.com/huggingface/transformers/pull/41849",
"diff_url": "https://github.com/huggingface/transformers/pull/41849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41849.patch",
"merged_at": "2025-10-29T13:04:57"
}
|
While I was working on `parse_response` I noticed that the UX was annoying in places because I had to keep decoding and then calling `parse_response`, which added a lot of boilerplate. Making `parse_response` optionally able to handle token IDs directly cut down on that a lot, so I figured I'd fix it now while there's still time before release!
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41849/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41848
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41848/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41848/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41848/events
|
https://github.com/huggingface/transformers/pull/41848
| 3,550,160,154
|
PR_kwDOCUB6oc6vjc7b
| 41,848
|
More data in benchmarking
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T16:16:06
| 2025-10-28T13:41:53
| null |
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41848",
"html_url": "https://github.com/huggingface/transformers/pull/41848",
"diff_url": "https://github.com/huggingface/transformers/pull/41848.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41848.patch",
"merged_at": null
}
|
This PR changes the benchmarking workflow to gather more data per benchmark. cc. @McPatate I only changed one workflow file, not sure I did not miss others.
It also adds a check on the benchmark config to disable FA when it is not installed to avoid an exception.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41848/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41848/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41847
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41847/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41847/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41847/events
|
https://github.com/huggingface/transformers/pull/41847
| 3,550,058,287
|
PR_kwDOCUB6oc6vjGgs
| 41,847
|
docs: add continuous batching page
|
{
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T15:47:40
| 2025-10-29T11:02:31
| null |
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41847",
"html_url": "https://github.com/huggingface/transformers/pull/41847",
"diff_url": "https://github.com/huggingface/transformers/pull/41847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41847.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41847/timeline
| null | null | null | null | true
| false
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.