url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (list) | created_at (timestamp[ms]) | updated_at (timestamp[ms]) | closed_at (timestamp[ms]) | author_association (string) | type (dict) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | sub_issues_summary (dict) | issue_dependencies_summary (dict) | is_pull_request (bool) | is_closed (bool)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/41946
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41946/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41946/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41946/events
|
https://github.com/huggingface/transformers/pull/41946
| 3,569,130,814
|
PR_kwDOCUB6oc6wiX6O
| 41,946
|
feat: add gradient_accumulation_steps argument to image classificatio…
|
{
"login": "Priyanshjain10",
"id": 240654067,
"node_id": "U_kgDODlgW8w",
"avatar_url": "https://avatars.githubusercontent.com/u/240654067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Priyanshjain10",
"html_url": "https://github.com/Priyanshjain10",
"followers_url": "https://api.github.com/users/Priyanshjain10/followers",
"following_url": "https://api.github.com/users/Priyanshjain10/following{/other_user}",
"gists_url": "https://api.github.com/users/Priyanshjain10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Priyanshjain10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Priyanshjain10/subscriptions",
"organizations_url": "https://api.github.com/users/Priyanshjain10/orgs",
"repos_url": "https://api.github.com/users/Priyanshjain10/repos",
"events_url": "https://api.github.com/users/Priyanshjain10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Priyanshjain10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-30T06:34:06
| 2025-10-30T06:48:04
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41946",
"html_url": "https://github.com/huggingface/transformers/pull/41946",
"diff_url": "https://github.com/huggingface/transformers/pull/41946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41946.patch",
"merged_at": null
}
|
## What does this PR do?
Adds `--gradient_accumulation_steps` argument to the image classification no_trainer example script, addressing issue #18436.
## Motivation
Gradient accumulation allows training with larger effective batch sizes by accumulating gradients over multiple batches before performing an optimizer step. This is especially useful when GPU memory is limited.
## Changes
- Added `--gradient_accumulation_steps` argument parser in `run_image_classification_no_trainer.py`
- Default value: 1 (no accumulation, maintains backward compatibility)
- Type: int
- Includes help text explaining the feature
## Related Issue
Fixes #18436
---
**Submitted for Hacktoberfest 2025** 🎃
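The change described above can be sketched as follows. This is a hypothetical minimal sketch, not the PR's actual diff; the real no_trainer script wires accumulation through `accelerate`, which is omitted here.

```python
import argparse

# Hypothetical minimal sketch, not the PR's actual diff: how a
# --gradient_accumulation_steps flag is typically added to a no_trainer
# argparse script, plus the accumulation arithmetic it controls.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--gradient_accumulation_steps",
    type=int,
    default=1,  # default of 1 means no accumulation (backward compatible)
    help="Number of batches to accumulate gradients over before each optimizer step.",
)
args = parser.parse_args(["--gradient_accumulation_steps", "4"])

# Simulated loop: each micro-batch loss is scaled by 1/N so that the
# accumulated gradient matches a single large-batch step.
micro_batch_losses = [2.0, 4.0, 6.0, 8.0]
accumulated = 0.0
optimizer_steps = 0
for step, loss in enumerate(micro_batch_losses):
    accumulated += loss / args.gradient_accumulation_steps
    if (step + 1) % args.gradient_accumulation_steps == 0:
        optimizer_steps += 1  # stands in for optimizer.step(); optimizer.zero_grad()
print(optimizer_steps, accumulated)  # 1 5.0
```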
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41946/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41945
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41945/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41945/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41945/events
|
https://github.com/huggingface/transformers/issues/41945
| 3,568,862,150
|
I_kwDOCUB6oc7UuIPG
| 41,945
|
Consider not using emojis in `print`, which can encounter encoding errors.
|
{
"login": "acane77",
"id": 9192383,
"node_id": "MDQ6VXNlcjkxOTIzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9192383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acane77",
"html_url": "https://github.com/acane77",
"followers_url": "https://api.github.com/users/acane77/followers",
"following_url": "https://api.github.com/users/acane77/following{/other_user}",
"gists_url": "https://api.github.com/users/acane77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acane77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acane77/subscriptions",
"organizations_url": "https://api.github.com/users/acane77/orgs",
"repos_url": "https://api.github.com/users/acane77/repos",
"events_url": "https://api.github.com/users/acane77/events{/privacy}",
"received_events_url": "https://api.github.com/users/acane77/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-30T04:14:40
| 2025-10-30T04:14:40
| null |
NONE
| null | null | null | null |
### System Info
transformers version: latest commit
In file: https://github.com/huggingface/transformers/blob/main/src/transformers/utils/auto_docstring.py#L1121 (and any other files using emoji symbols)
The "🚨" symbol causes an encoding error when the system charset is not UTF-8 (especially on Windows, where UTF-8 support is disabled by default and the system charset is GBK, cp1252, etc.).
For compatibility, it would be better to avoid such non-ASCII, multi-byte emoji characters that depend on a UTF-8 charset.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
On a Windows system without the UTF-8 charset enabled, run code that reaches any print statement containing emoji characters, such as "🚨".
### Expected behavior
The program crashes with the following error message.
```
[6424] [8464] File "transformers\models\bert\modeling_bert.py", line 778, in <module>
[6424] [8464] File "transformers\utils\auto_docstring.py", line 2048, in auto_docstring
[6424] [8464] File "transformers\utils\auto_docstring.py", line 2045, in auto_docstring_decorator
[6424] [8464] File "transformers\utils\auto_docstring.py", line 1787, in auto_class_docstring
[6424] [8464] File "transformers\utils\auto_docstring.py", line 1728, in auto_method_docstring
[6424] [8464] File "transformers\utils\auto_docstring.py", line 1243, in _get_model_info
[6424] [8464] File "transformers\utils\auto_docstring.py", line 1124, in get_model_name
[6424] [8464] File "encodings\cp1252.py", line 19, in encode
[6424] [8464] UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f6a8' in position 0: character maps to <undefined>
```
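One possible mitigation for the traceback above (an assumption, not the library's actual fix) is to render text ASCII-safely instead of crashing:

```python
import sys

# A possible mitigation sketch (an assumption, not the library's actual
# fix): render text ASCII-safely when stdout's encoding cannot represent
# a character such as '\U0001f6a8' (🚨), instead of raising
# UnicodeEncodeError on cp1252/GBK consoles.
def safe_print(text: str) -> None:
    encoding = getattr(sys.stdout, "encoding", None) or "ascii"
    try:
        text.encode(encoding)
    except UnicodeEncodeError:
        # Substitute unencodable characters rather than crashing.
        text = text.encode(encoding, errors="replace").decode(encoding)
    print(text)

safe_print("\U0001f6a8 breaking change")  # does not raise, on any console encoding
```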
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41945/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41944
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41944/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41944/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41944/events
|
https://github.com/huggingface/transformers/issues/41944
| 3,568,698,595
|
I_kwDOCUB6oc7UtgTj
| 41,944
|
FA2 vs. SDPA leading to different performance on Qwen3
|
{
"login": "jiosephlee",
"id": 43046526,
"node_id": "MDQ6VXNlcjQzMDQ2NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/43046526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiosephlee",
"html_url": "https://github.com/jiosephlee",
"followers_url": "https://api.github.com/users/jiosephlee/followers",
"following_url": "https://api.github.com/users/jiosephlee/following{/other_user}",
"gists_url": "https://api.github.com/users/jiosephlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiosephlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiosephlee/subscriptions",
"organizations_url": "https://api.github.com/users/jiosephlee/orgs",
"repos_url": "https://api.github.com/users/jiosephlee/repos",
"events_url": "https://api.github.com/users/jiosephlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiosephlee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-30T02:35:01
| 2025-10-30T02:35:01
| null |
NONE
| null | null | null | null |
### System Info
Hi, this is using TRL, but it seems like a lower-level issue.
I'm training a variant of Qwen3 (Intern-S1-mini) without the vision tower, so it's effectively Qwen3-8B. I've been fine-tuning and comparing different attention implementations, i.e. SDPA vs. Flash Attention 2. However, I've been getting strange results where the downstream test accuracy differs (FA2 is worse). Furthermore, the issue seems to be accentuated by gradient accumulation. I'm not sure of the best way to share a reproduction, as my current code abstracts over the HF Trainer for my personal convenience.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Here are the current values of my config:
```
"context_length": 0,
"per_device_train_batch_size": 16,
"gradient_accumulation_steps": 2,
"optim": "paged_adamw_8bit",
"evaluation_strategy": "epoch",
"weight_decay": 0.1,
"gradient_checkpointing": true,
"use_liger_kernel": true,
"num_train_epochs": 1,
"learning_rate": 8e-05,
"lr_scheduler_type": "cosine",
"warmup_steps": 0,
"warmup_ratio": 0.1,
"report_to": "wandb",
"run_name": "finetune_Tox_internlm_Intern-S1-mini",
"logging_steps": 1,
"logging_strategy": "steps",
"save_strategy": "no",
"remove_unused_columns": false,
"seed": 42,
"completion_only_loss": false,
"dataset_text_field": "text",
"packing": false,
"padding_free": false,
"loss_type": "nll"
```
### Expected behavior
They should have equal test accuracy.
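A plausible source of such divergence (an assumption about the mechanism, not a confirmed diagnosis for this issue) is floating-point non-associativity: different attention kernels reduce the same sums in different orders, and in low precision the results differ slightly, which training and gradient accumulation can then amplify. A minimal fp16 illustration:

```python
import numpy as np

# Illustrative only (an assumption about the mechanism, not a confirmed
# diagnosis): SDPA and FlashAttention-2 reduce the same attention sums in
# different orders, and in low precision (fp16/bf16) addition is not
# associative, so their outputs can differ by small amounts that training
# (and gradient accumulation) then amplifies.
a = np.float16(10000.0)  # in fp16's [8192, 16384) bucket, spacing (ulp) is 8
b = np.float16(3.0)
c = np.float16(3.0)

left = (a + b) + c   # each +3 is rounded away: stays at 10000.0
right = a + (b + c)  # 3 + 3 = 6 first, and 10000 + 6 rounds up to 10008.0
print(left, right)   # 10000.0 10008.0
```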
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41944/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41943
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41943/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41943/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41943/events
|
https://github.com/huggingface/transformers/issues/41943
| 3,568,691,174
|
I_kwDOCUB6oc7Utefm
| 41,943
|
error: argument --include_num_input_tokens_seen/--include-num-input-tokens-seen: Truthy value expected: got non_padding but expected one of yes/no, true/false, t/f, y/n, 1/0 (case insensitive).
|
{
"login": "guofy-ai",
"id": 218227932,
"node_id": "U_kgDODQHk3A",
"avatar_url": "https://avatars.githubusercontent.com/u/218227932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guofy-ai",
"html_url": "https://github.com/guofy-ai",
"followers_url": "https://api.github.com/users/guofy-ai/followers",
"following_url": "https://api.github.com/users/guofy-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/guofy-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guofy-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guofy-ai/subscriptions",
"organizations_url": "https://api.github.com/users/guofy-ai/orgs",
"repos_url": "https://api.github.com/users/guofy-ai/repos",
"events_url": "https://api.github.com/users/guofy-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/guofy-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-30T02:31:09
| 2025-10-30T02:31:09
| null |
NONE
| null | null | null | null |
### System Info
Parsing the argument "include_num_input_tokens_seen" with HfArgumentParser fails, making the code unusable:
error: argument --include_num_input_tokens_seen/--include-num-input-tokens-seen: Truthy value expected: got non_padding but expected one of yes/no, true/false, t/f, y/n, 1/0 (case insensitive).
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
torchrun --save_steps=100 --cutoff_len=4096 --include_num_input_tokens_seen=non_padding
### Expected behavior
The argument `include_num_input_tokens_seen=non_padding` should be usable, but currently it cannot be parsed.
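The error comes from the truthy-string conversion that `HfArgumentParser` attaches to `bool`-typed fields; a simplified replica (modeled on `transformers.hf_argparser.string_to_bool`) shows why the string `non_padding` is rejected:

```python
import argparse

# Simplified replica (modeled on transformers.hf_argparser.string_to_bool)
# of the converter HfArgumentParser attaches to bool-typed fields; this is
# why the string "non_padding" is rejected with the error above.
def string_to_bool(v):
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if v.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise argparse.ArgumentTypeError(
        f"Truthy value expected: got {v} but expected one of yes/no, true/false, t/f, y/n, 1/0 (case insensitive)."
    )

print(string_to_bool("true"))  # True
# string_to_bool("non_padding") raises ArgumentTypeError: a bool field
# cannot accept arbitrary strings; a str-typed field would be needed.
```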
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41943/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41942
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41942/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41942/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41942/events
|
https://github.com/huggingface/transformers/pull/41942
| 3,568,061,322
|
PR_kwDOCUB6oc6we0yw
| 41,942
|
fix prepare_config_and_inputs_for_common bug in llava test
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T21:59:54
| 2025-10-29T22:08:41
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41942",
"html_url": "https://github.com/huggingface/transformers/pull/41942",
"diff_url": "https://github.com/huggingface/transformers/pull/41942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41942.patch",
"merged_at": null
}
|
@ydaigo, please help review, thanks very much.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41942/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41941
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41941/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41941/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41941/events
|
https://github.com/huggingface/transformers/pull/41941
| 3,567,778,419
|
PR_kwDOCUB6oc6wd2ZL
| 41,941
|
fix some ut failures on XPU w/ torch 2.9
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T20:35:39
| 2025-10-29T22:27:49
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41941",
"html_url": "https://github.com/huggingface/transformers/pull/41941",
"diff_url": "https://github.com/huggingface/transformers/pull/41941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41941.patch",
"merged_at": null
}
|
@ydshieh, please help review, thanks very much.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41941/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41940
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41940/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41940/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41940/events
|
https://github.com/huggingface/transformers/pull/41940
| 3,566,995,885
|
PR_kwDOCUB6oc6wbJS9
| 41,940
|
Fix typo in image_processing_lfm2_vl_fast
|
{
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T17:02:51
| 2025-10-29T17:12:27
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41940",
"html_url": "https://github.com/huggingface/transformers/pull/41940",
"diff_url": "https://github.com/huggingface/transformers/pull/41940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41940.patch",
"merged_at": null
}
|
# What does this PR do?
Fixes a small typo. It has no functional consequences, but is still confusing.
Fixes https://github.com/huggingface/transformers/issues/41919
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41940/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41939
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41939/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41939/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41939/events
|
https://github.com/huggingface/transformers/pull/41939
| 3,566,788,498
|
PR_kwDOCUB6oc6wachG
| 41,939
|
feat: add fallback to slow tokenizer when `use_fast=True` in AutoTokenizer fails at runtime
|
{
"login": "m-misiura",
"id": 82826099,
"node_id": "MDQ6VXNlcjgyODI2MDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/82826099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m-misiura",
"html_url": "https://github.com/m-misiura",
"followers_url": "https://api.github.com/users/m-misiura/followers",
"following_url": "https://api.github.com/users/m-misiura/following{/other_user}",
"gists_url": "https://api.github.com/users/m-misiura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m-misiura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m-misiura/subscriptions",
"organizations_url": "https://api.github.com/users/m-misiura/orgs",
"repos_url": "https://api.github.com/users/m-misiura/repos",
"events_url": "https://api.github.com/users/m-misiura/events{/privacy}",
"received_events_url": "https://api.github.com/users/m-misiura/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T16:05:52
| 2025-10-29T16:07:28
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41939",
"html_url": "https://github.com/huggingface/transformers/pull/41939",
"diff_url": "https://github.com/huggingface/transformers/pull/41939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41939.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds a graceful fallback to slow tokenizers when `AutoTokenizer.from_pretrained()` with `use_fast=True` fails at runtime, improving the robustness of tokenizer loading.
## Problem
Currently, when `use_fast=True` is specified but the fast tokenizer fails to load due to runtime errors (corrupted files, missing dependencies, file permissions, etc.), the exception propagates and crashes the application. This forces users to implement defensive try-catch wrappers in production code, e.g. see this [PR](https://github.com/trustyai-explainability/guardrails-detectors/pull/56)
## Solution
Wraps fast tokenizer instantiation in a try-except block that:
- attempts to load the fast tokenizer when requested
- falls back to the slow tokenizer if loading fails (with a warning)
- re-raises the exception if no slow tokenizer is available (prevents silent failures)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Library:
- tokenizers: @ArthurZucker and @itazap
- model loading (from pretrained, etc): @CyrilVallez
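The fallback logic described above can be sketched roughly as follows; `load_tokenizer`, `load_fast`, and `load_slow` are placeholder names standing in for instantiating the fast and slow tokenizer classes, not the PR's actual code:

```python
import logging

logger = logging.getLogger(__name__)

# Rough sketch of the fallback described above; load_fast/load_slow are
# placeholder callables standing in for instantiating the fast and slow
# tokenizer classes, not the PR's actual code.
def load_tokenizer(load_fast, load_slow):
    try:
        return load_fast()
    except Exception as err:
        if load_slow is None:
            raise  # no slow tokenizer available: re-raise, never fail silently
        logger.warning("Fast tokenizer failed (%s); falling back to slow.", err)
        return load_slow()

def failing_fast_loader():
    # Stand-in for a runtime failure (corrupted file, missing dependency, ...)
    raise OSError("corrupted tokenizer.json")

result = load_tokenizer(load_fast=failing_fast_loader, load_slow=lambda: "slow-tokenizer")
print(result)  # slow-tokenizer
```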
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41939/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41939/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41938
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41938/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41938/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41938/events
|
https://github.com/huggingface/transformers/pull/41938
| 3,566,412,300
|
PR_kwDOCUB6oc6wZMh7
| 41,938
|
Fixed wrong padding value in OWLv2
|
{
"login": "gjamesgoenawan",
"id": 67161633,
"node_id": "MDQ6VXNlcjY3MTYxNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/67161633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gjamesgoenawan",
"html_url": "https://github.com/gjamesgoenawan",
"followers_url": "https://api.github.com/users/gjamesgoenawan/followers",
"following_url": "https://api.github.com/users/gjamesgoenawan/following{/other_user}",
"gists_url": "https://api.github.com/users/gjamesgoenawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gjamesgoenawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gjamesgoenawan/subscriptions",
"organizations_url": "https://api.github.com/users/gjamesgoenawan/orgs",
"repos_url": "https://api.github.com/users/gjamesgoenawan/repos",
"events_url": "https://api.github.com/users/gjamesgoenawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/gjamesgoenawan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T14:54:23
| 2025-10-29T16:47:28
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41938",
"html_url": "https://github.com/huggingface/transformers/pull/41938",
"diff_url": "https://github.com/huggingface/transformers/pull/41938.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41938.patch",
"merged_at": null
}
|
# What does this PR do?
This PR proposes changing the default padding value from 0.5 to 0.0 in OWLv2. While OWLv1 originally used a padding value of 0.5 (gray) as described in its paper [1], OWLv2 adopts 0.0 instead [2], consistent with its official implementation [3]. Using the incorrect padding value (0.5) leads to degraded performance on the LVIS dataset.
| Implementation | LVIS mAP |
| - | - |
| Scenic | 43.9 |
| Transformers (0.5 padding) | 43.4 |
| Transformers (0.0 padding) | 44.0 |
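The padding step at issue can be illustrated with a toy sketch (assumed shapes and helper-free numpy, not the processor's actual code): a [0, 1]-scaled image is padded to a square, and the constant fill value (0.0 black vs 0.5 gray) is exactly what this PR changes.

```python
import numpy as np

# Toy sketch of the square-padding step at issue (assumed shapes, not the
# processor's actual code): a [0, 1]-scaled image is padded to a square,
# and the constant fill value (0.0 black vs 0.5 gray) is what this PR changes.
image = np.full((2, 4, 3), 0.8, dtype=np.float32)  # H=2, W=4 toy image
side = max(image.shape[:2])
padded = np.full((side, side, 3), 0.0, dtype=np.float32)  # 0.0 fill, per OWLv2
padded[: image.shape[0], : image.shape[1]] = image
print(padded.shape)  # (4, 4, 3)
```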
### Reproducing the results
Testing script:
The following script explicitly resizes and pads the image beforehand, so no padding is done in the processor.
```
import os
import re
import torch
import argparse
import warnings
import numpy as np
import torch.distributed as dist
from torch.utils.data import Dataset, DataLoader, DistributedSampler
from transformers import Owlv2Processor, Owlv2ForObjectDetection
from PIL import Image
from lvis import LVIS, LVISResults, LVISEval
from tqdm import tqdm
warnings.filterwarnings("ignore")
NOT_PROMPTABLE_MARKER = '#'
PROMPT_TEMPLATES = [
'itap of a {}.',
'a bad photo of the {}.',
'a origami {}.',
'a photo of the large {}.',
'a {} in a video game.',
'art of the {}.',
'a photo of the small {}.',
]
def _canonicalize_string(string: str) -> str:
string = string.lower()
string = re.sub(f'[^a-z0-9-{NOT_PROMPTABLE_MARKER} ]', ' ', string)
string = re.sub(r'\s+', ' ', string)
string = re.sub(r'-+', '-', string)
string = string.strip()
string = re.sub(f'([^^]){NOT_PROMPTABLE_MARKER}+', r'\1', string)
return string
class LVISDataset(Dataset):
def __init__(self, ann_file, img_dir, processor, pad_value):
self.lvis = LVIS(ann_file)
self.img_ids = sorted(self.lvis.imgs.keys())
self.img_dir = img_dir
self.processor = processor
self.img_size = self.processor.image_processor.size['height']
self.pad_value = pad_value
def __len__(self):
return len(self.img_ids)
def __getitem__(self, idx):
img_id = self.img_ids[idx]
img_info = self.lvis.imgs[img_id]
img_path = os.path.join(self.img_dir, os.path.basename(img_info['coco_url']))
# Load image
image = Image.open(img_path).convert("RGB")
image = np.array(image).astype(np.float32) / 255.0 # scale to [0,1]
# Determine square size
max_side = max(image.shape[1], image.shape[0])
# Create padded square with floating-point pad value
pad_value = np.array(self.pad_value, dtype=np.float32) # e.g., [0.5,0.5,0.5]
padded_image = np.ones((max_side, max_side, 3), dtype=np.float32) * pad_value
# Paste original image at top-left
padded_image[:image.shape[0], :image.shape[1], :] = image
# Convert back to PIL for resizing
padded_image = Image.fromarray((padded_image * 255).astype(np.uint8))
# Resize to target size
resized_image = padded_image.resize((self.img_size, self.img_size), Image.Resampling.BILINEAR)
# Process image
pixel_values = self.processor.image_processor(
images=resized_image,
return_tensors="pt"
)['pixel_values']
return img_id, image, img_info['width'], img_info['height'], pixel_values
def collate_fn(batch):
img_ids, images, widths, heights, pixel_values = zip(*batch)
return list(img_ids), list(images), list(widths), list(heights), torch.cat(list(pixel_values), axis=0)
def main():
parser = argparse.ArgumentParser(description="Evaluate OWLv2 on LVIS dataset")
parser.add_argument("--dataset_dir", default="/path/to/lvis/dataset")
parser.add_argument("--pad_value", type=float, default=0.5)
parser.add_argument("--local_rank", default=int(os.getenv('LOCAL_RANK', -1)), type=int)
parser.add_argument("--topk", type=int, default=300)
parser.add_argument("--num_workers", type=int, default=4)
args = parser.parse_args()
torch.cuda.set_device(args.local_rank)
dist.init_process_group(
backend="nccl",
init_method="env://",
world_size=int(os.getenv("WORLD_SIZE", 1)),
rank=int(os.getenv("RANK", 0)),
device_id=torch.device(f'cuda:{args.local_rank}'),
)
rank = dist.get_rank()
world_size = dist.get_world_size()
print(f'Using Pad Value : {args.pad_value}')
device = torch.device(f"cuda:{args.local_rank}" if args.local_rank >= 0 else "cuda")
if rank == 0:
print(f"Running evaluation on {world_size} GPUs, device={device}")
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble", use_fast=True)
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble").to(device).eval()
ann_file = os.path.join(args.dataset_dir, "lvis_v1_val.json")
img_dir = os.path.join(args.dataset_dir, "val2017")
dataset = LVISDataset(ann_file, img_dir, processor=processor, pad_value=args.pad_value)
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=False)
dataloader = DataLoader(
dataset,
batch_size=1,
sampler=sampler,
collate_fn=collate_fn,
num_workers=args.num_workers,
pin_memory=True,
persistent_workers=(args.num_workers > 0)
)
lvis_gt = dataset.lvis
cats = sorted(lvis_gt.cats.items(), key=lambda x: x[0])
class_names = [cat['name'] for _, cat in cats]
texts_ens = []
for template in PROMPT_TEMPLATES:
texts_ens += [_canonicalize_string(template.format(name)) for name in class_names]
with torch.no_grad():
text_inputs = processor.tokenizer(
texts_ens, padding=True, truncation=True, max_length=16, return_tensors="pt"
).to(device)
text_outputs = model.owlv2.text_model(**text_inputs)
text_embeds = model.owlv2.text_projection(text_outputs[1])
text_embeds = text_embeds / torch.linalg.norm(text_embeds, ord=2, dim=-1, keepdim=True)
input_ids = text_inputs['input_ids'].reshape(1, -1, text_inputs['input_ids'].shape[-1])
query_mask = input_ids[..., 0] > 0
print(f'RANK {rank}, Ready!')
dist.barrier()
raw_predictions = []
progress_bar = tqdm(dataloader, desc="Evaluating") if rank == 0 else dataloader
for n, batch in enumerate(progress_bar):
img_ids, images, widths, heights, pixel_values = batch
with torch.no_grad():
num_patches_height = model.num_patches_height
num_patches_width = model.num_patches_width
vision_outputs = model.owlv2.vision_model(pixel_values=pixel_values.to(device))
last_hidden_state = vision_outputs[0]
image_embeds = model.owlv2.vision_model.post_layernorm(last_hidden_state)
class_token_out = torch.broadcast_to(image_embeds[:, :1, :], image_embeds[:, :-1].shape)
image_embeds = image_embeds[:, 1:, :] * class_token_out
image_embeds = model.layer_norm(image_embeds)
image_embeds = image_embeds.reshape(
image_embeds.shape[0], num_patches_height, num_patches_width, image_embeds.shape[-1]
)
image_feats = image_embeds.view(image_embeds.shape[0], -1, image_embeds.shape[-1])
(pred_logits, _) = model.class_predictor(image_feats, text_embeds, query_mask)
pred_boxes = model.box_predictor(image_feats, image_embeds, False)
num_templates = len(PROMPT_TEMPLATES)
num_classes = len(class_names)
scores = pred_logits.reshape(1, -1, num_templates, num_classes).mean(2)
bsz, num_patches, num_classes = scores.shape
k = min(args.topk, num_patches * num_classes)
scores_flat = scores.view(bsz, -1)
topk_scores, topk_inds = torch.topk(scores_flat, k, dim=1)
patch_inds = topk_inds // num_classes
label_inds = topk_inds % num_classes
batch_idx = torch.arange(bsz, device=pred_boxes.device).unsqueeze(-1)
selected_boxes = pred_boxes[batch_idx, patch_inds]
raw_predictions.append([
img_ids, widths, heights,
topk_scores.cpu(), label_inds.cpu(), selected_boxes.cpu()
])
torch.cuda.synchronize()
predictions = []
for img_ids, widths, heights, topk_scores_cpu, label_inds_cpu, selected_boxes_cpu in raw_predictions:
image_id = img_ids[0]
w, h = float(widths[0]), float(heights[0])
scale = max(w, h)
scores_np = topk_scores_cpu[0].numpy()
labels_np = label_inds_cpu[0].numpy()
boxes_np = selected_boxes_cpu[0].numpy()
cx, cy, bw, bh = boxes_np[:, 0], boxes_np[:, 1], boxes_np[:, 2], boxes_np[:, 3]
x, y = (cx - bw / 2) * scale, (cy - bh / 2) * scale
width, height = bw * scale, bh * scale
preds_img = [
{
"image_id": image_id,
"category_id": cats[label][0],
"bbox": [float(x[i]), float(y[i]), float(width[i]), float(height[i])],
"score": float(scores_np[i]),
}
for i, label in enumerate(labels_np)
]
predictions.extend(preds_img)
print(f'RANK {rank}, Done!')
all_predictions = [None] * world_size
dist.all_gather_object(all_predictions, predictions)
if rank == 0:
full_predictions = [p for sublist in all_predictions for p in sublist]
lvis_dt = LVISResults(lvis_gt, full_predictions)
lvis_eval = LVISEval(lvis_gt, lvis_dt, iou_type='bbox')
lvis_eval.evaluate()
lvis_eval.accumulate()
lvis_eval.summarize()
lvis_eval.print_results()
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
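The pad-to-square step at the heart of the script (pad to `max(H, W)` with a constant value in `[0, 1]` space, pasting the original at the top-left) can be sketched in isolation. This is a minimal illustration on a tiny nested-list "image"; `pad_to_square` is a hypothetical helper, not part of the script, which does the same thing with NumPy arrays before resizing.

```python
# Minimal sketch of the pad-to-square step used in the script above, on a
# tiny "image" stored as nested lists of floats in [0, 1]. The real script
# does the same with a NumPy array, then resizes to the processor's size.

def pad_to_square(image, pad_value=0.5):
    """Pad an H x W x 3 image (nested lists) to max(H, W) x max(H, W),
    pasting the original at the top-left corner."""
    h, w = len(image), len(image[0])
    side = max(h, w)
    padded = [[[pad_value] * 3 for _ in range(side)] for _ in range(side)]
    for y in range(h):
        for x in range(w):
            padded[y][x] = list(image[y][x])
    return padded

# A 1x2 image: one black pixel, one white pixel.
img = [[[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]]
out = pad_to_square(img, pad_value=0.5)
print(len(out), len(out[0]))   # 2 2 -> padded to a 2x2 square
print(out[1][0])               # [0.5, 0.5, 0.5] -> padding uses the pad value
```

With `--pad_value 0.5` the padding is mid-gray in normalized space; with `--pad_value 0.0` it is black, which is the variant the updated original implementation uses.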
Commands:
```bash
# 0.5 padding:
torchrun --nproc-per-node=NUM_GPUS myscript.py --pad_value 0.5 --dataset_dir /path/to/lvis/
# 0.0 padding:
torchrun --nproc-per-node=NUM_GPUS myscript.py --pad_value 0.0 --dataset_dir /path/to/lvis/
```
Please prepare the LVIS dataset beforehand with the following structure:
```
/path/to/lvis/
├── val2017
│ ├── 000000062833.jpg
│ └── ...
└── lvis_v1_val.json
```
After running the script, logs like the following should be printed:
#### 0.5 padding
```
Using Pad Value : 0.5
Running evaluation on 1 GPUs, device=cuda:0
RANK 0, Ready!
RANK 0, Done!
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds=all] = 0.434
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=300 catIds=all] = 0.600
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=300 catIds=all] = 0.473
Average Precision (AP) @[ IoU=0.50:0.95 | area= s | maxDets=300 catIds=all] = 0.330
Average Precision (AP) @[ IoU=0.50:0.95 | area= m | maxDets=300 catIds=all] = 0.533
Average Precision (AP) @[ IoU=0.50:0.95 | area= l | maxDets=300 catIds=all] = 0.652
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= r] = 0.403
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= c] = 0.430
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= f] = 0.451
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds=all] = 0.563
Average Recall (AR) @[ IoU=0.50:0.95 | area= s | maxDets=300 catIds=all] = 0.406
Average Recall (AR) @[ IoU=0.50:0.95 | area= m | maxDets=300 catIds=all] = 0.672
Average Recall (AR) @[ IoU=0.50:0.95 | area= l | maxDets=300 catIds=all] = 0.805
```
#### 0.0 padding
```
Using Pad Value : 0.0
Running evaluation on 1 GPUs, device=cuda:0
RANK 0, Ready!
RANK 0, Done!
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds=all] = 0.440
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=300 catIds=all] = 0.602
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=300 catIds=all] = 0.482
Average Precision (AP) @[ IoU=0.50:0.95 | area= s | maxDets=300 catIds=all] = 0.333
Average Precision (AP) @[ IoU=0.50:0.95 | area= m | maxDets=300 catIds=all] = 0.540
Average Precision (AP) @[ IoU=0.50:0.95 | area= l | maxDets=300 catIds=all] = 0.664
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= r] = 0.406
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= c] = 0.438
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds= f] = 0.458
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 catIds=all] = 0.570
Average Recall (AR) @[ IoU=0.50:0.95 | area= s | maxDets=300 catIds=all] = 0.411
Average Recall (AR) @[ IoU=0.50:0.95 | area= m | maxDets=300 catIds=all] = 0.678
Average Recall (AR) @[ IoU=0.50:0.95 | area= l | maxDets=300 catIds=all] = 0.815
```
Reference:
[1] [OWLv1](https://arxiv.org/pdf/2205.06230) (Figure A4.)
[2] [OWLv2](https://arxiv.org/pdf/2306.09683) (Figure A3),
[3] OWLv2 [original implementation](https://github.com/google-research/scenic/blob/096e6a52b4cbbf30936c168c5d3d42d80e001988/scenic/projects/owl_vit/evaluator.py#L172C7-L172C58), which is changed with [this PR](https://github.com/google-research/scenic/commit/17cc144993f855a66b7301e35e329962da13b060#diff-9e13daafe2df21216a7227dffb5b2c71bda7eb27c0de64df40a681e3ff0d44bfR158) (scenic/projects/owl_vit/evaluator.py, line 158).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41938/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41937
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41937/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41937/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41937/events
|
https://github.com/huggingface/transformers/pull/41937
| 3,566,389,893
|
PR_kwDOCUB6oc6wZHuD
| 41,937
|
Refactor: Replace _default_log_level with DEFAULT_LOG_LEVEL constant
|
{
"login": "Pranavi125",
"id": 187347675,
"node_id": "U_kgDOCyqy2w",
"avatar_url": "https://avatars.githubusercontent.com/u/187347675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pranavi125",
"html_url": "https://github.com/Pranavi125",
"followers_url": "https://api.github.com/users/Pranavi125/followers",
"following_url": "https://api.github.com/users/Pranavi125/following{/other_user}",
"gists_url": "https://api.github.com/users/Pranavi125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pranavi125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pranavi125/subscriptions",
"organizations_url": "https://api.github.com/users/Pranavi125/orgs",
"repos_url": "https://api.github.com/users/Pranavi125/repos",
"events_url": "https://api.github.com/users/Pranavi125/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pranavi125/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T14:49:43
| 2025-10-29T14:55:45
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41937",
"html_url": "https://github.com/huggingface/transformers/pull/41937",
"diff_url": "https://github.com/huggingface/transformers/pull/41937.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41937.patch",
"merged_at": null
}
|
This PR refactors the logging module by renaming `_default_log_level` to `DEFAULT_LOG_LEVEL`
to align with constant naming conventions and improve readability.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41937/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41936
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41936/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41936/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41936/events
|
https://github.com/huggingface/transformers/pull/41936
| 3,566,359,056
|
PR_kwDOCUB6oc6wZBCm
| 41,936
|
Fix: add missing SAFE_WEIGHTS_INDEX_NAME to __all__ in constants.py
|
{
"login": "Pranavi125",
"id": 187347675,
"node_id": "U_kgDOCyqy2w",
"avatar_url": "https://avatars.githubusercontent.com/u/187347675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pranavi125",
"html_url": "https://github.com/Pranavi125",
"followers_url": "https://api.github.com/users/Pranavi125/followers",
"following_url": "https://api.github.com/users/Pranavi125/following{/other_user}",
"gists_url": "https://api.github.com/users/Pranavi125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pranavi125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pranavi125/subscriptions",
"organizations_url": "https://api.github.com/users/Pranavi125/orgs",
"repos_url": "https://api.github.com/users/Pranavi125/repos",
"events_url": "https://api.github.com/users/Pranavi125/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pranavi125/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T14:43:20
| 2025-10-29T14:54:38
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41936",
"html_url": "https://github.com/huggingface/transformers/pull/41936",
"diff_url": "https://github.com/huggingface/transformers/pull/41936.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41936.patch",
"merged_at": null
}
|
This PR adds the missing `SAFE_WEIGHTS_INDEX_NAME` constant to the `__all__` list in `constants.py`.
Why:
Without this, the constant isn't exported when using
`from transformers.utils.constants import *`.
Impact:
- Keeps the constants module consistent.
- No functional changes or breaking impact.
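The effect of a missing `__all__` entry can be demonstrated with a stand-in module. The module name and constant values below are hypothetical placeholders for `transformers.utils.constants`; only the `__all__` mechanics are the point.

```python
# Minimal demonstration of why a name must be listed in __all__ to survive a
# star import. "demo_constants" is a hypothetical stand-in module.
import sys
import types

mod = types.ModuleType("demo_constants")
mod.WEIGHTS_NAME = "pytorch_model.bin"
mod.SAFE_WEIGHTS_INDEX_NAME = "model.safetensors.index.json"
mod.__all__ = ["WEIGHTS_NAME"]  # SAFE_WEIGHTS_INDEX_NAME missing, as in the bug
sys.modules["demo_constants"] = mod

ns = {}
exec("from demo_constants import *", ns)
print("WEIGHTS_NAME" in ns)              # True
print("SAFE_WEIGHTS_INDEX_NAME" in ns)   # False -> the gap this PR fixes

mod.__all__.append("SAFE_WEIGHTS_INDEX_NAME")  # the fix
ns2 = {}
exec("from demo_constants import *", ns2)
print("SAFE_WEIGHTS_INDEX_NAME" in ns2)  # True
```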
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41936/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41935
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41935/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41935/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41935/events
|
https://github.com/huggingface/transformers/issues/41935
| 3,566,241,264
|
I_kwDOCUB6oc7UkIXw
| 41,935
|
Missing `config.json` and `preprocessor_config.json` in `kyutai/moshiko-pytorch-bf16 model` repo
|
{
"login": "akshatvishu",
"id": 33392262,
"node_id": "MDQ6VXNlcjMzMzkyMjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/33392262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshatvishu",
"html_url": "https://github.com/akshatvishu",
"followers_url": "https://api.github.com/users/akshatvishu/followers",
"following_url": "https://api.github.com/users/akshatvishu/following{/other_user}",
"gists_url": "https://api.github.com/users/akshatvishu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshatvishu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshatvishu/subscriptions",
"organizations_url": "https://api.github.com/users/akshatvishu/orgs",
"repos_url": "https://api.github.com/users/akshatvishu/repos",
"events_url": "https://api.github.com/users/akshatvishu/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshatvishu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-29T14:20:23
| 2025-10-29T18:12:54
| null |
NONE
| null | null | null | null |
### System Info
transformers version: 4.57.1
python version: 3.11
### Who can help?
@Cyrilvallez @eustlb
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm opening this issue to request that `config.json` and `preprocessor_config.json` be added to the [kyutai/moshiko-pytorch-bf16](<https://huggingface.co/kyutai/moshiko-pytorch-bf16/tree/main>) model repository.
**Problem:**
Currently, `AutoFeatureExtractor.from_pretrained("kyutai/moshiko-pytorch-bf16")` taken from model doc page at [huggingface.co/docs/transformers/en/model_doc/moshi](<https://huggingface.co/docs/transformers/en/model_doc/moshi>) under the heading `1. Model generation` fails with an `OSError` because `preprocessor_config.json` is missing. This is inconsistent with other repos in the collection, like [kyutai/moshiko-pytorch-q8](<https://huggingface.co/kyutai/moshiko-pytorch-q8/tree/main>) and [kmhf/hf-moshiko](<https://huggingface.co/kmhf/hf-moshiko/tree/main>), which do contain these necessary configuration files.
```python
from datasets import load_dataset, Audio
import torch, math
from transformers import MoshiForConditionalGeneration, AutoFeatureExtractor, AutoTokenizer
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
feature_extractor = AutoFeatureExtractor.from_pretrained("kyutai/moshiko-pytorch-bf16")
tokenizer = AutoTokenizer.from_pretrained("kyutai/moshiko-pytorch-bf16")
device = "cuda"
dtype = torch.bfloat16
# prepare user input audio
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]
user_input_values = feature_extractor(raw_audio=audio_sample, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt").to(device=device, dtype=dtype)
# prepare moshi input values - we suppose moshi didn't say anything while the user spoke
moshi_input_values = torch.zeros_like(user_input_values.input_values)
# prepare moshi input ids - we suppose moshi didn't say anything while the user spoke
# (note: `model` and `waveform_to_token_ratio` are defined earlier in the full doc
# example; the failure reported below occurs before this point, at the
# AutoFeatureExtractor call above)
num_tokens = math.ceil(moshi_input_values.shape[-1] * waveform_to_token_ratio)
input_ids = torch.ones((1, num_tokens), device=device, dtype=torch.int64) * tokenizer.encode("<pad>")[0]
# generate 25 new tokens (around 2s of audio)
output = model.generate(input_ids=input_ids, user_input_values=user_input_values.input_values, moshi_input_values=moshi_input_values, max_new_tokens=25)
text_tokens = output.sequences
audio_waveforms = output.audio_sequences
```
error:
```
OSError: kyutai/moshiko-pytorch-bf16 does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/kyutai/moshiko-pytorch-bf16/tree/main' for available files.
```
**Confirmation from Source Repository:**
This has been confirmed by the model's authors as an issue for the Transformers port to handle (see: https://github.com/kyutai-labs/moshi/issues/234 )
### Expected behavior
**Proposed Solution:**
Adding the missing configuration files will resolve this. The content can be derived from the existing `q8` variant.
**Proposed `preprocessor_config.json`:**
(Copied from [kmhf/hf-moshiko](<https://huggingface.co/kmhf/hf-moshiko/tree/main>))
```json
{
"feature_extractor_type": "EncodecFeatureExtractor",
"sampling_rate": 24000,
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": true,
"chunk_length_s": null,
"overlap": null
}
```
**Proposed `config.json` :**
(Based on [kyutai/moshiko-pytorch-q8](<https://huggingface.co/kyutai/moshiko-pytorch-q8/blob/main/config.json>) and [kyutai/moshiko-pytorch-bf16/](<https://huggingface.co/kyutai/moshiko-pytorch-bf16/tree/main>)
```json
{
"moshi_name": "model.safetensors",
"mimi_name": "tokenizer-e351c8d8-checkpoint125.safetensors",
"tokenizer_name": "tokenizer_spm_32k_3.model",
"quantize": false,
"dim": 4096,
"text_card": 32000,
"existing_text_padding_id": 3,
"n_q": 16,
"dep_q": 8,
"card": 2048,
"num_heads": 32,
"num_layers": 32,
"hidden_scale": 4.125,
"causal": true,
"layer_scale": null,
"context": 3000,
"max_period": 10000,
"gating": "silu",
"norm": "rms_norm_f32",
"positional_embedding": "rope",
"depformer_dim": 1024,
"depformer_dim_feedforward": 4224,
"depformer_num_heads": 16,
"depformer_num_layers": 6,
"depformer_causal": true,
"depformer_layer_scale": null,
"depformer_multi_linear": true,
"depformer_context": 8,
"depformer_max_period": 10000,
"depformer_gating": "silu",
"depformer_pos_emb": "none",
"depformer_weights_per_step": true,
"delays": [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
}
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41935/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41934
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41934/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41934/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41934/events
|
https://github.com/huggingface/transformers/pull/41934
| 3,566,116,512
|
PR_kwDOCUB6oc6wYLfy
| 41,934
|
Fix: Gemma3TextConfig rope scaling assignments
|
{
"login": "RyanMullins",
"id": 868555,
"node_id": "MDQ6VXNlcjg2ODU1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/868555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMullins",
"html_url": "https://github.com/RyanMullins",
"followers_url": "https://api.github.com/users/RyanMullins/followers",
"following_url": "https://api.github.com/users/RyanMullins/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanMullins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanMullins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanMullins/subscriptions",
"organizations_url": "https://api.github.com/users/RyanMullins/orgs",
"repos_url": "https://api.github.com/users/RyanMullins/repos",
"events_url": "https://api.github.com/users/RyanMullins/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanMullins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T13:54:01
| 2025-10-29T13:56:56
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41934",
"html_url": "https://github.com/huggingface/transformers/pull/41934",
"diff_url": "https://github.com/huggingface/transformers/pull/41934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41934.patch",
"merged_at": null
}
|
# What does this PR do?
Related to https://github.com/huggingface/transformers/pull/41922, this PR corrects the assignment of the `rope_scaling` dictionary present on some Gemma 3 pre-trained models on HF Hub when normalizing to the new `rope_parameters` value.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@zucchini-nlp PTAL since you have been handling the RoPE changes.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41934/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41933
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41933/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41933/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41933/events
|
https://github.com/huggingface/transformers/pull/41933
| 3,566,094,335
|
PR_kwDOCUB6oc6wYGnB
| 41,933
|
Fix: Skip weight initialization for quantized int8 models
|
{
"login": "Pranavi125",
"id": 187347675,
"node_id": "U_kgDOCyqy2w",
"avatar_url": "https://avatars.githubusercontent.com/u/187347675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pranavi125",
"html_url": "https://github.com/Pranavi125",
"followers_url": "https://api.github.com/users/Pranavi125/followers",
"following_url": "https://api.github.com/users/Pranavi125/following{/other_user}",
"gists_url": "https://api.github.com/users/Pranavi125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pranavi125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pranavi125/subscriptions",
"organizations_url": "https://api.github.com/users/Pranavi125/orgs",
"repos_url": "https://api.github.com/users/Pranavi125/repos",
"events_url": "https://api.github.com/users/Pranavi125/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pranavi125/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T13:49:42
| 2025-10-29T13:49:42
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41933",
"html_url": "https://github.com/huggingface/transformers/pull/41933",
"diff_url": "https://github.com/huggingface/transformers/pull/41933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41933.patch",
"merged_at": null
}
|
# What does this PR do?
This PR fixes an issue where quantized models (e.g., `RedHatAI/Qwen2.5-VL-7B-Instruct-quantized.w8a8`) fail to load due to a dtype incompatibility during weight initialization.
## Problem
When loading quantized models (`dtype=torch.int8`), the method `_load_pretrained_model()` still calls `initialize_weights()`.
Since PyTorch's `normal_()` operation is unsupported for integer tensors, this leads to:
```
RuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8
```
## Fix
Added a condition to skip weight initialization when the model is quantized:
```python
if not is_quantized:
    self.initialize_weights()
```
This ensures that quantized models bypass floating-point initialization safely.
## Impact
- ✅ Prevents reinitialization of quantized weights
- ✅ Allows quantized models to load successfully using llmcompressor or compressed-tensors
- ✅ No change or performance impact for standard (float/bfloat) models
## Checklist
- Fixes dtype initialization crash for quantized models
- Tested locally with Qwen2.5-VL-7B-Instruct-quantized.w8a8
- Maintains full compatibility with non-quantized models
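The guard logic can be sketched with torch-free stand-ins. `FakeTensor` and `load_model` below are hypothetical names used only to illustrate the control flow; in PyTorch itself, `normal_()` on an integer tensor raises the `RuntimeError` quoted above.

```python
# Sketch of the guard with stand-in objects (no torch dependency). In real
# PyTorch, normal_() on an int tensor raises; the fix skips initialization
# entirely when the checkpoint is quantized.

class FakeTensor:
    def __init__(self, dtype):
        self.dtype = dtype
        self.initialized = False

    def normal_(self):
        if self.dtype.startswith("int"):
            # mirrors PyTorch's error for integer dtypes
            raise RuntimeError(
                f"expected a floating-point or complex dtype, but got dtype={self.dtype}"
            )
        self.initialized = True

def load_model(weight, is_quantized):
    if not is_quantized:   # the guard added by this PR
        weight.normal_()
    return weight

w8 = load_model(FakeTensor("int8"), is_quantized=True)
print(w8.initialized)  # False -> quantized weights left untouched, no crash
```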
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41933/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41932
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41932/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41932/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41932/events
|
https://github.com/huggingface/transformers/pull/41932
| 3,565,840,312
|
PR_kwDOCUB6oc6wXPeU
| 41,932
|
Fix: Handle missing safetensors gracefully to prevent import errors
|
{
"login": "Pranavi125",
"id": 187347675,
"node_id": "U_kgDOCyqy2w",
"avatar_url": "https://avatars.githubusercontent.com/u/187347675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pranavi125",
"html_url": "https://github.com/Pranavi125",
"followers_url": "https://api.github.com/users/Pranavi125/followers",
"following_url": "https://api.github.com/users/Pranavi125/following{/other_user}",
"gists_url": "https://api.github.com/users/Pranavi125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pranavi125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pranavi125/subscriptions",
"organizations_url": "https://api.github.com/users/Pranavi125/orgs",
"repos_url": "https://api.github.com/users/Pranavi125/repos",
"events_url": "https://api.github.com/users/Pranavi125/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pranavi125/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T12:52:40
| 2025-10-29T13:31:53
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41932",
"html_url": "https://github.com/huggingface/transformers/pull/41932",
"diff_url": "https://github.com/huggingface/transformers/pull/41932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41932.patch",
"merged_at": null
}
|
This PR adds a safeguard for environments where `safetensors` is not installed.
It prevents import errors during dependency checks and allows transformers to load normally.
Changes made:
- Updated `setup.py` to conditionally check for safetensors
- Improved dependency handling logic
Tested locally: verified that transformers imports correctly with and without safetensors.
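A guard of this kind can be sketched roughly as follows (the helper name `is_safetensors_available` is illustrative, not necessarily the code in this PR):

```python
import importlib.util


def is_safetensors_available() -> bool:
    # Probe for the package without importing it, so a missing install
    # does not raise at module import time.
    return importlib.util.find_spec("safetensors") is not None


if is_safetensors_available():
    import safetensors  # noqa: F401  # safe: the spec was found above
else:
    safetensors = None  # callers must check for None before use
```

Code that needs safetensors can then check the flag (or the module being `None`) and fail with a clear message instead of an opaque `ModuleNotFoundError`.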
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41932/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41931
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41931/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41931/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41931/events
|
https://github.com/huggingface/transformers/pull/41931
| 3,565,179,734
|
PR_kwDOCUB6oc6wVCVh
| 41,931
|
fix 3 failed test cases for video_llama_3 model on Intel XPU
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T09:55:19
| 2025-10-30T01:38:52
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41931",
"html_url": "https://github.com/huggingface/transformers/pull/41931",
"diff_url": "https://github.com/huggingface/transformers/pull/41931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41931.patch",
"merged_at": null
}
|
This PR fixes 3 failed test cases on Intel XPU:
```
1. tests/models/video_llama_3/test_modeling_video_llama_3.py::VideoLlama3IntegrationTest::test_small_model_integration_test
2. tests/models/video_llama_3/test_modeling_video_llama_3.py::VideoLlama3IntegrationTest::test_small_model_integration_test_batch_wo_image
3. tests/models/video_llama_3/test_modeling_video_llama_3.py::VideoLlama3ModelTest::test_generate_with_quant_cache
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41931/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41930
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41930/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41930/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41930/events
|
https://github.com/huggingface/transformers/pull/41930
| 3,565,167,412
|
PR_kwDOCUB6oc6wU_v7
| 41,930
|
handle inputs from Siglip/Siglip2 non-automapped encoder layers
|
{
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T09:51:32
| 2025-10-30T07:52:07
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41930",
"html_url": "https://github.com/huggingface/transformers/pull/41930",
"diff_url": "https://github.com/huggingface/transformers/pull/41930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41930.patch",
"merged_at": null
}
|
# What does this PR do?
Should fix #41929. The `check_model_inputs` / `can_record_outputs` interaction is not always trivial, and models with several entrypoints (such as `VisionModel` vs `VisionTransformer`) are missing some of it; this PR adds it here. It also adds a modification in `generic` to make sure the flag is captured, though I'm not 100% sure that part is needed.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41930/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41929
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41929/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41929/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41929/events
|
https://github.com/huggingface/transformers/issues/41929
| 3,564,860,708
|
I_kwDOCUB6oc7Ue3Uk
| 41,929
|
ViT model's `output_attentions` does not work
|
{
"login": "naturesh",
"id": 150237898,
"node_id": "U_kgDOCPRyyg",
"avatar_url": "https://avatars.githubusercontent.com/u/150237898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naturesh",
"html_url": "https://github.com/naturesh",
"followers_url": "https://api.github.com/users/naturesh/followers",
"following_url": "https://api.github.com/users/naturesh/following{/other_user}",
"gists_url": "https://api.github.com/users/naturesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naturesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naturesh/subscriptions",
"organizations_url": "https://api.github.com/users/naturesh/orgs",
"repos_url": "https://api.github.com/users/naturesh/repos",
"events_url": "https://api.github.com/users/naturesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/naturesh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-29T08:21:16
| 2025-10-29T09:52:11
| null |
NONE
| null | null | null | null |
### System Info
macos 26.0
python 3.10
pytorch 2.7.1
transformers 4.57.1
### Who can help?
@yonigozlan @molbap
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModel, AutoProcessor
from transformers.image_utils import load_image
# load the model and processor
ckpt = "google/siglip2-so400m-patch16-naflex"
model = AutoModel.from_pretrained(ckpt).eval()
processor = AutoProcessor.from_pretrained(ckpt)
# load the image
image = load_image("https://huggingface.co/datasets/merve/coco/resolve/main/val2017/000000000285.jpg")
inputs = processor(images=[image], return_tensors="pt").to(model.device)
# run inference
with torch.no_grad():
image_embeddings = model.vision_model(
pixel_values = inputs['pixel_values'],
attention_mask = inputs['pixel_attention_mask'],
spatial_shapes = inputs['spatial_shapes'],
output_attentions = True,
output_hidden_states = True
)
print(image_embeddings)
```
### Expected behavior
I'm trying to get `attentions` and `hidden_states` from google/siglip2-so400m-patch16-naflex using the `model.vision_model.forward()` method.
Whether I pass `output_attentions=True` to `from_pretrained()`, set it in the config, or pass it as a parameter to `.forward()`, the model always returns `attentions` and `hidden_states` as `None`.
Changing `attn_implementation` to `eager` does not solve the problem.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41929/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41928
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41928/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41928/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41928/events
|
https://github.com/huggingface/transformers/pull/41928
| 3,564,445,852
|
PR_kwDOCUB6oc6wSnXu
| 41,928
|
fix: add clear error message when mistral-common is missing for AutoTokenizer loading Voxtral
|
{
"login": "junjunjd",
"id": 55823903,
"node_id": "MDQ6VXNlcjU1ODIzOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/55823903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junjunjd",
"html_url": "https://github.com/junjunjd",
"followers_url": "https://api.github.com/users/junjunjd/followers",
"following_url": "https://api.github.com/users/junjunjd/following{/other_user}",
"gists_url": "https://api.github.com/users/junjunjd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junjunjd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junjunjd/subscriptions",
"organizations_url": "https://api.github.com/users/junjunjd/orgs",
"repos_url": "https://api.github.com/users/junjunjd/repos",
"events_url": "https://api.github.com/users/junjunjd/events{/privacy}",
"received_events_url": "https://api.github.com/users/junjunjd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-29T05:36:55
| 2025-10-29T20:28:44
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41928",
"html_url": "https://github.com/huggingface/transformers/pull/41928",
"diff_url": "https://github.com/huggingface/transformers/pull/41928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41928.patch",
"merged_at": null
}
|
- Add clear error message when mistral-common is missing for AutoTokenizer loading Voxtral
- Add unit test
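A minimal sketch of the kind of guard this adds (the helper name `require_mistral_common` is illustrative; see the diff for the actual implementation):

```python
import importlib.util


def require_mistral_common() -> None:
    # Raise a clear, actionable error instead of an opaque import failure
    # when the optional dependency is missing.
    if importlib.util.find_spec("mistral_common") is None:
        raise ImportError(
            "Loading this tokenizer requires the `mistral-common` package. "
            "Install it with `pip install mistral-common` and retry."
        )
```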
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41928/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41927
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41927/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41927/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41927/events
|
https://github.com/huggingface/transformers/issues/41927
| 3,564,435,640
|
I_kwDOCUB6oc7UdPi4
| 41,927
|
Nightly / Nvidia CI workflows trigger on forks and fail due to missing org-specific runners
|
{
"login": "AvinashDwivedi",
"id": 86379589,
"node_id": "MDQ6VXNlcjg2Mzc5NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/86379589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AvinashDwivedi",
"html_url": "https://github.com/AvinashDwivedi",
"followers_url": "https://api.github.com/users/AvinashDwivedi/followers",
"following_url": "https://api.github.com/users/AvinashDwivedi/following{/other_user}",
"gists_url": "https://api.github.com/users/AvinashDwivedi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AvinashDwivedi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AvinashDwivedi/subscriptions",
"organizations_url": "https://api.github.com/users/AvinashDwivedi/orgs",
"repos_url": "https://api.github.com/users/AvinashDwivedi/repos",
"events_url": "https://api.github.com/users/AvinashDwivedi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AvinashDwivedi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-29T05:33:37
| 2025-10-29T15:18:51
| null |
NONE
| null | null | null | null |
### System Info
When forking the huggingface/transformers repository, certain GitHub Actions workflows (like “Nvidia CI with nightly torch” and “Nightly PyTorch build”) are automatically triggered on the forked repo’s default branch (main) — even though they depend on organization-specific GPU runners and secrets.
This leads to immediate workflow failures and email notifications such as:
Run failed: Nvidia CI with nightly torch - main (...)
<img width="1750" height="1205" alt="Image" src="https://github.com/user-attachments/assets/513427b6-3992-4f95-aae4-c50676b2dc29" />
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Fork the upstream repository huggingface/transformers on GitHub (press Fork in the web UI).
### Expected behavior
Forked repositories should:
1. not trigger organization-specific CI pipelines, or
2. gracefully skip such jobs without failure.
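The second option is commonly handled with a repository guard on the job, e.g. (illustrative workflow fragment, not the actual workflow file; the job name and runner labels are assumptions):

```yaml
jobs:
  nightly-ci:
    # Skip entirely on forks; only the upstream repo has the GPU runners
    # and secrets this job needs.
    if: github.repository == 'huggingface/transformers'
    runs-on: [self-hosted, nvidia-gpu]
    steps:
      - uses: actions/checkout@v4
```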
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41927/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41926
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41926/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41926/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41926/events
|
https://github.com/huggingface/transformers/pull/41926
| 3,564,293,431
|
PR_kwDOCUB6oc6wSL9e
| 41,926
|
Cache latest pytorch amd image locally on mi325 CI runner cluster
|
{
"login": "jitesh-gupta",
"id": 202713221,
"node_id": "U_kgDODBUohQ",
"avatar_url": "https://avatars.githubusercontent.com/u/202713221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jitesh-gupta",
"html_url": "https://github.com/jitesh-gupta",
"followers_url": "https://api.github.com/users/jitesh-gupta/followers",
"following_url": "https://api.github.com/users/jitesh-gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/jitesh-gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jitesh-gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jitesh-gupta/subscriptions",
"organizations_url": "https://api.github.com/users/jitesh-gupta/orgs",
"repos_url": "https://api.github.com/users/jitesh-gupta/repos",
"events_url": "https://api.github.com/users/jitesh-gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/jitesh-gupta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-29T04:29:36
| 2025-10-29T18:45:38
| 2025-10-29T18:45:37
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41926",
"html_url": "https://github.com/huggingface/transformers/pull/41926",
"diff_url": "https://github.com/huggingface/transformers/pull/41926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41926.patch",
"merged_at": "2025-10-29T18:45:37"
}
|
# What does this PR do?
Fixes # (issue)
Caches the latest `huggingface/transformers-pytorch-amd-gpu` image on the amd-mi325 runner cluster.
This image is heavily used by the models CI job in the AMD mi325 CI workflow `Self-hosted runner scale set (AMD mi325 scheduled CI caller)`, so caching it locally will reduce network traffic and significantly improve the jobs' turnaround time.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41926/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41925
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41925/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41925/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41925/events
|
https://github.com/huggingface/transformers/pull/41925
| 3,563,669,430
|
PR_kwDOCUB6oc6wQOi7
| 41,925
|
[deepspeed tests fixes]
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T22:51:46
| 2025-10-29T12:19:45
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41925",
"html_url": "https://github.com/huggingface/transformers/pull/41925",
"diff_url": "https://github.com/huggingface/transformers/pull/41925.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41925.patch",
"merged_at": null
}
|
Fixing a few deepspeed tests
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41925/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41924
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41924/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41924/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41924/events
|
https://github.com/huggingface/transformers/issues/41924
| 3,563,519,251
|
I_kwDOCUB6oc7UZv0T
| 41,924
|
`output_attentions=True` always warns for non-`"eager"` attention implementations, even when a custom AttentionInterface backend does return attention weights
|
{
"login": "kannandeepti",
"id": 35346947,
"node_id": "MDQ6VXNlcjM1MzQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/35346947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kannandeepti",
"html_url": "https://github.com/kannandeepti",
"followers_url": "https://api.github.com/users/kannandeepti/followers",
"following_url": "https://api.github.com/users/kannandeepti/following{/other_user}",
"gists_url": "https://api.github.com/users/kannandeepti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kannandeepti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kannandeepti/subscriptions",
"organizations_url": "https://api.github.com/users/kannandeepti/orgs",
"repos_url": "https://api.github.com/users/kannandeepti/repos",
"events_url": "https://api.github.com/users/kannandeepti/events{/privacy}",
"received_events_url": "https://api.github.com/users/kannandeepti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T21:57:36
| 2025-10-28T21:57:36
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.57.1
- Platform: Linux-5.10.233-223.887.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.19
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.6.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0+cu124 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@vasqu @ArthurZucker @Cyrilvallez When using a custom attention function registered via the new AttentionInterface and selecting it with `attn_implementation="<custom_name>"`, passing `output_attentions=True` to `model.forward(...)` triggers a UserWarning like:
> UserWarning: `output_attentions=True` is not supported with `attn_implementation` other than ['eager', 'eager_paged', 'flex_attention']. Please use `model.set_attn_implementation('eager')` to enable capturing attention outputs.
This warning is misleading for custom backends that do compute and return attention probabilities (same shape as eager). In addition, some models still set outputs.attentions=None unless the implementation name is exactly "eager", even though the custom backend returns (attn_output, attn_probs).
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The below code snippet triggers the undesirable UserWarning.
```python
import torch
import torch.nn as nn
from transformers.models.esm.modeling_esm import TransformersKwargs
from typing import Optional
from transformers import AutoModel, AttentionInterface
def eager_with_bias_attention_forward(
module: nn.Module,
query: torch.Tensor, # [B, H, T, D]
key: torch.Tensor, # [B, H, S, D]
value: torch.Tensor, # [B, H, S, D]
attention_mask: Optional[torch.Tensor],
scaling: Optional[float] = None,
dropout: float = 0.0,
**kwargs: TransformersKwargs,
):
"""
Adds `attention_bias` (broadcastable to [B, H or 1, T, S]) to logits before softmax.
Pass it via model(..., attention_bias=your_bias).
"""
if scaling is None:
scaling = query.size(-1) ** -0.5
# Take the dot product between "query" and "key" to get the raw attention scores.
attn_weights = torch.matmul(query, key.transpose(2, 3)) * scaling # [B, H, T, S]
if attention_mask is not None:
attention_mask = attention_mask[:, :, :, : key.shape[-2]]
attn_weights = attn_weights + attention_mask
# Add the bias matrix to the attention weights
attention_bias = kwargs.get("attention_bias", None)
if attention_bias is not None:
# allow [B, 1, T, S], [B, H, T, S], or [1, 1, T, S]; truncate S if needed
if attention_bias.size(-1) != key.shape[-2]:
attention_bias = attention_bias[..., : key.shape[-2]]
attention_bias = attention_bias.to(
dtype=attn_weights.dtype, device=attn_weights.device
)
attn_weights = attn_weights + attention_bias
attn_weights = nn.functional.softmax(attn_weights, dim=-1)
attn_weights = nn.functional.dropout(
attn_weights, p=dropout, training=module.training
)
attn_output = torch.matmul(attn_weights, value)
attn_output = attn_output.transpose(1, 2).contiguous()
return attn_output, attn_weights
# Register custom attention implementation
AttentionInterface.register("eager_with_bias", eager_with_bias_attention_forward)
# Load ESM2 model using the custom attention backend
model = AutoModel.from_pretrained(
"facebook/esm2_t33_650M_UR50D",
token_dropout=False,
local_files_only=True,
attn_implementation="eager_with_bias",
)
# --- dummy batch ---
B, T = 2, 64
hidden_size = model.config.hidden_size
H = model.config.num_attention_heads
# inputs_embeds must be [B, T, hidden_size]
emb = torch.randn(B, T, hidden_size, device=next(model.parameters()).device)
# attention_mask must be [B, T] with 1 for tokens you want to keep
attention_mask = torch.ones(B, T, dtype=torch.long, device=emb.device)
# bias should broadcast to [B, H, T, T]; using shared-across-heads:
attention_bias = torch.zeros(B, 1, T, T, device=emb.device)
# Triggers a UserWarning even though backend returns attention weights,
# and some models set outputs.attentions = None unless impl == "eager".
out = model(
inputs_embeds=emb,
attention_mask=attention_mask,
output_attentions=True,
attention_bias=attention_bias,
)
# Check if attention weights are being returned
assert (
out.attentions is not None and len(out.attentions) == model.config.num_hidden_layers
)
print("OK: got attention weights from custom backend")
```
### Expected behavior
- If the selected attention backend **returns attention probabilities**, `outputs.attentions` should be populated and **no warning** should be emitted.
- The warning (or error) should trigger **only** when the chosen backend **cannot** provide attention probabilities.
---
### Actual behavior
- A **UserWarning** is emitted whenever `attn_implementation != "eager"`, regardless of whether the custom backend supports returning attention weights.
- In some models, `outputs.attentions` is `None` unless the implementation name is literally `"eager"`.
---
### Where this comes from / related context
- There’s an **“early-error if `output_attentions=True` and impl isn’t eager”** change discussed in [PR #38288](https://github.com/huggingface/transformers/pull/38288) (config path).
- The [Attention Interface docs](https://huggingface.co/docs/transformers/main/en/attention) show how to register/select custom implementations and say extra kwargs are forwarded to the attention function, but they don’t document a way to declare that a custom backend supports returning attentions.
---
### Proposed solutions
#### 1. Capability flag on backends
Extend `AttentionInterface.register(name, fn, supports_attn_probs: bool = False)` (or use a small descriptor object) so model code can check capability instead of name equality.
If `supports_attn_probs=True`, allow `output_attentions=True` without warnings and surface the returned probabilities.
#### 2. Name-agnostic check
Replace `impl != "eager"` string checks with an interface query like `AttentionInterface.supports_attn_probs(impl)` to decide warning/error behavior, so custom backends that return weights aren’t penalized.
#### 3. Documented workaround
If changing the check is not desirable, document an official way to **declare** a custom backend as “eager-compatible,” or provide a supported alias/registration API that treats a custom backend like `"eager"` for the purpose of attention-weight return (avoiding the need for users to override `"eager"` globally just to silence the warning).
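Proposal 1 could be sketched roughly as follows. This is a standalone illustration: `AttentionRegistry`, its method names, and the `supports_attn_probs` flag are hypothetical, not the actual transformers API.

```python
# Hypothetical sketch of proposal 1: a registry that records, per backend,
# whether it can return attention probabilities. Model code would consult
# this capability instead of comparing the implementation name to "eager".
class AttentionRegistry:
    def __init__(self):
        self._backends = {}      # name -> attention forward function
        self._capabilities = {}  # name -> supports_attn_probs flag

    def register(self, name, fn, supports_attn_probs=False):
        self._backends[name] = fn
        self._capabilities[name] = supports_attn_probs

    def supports_attn_probs(self, name):
        # "eager" always returns probabilities; custom backends opt in.
        return name == "eager" or self._capabilities.get(name, False)


registry = AttentionRegistry()
# A custom backend that returns (attn_output, attn_probs) declares it:
registry.register("eager_with_bias", lambda *a, **k: None, supports_attn_probs=True)
# A flash-style backend that never materializes probabilities does not:
registry.register("my_flash_like", lambda *a, **k: None)

# The warning/error path would then check capability, not the name:
assert registry.supports_attn_probs("eager")
assert registry.supports_attn_probs("eager_with_bias")
assert not registry.supports_attn_probs("my_flash_like")
```

With something like this, the `output_attentions=True` check becomes name-agnostic and custom eager-compatible backends are not penalized.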
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41924/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41923
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41923/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41923/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41923/events
|
https://github.com/huggingface/transformers/pull/41923
| 3,563,463,159
|
PR_kwDOCUB6oc6wPhqk
| 41,923
|
fix some ut failures on XPU w/ torch 2.9
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T21:37:42
| 2025-10-29T15:20:07
| 2025-10-29T15:15:34
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41923",
"html_url": "https://github.com/huggingface/transformers/pull/41923",
"diff_url": "https://github.com/huggingface/transformers/pull/41923.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41923.patch",
"merged_at": "2025-10-29T15:15:34"
}
|
The cases are listed below; all passed. @ydshieh, please help review, thanks very much.
> tests/models/aya_vision/test_modeling_aya_vision.py::AyaVisionIntegrationTest::test_small_model_integration_generate_text_only
> tests/models/aya_vision/test_modeling_aya_vision.py::AyaVisionIntegrationTest::test_small_model_integration_forward
> tests/models/aya_vision/test_modeling_aya_vision.py::AyaVisionIntegrationTest::test_small_model_integration_batched_generate_multi_image
> tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_whisper_longform
> tests/test_pipeline_mixin.py::AutomaticSpeechRecognitionPipelineTests::test_whisper_longform
> tests/models/aria/test_modeling_aria.py::AriaForConditionalGenerationIntegrationTest::test_generation_no_images
> tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_bf16
> tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops
> tests/models/glm4v/test_modeling_glm4v.py::Glm4vIntegrationTest::test_small_model_integration_test_expand
> tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_generate
> tests/models/mllama/test_modeling_mllama.py::MllamaForConditionalGenerationIntegrationTest::test_11b_model_integration_generate_text_only
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41923/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41922
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41922/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41922/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41922/events
|
https://github.com/huggingface/transformers/pull/41922
| 3,563,371,335
|
PR_kwDOCUB6oc6wPNh2
| 41,922
|
Fix rope_parameters for gemma3 weights conversion script
|
{
"login": "douglas-reid",
"id": 21148125,
"node_id": "MDQ6VXNlcjIxMTQ4MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/21148125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/douglas-reid",
"html_url": "https://github.com/douglas-reid",
"followers_url": "https://api.github.com/users/douglas-reid/followers",
"following_url": "https://api.github.com/users/douglas-reid/following{/other_user}",
"gists_url": "https://api.github.com/users/douglas-reid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/douglas-reid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/douglas-reid/subscriptions",
"organizations_url": "https://api.github.com/users/douglas-reid/orgs",
"repos_url": "https://api.github.com/users/douglas-reid/repos",
"events_url": "https://api.github.com/users/douglas-reid/events{/privacy}",
"received_events_url": "https://api.github.com/users/douglas-reid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T21:07:44
| 2025-10-29T13:58:59
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41922",
"html_url": "https://github.com/huggingface/transformers/pull/41922",
"diff_url": "https://github.com/huggingface/transformers/pull/41922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41922.patch",
"merged_at": null
}
|
# What does this PR do?
Fixes the rope_parameters in the weights conversion script for Gemma 3.
These should be:
```
local => default @ 10_000.0
global => linear(8.0) @ 1_000_000.0
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41922/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41921
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41921/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41921/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41921/events
|
https://github.com/huggingface/transformers/pull/41921
| 3,563,255,026
|
PR_kwDOCUB6oc6wO1N3
| 41,921
|
fix tensor device placement issue of 2 UT cases
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T20:27:38
| 2025-10-29T15:17:39
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41921",
"html_url": "https://github.com/huggingface/transformers/pull/41921",
"diff_url": "https://github.com/huggingface/transformers/pull/41921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41921.patch",
"merged_at": null
}
|
Running the 2 cases below on 2 accelerators:
> pytest -rA tests/models/speech_to_text/test_modeling_speech_to_text.py::Speech2TextModelIntegrationTests::test_generation_librispeech
> pytest -rA tests/models/speech_to_text/test_modeling_speech_to_text.py::Speech2TextModelIntegrationTests::test_generation_librispeech_batched
will fail with the error below:
> self = Speech2TextEncoderLayer(
> (self_attn): Speech2TextAttention(
> (k_proj): Linear(in_features=256, out_features=256, ...atures=2048, out_features=256, bias=True)
> (final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
> )
> hidden_states = tensor([[[-313.5520, -90.0677, 48.3815, ..., -22.3849, -48.9418,
> -49.9119],
> [-285.8702, -94...2.9486],
> [ 25.0123, -37.5042, 13.0347, ..., -58.4456, -16.1031,
> 45.5035]]], device='xpu:1')
> attention_mask = tensor([[[[0., 0., 0., ..., 0., 0., 0.],
> [0., 0., 0., ..., 0., 0., 0.],
> [0., 0., 0., ..., 0., 0....., 0., 0., 0.],
> [0., 0., 0., ..., 0., 0., 0.],
> [0., 0., 0., ..., 0., 0., 0.]]]], device='xpu:0')
> output_attentions = False
>
> def forward(
> self,
> hidden_states: torch.Tensor,
> attention_mask: torch.Tensor,
> output_attentions: bool = False,
> ) -> torch.Tensor:
> """
> Args:
> hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
> attention_mask (`torch.FloatTensor`): attention mask of size
> `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
> output_attentions (`bool`, *optional*):
> Whether or not to return the attentions tensors of all attention layers. See `attentions` under
> returned tensors for more detail.
> """
> residual = hidden_states
> hidden_states = self.self_attn_layer_norm(hidden_states)
> hidden_states, attn_weights = self.self_attn(
> hidden_states=hidden_states,
> attention_mask=attention_mask,
> output_attentions=output_attentions,
> )
> hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
> hidden_states = residual + hidden_states
>
> residual = hidden_states
> hidden_states = self.final_layer_norm(hidden_states)
> hidden_states = self.activation_fn(self.fc1(hidden_states))
> hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
> hidden_states = self.fc2(hidden_states)
> hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
> hidden_states = residual + hidden_states
> ^^^^^^^^^^^^^^^^^^^^^^^^
> E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, xpu:0 and xpu:1!
>
> src/transformers/models/speech_to_text/modeling_speech_to_text.py:369: RuntimeError
This PR fixes the issue.
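The failure pattern can be sketched without torch or an accelerator; the `FakeTensor` class below is a stand-in used only to make the device mismatch runnable here, and the actual fix in the PR operates on real torch tensors:

```python
# Minimal, torch-free sketch of the bug and the fix pattern. FakeTensor is a
# hypothetical stand-in mimicking a tensor's `device` attribute and `.to()`.
class FakeTensor:
    def __init__(self, data, device):
        self.data, self.device = data, device

    def to(self, device):
        # Return a copy placed on the requested device.
        return FakeTensor(self.data, device)

    def __add__(self, other):
        # Mirrors the runtime check that produced the traceback above.
        if self.device != other.device:
            raise RuntimeError(
                "Expected all tensors to be on the same device, but found "
                f"at least two devices, {self.device} and {other.device}!"
            )
        return FakeTensor(self.data + other.data, self.device)


hidden_states = FakeTensor(1.0, "xpu:1")   # activations on device 1
attention_mask = FakeTensor(0.0, "xpu:0")  # mask created on device 0

# Fix pattern: move the mask onto the activations' device before combining.
result = hidden_states + attention_mask.to(hidden_states.device)
```

Adding `hidden_states + attention_mask` directly raises the same `RuntimeError` as in the test failure; aligning the mask's device first makes the addition succeed.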
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41921/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41920
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41920/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41920/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41920/events
|
https://github.com/huggingface/transformers/pull/41920
| 3,562,379,835
|
PR_kwDOCUB6oc6wL7Po
| 41,920
|
evaluate>=0.4.6 is needed
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T16:25:50
| 2025-10-29T22:59:11
| 2025-10-29T12:20:54
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41920",
"html_url": "https://github.com/huggingface/transformers/pull/41920",
"diff_url": "https://github.com/huggingface/transformers/pull/41920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41920.patch",
"merged_at": "2025-10-29T12:20:54"
}
|
Some HF Transformers tests/examples fail with main and `evaluate < 0.4.6`
Fixing:
```
stderr: [rank0]: Traceback (most recent call last):
stderr: [rank0]: File "/code/users/stas/github/transformers-alst-integration/examples/pytorch/question-answering/run_qa.py", line 692, in <module>
stderr: [rank0]: main()
stderr: [rank0]: File "/code/users/stas/github/transformers-alst-integration/examples/pytorch/question-answering/run_qa.py", line 608, in main
stderr: [rank0]: metric = evaluate.load(
stderr: [rank0]: ^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 748, in load
stderr: [rank0]: evaluation_module = evaluation_module_factory(
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 680, in evaluation_module_factory
stderr: [rank0]: raise e1 from None
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 639, in evaluation_module_factory
stderr: [rank0]: ).get_module()
stderr: [rank0]: ^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 479, in get_module
stderr: [rank0]: local_path = self.download_loading_script(revision)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/loading.py", line 469, in download_loading_script
stderr: [rank0]: return cached_path(file_path, download_config=download_config)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/utils/file_utils.py", line 175, in cached_path
stderr: [rank0]: output_path = get_from_cache(
stderr: [rank0]: ^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/utils/file_utils.py", line 448, in get_from_cache
stderr: [rank0]: headers = get_authentication_headers_for_url(url, token=token)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/home/yak/miniconda3/envs/dev/lib/python3.12/site-packages/evaluate/utils/file_utils.py", line 236, in get_authentication_headers_for_url
stderr: [rank0]: token = hf_api.HfFolder.get_token()
stderr: [rank0]: ^^^^^^^^^^^^^^^
stderr: [rank0]: AttributeError: module 'huggingface_hub.hf_api' has no attribute 'HfFolder'
```
`evaluate>=0.4.6` is needed to fix this.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41920/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41919
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41919/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41919/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41919/events
|
https://github.com/huggingface/transformers/issues/41919
| 3,562,344,322
|
I_kwDOCUB6oc7UVQ-C
| 41,919
|
LFM2 image_processing_lfm2_vl_fast.py Mean Std swapped?
|
{
"login": "florianvoss-commit",
"id": 214635446,
"node_id": "U_kgDODMsTtg",
"avatar_url": "https://avatars.githubusercontent.com/u/214635446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/florianvoss-commit",
"html_url": "https://github.com/florianvoss-commit",
"followers_url": "https://api.github.com/users/florianvoss-commit/followers",
"following_url": "https://api.github.com/users/florianvoss-commit/following{/other_user}",
"gists_url": "https://api.github.com/users/florianvoss-commit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/florianvoss-commit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/florianvoss-commit/subscriptions",
"organizations_url": "https://api.github.com/users/florianvoss-commit/orgs",
"repos_url": "https://api.github.com/users/florianvoss-commit/repos",
"events_url": "https://api.github.com/users/florianvoss-commit/events{/privacy}",
"received_events_url": "https://api.github.com/users/florianvoss-commit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T16:17:44
| 2025-10-29T17:03:09
| null |
NONE
| null | null | null | null |
### System Info
In LFM2-VL's `image_processing_lfm2_vl_fast.py`, from line 212 onward, the ImageNet MEAN and STD are used for preprocessing.
However, it seems like they are swapped:
`image_mean = IMAGENET_STANDARD_STD`
`image_std = IMAGENET_STANDARD_MEAN`
Or is this correct?
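A quick sanity check on the swap (the constant values below assume the usual definitions in `transformers.image_utils`; verify against the installed version):

```python
# Values as commonly defined in transformers.image_utils (assumption; check
# your installed version). Note both "standard" constants are 0.5 per channel.
IMAGENET_STANDARD_MEAN = [0.5, 0.5, 0.5]
IMAGENET_STANDARD_STD = [0.5, 0.5, 0.5]

# The assignment flagged above, with mean and std apparently swapped:
image_mean = IMAGENET_STANDARD_STD
image_std = IMAGENET_STANDARD_MEAN

# If the constants match these values, the swap is numerically harmless,
# since both are 0.5 per channel -- but it still reads wrong and would break
# if the constants ever diverged (e.g. the IMAGENET_DEFAULT_* pair differs).
assert image_mean == image_std == [0.5, 0.5, 0.5]
```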
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Have a look at https://github.com/huggingface/transformers/blob/main/src/transformers/models/lfm2_vl/image_processing_lfm2_vl_fast.py
### Expected behavior
Not optimized VLM Behaviour
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41919/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41918
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41918/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41918/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41918/events
|
https://github.com/huggingface/transformers/pull/41918
| 3,562,252,059
|
PR_kwDOCUB6oc6wLhGi
| 41,918
|
V4.57.1 training ci: Refactor `test_tensor_parallel.py`
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T15:57:00
| 2025-10-29T11:22:13
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41918",
"html_url": "https://github.com/huggingface/transformers/pull/41918",
"diff_url": "https://github.com/huggingface/transformers/pull/41918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41918.patch",
"merged_at": null
}
|
# What does this PR do?
Refactor `test_tensor_parallel.py` by removing `subprocess`; this way a failing test can easily be debugged with breakpoints. On top of that, the tests are made more robust by running with `--nproc_per_node > 2`. `--nproc_per_node = 8` crashes because the Llama model we use is too tiny for that degree of TP, but since it already works with `--nproc_per_node = 4`, there is no need for a bigger Llama (which could slow down the tests).
## Who can review?
@ArthurZucker @Cyrilvallez
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41918/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41917
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41917/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41917/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41917/events
|
https://github.com/huggingface/transformers/pull/41917
| 3,562,148,537
|
PR_kwDOCUB6oc6wLKyH
| 41,917
|
update v4.57.1-training-ci with main
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T15:32:22
| 2025-10-28T15:53:27
| 2025-10-28T15:53:27
|
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41917",
"html_url": "https://github.com/huggingface/transformers/pull/41917",
"diff_url": "https://github.com/huggingface/transformers/pull/41917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41917.patch",
"merged_at": "2025-10-28T15:53:26"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41917/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41916
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41916/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41916/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41916/events
|
https://github.com/huggingface/transformers/pull/41916
| 3,561,939,166
|
PR_kwDOCUB6oc6wKeh_
| 41,916
|
feat(ci): add continuous batching to benchmarks
|
{
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T14:45:20
| 2025-10-29T17:22:37
| null |
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41916",
"html_url": "https://github.com/huggingface/transformers/pull/41916",
"diff_url": "https://github.com/huggingface/transformers/pull/41916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41916.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41916/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41915
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41915/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41915/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41915/events
|
https://github.com/huggingface/transformers/pull/41915
| 3,561,733,374
|
PR_kwDOCUB6oc6wJyyB
| 41,915
|
V4.57.1 training ci: Refactor and Fix `test_tensor_parallel.py` to make it more robust
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T14:02:32
| 2025-10-28T14:46:17
| 2025-10-28T14:46:16
|
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41915",
"html_url": "https://github.com/huggingface/transformers/pull/41915",
"diff_url": "https://github.com/huggingface/transformers/pull/41915.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41915.patch",
"merged_at": null
}
|
# What does this PR do?
- TODO:
- update my branch
- explain changes
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @Cyrilvallez
|
{
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41915/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41914
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41914/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41914/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41914/events
|
https://github.com/huggingface/transformers/pull/41914
| 3,561,517,574
|
PR_kwDOCUB6oc6wJFvx
| 41,914
|
Run slow v2
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-28T13:13:51
| 2025-10-29T20:54:47
| null |
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41914",
"html_url": "https://github.com/huggingface/transformers/pull/41914",
"diff_url": "https://github.com/huggingface/transformers/pull/41914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41914.patch",
"merged_at": null
}
|
# What does this PR do?
Run slow v2!
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41914/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41913
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41913/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41913/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41913/events
|
https://github.com/huggingface/transformers/issues/41913
| 3,561,156,260
|
I_kwDOCUB6oc7UQu6k
| 41,913
|
`epoch` in the log message uses a wrong denominator under some conditions
|
{
"login": "nzw0301",
"id": 7121753,
"node_id": "MDQ6VXNlcjcxMjE3NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7121753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nzw0301",
"html_url": "https://github.com/nzw0301",
"followers_url": "https://api.github.com/users/nzw0301/followers",
"following_url": "https://api.github.com/users/nzw0301/following{/other_user}",
"gists_url": "https://api.github.com/users/nzw0301/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nzw0301/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nzw0301/subscriptions",
"organizations_url": "https://api.github.com/users/nzw0301/orgs",
"repos_url": "https://api.github.com/users/nzw0301/repos",
"events_url": "https://api.github.com/users/nzw0301/events{/privacy}",
"received_events_url": "https://api.github.com/users/nzw0301/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-28T11:38:39
| 2025-10-30T02:07:04
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.57.1
- Platform: macOS-26.0.1-arm64-arm-64bit
- Python version: 3.12.0
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```py
import torch
from datasets import Dataset
from torch import nn
from transformers import Trainer, TrainingArguments
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 2)

    def forward(self, a, return_loss=True):
        output = self.linear(a)
        return {"loss": output.sum()}


data = torch.tensor([[i, i] for i in range(10)], dtype=torch.float32)  # [[0., 0.], [1., 1.], [2., 2.], ...]
dataset = Dataset.from_dict({"a": data}).to_iterable_dataset()  # finite iterable dataset

args = TrainingArguments(output_dir=".", per_device_train_batch_size=1, max_steps=20, logging_steps=1)
trainer = Trainer(model=MyModule(), args=args, train_dataset=dataset)
trainer.train()
```
```
{'loss': 0.9867, 'grad_norm': 1.4142135381698608, 'learning_rate': 5e-05, 'epoch': 0.05}
{'loss': 1.3851, 'grad_norm': 2.4494898319244385, 'learning_rate': 4.75e-05, 'epoch': 0.1}
{'loss': 1.7833, 'grad_norm': 4.242640495300293, 'learning_rate': 4.5e-05, 'epoch': 0.15}
{'loss': 2.1812, 'grad_norm': 6.164413928985596, 'learning_rate': 4.25e-05, 'epoch': 0.2}
{'loss': 2.5788, 'grad_norm': 8.124038696289062, 'learning_rate': 4e-05, 'epoch': 0.25}
{'loss': 2.9761, 'grad_norm': 10.099504470825195, 'learning_rate': 3.7500000000000003e-05, 'epoch': 0.3}
{'loss': 3.3731, 'grad_norm': 12.083045959472656, 'learning_rate': 3.5e-05, 'epoch': 0.35}
{'loss': 3.7699, 'grad_norm': 14.071247100830078, 'learning_rate': 3.2500000000000004e-05, 'epoch': 0.4}
{'loss': 4.1665, 'grad_norm': 16.0623779296875, 'learning_rate': 3e-05, 'epoch': 0.45}
{'loss': 4.563, 'grad_norm': 18.055469512939453, 'learning_rate': 2.7500000000000004e-05, 'epoch': 0.5}
{'loss': 0.9861, 'grad_norm': 1.4142135381698608, 'learning_rate': 2.5e-05, 'epoch': 1.05}
{'loss': 1.3833, 'grad_norm': 2.4494898319244385, 'learning_rate': 2.25e-05, 'epoch': 1.1}
{'loss': 1.7803, 'grad_norm': 4.242640495300293, 'learning_rate': 2e-05, 'epoch': 1.15}
{'loss': 2.1772, 'grad_norm': 6.164413928985596, 'learning_rate': 1.75e-05, 'epoch': 1.2}
{'loss': 2.574, 'grad_norm': 8.124038696289062, 'learning_rate': 1.5e-05, 'epoch': 1.25}
{'loss': 2.9707, 'grad_norm': 10.099504470825195, 'learning_rate': 1.25e-05, 'epoch': 1.3}
{'loss': 3.3673, 'grad_norm': 12.083045959472656, 'learning_rate': 1e-05, 'epoch': 1.35}
{'loss': 3.764, 'grad_norm': 14.071247100830078, 'learning_rate': 7.5e-06, 'epoch': 1.4}
{'loss': 4.1606, 'grad_norm': 16.0623779296875, 'learning_rate': 5e-06, 'epoch': 1.45}
{'loss': 4.5572, 'grad_norm': 18.055469512939453, 'learning_rate': 2.5e-06, 'epoch': 1.5}
{'train_runtime': 0.2074, 'train_samples_per_second': 96.438, 'train_steps_per_second': 96.438, 'train_loss': 2.774213859438896, 'epoch': 1.5}
```
In my understanding, `epoch` is computed at https://github.com/huggingface/transformers/blob/1f0b490a2c42eb129dccc69031ccb537058689c4/src/transformers/trainer.py#L2555, and its denominator `steps_in_epoch` is initialised with `args.max_steps` at
https://github.com/huggingface/transformers/blob/1f0b490a2c42eb129dccc69031ccb537058689c4/src/transformers/trainer.py#L2402 when the dataset has no `__len__`, as in the example above.
```
{'loss': 0.9867, 'grad_norm': 1.4142135381698608, 'learning_rate': 5e-05, 'epoch': 0.1}
{'loss': 1.3851, 'grad_norm': 2.4494898319244385, 'learning_rate': 4.75e-05, 'epoch': 0.2}
{'loss': 1.7833, 'grad_norm': 4.242640495300293, 'learning_rate': 4.5e-05, 'epoch': 0.3}
{'loss': 2.1812, 'grad_norm': 6.164413928985596, 'learning_rate': 4.25e-05, 'epoch': 0.4}
{'loss': 2.5788, 'grad_norm': 8.124038696289062, 'learning_rate': 4e-05, 'epoch': 0.5}
{'loss': 2.9761, 'grad_norm': 10.099504470825195, 'learning_rate': 3.7500000000000003e-05, 'epoch': 0.6}
{'loss': 3.3731, 'grad_norm': 12.083045959472656, 'learning_rate': 3.5e-05, 'epoch': 0.7}
{'loss': 3.7699, 'grad_norm': 14.071247100830078, 'learning_rate': 3.2500000000000004e-05, 'epoch': 0.8}
{'loss': 4.1665, 'grad_norm': 16.0623779296875, 'learning_rate': 3e-05, 'epoch': 0.9}
{'loss': 4.563, 'grad_norm': 18.055469512939453, 'learning_rate': 2.7500000000000004e-05, 'epoch': 1.0}
{'loss': 0.9861, 'grad_norm': 1.4142135381698608, 'learning_rate': 2.5e-05, 'epoch': 1.1}
{'loss': 1.3833, 'grad_norm': 2.4494898319244385, 'learning_rate': 2.25e-05, 'epoch': 1.2}
{'loss': 1.7803, 'grad_norm': 4.242640495300293, 'learning_rate': 2e-05, 'epoch': 1.3}
{'loss': 2.1772, 'grad_norm': 6.164413928985596, 'learning_rate': 1.75e-05, 'epoch': 1.4}
{'loss': 2.574, 'grad_norm': 8.124038696289062, 'learning_rate': 1.5e-05, 'epoch': 1.5}
{'loss': 2.9707, 'grad_norm': 10.099504470825195, 'learning_rate': 1.25e-05, 'epoch': 1.6}
{'loss': 3.3673, 'grad_norm': 12.083045959472656, 'learning_rate': 1e-05, 'epoch': 1.7}
{'loss': 3.764, 'grad_norm': 14.071247100830078, 'learning_rate': 7.5e-06, 'epoch': 1.8}
{'loss': 4.1606, 'grad_norm': 16.0623779296875, 'learning_rate': 5e-06, 'epoch': 1.9}
{'loss': 4.5572, 'grad_norm': 18.055469512939453, 'learning_rate': 2.5e-06, 'epoch': 2.0}
{'train_runtime': 0.2074, 'train_samples_per_second': 96.438, 'train_steps_per_second': 96.438, 'train_loss': 2.774213859438896, 'epoch': 2.0}
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41913/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41912
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41912/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41912/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41912/events
|
https://github.com/huggingface/transformers/pull/41912
| 3,560,866,876
|
PR_kwDOCUB6oc6wHC-p
| 41,912
|
restore dtype of `hidden_states` in modeling_t5.py
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-28T10:32:20
| 2025-10-29T05:54:43
| 2025-10-29T05:54:43
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41912",
"html_url": "https://github.com/huggingface/transformers/pull/41912",
"diff_url": "https://github.com/huggingface/transformers/pull/41912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41912.patch",
"merged_at": null
}
|
In the T5 model, since the dtype of `self.wo.weight` is kept in fp32 at [L783](https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/t5/modeling_t5.py#L783), `hidden_states` needs to be converted to fp32 in some cases; we should restore it back to the model dtype afterwards in scenarios where the model dtype is e.g. FP16.
@ArthurZucker @Cyrilvallez, please help review, thanks!
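The pattern can be sketched as follows (a hypothetical simplification of the behavior described above, not the actual T5 code; `wo_weight` and `ff_projection` are illustrative names):

```python
import torch

# wo's weight is kept in fp32 even when the model runs in fp16
wo_weight = torch.randn(4, 4, dtype=torch.float32)

def ff_projection(hidden_states):
    input_dtype = hidden_states.dtype
    # upcast the activation to match the fp32 weight before the matmul
    out = hidden_states.to(torch.float32) @ wo_weight.T
    # restore the caller's dtype (e.g. fp16) so downstream layers see it
    return out.to(input_dtype)

x = torch.randn(2, 4, dtype=torch.float16)
print(ff_projection(x).dtype)  # torch.float16
```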
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41912/timeline
| null | null | null | null | true
| true
|
A dummy dataset of GitHub issues from the transformers library repository.