url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/40540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40540/comments | https://api.github.com/repos/huggingface/transformers/issues/40540/events | https://github.com/huggingface/transformers/pull/40540 | 3,366,240,090 | PR_kwDOCUB6oc6l9WcH | 40,540 | `tokenizers` bump tokenizers version | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-29T09:49:38 | 2025-08-29T10:34:45 | 2025-08-29T10:34:41 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40540",
"html_url": "https://github.com/huggingface/transformers/pull/40540",
"diff_url": "https://github.com/huggingface/transformers/pull/40540.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40540.patch",
"merged_at": "2025-08-29T10:34:41"
} | # What does this PR do?
Mostly breaking for the DecodeStream API but also adds `async` usage for `tokenizers` | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40540/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40539/comments | https://api.github.com/repos/huggingface/transformers/issues/40539/events | https://github.com/huggingface/transformers/pull/40539 | 3,366,211,050 | PR_kwDOCUB6oc6l9QGY | 40,539 | Standardize GLM4V MoE | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-29T09:38:59 | 2025-08-29T09:54:22 | 2025-08-29T09:54:20 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40539",
"html_url": "https://github.com/huggingface/transformers/pull/40539",
"diff_url": "https://github.com/huggingface/transformers/pull/40539.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40539.patch",
"merged_at": "2025-08-29T09:54:20"
} | Standardizes GLM4V's MoE implem | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40539/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40538/comments | https://api.github.com/repos/huggingface/transformers/issues/40538/events | https://github.com/huggingface/transformers/pull/40538 | 3,366,105,206 | PR_kwDOCUB6oc6l84xJ | 40,538 | DeepSeek v3 MoE Standardization | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-29T09:08:08 | 2025-08-29T09:55:20 | 2025-08-29T09:54:51 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40538",
"html_url": "https://github.com/huggingface/transformers/pull/40538",
"diff_url": "https://github.com/huggingface/transformers/pull/40538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40538.patch",
"merged_at": "2025-08-29T09:54:51"
} | Standardizes DeepSeek-v3's MoE implem | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40538/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40537/comments | https://api.github.com/repos/huggingface/transformers/issues/40537/events | https://github.com/huggingface/transformers/pull/40537 | 3,366,049,000 | PR_kwDOCUB6oc6l8szX | 40,537 | processor tests - use dummy videos | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-29T08:52:02 | 2025-09-01T09:04:47 | 2025-09-01T09:04:47 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40537",
"html_url": "https://github.com/huggingface/transformers/pull/40537",
"diff_url": "https://github.com/huggingface/transformers/pull/40537.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40537.patch",
"merged_at": "2025-09-01T09:04:47"
} | # What does this PR do?
As discussed internally, let's use dummy videos instead of downloading. Runners are crashing for some reason | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40537/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40536/comments | https://api.github.com/repos/huggingface/transformers/issues/40536/events | https://github.com/huggingface/transformers/pull/40536 | 3,365,948,243 | PR_kwDOCUB6oc6l8Xva | 40,536 | fix crash when using chat to send 2+ request to gptoss | {
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-29T08:22:12 | 2025-09-23T09:50:48 | 2025-09-23T09:50:24 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40536",
"html_url": "https://github.com/huggingface/transformers/pull/40536",
"diff_url": "https://github.com/huggingface/transformers/pull/40536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40536.patch",
"merged_at": "2025-09-23T09:50:24"
} | without it, 2+ requests in chat using gptoss will coredump; error like
Traceback (most recent call last):
File "/mnt/disk0/wangyi/miniforge3/envs/transformers/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
self.run()
File "/mnt/disk0/wangyi/miniforge3/envs/transformers/lib/python3.11/threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "/home/wangyi/transformers/src/transformers/commands/serving.py", line 981, in generate_with_cache
generate_output = model.generate(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/disk0/wangyi/miniforge3/envs/transformers/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/wangyi/transformers/src/transformers/generation/utils.py", line 2539, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/home/wangyi/transformers/src/transformers/generation/utils.py", line 2860, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wangyi/transformers/src/transformers/generation/utils.py", line 569, in prepare_inputs_for_generation
inputs_embeds, input_ids = self._cache_dependant_input_preparation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wangyi/transformers/src/transformers/generation/utils.py", line 475, in _cache_dependant_input_preparation
or (cache_position[-1] >= input_ids.shape[1]) # Exception 3
~~~~~~~~~~~~~~^^^^
IndexError: index -1 is out of bounds for dimension 0 with size 0
| {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40536/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40535/comments | https://api.github.com/repos/huggingface/transformers/issues/40535/events | https://github.com/huggingface/transformers/pull/40535 | 3,365,670,780 | PR_kwDOCUB6oc6l7dQp | 40,535 | fix gpt-oss out shape | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-29T06:37:29 | 2025-08-29T15:21:07 | 2025-08-29T15:20:33 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40535",
"html_url": "https://github.com/huggingface/transformers/pull/40535",
"diff_url": "https://github.com/huggingface/transformers/pull/40535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40535.patch",
"merged_at": "2025-08-29T15:20:33"
} | Hi @SunMarc. This commit should be included in #40304, but I forgot to push this commit. We need this PR; otherwise, the out value will be wrong. Please review and merge this PR. Thanks! | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40535/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40534/comments | https://api.github.com/repos/huggingface/transformers/issues/40534/events | https://github.com/huggingface/transformers/pull/40534 | 3,365,500,633 | PR_kwDOCUB6oc6l68JM | 40,534 | Redundant code removal | {
"login": "piyushK52",
"id": 34690994,
"node_id": "MDQ6VXNlcjM0NjkwOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/34690994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piyushK52",
"html_url": "https://github.com/piyushK52",
"followers_url": "https://api.github.com/users/piyushK52/followers",
"following_url": "https://api.github.com/users/piyushK52/following{/other_user}",
"gists_url": "https://api.github.com/users/piyushK52/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piyushK52/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piyushK52/subscriptions",
"organizations_url": "https://api.github.com/users/piyushK52/orgs",
"repos_url": "https://api.github.com/users/piyushK52/repos",
"events_url": "https://api.github.com/users/piyushK52/events{/privacy}",
"received_events_url": "https://api.github.com/users/piyushK52/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-29T05:20:11 | 2025-08-29T11:31:01 | 2025-08-29T11:30:23 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40534",
"html_url": "https://github.com/huggingface/transformers/pull/40534",
"diff_url": "https://github.com/huggingface/transformers/pull/40534.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40534.patch",
"merged_at": "2025-08-29T11:30:23"
} | # What does this PR do?
vocab_file and _extra_ids are already assigned on line 146-147, so their reassignment is redundant.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Tagging @ArthurZucker as this relates to text encoders.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40534/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40533/comments | https://api.github.com/repos/huggingface/transformers/issues/40533/events | https://github.com/huggingface/transformers/issues/40533 | 3,365,026,655 | I_kwDOCUB6oc7Ikjtf | 40,533 | Smart Hyperparameter Search for PEFT | {
"login": "PamelaBha",
"id": 219210686,
"node_id": "U_kgDODRDjvg",
"avatar_url": "https://avatars.githubusercontent.com/u/219210686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PamelaBha",
"html_url": "https://github.com/PamelaBha",
"followers_url": "https://api.github.com/users/PamelaBha/followers",
"following_url": "https://api.github.com/users/PamelaBha/following{/other_user}",
"gists_url": "https://api.github.com/users/PamelaBha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PamelaBha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PamelaBha/subscriptions",
"organizations_url": "https://api.github.com/users/PamelaBha/orgs",
"repos_url": "https://api.github.com/users/PamelaBha/repos",
"events_url": "https://api.github.com/users/PamelaBha/events{/privacy}",
"received_events_url": "https://api.github.com/users/PamelaBha/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | [] | 2025-08-29T00:15:19 | 2025-08-29T15:31:09 | 2025-08-29T15:31:09 | NONE | null | null | null | null | ### Feature request
Problem: Fine-tuning large models with Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA requires choosing optimal hyperparameters (e.g., LoRA rank, alpha, dropout). Manually searching for the best combination is time-consuming and often leads to sub-optimal results.
### Motivation
This would be a massive time-saver for users. Instead of running dozens of experiments, they could run a single command and get a recommended PEFT config that maximizes performance while minimizing trainable parameters. This aligns perfectly with the goal of improving the fine-tuning experience.
### Your contribution
I would work on the entire PR | {
"login": "PamelaBha",
"id": 219210686,
"node_id": "U_kgDODRDjvg",
"avatar_url": "https://avatars.githubusercontent.com/u/219210686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PamelaBha",
"html_url": "https://github.com/PamelaBha",
"followers_url": "https://api.github.com/users/PamelaBha/followers",
"following_url": "https://api.github.com/users/PamelaBha/following{/other_user}",
"gists_url": "https://api.github.com/users/PamelaBha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PamelaBha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PamelaBha/subscriptions",
"organizations_url": "https://api.github.com/users/PamelaBha/orgs",
"repos_url": "https://api.github.com/users/PamelaBha/repos",
"events_url": "https://api.github.com/users/PamelaBha/events{/privacy}",
"received_events_url": "https://api.github.com/users/PamelaBha/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40533/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40532/comments | https://api.github.com/repos/huggingface/transformers/issues/40532/events | https://github.com/huggingface/transformers/pull/40532 | 3,364,732,916 | PR_kwDOCUB6oc6l4Z4O | 40,532 | Fix `SeamlessM4Tv2ModelWithTextInputTest::test_retain_grad_hidden_states_attentions` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T21:28:59 | 2025-08-28T21:38:35 | 2025-08-28T21:30:59 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40532",
"html_url": "https://github.com/huggingface/transformers/pull/40532",
"diff_url": "https://github.com/huggingface/transformers/pull/40532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40532.patch",
"merged_at": "2025-08-28T21:30:59"
} | # What does this PR do?
We have in `SeamlessM4Tv2Encoder`
```python
to_drop = False
if self.training:
dropout_probability = torch.rand([])
if dropout_probability < self.layerdrop: # skip the layer
to_drop = True
if to_drop:
layer_outputs = (None, None)
```
which causes some layer outputs to be `None` from time to time, making `test_retain_grad_hidden_states_attentions` fail at
```python
if self.has_attentions:
attentions = outputs.attentions[0]
attentions.retain_grad()
```
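To make the failure mode concrete, here is a torch-free sketch of that LayerDrop logic (hypothetical stand-in, using `random` instead of `torch.rand`):

```python
import random

def encoder_layer_outputs(num_layers, layerdrop, training, rng):
    """Mimics the loop above: during training each layer is skipped with
    probability `layerdrop`, producing (None, None) instead of real outputs."""
    outputs = []
    for _ in range(num_layers):
        to_drop = training and rng.random() < layerdrop
        outputs.append((None, None) if to_drop else ("hidden", "attn"))
    return outputs

rng = random.Random(0)

# In training mode with layerdrop > 0, some attentions come back as None,
# so `outputs.attentions[i].retain_grad()` can crash.
train_outputs = encoder_layer_outputs(100, 0.5, training=True, rng=rng)
assert any(attn is None for _, attn in train_outputs)

# With layerdrop effectively disabled (the equivalent of not dropping
# randomly in the test), every layer returns real outputs.
eval_outputs = encoder_layer_outputs(100, 0.0, training=True, rng=rng)
assert all(attn is not None for _, attn in eval_outputs)
```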
Let's just not drop randomly for the test | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40532/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40531/comments | https://api.github.com/repos/huggingface/transformers/issues/40531/events | https://github.com/huggingface/transformers/issues/40531 | 3,364,627,938 | I_kwDOCUB6oc7IjCXi | 40,531 | Getting AttributeError: 'Mxfp4GptOssExperts' object has no attribute 'down_proj_scales'. When trying to load the GPT oss 20b model | {
"login": "eltonjohnfanboy",
"id": 103358618,
"node_id": "U_kgDOBikgmg",
"avatar_url": "https://avatars.githubusercontent.com/u/103358618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eltonjohnfanboy",
"html_url": "https://github.com/eltonjohnfanboy",
"followers_url": "https://api.github.com/users/eltonjohnfanboy/followers",
"following_url": "https://api.github.com/users/eltonjohnfanboy/following{/other_user}",
"gists_url": "https://api.github.com/users/eltonjohnfanboy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eltonjohnfanboy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eltonjohnfanboy/subscriptions",
"organizations_url": "https://api.github.com/users/eltonjohnfanboy/orgs",
"repos_url": "https://api.github.com/users/eltonjohnfanboy/repos",
"events_url": "https://api.github.com/users/eltonjohnfanboy/events{/privacy}",
"received_events_url": "https://api.github.com/users/eltonjohnfanboy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-28T20:46:23 | 2025-10-16T09:40:36 | 2025-10-16T09:40:36 | NONE | null | null | null | null | ### System Info
Hi guys!
Recently I was trying to fine-tune the GPT OSS 20B model for a task I was interested in.
I tried to follow a strategy similar to the one used in the `gpt-oss-recipes` repository for SFT: https://github.com/huggingface/gpt-oss-recipes/blob/main/sft.py
But instead of full fine-tuning, I wanted to train a LoRA adapter on top of it.
Currently I have access to some A100s with 80 GB of VRAM, so I thought this should be easily doable.
But whenever I try to load the model for training, I get the following error:
```
[rank4]: AttributeError: 'Mxfp4GptOssExperts' object has no attribute 'down_proj_scales'. Did you mean: 'down_proj_bias'?
```
PS: I'm using the latest version of Transformers -> 4.56.0.dev0
Currently this is how I'm loading the model:
```python
model_config_parameters = {
"pretrained_model_name_or_path": ckpt,
"trust_remote_code": True,
"revision": FLAGS.model_revision,
"torch_dtype": _ACCELERATE_MIXED_PRECISION[accelerator.state.mixed_precision],
"use_cache": not FLAGS.gradient_checkpointing,
"token": os.getenv("HF_TOKEN"),
"quantization_config": quantization_config
}
arch_add_args = {"attn_implementation": "eager"}
model = AutoModelForCausalLM.from_pretrained(**(model_config_parameters | arch_add_args)).to(device)
```
Is this a bug, or am I getting some part of the model loading process wrong? Thanks!!
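For context, the recipe I was following dequantizes the MXFP4 weights at load time, which I believe is the intended path for training. A sketch only — it assumes a recent `transformers` that ships `Mxfp4Config`, and it downloads the checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, Mxfp4Config

# Dequantize the MXFP4 checkpoint to bf16 at load time so that LoRA/PEFT
# training does not touch the quantized expert buffers directly.
quantization_config = Mxfp4Config(dequantize=True)

model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
```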
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model_config_parameters = {
"pretrained_model_name_or_path": ckpt,
"trust_remote_code": True,
"revision": FLAGS.model_revision,
"torch_dtype": _ACCELERATE_MIXED_PRECISION[accelerator.state.mixed_precision],
"use_cache": not FLAGS.gradient_checkpointing,
"token": os.getenv("HF_TOKEN"),
"quantization_config": quantization_config
}
arch_add_args = {"attn_implementation": "eager"}
model = AutoModelForCausalLM.from_pretrained(**(model_config_parameters | arch_add_args)).to(device)
### Expected behavior
The model should load correctly, allowing training to continue. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40531/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/40531/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40530/comments | https://api.github.com/repos/huggingface/transformers/issues/40530/events | https://github.com/huggingface/transformers/pull/40530 | 3,364,566,777 | PR_kwDOCUB6oc6l32YV | 40,530 | Set `test_all_params_have_gradient=False` for `HunYuanMoEV1ModelTest` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T20:20:25 | 2025-08-28T20:33:19 | 2025-08-28T20:32:51 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40530",
"html_url": "https://github.com/huggingface/transformers/pull/40530",
"diff_url": "https://github.com/huggingface/transformers/pull/40530.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40530.patch",
"merged_at": "2025-08-28T20:32:51"
} | # What does this PR do?
MoE models should set this to `False`, as is done in many other MoE model test files.
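For intuition, here is a torch-free sketch of top-k expert routing (hypothetical sizes, with `random` standing in for a learned router):

```python
import random

def route_tokens(num_tokens, num_experts, top_k, rng):
    """Select top_k experts per token; return the set of experts that
    received at least one token this step."""
    used = set()
    for _ in range(num_tokens):
        used.update(rng.sample(range(num_experts), top_k))
    return used

rng = random.Random(0)
used = route_tokens(num_tokens=4, num_experts=16, top_k=2, rng=rng)

# At most 4 * 2 = 8 of the 16 experts can be selected, so at least 8
# experts receive no tokens -- their parameters get no gradient this step,
# which is exactly why `test_all_params_have_gradient` cannot hold for MoE.
assert len(used) <= 8
assert len(set(range(16)) - used) >= 8
```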
Not all experts are selected in a given forward pass, and the parameters of unselected experts won't receive gradients. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40530/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40530/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40529/comments | https://api.github.com/repos/huggingface/transformers/issues/40529/events | https://github.com/huggingface/transformers/pull/40529 | 3,364,092,253 | PR_kwDOCUB6oc6l2PnO | 40,529 | Update kosmos2_5 README to use torch_dtype in model loading example | {
"login": "AdemBoukhris457",
"id": 61518107,
"node_id": "MDQ6VXNlcjYxNTE4MTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/61518107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdemBoukhris457",
"html_url": "https://github.com/AdemBoukhris457",
"followers_url": "https://api.github.com/users/AdemBoukhris457/followers",
"following_url": "https://api.github.com/users/AdemBoukhris457/following{/other_user}",
"gists_url": "https://api.github.com/users/AdemBoukhris457/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdemBoukhris457/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdemBoukhris457/subscriptions",
"organizations_url": "https://api.github.com/users/AdemBoukhris457/orgs",
"repos_url": "https://api.github.com/users/AdemBoukhris457/repos",
"events_url": "https://api.github.com/users/AdemBoukhris457/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdemBoukhris457/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T17:16:10 | 2025-08-29T11:18:49 | 2025-08-29T11:18:48 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40529",
"html_url": "https://github.com/huggingface/transformers/pull/40529",
"diff_url": "https://github.com/huggingface/transformers/pull/40529.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40529.patch",
"merged_at": null
} | # Update kosmos2_5 README to use `torch_dtype` in model loading example
This pull request updates the `kosmos2_5.md` README file to correct the `Kosmos2_5ForConditionalGeneration.from_pretrained` example. Specifically, it replaces the `dtype` parameter with `torch_dtype` to align with the Hugging Face Transformers API for specifying data types during model loading. This change improves clarity and ensures compatibility with the library's conventions.
### Changes Made
- Modified `kosmos2_5.md` to update the model loading example from:
```python
model = Kosmos2_5ForConditionalGeneration.from_pretrained(repo, device_map=device, dtype=dtype)
```
to:
```python
model = Kosmos2_5ForConditionalGeneration.from_pretrained(repo, device_map=device, torch_dtype=dtype)
```
### Related Issue
- No specific issue is linked to this change, as it is a minor documentation improvement. I checked existing issues and PRs to ensure no duplicate work.
### Pull Request Checklist
- [x] The pull request title summarizes the contribution.
- [ ] This PR does not address a specific issue, as it is a minor documentation fix.
- [ ] This PR is not a work in progress (no `[WIP]` prefix needed).
- [x] Existing tests pass (no code changes, only README update, so no new tests required).
- [ ] No new features or models added, so no new tests or `ModelTester.all_model_classes` updates needed.
- [ ] No new public methods added, so no docstrings required.
- [ ] No images, videos, or non-text files added (change is text-only in README).
- [x] Documentation build checked locally with:
```bash
pip install hf-doc-builder
doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build
```
The updated `kosmos2_5.md` file builds without errors.
- [x] Code style and quality checks not applicable (README change), but ran:
```bash
make style
make quality
make repo-consistency
```
to ensure no unintended issues.
- [x] Commit message is clear and descriptive.
### Notes
- This is a documentation-only change, so no functional tests were modified or added.
- I verified the documentation build locally to ensure the updated README renders correctly.
- The change aligns with the Transformers API documentation for `from_pretrained`.
Please review and let me know if any adjustments are needed! | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40529/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40528/comments | https://api.github.com/repos/huggingface/transformers/issues/40528/events | https://github.com/huggingface/transformers/pull/40528 | 3,363,989,927 | PR_kwDOCUB6oc6l15nD | 40,528 | [`Qwen Omni/VL`] Fix fa tests | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T16:37:41 | 2025-08-28T19:07:24 | 2025-08-28T19:07:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40528",
"html_url": "https://github.com/huggingface/transformers/pull/40528",
"diff_url": "https://github.com/huggingface/transformers/pull/40528.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40528.patch",
"merged_at": "2025-08-28T19:07:22"
MRoPE depends on the head dim, so we need to update it as well.
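Concretely, the 3D rope section sizes have to track the head dim — a minimal sketch with illustrative numbers in the style of Qwen2-VL, where `mrope_section=[16, 24, 24]` pairs with `head_dim=128`:

```python
def check_mrope_section(mrope_section, head_dim):
    """The per-axis (temporal/height/width) rotary sections must cover
    exactly half the head dim (one entry per rotary frequency pair), so
    shrinking head_dim in a test config means rescaling mrope_section too."""
    return sum(mrope_section) == head_dim // 2

# Matches: 16 + 24 + 24 == 128 // 2
assert check_mrope_section([16, 24, 24], head_dim=128)

# Shrinking head_dim for a fast test without touching mrope_section breaks it:
assert not check_mrope_section([16, 24, 24], head_dim=64)

# A rescaled section list restores consistency: 8 + 12 + 12 == 64 // 2
assert check_mrope_section([8, 12, 12], head_dim=64)
```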
cc @zucchini-nlp (since VLMs and such have more potential to break things here) | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40528/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40527/comments | https://api.github.com/repos/huggingface/transformers/issues/40527/events | https://github.com/huggingface/transformers/pull/40527 | 3,363,981,620 | PR_kwDOCUB6oc6l13yD | 40,527 | Standardize DialoGPT Model Card | {
"login": "Uvi-12",
"id": 190028082,
"node_id": "U_kgDOC1OZMg",
"avatar_url": "https://avatars.githubusercontent.com/u/190028082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Uvi-12",
"html_url": "https://github.com/Uvi-12",
"followers_url": "https://api.github.com/users/Uvi-12/followers",
"following_url": "https://api.github.com/users/Uvi-12/following{/other_user}",
"gists_url": "https://api.github.com/users/Uvi-12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Uvi-12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Uvi-12/subscriptions",
"organizations_url": "https://api.github.com/users/Uvi-12/orgs",
"repos_url": "https://api.github.com/users/Uvi-12/repos",
"events_url": "https://api.github.com/users/Uvi-12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Uvi-12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T16:35:03 | 2025-09-18T17:18:39 | 2025-09-18T17:18:38 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40527",
"html_url": "https://github.com/huggingface/transformers/pull/40527",
"diff_url": "https://github.com/huggingface/transformers/pull/40527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40527.patch",
"merged_at": null
} | ## What does this PR do?
This PR updates the DialoGPT model card to align with the new standardized model card format proposed in #36923.
## Notes
- This PR is still a work in progress.
- Using the doc preview bot to verify formatting.
- Follow-up commits will update the pending sections. | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40527/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40526/comments | https://api.github.com/repos/huggingface/transformers/issues/40526/events | https://github.com/huggingface/transformers/pull/40526 | 3,363,905,235 | PR_kwDOCUB6oc6l1nSL | 40,526 | Lazy import torchcodec | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T16:10:32 | 2025-08-28T16:57:14 | 2025-08-28T16:57:14 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40526",
"html_url": "https://github.com/huggingface/transformers/pull/40526",
"diff_url": "https://github.com/huggingface/transformers/pull/40526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40526.patch",
"merged_at": "2025-08-28T16:57:14"
} | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40526/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/40526/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40525/comments | https://api.github.com/repos/huggingface/transformers/issues/40525/events | https://github.com/huggingface/transformers/pull/40525 | 3,363,744,744 | PR_kwDOCUB6oc6l1E0b | 40,525 | 🚨🚨🚨 Refactor CLIP structure | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T15:18:41 | 2025-08-28T17:49:33 | 2025-08-28T17:49:33 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40525",
"html_url": "https://github.com/huggingface/transformers/pull/40525",
"diff_url": "https://github.com/huggingface/transformers/pull/40525.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40525.patch",
"merged_at": null
} | # What does this PR do?
Remove excessive abstraction by merging `CLIPVisionTransformers` and `CLIPVisionModel` into a single `CLIPVisionModel`. This is a breaking change: the `CLIPVisionTransformers` class will no longer exist, and weights will be saved with a different prefix (weight loading still works). Motivation: code refactoring to reduce bloat and to enable `@check_model_inputs`.
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40525/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40524/comments | https://api.github.com/repos/huggingface/transformers/issues/40524/events | https://github.com/huggingface/transformers/pull/40524 | 3,363,685,496 | PR_kwDOCUB6oc6l033U | 40,524 | Use begin_of_sequence token in all sliding windows for correct model behaviour | {
"login": "TiAn5046",
"id": 146202114,
"node_id": "U_kgDOCLbeAg",
"avatar_url": "https://avatars.githubusercontent.com/u/146202114?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TiAn5046",
"html_url": "https://github.com/TiAn5046",
"followers_url": "https://api.github.com/users/TiAn5046/followers",
"following_url": "https://api.github.com/users/TiAn5046/following{/other_user}",
"gists_url": "https://api.github.com/users/TiAn5046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TiAn5046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TiAn5046/subscriptions",
"organizations_url": "https://api.github.com/users/TiAn5046/orgs",
"repos_url": "https://api.github.com/users/TiAn5046/repos",
"events_url": "https://api.github.com/users/TiAn5046/events{/privacy}",
"received_events_url": "https://api.github.com/users/TiAn5046/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-28T15:03:05 | 2025-09-11T07:46:23 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40524",
"html_url": "https://github.com/huggingface/transformers/pull/40524",
"diff_url": "https://github.com/huggingface/transformers/pull/40524.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40524.patch",
"merged_at": null
} | Testing with models that employ BOS tokens (e.g. Llama) resulted in much higher perplexity for all tokens except those in the first window. The reason is that if we tokenize only once, then only the first window gets the BOS token. However, for correct model behaviour, each forward pass, i.e. each sliding window, needs a BOS token as its first token (otherwise perplexity values are significantly higher).
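The per-window BOS insertion described above can be sketched as follows (a minimal illustration under assumed window/stride parameters; `bos_sliding_windows` is a hypothetical helper, not the PR's actual code):

```python
def bos_sliding_windows(token_ids, bos_id, window_size, stride):
    """Split token_ids into overlapping windows, prepending the BOS token
    to every window so each forward pass sees BOS as its first token."""
    windows = []
    start = 0
    while start < len(token_ids):
        # Reserve one slot in each window for the prepended BOS token.
        chunk = token_ids[start:start + window_size - 1]
        windows.append([bos_id] + chunk)
        start += stride
    return windows

# Every window now starts with BOS, not only the first one.
windows = bos_sliding_windows(list(range(10)), bos_id=99, window_size=5, stride=4)
print(windows)  # [[99, 0, 1, 2, 3], [99, 4, 5, 6, 7], [99, 8, 9]]
```

With a real tokenizer, `bos_id` would be `tokenizer.bos_token_id`; the point is simply that BOS is prepended to every window rather than only once for the whole sequence.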
Note: since the inserted BOS token is the first token, we do not need to set its label to -100, thanks to the causal model's internal label shift.
Note: if custom attention masks are used, always ensure the BOS token gets attention; otherwise you may see unexpected behaviour (e.g. perplexity spikes for the first three tokens when they lack a BOS token). | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40524/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40523/comments | https://api.github.com/repos/huggingface/transformers/issues/40523/events | https://github.com/huggingface/transformers/pull/40523 | 3,363,485,326 | PR_kwDOCUB6oc6l0MPu | 40,523 | Fix mistral3 tests after "[Kosmos 2.5] Rename checkpoints" | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T14:10:15 | 2025-08-28T14:29:57 | 2025-08-28T14:29:55 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40523",
"html_url": "https://github.com/huggingface/transformers/pull/40523",
"diff_url": "https://github.com/huggingface/transformers/pull/40523.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40523.patch",
"merged_at": "2025-08-28T14:29:54"
} | Partially Reverts huggingface/transformers#40338 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40523/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40522/comments | https://api.github.com/repos/huggingface/transformers/issues/40522/events | https://github.com/huggingface/transformers/issues/40522 | 3,363,395,439 | I_kwDOCUB6oc7IeVdv | 40,522 | RuntimeError with use_remove_padding=True in GRPO training for Qwen2.5-VL due to tensor shape mismatch in transformers==4.53.3 | {
"login": "zhihaofang1017",
"id": 92305118,
"node_id": "U_kgDOBYB23g",
"avatar_url": "https://avatars.githubusercontent.com/u/92305118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhihaofang1017",
"html_url": "https://github.com/zhihaofang1017",
"followers_url": "https://api.github.com/users/zhihaofang1017/followers",
"following_url": "https://api.github.com/users/zhihaofang1017/following{/other_user}",
"gists_url": "https://api.github.com/users/zhihaofang1017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhihaofang1017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhihaofang1017/subscriptions",
"organizations_url": "https://api.github.com/users/zhihaofang1017/orgs",
"repos_url": "https://api.github.com/users/zhihaofang1017/repos",
"events_url": "https://api.github.com/users/zhihaofang1017/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhihaofang1017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-28T13:44:37 | 2025-09-30T10:34:03 | 2025-09-30T10:34:03 | NONE | null | null | null | null | ### System Info
NPU & Ascend Driver:
NPU: 16x Ascend 910B2 64GB
CANN Version: 8.2.RC1
Python Version: Python 3.11.9
Key Libraries:
verl: git+https://github.com/volcengine/verl@8e1fc24
transformers: 4.53.3
torch: 2.5.1+cpu
torch-npu: 2.5.1
accelerate: 1.10.0
bytedray: 2.10.0.32
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/volcengine/verl/blob/main/examples/grpo_trainer/run_qwen2_5_vl_7b_npu.sh
### Expected behavior
I am trying to run the GRPO training example for the Qwen2.5-VL model using the script in VeRL (https://github.com/volcengine/verl/blob/main/examples/grpo_trainer/run_qwen2_5_vl_7b_npu.sh).
The core error message is:
(TaskRunner pid=1205760, ip=[2605:340:cd51:4900:f78:21b8:4971:440b]) File "/home/tiger/.local/lib/python3.11/site-packages/transformers/masking_utils.py", line 704, in _preprocess_mask_arguments
(TaskRunner pid=1205760, ip=[2605:340:cd51:4900:f78:21b8:4971:440b]) position_ids = position_ids.expand(batch_size, -1)
(TaskRunner pid=1205760, ip=[2605:340:cd51:4900:f78:21b8:4971:440b]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(TaskRunner pid=1205760, ip=[2605:340:cd51:4900:f78:21b8:4971:440b]) RuntimeError: expand(npuLongType{[3, 1, 717]}, size=[1, -1]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)
<img width="2288" height="970" alt="Image" src="https://github.com/user-attachments/assets/bc6752a1-f83b-4490-8462-a652f4c2502d" />
However, the training process fails at step 0 with a RuntimeError related to a tensor dimension mismatch inside the model's forward pass.
Through further debugging, I've discovered a key finding: this error only occurs when use_remove_padding=True is set in the configuration. If I set use_remove_padding=False, the training proceeds without this specific error.
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40522/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40521/comments | https://api.github.com/repos/huggingface/transformers/issues/40521/events | https://github.com/huggingface/transformers/pull/40521 | 3,363,394,379 | PR_kwDOCUB6oc6lz4sp | 40,521 | skip some `padding_matches_padding_free_with_position_ids` for FA2 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T13:44:17 | 2025-08-28T15:20:10 | 2025-08-28T15:20:08 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40521",
"html_url": "https://github.com/huggingface/transformers/pull/40521",
"diff_url": "https://github.com/huggingface/transformers/pull/40521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40521.patch",
"merged_at": "2025-08-28T15:20:08"
} | # What does this PR do?
As discussed offline. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40521/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40520/comments | https://api.github.com/repos/huggingface/transformers/issues/40520/events | https://github.com/huggingface/transformers/pull/40520 | 3,363,385,144 | PR_kwDOCUB6oc6lz2re | 40,520 | [generate] add faster `stop_strings` stopping criteria | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-28T13:41:52 | 2025-09-12T13:28:15 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40520",
"html_url": "https://github.com/huggingface/transformers/pull/40520",
"diff_url": "https://github.com/huggingface/transformers/pull/40520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40520.patch",
"merged_at": null
} | # What does this PR do?
Adds `StopStringTextMatchCriteria`, a faster alternative to `StopStringCriteria`. Unlike `StopStringCriteria`, `StopStringTextMatchCriteria` **can't** be compiled.
Some additional context:
- when we added `StopStringCriteria`, we were looking forward to having end-to-end `generate` compilation, so it made sense to focus on compilable options;
- As a user mentioned on [this issue](https://github.com/huggingface/smolagents/issues/1703), `StopStringCriteria` can be really slow in some contexts, specifically at initialization time (see benchmarks below);
- In general, `StopStringTextMatchCriteria` is faster, so it's the new default. `StopStringCriteria` is kept for `torch.compile` users.
Thank you @MaxBourdon for surfacing the problem
### Benchmarks
TL;DR: `StopStringCriteria` is very slow to initialize on new `stop_strings` inputs, >2s on my machine. This is cached, so successive calls with the same `stop_strings` are not as bad. However, it's particularly troublesome when trying out small models, as this initialization may take much longer than the generation itself. Excluding init time, the new `StopStringTextMatchCriteria` is also slightly faster.
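The text-match idea can be illustrated with a tiny sketch (this is an assumption about the mechanism, a plain substring check on decoded text, and not the PR's actual code):

```python
def text_match_stop(decoded_texts, stop_strings):
    """Return one bool per sequence: True once any stop string appears in the
    decoded text. No token-level preprocessing, hence the near-zero init cost."""
    return [any(stop in text for stop in stop_strings) for text in decoded_texts]

flags = text_match_stop(
    ["The quick brown Potato", "The quick brown fox"],
    ["Potato", "Carrots"],
)
print(flags)  # [True, False]
```

This avoids the expensive per-vocabulary preprocessing that token-level matching needs, at the cost of not being `torch.compile`-friendly.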
<details>
<summary>Benchmark script</summary>
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteriaList, StopStringTextMatchCriteria, StopStringCriteria
from time import time
import torch
N_RUNS = 100
MAX_NEW_TOKENS = 100
MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"
STOP_STRINGS = ["Potato", "Carrots", "Onions", "Garlic", "Tomatoes", "Lettuce", "Cucumbers"]
BATCH_SIZE = 1
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", dtype=torch.bfloat16)
inputs = tokenizer(["The quick brown"] * BATCH_SIZE, return_tensors="pt").to(model.device)
# warmup
for i in range(10):
gen_out = model.generate(**inputs, do_sample=False, max_new_tokens=MAX_NEW_TOKENS, min_new_tokens=MAX_NEW_TOKENS)
assert gen_out.shape[1] == MAX_NEW_TOKENS + inputs.input_ids.shape[1]
# ------------------------------------------------
# No stopping criteria
# ------------------------------------------------
all_times = []
for i in range(N_RUNS):
start_time = time()
gen_out = model.generate(**inputs, do_sample=False, max_new_tokens=MAX_NEW_TOKENS, min_new_tokens=MAX_NEW_TOKENS)
assert gen_out.shape[1] == MAX_NEW_TOKENS + inputs.input_ids.shape[1]
end_time = time()
all_times.append(end_time - start_time)
avg_time = sum(all_times) / N_RUNS
print(f"[No stopping criteria] Average generation time: {avg_time} seconds")
# ------------------------------------------------
# StopStringCriteria
# ------------------------------------------------
# IMPORTANT NOTE: initializing this for the first time is slow, >1s. Prior to this PR, this initialization was
# done inside `generate` when `stop_strings` is set
init_start_time = time()
custom_stopping_criteria = StoppingCriteriaList([StopStringCriteria(tokenizer, STOP_STRINGS)])
init_end_time = time()
print(f"[StopStringCriteria] first init time: {init_end_time - init_start_time} seconds")
init_start_time = time()
custom_stopping_criteria = StoppingCriteriaList([StopStringCriteria(tokenizer, STOP_STRINGS)])
init_end_time = time()
print(f"[StopStringCriteria] second init time: {init_end_time - init_start_time} seconds")
all_times = []
for i in range(N_RUNS):
start_time = time()
gen_out = model.generate(
**inputs,
do_sample=False,
stopping_criteria=custom_stopping_criteria,
max_new_tokens=MAX_NEW_TOKENS,
min_new_tokens=MAX_NEW_TOKENS,
)
assert gen_out.shape[1] == MAX_NEW_TOKENS + inputs.input_ids.shape[1]
end_time = time()
all_times.append(end_time - start_time)
avg_time = sum(all_times) / N_RUNS
print(f"[StopStringCriteria] Average generation time: {avg_time} seconds")
# ------------------------------------------------
# StopStringTextMatchCriteria
# ------------------------------------------------
init_start_time = time()
custom_stopping_criteria = StoppingCriteriaList([StopStringTextMatchCriteria(tokenizer, STOP_STRINGS)])
init_end_time = time()
print(f"[StopStringTextMatchCriteria] first init time: {init_end_time - init_start_time} seconds")
init_start_time = time()
custom_stopping_criteria = StoppingCriteriaList([StopStringTextMatchCriteria(tokenizer, STOP_STRINGS)])
init_end_time = time()
print(f"[StopStringTextMatchCriteria] second init time: {init_end_time - init_start_time} seconds")
all_times = []
for i in range(N_RUNS):
start_time = time()
gen_out = model.generate(
**inputs,
do_sample=False,
stopping_criteria=custom_stopping_criteria,
max_new_tokens=MAX_NEW_TOKENS,
min_new_tokens=MAX_NEW_TOKENS,
)
assert gen_out.shape[1] == MAX_NEW_TOKENS + inputs.input_ids.shape[1]
end_time = time()
all_times.append(end_time - start_time)
avg_time = sum(all_times) / N_RUNS
print(f"[StopStringTextMatchCriteria] Average generation time: {avg_time} seconds")
# ------------------------------------------------
# generate with stop strings (using `StopStringTextMatchCriteria` under the hood)
# ------------------------------------------------
all_times = []
for i in range(N_RUNS):
start_time = time()
gen_out = model.generate(
**inputs,
do_sample=False,
stop_strings=STOP_STRINGS,
tokenizer=tokenizer,
max_new_tokens=MAX_NEW_TOKENS,
min_new_tokens=MAX_NEW_TOKENS,
)
assert gen_out.shape[1] == MAX_NEW_TOKENS + inputs.input_ids.shape[1]
end_time = time()
all_times.append(end_time - start_time)
avg_time = sum(all_times) / N_RUNS
print(f"[Default `stop_strings` criteria] Average generation time: {avg_time} seconds")
```
</details>
Benchmark results on my machine:
```
[No stopping criteria] Average generation time: 1.3314339590072632 seconds
[StopStringCriteria] first init time: 2.4044578075408936 seconds # <------- this is the issue, init time > generation time!
[StopStringCriteria] second init time: 0.0518953800201416 seconds
[StopStringCriteria] Average generation time: 1.3567428421974181 seconds
[StopStringTextMatchCriteria] first init time: 6.4373016357421875e-06 seconds
[StopStringTextMatchCriteria] second init time: 1.9073486328125e-06 seconds
[StopStringTextMatchCriteria] Average generation time: 1.343175311088562 seconds
[Default `stop_strings` criteria] Average generation time: 1.3437320828437804 seconds # <---- uses `StopStringTextMatchCriteria`
```
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40520/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40519/comments | https://api.github.com/repos/huggingface/transformers/issues/40519/events | https://github.com/huggingface/transformers/pull/40519 | 3,363,179,135 | PR_kwDOCUB6oc6lzKOK | 40,519 | Skip some flex attn tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T12:40:09 | 2025-08-28T13:43:39 | 2025-08-28T13:43:38 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40519",
"html_url": "https://github.com/huggingface/transformers/pull/40519",
"diff_url": "https://github.com/huggingface/transformers/pull/40519.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40519.patch",
"merged_at": "2025-08-28T13:43:38"
} | # What does this PR do?
For those failing with
> torch._inductor.exc.InductorError: RuntimeError: No valid triton configs. OutOfMemoryError: out of resource: triton_tem_fused_0 Required: 151552 Hardware limit:101376 Reducing block sizes or num_stages may help.
I don't want to see 👀 🙄 anymoreeeeeeeee! | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40519/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40518/comments | https://api.github.com/repos/huggingface/transformers/issues/40518/events | https://github.com/huggingface/transformers/pull/40518 | 3,362,601,492 | PR_kwDOCUB6oc6lxNgH | 40,518 | 🚨 Remove Constrained Beam Search decoding strategy | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T09:38:37 | 2025-09-01T12:34:49 | 2025-09-01T12:34:49 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40518",
"html_url": "https://github.com/huggingface/transformers/pull/40518",
"diff_url": "https://github.com/huggingface/transformers/pull/40518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40518.patch",
"merged_at": "2025-09-01T12:34:48"
} | Removes Constrained Beam Search generation strategy from the codebase. Directs users to the `transformers-community/constrained-beam-search` repository.
Constrained beam search has emitted a deprecation warning for a few releases; from now on, `trust_remote_code=True` is required to run it.
Depends on #40480 and #40495 | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40518/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40517/comments | https://api.github.com/repos/huggingface/transformers/issues/40517/events | https://github.com/huggingface/transformers/pull/40517 | 3,362,387,995 | PR_kwDOCUB6oc6lwfTN | 40,517 | Fix for missing default values in encoder decoder | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T08:36:02 | 2025-09-17T03:54:19 | 2025-09-01T14:11:23 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40517",
"html_url": "https://github.com/huggingface/transformers/pull/40517",
"diff_url": "https://github.com/huggingface/transformers/pull/40517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40517.patch",
"merged_at": "2025-09-01T14:11:23"
} | This PR fixes two things in `EncoderDecoder`:
- the variable `is_updated` is declared only inside an `if` branch but is later used in a check, so we give it a default value of `False`
- the update to `past_key_values` writes to an `is_updated` attribute that exists only on the `EncoderDecoder` cache type, so we guard the update with a check on the type of `past_key_values`
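The two guards described above can be sketched roughly like this (a minimal illustration with hypothetical class and function names, not the actual `transformers` code):

```python
class EncoderDecoderCache:
    """Stand-in for the cache type that carries per-layer `is_updated` flags."""
    def __init__(self):
        self.is_updated = {}

def cross_attention_step(past_key_values, layer_idx):
    # Fix 1: give `is_updated` a default so the later check is always defined,
    # even when the `if` branch below is skipped.
    is_updated = False
    # Fix 2: only touch `is_updated` when the cache actually has that attribute,
    # i.e. when it is the encoder-decoder cache type.
    if isinstance(past_key_values, EncoderDecoderCache):
        is_updated = past_key_values.is_updated.get(layer_idx, False)
        past_key_values.is_updated[layer_idx] = True
    return is_updated
```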
| {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40517/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40516/comments | https://api.github.com/repos/huggingface/transformers/issues/40516/events | https://github.com/huggingface/transformers/pull/40516 | 3,362,369,746 | PR_kwDOCUB6oc6lwbcv | 40,516 | when _validate_request, remove request_id, or error will like Unexpec… | {
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T08:29:44 | 2025-08-29T01:44:08 | 2025-08-29T01:44:08 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40516",
"html_url": "https://github.com/huggingface/transformers/pull/40516",
"diff_url": "https://github.com/huggingface/transformers/pull/40516.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40516.patch",
"merged_at": null
} | …ted keys in the request: {'request_id'} INFO: ::1:44814 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
# What does this PR do?
@gante please review the PR.
How to reproduce the issue:
```
transformers serve --log_level debug
transformers chat Qwen/Qwen2.5-0.5B-Instruct do_sample=False max_new_tokens=10
```
Send the 1st request -> OK.
Send the 2nd request -> crash. Client log:
```
File "/home/ywan171/transformers/src/transformers/commands/chat.py", line 742, in _inner_run
model_output, request_id = await interface.stream_output(stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ywan171/transformers/src/transformers/commands/chat.py", line 130, in stream_output
async for token in await stream:
^^^^^^^^^^^^
File "/workspace/ywan171/miniforge3/envs/optimum-intel/lib/python3.11/site-packages/huggingface_hub/inference/_generated/_async_client.py", line 963, in chat_completion
data = await self._inner_post(request_parameters, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ywan171/miniforge3/envs/optimum-intel/lib/python3.11/site-packages/huggingface_hub/inference/_generated/_async_client.py", line 289, in _inner_post
raise error
File "/workspace/ywan171/miniforge3/envs/optimum-intel/lib/python3.11/site-packages/huggingface_hub/inference/_generated/_async_client.py", line 275, in _inner_post
response.raise_for_status()
File "/workspace/ywan171/miniforge3/envs/optimum-intel/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1161, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 422, message='Unprocessable Entity', url='http://localhost:8000/v1/chat/completions'
```
server log:
```
INFO: ::1:51300 - "POST /v1/chat/completions HTTP/1.1" 200 OK
Validating request: {'model': 'Qwen/Qwen2.5-0.5B-Instruct@main', 'stream': True, 'request_id': 'req_0', 'generation_config': '{\n "bos_token_id": 151643,\n "eos_token_id": [\n 151645,\n 151643\n ],\n "max_new_tokens": 10,\n "pad_token_id": 151643,\n "repetition_penalty": 1.1,\n "temperature": 0.7,\n "top_k": 20,\n "top_p": 0.8,\n "transformers_version": "4.56.0.dev0"\n}\n', 'messages': [{'role': 'user', 'content': 'hi'}, {'role': 'assistant', 'content': 'Hello! How can I assist you today? If'}, {'role': 'user', 'content': 'hi'}]}
Unexpected keys in the request: {'request_id'}
INFO: ::1:39042 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
```
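The fix this PR describes — dropping the client-supplied `request_id` before validating against the accepted schema — can be sketched like this (hypothetical helper and key names, not the actual serve code):

```python
# Keys the server actually understands; anything else triggers a 422.
ALLOWED_KEYS = {"model", "stream", "generation_config", "messages"}

def validate_request(request: dict) -> dict:
    # `request_id` is injected by the chat client for bookkeeping; strip it
    # before the unexpected-keys check so it no longer causes a 422.
    cleaned = {k: v for k, v in request.items() if k != "request_id"}
    unexpected = set(cleaned) - ALLOWED_KEYS
    if unexpected:
        raise ValueError(f"Unexpected keys in the request: {unexpected}")
    return cleaned
```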
| {
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40516/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40515/comments | https://api.github.com/repos/huggingface/transformers/issues/40515/events | https://github.com/huggingface/transformers/pull/40515 | 3,362,318,642 | PR_kwDOCUB6oc6lwQdT | 40,515 | Add Context-Aware Tokenizer Selection Utility Based on Corpus Analysis | {
"login": "Aishwarya0811",
"id": 41635755,
"node_id": "MDQ6VXNlcjQxNjM1NzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/41635755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aishwarya0811",
"html_url": "https://github.com/Aishwarya0811",
"followers_url": "https://api.github.com/users/Aishwarya0811/followers",
"following_url": "https://api.github.com/users/Aishwarya0811/following{/other_user}",
"gists_url": "https://api.github.com/users/Aishwarya0811/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aishwarya0811/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aishwarya0811/subscriptions",
"organizations_url": "https://api.github.com/users/Aishwarya0811/orgs",
"repos_url": "https://api.github.com/users/Aishwarya0811/repos",
"events_url": "https://api.github.com/users/Aishwarya0811/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aishwarya0811/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-28T08:15:08 | 2025-09-11T07:33:10 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40515",
"html_url": "https://github.com/huggingface/transformers/pull/40515",
"diff_url": "https://github.com/huggingface/transformers/pull/40515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40515.patch",
"merged_at": null
} | # What does this PR do?
This PR introduces a new utility module that automates tokenizer selection and configuration based on corpus characteristics, addressing the need to reduce manual trial-and-error in tokenizer selection and improve model performance with minimal user effort.
## Key Features:
- CorpusAnalyzer: Extracts statistical features from text corpora (vocabulary size, morphological complexity, character diversity, language patterns)
- TokenizerRecommender: Maps corpus features to optimal tokenizer types (BPE, WordPiece, SentencePiece) using rule-based heuristics
- TokenizerSelector: End-to-end utility that analyzes corpus, recommends tokenizer type, and optionally trains it using existing infrastructure
- Language-aware recommendations: Handles different script types (Latin, CJK, mixed) appropriately
## Implementation Details:
- Minimal changes: Single new file `src/transformers/utils/tokenizer_selection.py`
- Zero modifications to existing tokenizer classes
- Uses lazy imports to avoid circular dependencies
- Integrates with the existing `train_new_from_iterator` method
- Comprehensive test coverage (16 tests)
## Fixes #40512
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
- @ArthurZucker
- @PamelaBha
- @gante
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40515/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40514/comments | https://api.github.com/repos/huggingface/transformers/issues/40514/events | https://github.com/huggingface/transformers/pull/40514 | 3,361,944,141 | PR_kwDOCUB6oc6lvAJr | 40,514 | Include machine type in collated reports filename | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T06:14:41 | 2025-08-28T07:28:13 | 2025-08-28T07:28:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40514",
"html_url": "https://github.com/huggingface/transformers/pull/40514",
"diff_url": "https://github.com/huggingface/transformers/pull/40514.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40514.patch",
"merged_at": "2025-08-28T07:28:12"
} | null | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40514/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40513/comments | https://api.github.com/repos/huggingface/transformers/issues/40513/events | https://github.com/huggingface/transformers/issues/40513 | 3,361,532,078 | I_kwDOCUB6oc7IXOiu | 40,513 | .from_pretrained() auto converts model to fp32, leads to issue with flash attention 2 | {
"login": "KeshavSingh29",
"id": 130352102,
"node_id": "U_kgDOB8UD5g",
"avatar_url": "https://avatars.githubusercontent.com/u/130352102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KeshavSingh29",
"html_url": "https://github.com/KeshavSingh29",
"followers_url": "https://api.github.com/users/KeshavSingh29/followers",
"following_url": "https://api.github.com/users/KeshavSingh29/following{/other_user}",
"gists_url": "https://api.github.com/users/KeshavSingh29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KeshavSingh29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeshavSingh29/subscriptions",
"organizations_url": "https://api.github.com/users/KeshavSingh29/orgs",
"repos_url": "https://api.github.com/users/KeshavSingh29/repos",
"events_url": "https://api.github.com/users/KeshavSingh29/events{/privacy}",
"received_events_url": "https://api.github.com/users/KeshavSingh29/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-28T02:39:18 | 2025-08-28T06:51:25 | 2025-08-28T06:51:25 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Linux-5.15.0-1061-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.16
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.2
- Accelerate version: 1.7.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 16
- machine_rank: 0
- num_machines: 2
- gpu_ids: all
- main_process_ip: 10.3.0.43
- main_process_port: 56789
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: 0.15.3
- PyTorch version (accelerator?): 2.6.0+cu124 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@gante @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Trained a custom model using Accelerate + FSDP with the HF `Trainer`.
Converted the FSDP `distcp` files to safetensors using:
```python
! python -m torch.distributed.checkpoint.format_utils dcp_to_torch "<fsdp_model_loc>" fullweights.pth
import torch
from safetensors.torch import save_file
sd = torch.load("fullweights.pth", map_location="cpu")
obj = sd.get("model", sd)
save_file({k: v.cpu() for k, v in obj.items()}, "model.safetensors")
```
Since the model was huge, converted it to shards using:
```python
# Initialize empty model
model = CustomModel(cfg)
# Load the trained weights
state_dict = load_file("model.safetensors")
model.load_state_dict(state_dict)
# Convert model to bf16
model = model.to(torch.bfloat16)
print(f"Model dtype after conversion: {next(model.parameters()).dtype}")
# Prints torch.bfloat16
model.save_pretrained(
"./model-sharded",
safe_serialization=True,
max_shard_size="10GB", # Change to your desired size
torch_dtype=torch.bfloat16
)
```
Then when loading the model using
```python
model = AutoModelForCausalLM.from_pretrained("./model-sharded", trust_remote_code=True, device_map="auto")
print(f"Model dtype after conversion: {next(model.parameters()).dtype}")
# Prints torch.float32
```
### Expected behavior
The model parameters' dtype should not change.
I understand we can pass `torch_dtype=torch.bfloat16` to fix this issue, but why does the automatic conversion to fp32 happen in the first place?
If that's the case, do I even need the explicit conversion step to bf16 before saving? | {
"login": "KeshavSingh29",
"id": 130352102,
"node_id": "U_kgDOB8UD5g",
"avatar_url": "https://avatars.githubusercontent.com/u/130352102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KeshavSingh29",
"html_url": "https://github.com/KeshavSingh29",
"followers_url": "https://api.github.com/users/KeshavSingh29/followers",
"following_url": "https://api.github.com/users/KeshavSingh29/following{/other_user}",
"gists_url": "https://api.github.com/users/KeshavSingh29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KeshavSingh29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeshavSingh29/subscriptions",
"organizations_url": "https://api.github.com/users/KeshavSingh29/orgs",
"repos_url": "https://api.github.com/users/KeshavSingh29/repos",
"events_url": "https://api.github.com/users/KeshavSingh29/events{/privacy}",
"received_events_url": "https://api.github.com/users/KeshavSingh29/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40513/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40512/comments | https://api.github.com/repos/huggingface/transformers/issues/40512/events | https://github.com/huggingface/transformers/issues/40512 | 3,361,395,460 | I_kwDOCUB6oc7IWtME | 40,512 | Context-Aware Tokenizer Suggestions | {
"login": "PamelaBha",
"id": 219210686,
"node_id": "U_kgDODRDjvg",
"avatar_url": "https://avatars.githubusercontent.com/u/219210686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PamelaBha",
"html_url": "https://github.com/PamelaBha",
"followers_url": "https://api.github.com/users/PamelaBha/followers",
"following_url": "https://api.github.com/users/PamelaBha/following{/other_user}",
"gists_url": "https://api.github.com/users/PamelaBha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PamelaBha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PamelaBha/subscriptions",
"organizations_url": "https://api.github.com/users/PamelaBha/orgs",
"repos_url": "https://api.github.com/users/PamelaBha/repos",
"events_url": "https://api.github.com/users/PamelaBha/events{/privacy}",
"received_events_url": "https://api.github.com/users/PamelaBha/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-08-28T01:20:10 | 2025-08-28T11:59:59 | null | NONE | null | null | null | null | ### Feature request
Add Context-Aware Tokenizer Suggestion Utility Based on Corpus Analysis
This will introduce a utility module that automates tokenizer selection and configuration based on corpus characteristics. The goal is to reduce manual heuristics and improve model performance with minimal user effort.
Benefits:
- Reduces manual trial-and-error in tokenizer selection.
- Improves model performance by aligning tokenizer choice with corpus characteristics.
- Enhances accessibility for non-expert users.
### Motivation
Tokenizer optimization is often manual and heuristic-driven. By analyzing corpus features such as vocabulary size, token frequency, and language morphology, we can recommend the most suitable tokenizer type (e.g., BPE, WordPiece, SentencePiece) and configuration. This feature is especially useful for users training custom models or working with domain-specific corpora.
### Your contribution
- CorpusAnalyzer: Extracts key statistics from a text corpus.
- TokenizerRecommender: Suggests tokenizer type and configuration based on corpus features.
- `suggest_and_train_tokenizer()`: End-to-end utility to analyze corpus, recommend tokenizer, and optionally train it.
- CLI and notebook examples for ease of use.
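As a rough illustration of the kind of corpus statistics such an analyzer could extract and act on (a sketch only; the function names and thresholds are hypothetical, not part of any proposed API):

```python
from collections import Counter

def analyze_corpus(texts):
    """Collect simple statistics a tokenizer recommender could act on."""
    tokens = [tok for text in texts for tok in text.split()]
    chars = Counter(ch for text in texts for ch in text)
    vocab = Counter(tokens)
    return {
        "num_tokens": len(tokens),
        "vocab_size": len(vocab),
        # Type/token ratio as a crude proxy for morphological richness.
        "type_token_ratio": len(vocab) / max(len(tokens), 1),
        "char_diversity": len(chars),
        # Presence of CJK characters suggests whitespace is unreliable.
        "has_cjk": any("\u4e00" <= ch <= "\u9fff" for ch in chars),
    }

def recommend_tokenizer(stats):
    # Toy heuristic: CJK-heavy corpora -> SentencePiece (no whitespace reliance);
    # morphologically rich corpora -> BPE; otherwise WordPiece.
    if stats["has_cjk"]:
        return "sentencepiece"
    if stats["type_token_ratio"] > 0.5:
        return "bpe"
    return "wordpiece"
```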
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40512/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40511/comments | https://api.github.com/repos/huggingface/transformers/issues/40511/events | https://github.com/huggingface/transformers/pull/40511 | 3,361,337,721 | PR_kwDOCUB6oc6ltBl6 | 40,511 | Fix typos | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-28T00:48:38 | 2025-09-01T05:13:45 | 2025-08-29T11:25:34 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40511",
"html_url": "https://github.com/huggingface/transformers/pull/40511",
"diff_url": "https://github.com/huggingface/transformers/pull/40511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40511.patch",
"merged_at": "2025-08-29T11:25:34"
} | # What does this PR do?
Fix more typos in comments and variable names. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40511/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40510/comments | https://api.github.com/repos/huggingface/transformers/issues/40510/events | https://github.com/huggingface/transformers/pull/40510 | 3,361,007,079 | PR_kwDOCUB6oc6lr6Ct | 40,510 | Various AMD expectations | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T22:18:10 | 2025-08-28T08:15:21 | 2025-08-28T08:15:21 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40510",
"html_url": "https://github.com/huggingface/transformers/pull/40510",
"diff_url": "https://github.com/huggingface/transformers/pull/40510.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40510.patch",
"merged_at": "2025-08-28T08:15:21"
} | This PR adds AMD expectations for `qwen2`, `smolvlm` and `TableTransformer`. | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40510/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40509/comments | https://api.github.com/repos/huggingface/transformers/issues/40509/events | https://github.com/huggingface/transformers/pull/40509 | 3,360,941,393 | PR_kwDOCUB6oc6lrsSd | 40,509 | sped up gguf tokenizer for nemotron test | {
"login": "nayana1729",
"id": 109027318,
"node_id": "U_kgDOBn-f9g",
"avatar_url": "https://avatars.githubusercontent.com/u/109027318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nayana1729",
"html_url": "https://github.com/nayana1729",
"followers_url": "https://api.github.com/users/nayana1729/followers",
"following_url": "https://api.github.com/users/nayana1729/following{/other_user}",
"gists_url": "https://api.github.com/users/nayana1729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nayana1729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nayana1729/subscriptions",
"organizations_url": "https://api.github.com/users/nayana1729/orgs",
"repos_url": "https://api.github.com/users/nayana1729/repos",
"events_url": "https://api.github.com/users/nayana1729/events{/privacy}",
"received_events_url": "https://api.github.com/users/nayana1729/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T21:43:52 | 2025-08-28T12:11:31 | 2025-08-28T12:10:49 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40509",
"html_url": "https://github.com/huggingface/transformers/pull/40509",
"diff_url": "https://github.com/huggingface/transformers/pull/40509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40509.patch",
"merged_at": "2025-08-28T12:10:49"
} | # What does this PR do?
Speeds up the GGUF tokenizer used in `tests/quantization/ggml/test_ggml.py` by using the original tokenizer from NVIDIA.
Fixes #40334
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc
@ArthurZucker
@jiqing-feng | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40509/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40508/comments | https://api.github.com/repos/huggingface/transformers/issues/40508/events | https://github.com/huggingface/transformers/pull/40508 | 3,360,909,846 | PR_kwDOCUB6oc6lrmEt | 40,508 | speeding up GGUF tokenizer for Nemotron test | {
"login": "nayana1729",
"id": 109027318,
"node_id": "U_kgDOBn-f9g",
"avatar_url": "https://avatars.githubusercontent.com/u/109027318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nayana1729",
"html_url": "https://github.com/nayana1729",
"followers_url": "https://api.github.com/users/nayana1729/followers",
"following_url": "https://api.github.com/users/nayana1729/following{/other_user}",
"gists_url": "https://api.github.com/users/nayana1729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nayana1729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nayana1729/subscriptions",
"organizations_url": "https://api.github.com/users/nayana1729/orgs",
"repos_url": "https://api.github.com/users/nayana1729/repos",
"events_url": "https://api.github.com/users/nayana1729/events{/privacy}",
"received_events_url": "https://api.github.com/users/nayana1729/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T21:30:56 | 2025-08-27T21:34:32 | 2025-08-27T21:31:53 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40508",
"html_url": "https://github.com/huggingface/transformers/pull/40508",
"diff_url": "https://github.com/huggingface/transformers/pull/40508.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40508.patch",
"merged_at": null
} | # What does this PR do?
Speeds up the GGUF tokenizer used in `tests/quantization/ggml/test_ggml.py` by using the original tokenizer from NVIDIA.
Fixes #40334
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc
@ArthurZucker
@jiqing-feng | {
"login": "nayana1729",
"id": 109027318,
"node_id": "U_kgDOBn-f9g",
"avatar_url": "https://avatars.githubusercontent.com/u/109027318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nayana1729",
"html_url": "https://github.com/nayana1729",
"followers_url": "https://api.github.com/users/nayana1729/followers",
"following_url": "https://api.github.com/users/nayana1729/following{/other_user}",
"gists_url": "https://api.github.com/users/nayana1729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nayana1729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nayana1729/subscriptions",
"organizations_url": "https://api.github.com/users/nayana1729/orgs",
"repos_url": "https://api.github.com/users/nayana1729/repos",
"events_url": "https://api.github.com/users/nayana1729/events{/privacy}",
"received_events_url": "https://api.github.com/users/nayana1729/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40508/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40507/comments | https://api.github.com/repos/huggingface/transformers/issues/40507/events | https://github.com/huggingface/transformers/pull/40507 | 3,360,755,115 | PR_kwDOCUB6oc6lrFiP | 40,507 | [modular] Classes can now be defined and referenced in arbitrary order (without bringing unwanted dependencies) | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T20:24:28 | 2025-08-27T21:06:12 | 2025-08-27T21:06:11 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40507",
"html_url": "https://github.com/huggingface/transformers/pull/40507",
"diff_url": "https://github.com/huggingface/transformers/pull/40507.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40507.patch",
"merged_at": "2025-08-27T21:06:11"
} | # What does this PR do?
As per the title. Until now, if a class was referenced in a modular file BEFORE being redefined later in the same file, it could match a class from another modeling file, thus adding wrong dependencies to the generated files.
This PR fixes it by making sure all classes redefined in the modular file are removed from the dependency graph of any other class in which they appear (they are added back later with their own dependencies).
So classes in modular files can now be defined and referenced in arbitrary order 🎉 (even though, to keep the code clean, we always want classes to be defined before they are used so that the file reads "naturally", this is sometimes not practical).
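The pruning this describes can be sketched roughly as follows (a toy illustration, not the actual modular-converter code; `prune_redefined` and the example graph are hypothetical):

```python
def prune_redefined(dependencies, redefined):
    """Drop dependency edges resolved from other modeling files for classes
    that are redefined in the modular file itself.

    dependencies: {class_name: set of dependency class names}
    redefined:    set of class names redefined in the modular file
    """
    pruned = {}
    for cls, deps in dependencies.items():
        if cls in redefined:
            # Skip entirely: this class is re-added later with the
            # dependencies of its own modular definition.
            continue
        # Other classes must not keep edges to the stale, matched version.
        pruned[cls] = deps - redefined
    return pruned

deps = {"Model": {"Attention", "MLP"}, "Attention": {"RotaryEmbedding"}}
print(prune_redefined(deps, {"Attention"}))  # → {'Model': {'MLP'}}
```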
As an example, consider `modular_modernbert_decoder`, which is modified here: I previously had to check the class name as a string instead of using `isinstance` because the two classes are defined later in the modular file and would thus bring in erroneous dependencies.
A few other models had classes in the wrong location because of this, or even classes that were imported but never used anywhere.
This is now all fixed | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40507/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40506/comments | https://api.github.com/repos/huggingface/transformers/issues/40506/events | https://github.com/huggingface/transformers/pull/40506 | 3,360,264,825 | PR_kwDOCUB6oc6lpcZ4 | 40,506 | Internvl3.5 | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T17:36:15 | 2025-10-21T12:11:02 | 2025-10-21T12:11:02 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40506",
"html_url": "https://github.com/huggingface/transformers/pull/40506",
"diff_url": "https://github.com/huggingface/transformers/pull/40506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40506.patch",
"merged_at": null
} | # What does this PR do?
WIP for the new iteration, InternVL3.5, see https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B/tree/main. The original authors are working on their own conversion to HF format (see https://huggingface.co/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview/discussions/2); pushing this for safekeeping and will likely delete!
Current plan:
- [x] Conversation and generation for small, dense models work.
- [x] Image processor not included; it has dynamic preprocessing and looks similar to phi4 with aspect ratio targeting. Needs to be added.
- [x] Generations for 20B/241B are TODO. Conversion (key mapping) works without errors. | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40506/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40506/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40505/comments | https://api.github.com/repos/huggingface/transformers/issues/40505/events | https://github.com/huggingface/transformers/pull/40505 | 3,360,138,970 | PR_kwDOCUB6oc6lpBIr | 40,505 | Refactor Siglip-like models | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-27T16:59:18 | 2025-08-28T15:06:39 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40505",
"html_url": "https://github.com/huggingface/transformers/pull/40505",
"diff_url": "https://github.com/huggingface/transformers/pull/40505.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40505.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40505/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40504/comments | https://api.github.com/repos/huggingface/transformers/issues/40504/events | https://github.com/huggingface/transformers/issues/40504 | 3,360,122,293 | I_kwDOCUB6oc7IR2W1 | 40,504 | gpt-oss and transformers serve not working | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | open | false | null | [] | null | [] | 2025-08-27T16:53:49 | 2025-10-26T08:02:41 | null | MEMBER | null | null | null | null | ### System Info
- `transformers` version: 4.56.0.dev0
- Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
I'm getting the following error when trying to chat with the model via `transformers serve` + `transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b`
It doesn't work for either the de-quantized model (you can just uninstall `kernels`) or the quantized model.
```
Traceback (most recent call last):
File "/home/marc/anaconda3/envs/sglang/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
self.run()
File "/home/marc/anaconda3/envs/sglang/lib/python3.10/threading.py", line 946, in run
self._target(*self._args, **self._kwargs)
File "/home/marc/transformers/src/transformers/commands/serving.py", line 982, in generate_with_cache
generate_output = model.generate(**kwargs)
File "/home/marc/anaconda3/envs/sglang/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
File "/home/marc/transformers/src/transformers/generation/utils.py", line 2539, in generate
result = self._sample(
File "/home/marc/transformers/src/transformers/generation/utils.py", line 2860, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
File "/home/marc/transformers/src/transformers/generation/utils.py", line 569, in prepare_inputs_for_generation
inputs_embeds, input_ids = self._cache_dependant_input_preparation(
File "/home/marc/transformers/src/transformers/generation/utils.py", line 475, in _cache_dependant_input_preparation
or (cache_position[-1] >= input_ids.shape[1]) # Exception 3
IndexError: index -1 is out of bounds for dimension 0 with size 0
```
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
transformers serve + transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
### Expected behavior
working | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40504/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40504/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40503/comments | https://api.github.com/repos/huggingface/transformers/issues/40503/events | https://github.com/huggingface/transformers/pull/40503 | 3,360,021,406 | PR_kwDOCUB6oc6loonc | 40,503 | Fix the CI workflow of `merge to main` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T16:23:23 | 2025-08-27T16:39:08 | 2025-08-27T16:35:12 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40503",
"html_url": "https://github.com/huggingface/transformers/pull/40503",
"diff_url": "https://github.com/huggingface/transformers/pull/40503.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40503.patch",
"merged_at": "2025-08-27T16:35:12"
} | # What does this PR do?
The workflow lacked a condition, so the whole list of models in our codebase was selected to run even when there was no change in modeling code.
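A hypothetical sketch of the kind of guard such a workflow needs (job and output names here are illustrative, not the actual workflow):

```yaml
jobs:
  collect_models:
    # ... emits the list of models whose modeling code changed ...
    outputs:
      matrix: ${{ steps.diff.outputs.matrix }}
  run_model_tests:
    needs: collect_models
    # Without a condition like this, an empty diff falls through to "run everything".
    if: ${{ needs.collect_models.outputs.matrix != '[]' }}
    # ...
```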
This PR fixes it. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40503/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40502/comments | https://api.github.com/repos/huggingface/transformers/issues/40502/events | https://github.com/huggingface/transformers/pull/40502 | 3,359,951,750 | PR_kwDOCUB6oc6loaUi | 40,502 | Collated reports: no need to upload artifact | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T16:03:07 | 2025-08-27T16:31:56 | 2025-08-27T16:31:55 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40502",
"html_url": "https://github.com/huggingface/transformers/pull/40502",
"diff_url": "https://github.com/huggingface/transformers/pull/40502.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40502.patch",
"merged_at": "2025-08-27T16:31:55"
} | Since we already upload to a specified dataset, there is no need to upload it to the GitHub artifact registry as well. | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40502/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40501/comments | https://api.github.com/repos/huggingface/transformers/issues/40501/events | https://github.com/huggingface/transformers/pull/40501 | 3,359,931,356 | PR_kwDOCUB6oc6loWBi | 40,501 | [serve] fix ` request_id` unexpected | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T15:57:25 | 2025-08-28T12:18:19 | 2025-08-28T12:16:28 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40501",
"html_url": "https://github.com/huggingface/transformers/pull/40501",
"diff_url": "https://github.com/huggingface/transformers/pull/40501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40501.patch",
"merged_at": "2025-08-28T12:16:28"
} | # What does this PR do?
This PR fixes `transformers serve`, which is currently not working due to an unexpected field, `request_id`, being passed.
# reproducer
```shell
transformers serve
transformers chat localhost:8000 --model-name-or-path HuggingFaceTB/SmolLM3-3B
```
the second message will trigger an error. | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40501/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40500/comments | https://api.github.com/repos/huggingface/transformers/issues/40500/events | https://github.com/huggingface/transformers/issues/40500 | 3,359,826,694 | I_kwDOCUB6oc7IQuMG | 40,500 | torch.load fails to load RNG state in PyTorch 2.6+ due to weights_only=True and missing safe globals | {
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rangehow/subscriptions",
"organizations_url": "https://api.github.com/users/rangehow/orgs",
"repos_url": "https://api.github.com/users/rangehow/repos",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"received_events_url": "https://api.github.com/users/rangehow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-27T15:28:20 | 2025-08-27T15:43:38 | 2025-08-27T15:43:38 | CONTRIBUTOR | null | null | null | null |
**Environment:**
- PyTorch version: 2.6
- transformers: 4.55.4
---
**Describe the issue:**
Starting from PyTorch 2.6, the default behavior of `torch.load` has changed: `weights_only=True` is now the default. This introduces stricter security checks during unpickling, requiring explicit allowlisting of certain globals (e.g., `numpy._core.multiarray._reconstruct`) via `torch.serialization.add_safe_globals()` or the `safe_globals` context manager.
We are encountering failures when loading RNG states saved by earlier versions of PyTorch (or even by the same version), with the following error:
```
_pickle.UnpicklingError: Weights only load failed. ...
WeightsUnpickler error: Unsupported global: GLOBAL numpy._core.multiarray._reconstruct was not an allowed global by default. Please use `torch.serialization.add_safe_globals([_reconstruct])` or the `torch.serialization.safe_globals([_reconstruct])` context manager to allowlist this global if you trust this class/function.
```
Although we are already using a `safe_globals()` context manager (see code below), the loading still fails. It appears that the allowlist is not being applied correctly to the `torch.load` call, or the specific global is not being recognized properly under certain numpy versions.
**Code to reproduce:**
```python
import torch
from packaging import version
import contextlib
import numpy as np
def safe_globals():
if version.parse(torch.__version__).release < version.parse("2.6").release:
return contextlib.nullcontext()
np_core = np._core if version.parse(np.__version__) >= version.parse("2.0.0") else np.core
allowlist = [np_core.multiarray._reconstruct, np.ndarray, np.dtype]
allowlist += [type(np.dtype(np.uint32))]
return torch.serialization.safe_globals(allowlist)
with safe_globals():
a = torch.load('/path/to/rng_state_0.pth') # Fails with UnpicklingError
```
**Expected behavior:**
The `safe_globals()` context manager should allow the listed globals (including `numpy._core.multiarray._reconstruct`) to be safely unpickled when using `weights_only=True`, especially when the source is trusted.
**Actual behavior:**
Despite the allowlist, `torch.load` raises a `UnpicklingError`, indicating that `numpy._core.multiarray._reconstruct` is still not recognized as a safe global.
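As background, the allowlist mechanism behind `weights_only=True` is in the spirit of a restricted `pickle.Unpickler` whose `find_class` only resolves explicitly allowlisted globals. A stdlib-only sketch of the idea (not torch's actual implementation):

```python
import collections
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Resolve only explicitly allowlisted globals during unpickling."""

    def __init__(self, file, allowed):
        super().__init__(file)
        # Map "module.qualname" -> object for lookup in find_class.
        self._allowed = {f"{o.__module__}.{o.__qualname__}": o for o in allowed}

    def find_class(self, module, name):
        key = f"{module}.{name}"
        if key in self._allowed:
            return self._allowed[key]
        raise pickle.UnpicklingError(f"Unsupported global: {key} was not allowlisted")

payload = pickle.dumps(collections.OrderedDict(a=1))

# Allowlisted -> loads fine.
ok = AllowlistUnpickler(io.BytesIO(payload), [collections.OrderedDict]).load()
assert ok == collections.OrderedDict(a=1)

# Not allowlisted -> rejected, analogous to the error reported here.
try:
    AllowlistUnpickler(io.BytesIO(payload), []).load()
except pickle.UnpicklingError as e:
    print(e)
```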
| {
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rangehow/subscriptions",
"organizations_url": "https://api.github.com/users/rangehow/orgs",
"repos_url": "https://api.github.com/users/rangehow/repos",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"received_events_url": "https://api.github.com/users/rangehow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40500/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40499/comments | https://api.github.com/repos/huggingface/transformers/issues/40499/events | https://github.com/huggingface/transformers/pull/40499 | 3,359,749,099 | PR_kwDOCUB6oc6lnvT2 | 40,499 | [model] Add VideoLLaMA3 implementation | {
"login": "lkhl",
"id": 78654844,
"node_id": "MDQ6VXNlcjc4NjU0ODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/78654844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkhl",
"html_url": "https://github.com/lkhl",
"followers_url": "https://api.github.com/users/lkhl/followers",
"following_url": "https://api.github.com/users/lkhl/following{/other_user}",
"gists_url": "https://api.github.com/users/lkhl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkhl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkhl/subscriptions",
"organizations_url": "https://api.github.com/users/lkhl/orgs",
"repos_url": "https://api.github.com/users/lkhl/repos",
"events_url": "https://api.github.com/users/lkhl/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkhl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T15:07:36 | 2025-10-13T14:47:39 | 2025-10-13T13:54:34 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40499",
"html_url": "https://github.com/huggingface/transformers/pull/40499",
"diff_url": "https://github.com/huggingface/transformers/pull/40499.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40499.patch",
"merged_at": "2025-10-13T13:54:34"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR supports VideoLLaMA3 model.
Paper: https://arxiv.org/abs/2501.13106
Original repo: https://github.com/DAMO-NLP-SG/VideoLLaMA3
Converted checkpoint: https://huggingface.co/lkhl/VideoLLaMA3-2B-Image-HF (temporary)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40499/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/40499/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40498/comments | https://api.github.com/repos/huggingface/transformers/issues/40498/events | https://github.com/huggingface/transformers/pull/40498 | 3,359,738,706 | PR_kwDOCUB6oc6lntEB | 40,498 | Multiple fixes to FA tests in AMD | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T15:04:58 | 2025-09-01T18:49:50 | 2025-09-01T18:49:50 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40498",
"html_url": "https://github.com/huggingface/transformers/pull/40498",
"diff_url": "https://github.com/huggingface/transformers/pull/40498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40498.patch",
"merged_at": "2025-09-01T18:49:50"
This PR fixes tests across a few models in the following ways:
- solves a multi-device issue in `qwen2_5_omni`
- adds AMD expectations to `gemma3`, `qwen2_5_omni` and `qwen2_5_vl`
- fixes an issue in the `qwen2_5_omni` and `qwen2_5_vl` FA tests: the test changes the `hidden_size` so it breaks compatibility with `mrope`. For this issue, I added a fix using `try` and `except` to ensure the change to `mrope_section` does not propagate to the rest of the tests. I am not sure this is the best way, so tagging @ydshieh
- removed some mutables that were used as default arguments
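On that last bullet: a mutable default argument in Python is evaluated once at function definition and shared across calls, so state can leak between tests. A minimal illustration (function names are illustrative):

```python
def collect_bad(item, bucket=[]):  # the default list is created once and shared
    bucket.append(item)
    return bucket

def collect_good(item, bucket=None):  # fresh list on every call
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

assert collect_bad(1) == [1]
assert collect_bad(2) == [1, 2]  # state leaked from the previous call
assert collect_good(1) == [1]
assert collect_good(2) == [2]
```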
This PR mostly touches test-related files, so I think it's ok to bundle everything, but please let me know if not. As the non-test-related changes are mostly multimodal, cc @zucchini-nlp | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40498/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40497/comments | https://api.github.com/repos/huggingface/transformers/issues/40497/events | https://github.com/huggingface/transformers/pull/40497 | 3,359,581,815 | PR_kwDOCUB6oc6lnLJB | 40,497 | Improve keypoint-matching models docs | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T14:26:37 | 2025-08-28T11:31:21 | 2025-08-28T11:31:21 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40497",
"html_url": "https://github.com/huggingface/transformers/pull/40497",
"diff_url": "https://github.com/huggingface/transformers/pull/40497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40497.patch",
"merged_at": "2025-08-28T11:31:21"
} | # What does this PR do?
As per the title, fix `options` tags + replace `no_grad` with `inference_mode`
cc @sbucaille
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40497/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40497/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40496/comments | https://api.github.com/repos/huggingface/transformers/issues/40496/events | https://github.com/huggingface/transformers/issues/40496 | 3,359,546,981 | I_kwDOCUB6oc7IPp5l | 40,496 | AutoModel.from_pretrained() doesn't work for models with '.' in their name when there's a relative import | {
"login": "zachmoshe",
"id": 4789087,
"node_id": "MDQ6VXNlcjQ3ODkwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zachmoshe",
"html_url": "https://github.com/zachmoshe",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions",
"organizations_url": "https://api.github.com/users/zachmoshe/orgs",
"repos_url": "https://api.github.com/users/zachmoshe/repos",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"received_events_url": "https://api.github.com/users/zachmoshe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-27T14:16:05 | 2025-09-10T14:34:57 | 2025-09-10T14:34:57 | NONE | null | null | null | null | ### System Info
For models with:
- A dot in their name (e.g. "saved_model_v1.0")
- AND custom code that includes a module other than `config.py` and `model.py`
`AutoModel.from_pretrained(..)` fails with: `ModuleNotFoundError: No module named 'transformers_modules.saved_model_v1'`
This happens when one of the modules uses a relative import of `another_module` (i.e. `from .another_module import ...`).
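The failure mode can be reproduced without transformers at all, because it comes from Python's import machinery: every dot in a module name is treated as a package separator, so a folder named `saved_model_v1.0` is looked up as submodule `0` of a nonexistent package `saved_model_v1`. A minimal sketch (the `transformers_modules` registration here is a simplified stand-in for what the library does with downloaded custom code):

```python
import importlib
import sys
import types

# Register only the top-level dynamic-module package, mimicking (in a very
# simplified way) how downloaded custom code is registered.
sys.modules["transformers_modules"] = types.ModuleType("transformers_modules")

try:
    # The import machinery splits on every dot, so the "saved_model_v1.0"
    # folder is resolved as package "saved_model_v1" with submodule "0".
    importlib.import_module("transformers_modules.saved_model_v1.0")
    message = "import unexpectedly succeeded"
except ModuleNotFoundError as exc:
    message = str(exc)
finally:
    del sys.modules["transformers_modules"]

print(message)
</antml_fence>

The printed message names the truncated package (`transformers_modules.saved_model_v1`), matching the error in this report.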
### Who can help?
@sgugger @XuehaiPan
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See this example: https://github.com/zachmoshe/transformers-dotted-models-example.git
`uv run save.py` saves a model with custom code (including an additional module) to a folder whose name contains a ".".
`uv run load.py` then fails to load the saved model.
If we remove the relative import from `model.py`, it works. If we remove the dot from the model name, it also works.
### Expected behavior
`AutoModel.from_pretrained("MyModel-v1.0")` should work. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40496/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/40496/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40495/comments | https://api.github.com/repos/huggingface/transformers/issues/40495/events | https://github.com/huggingface/transformers/pull/40495 | 3,359,498,936 | PR_kwDOCUB6oc6lm5id | 40,495 | 🚨 Remove Group Beam Search decoding strategy | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T14:02:15 | 2025-09-01T11:42:48 | 2025-09-01T11:42:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40495",
"html_url": "https://github.com/huggingface/transformers/pull/40495",
"diff_url": "https://github.com/huggingface/transformers/pull/40495.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40495.patch",
"merged_at": "2025-09-01T11:42:48"
} | Removes the Group Beam Search generation strategy from the codebase and directs users to the `transformers-community/group-beam-search` repository.
It has emitted a deprecation warning for a few releases, but now `trust_remote_code=True` is required to run group beam search.
Depends on #40480 | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40495/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40494/comments | https://api.github.com/repos/huggingface/transformers/issues/40494/events | https://github.com/huggingface/transformers/pull/40494 | 3,359,433,679 | PR_kwDOCUB6oc6lmrzC | 40,494 | Fix `qwen2_moe` tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T13:44:53 | 2025-08-27T14:22:07 | 2025-08-27T14:22:05 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40494",
"html_url": "https://github.com/huggingface/transformers/pull/40494",
"diff_url": "https://github.com/huggingface/transformers/pull/40494.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40494.patch",
"merged_at": "2025-08-27T14:22:05"
} | # What does this PR do?
The CI job running the tests in `Qwen2MoeIntegrationTest` is killed from time to time due to the CPU memory limit (60 GB); it is unclear to me why memory usage differs between runs.
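One way to cap the peak memory of such a suite is the load-once pattern: build the heavy model a single time per test class instead of once per test method. A framework-free sketch of that pattern (class names hypothetical; in the real suite the fixture would be a `from_pretrained` call):

```python
import unittest

class HeavyModel:
    """Stand-in for an expensive-to-load checkpoint (hypothetical)."""
    load_count = 0

    def __init__(self):
        HeavyModel.load_count += 1

class CachedModelTest(unittest.TestCase):
    model = None

    @classmethod
    def setUpClass(cls):
        # Load once for the whole class rather than once per test method,
        # capping peak memory and avoiding repeated expensive loads.
        cls.model = HeavyModel()

    def test_first(self):
        self.assertIsNotNone(self.model)

    def test_second(self):
        self.assertIsNotNone(self.model)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CachedModelTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
</antml_fence>

Both test methods share one `HeavyModel` instance, so the model is constructed exactly once.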
This PR simply reuses the same model, set up once in the first test, for all tests except `test_model_a2_7b_long_prompt_flash_attn`, because that test loads the model using `flash_attn`. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40494/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40493/comments | https://api.github.com/repos/huggingface/transformers/issues/40493/events | https://github.com/huggingface/transformers/pull/40493 | 3,359,256,101 | PR_kwDOCUB6oc6lmGqJ | 40,493 | Update dtypes to suit colab bf16 -> fp16 -> fp32. | {
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-27T12:52:54 | 2025-08-27T15:06:52 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40493",
"html_url": "https://github.com/huggingface/transformers/pull/40493",
"diff_url": "https://github.com/huggingface/transformers/pull/40493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40493.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40493/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40492/comments | https://api.github.com/repos/huggingface/transformers/issues/40492/events | https://github.com/huggingface/transformers/pull/40492 | 3,359,203,660 | PR_kwDOCUB6oc6ll7qn | 40,492 | avoid divide-by-zero errors. | {
"login": "zhanluxianshen",
"id": 161462588,
"node_id": "U_kgDOCZ-5PA",
"avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhanluxianshen",
"html_url": "https://github.com/zhanluxianshen",
"followers_url": "https://api.github.com/users/zhanluxianshen/followers",
"following_url": "https://api.github.com/users/zhanluxianshen/following{/other_user}",
"gists_url": "https://api.github.com/users/zhanluxianshen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhanluxianshen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhanluxianshen/subscriptions",
"organizations_url": "https://api.github.com/users/zhanluxianshen/orgs",
"repos_url": "https://api.github.com/users/zhanluxianshen/repos",
"events_url": "https://api.github.com/users/zhanluxianshen/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhanluxianshen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-27T12:36:08 | 2025-08-28T00:56:17 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40492",
"html_url": "https://github.com/huggingface/transformers/pull/40492",
"diff_url": "https://github.com/huggingface/transformers/pull/40492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40492.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40492/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40491/comments | https://api.github.com/repos/huggingface/transformers/issues/40491/events | https://github.com/huggingface/transformers/pull/40491 | 3,359,180,832 | PR_kwDOCUB6oc6ll2zZ | 40,491 | [auto-model] propagate kwargs | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T12:28:28 | 2025-09-03T09:59:20 | 2025-09-03T09:59:20 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40491",
"html_url": "https://github.com/huggingface/transformers/pull/40491",
"diff_url": "https://github.com/huggingface/transformers/pull/40491.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40491.patch",
"merged_at": "2025-09-03T09:59:20"
} | # What does this PR do?
Fixes [40477](https://github.com/huggingface/transformers/issues/40477) and propagates kwargs in deprecated auto models
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40491/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40490/comments | https://api.github.com/repos/huggingface/transformers/issues/40490/events | https://github.com/huggingface/transformers/pull/40490 | 3,359,125,888 | PR_kwDOCUB6oc6llrNm | 40,490 | [qwen-vl] fix position ids | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T12:11:35 | 2025-09-11T03:48:51 | 2025-09-01T09:10:41 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40490",
"html_url": "https://github.com/huggingface/transformers/pull/40490",
"diff_url": "https://github.com/huggingface/transformers/pull/40490.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40490.patch",
"merged_at": "2025-09-01T09:10:41"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/40136 and fixes https://github.com/huggingface/transformers/issues/40154
The accuracy on lm eval is restored. The issue was in position ids which weren't prepared correctly when generating. Using packed or unpacked positions doesn't affect anything, as I thought at first
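For context, the contract such a fix restores is that position ids keep advancing past the cached prefix during generation rather than restarting at zero. A framework-free sketch (helper name hypothetical; the actual Qwen-VL logic additionally handles multimodal rope dimensions):

```python
def position_ids_for_step(past_length: int, num_new_tokens: int) -> list[int]:
    # During cached decoding, each step's position ids must continue from
    # the number of tokens already in the KV cache, not restart at zero.
    return list(range(past_length, past_length + num_new_tokens))

# Prefill a 5-token prompt, then run two single-token decode steps.
prefill = position_ids_for_step(0, 5)
step1 = position_ids_for_step(5, 1)
step2 = position_ids_for_step(6, 1)
</antml_fence>

Reusing prompt-relative positions at decode time is exactly the kind of bug that barely changes greedy outputs on short prompts but degrades benchmark accuracy.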
I guess the values in the position ids do not affect generation results much, given that the slow tests weren't failing. We could add a test with a specific image/prompt where the outputs are significantly different, but I don't think it is necessary | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40490/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40489/comments | https://api.github.com/repos/huggingface/transformers/issues/40489/events | https://github.com/huggingface/transformers/pull/40489 | 3,359,068,409 | PR_kwDOCUB6oc6llfST | 40,489 | correct kes to keys. | {
"login": "zhanluxianshen",
"id": 161462588,
"node_id": "U_kgDOCZ-5PA",
"avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhanluxianshen",
"html_url": "https://github.com/zhanluxianshen",
"followers_url": "https://api.github.com/users/zhanluxianshen/followers",
"following_url": "https://api.github.com/users/zhanluxianshen/following{/other_user}",
"gists_url": "https://api.github.com/users/zhanluxianshen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhanluxianshen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhanluxianshen/subscriptions",
"organizations_url": "https://api.github.com/users/zhanluxianshen/orgs",
"repos_url": "https://api.github.com/users/zhanluxianshen/repos",
"events_url": "https://api.github.com/users/zhanluxianshen/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhanluxianshen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T11:51:52 | 2025-08-28T22:25:49 | 2025-08-28T12:00:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40489",
"html_url": "https://github.com/huggingface/transformers/pull/40489",
"diff_url": "https://github.com/huggingface/transformers/pull/40489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40489.patch",
"merged_at": "2025-08-28T12:00:22"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40489/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40488/comments | https://api.github.com/repos/huggingface/transformers/issues/40488/events | https://github.com/huggingface/transformers/issues/40488 | 3,359,048,556 | I_kwDOCUB6oc7INwNs | 40,488 | Voxtral model fails with LoRA due to in-place operation error | {
"login": "julzzheng",
"id": 22426359,
"node_id": "MDQ6VXNlcjIyNDI2MzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/22426359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julzzheng",
"html_url": "https://github.com/julzzheng",
"followers_url": "https://api.github.com/users/julzzheng/followers",
"following_url": "https://api.github.com/users/julzzheng/following{/other_user}",
"gists_url": "https://api.github.com/users/julzzheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julzzheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julzzheng/subscriptions",
"organizations_url": "https://api.github.com/users/julzzheng/orgs",
"repos_url": "https://api.github.com/users/julzzheng/repos",
"events_url": "https://api.github.com/users/julzzheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/julzzheng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-08-27T11:44:03 | 2025-09-04T15:14:03 | 2025-09-04T15:11:24 | NONE | null | null | null | null | ### System Info
## Description
The Voxtral model crashes during LoRA training with:
**RuntimeError**: a leaf Variable that requires grad is being used in an in-place operation.
## Location of Issue
The error occurs in `modeling_voxtral.py` at line 512 in the `forward` method:
```python
if input_features is not None:
audio_embeds = self.get_audio_embeds(input_features)
# replace text-audio token placeholders with audio embeddings
audio_token_mask = input_ids == self.config.audio_token_id
inputs_embeds[audio_token_mask] = audio_embeds # <-- This line causes the error
```
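A common autograd-safe pattern (shown here as a generic sketch with stand-in shapes, not the library's actual fix) is to avoid the in-place write into a tensor that may be a leaf requiring grad, e.g. by using the out-of-place `masked_scatter`:

```python
import torch

# Stand-in shapes: batch=1, seq_len=5, hidden=4; two audio placeholder tokens.
inputs_embeds = torch.randn(1, 5, 4, requires_grad=True)   # a leaf tensor
audio_embeds = torch.randn(2, 4)
audio_token_mask = torch.tensor([[False, True, True, False, False]])

# Out-of-place replacement: no in-place write on the leaf tensor.
merged = inputs_embeds.masked_scatter(audio_token_mask.unsqueeze(-1), audio_embeds)
merged.sum().backward()  # gradients flow back to inputs_embeds without error
```

`masked_scatter` fills the masked positions from `audio_embeds` in row-major order, which matches the semantics of the failing indexed assignment.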
## Environment
- transformers: 4.55.4
- torch: 2.7.1
- mistral_common: 1.8.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
# Configuration
from transformers import VoxtralForConditionalGeneration, VoxtralProcessor
from peft import LoraConfig, get_peft_model
import torch
import numpy as np
# Load model and processor
model_name = "mistralai/Voxtral-Mini-3B-2507"
processor = VoxtralProcessor.from_pretrained(model_name)
model = VoxtralForConditionalGeneration.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})
peft_config = LoraConfig(
r=32,
lora_alpha=64,
lora_dropout=0.01,
target_modules=["k_proj", "v_proj", "q_proj", "out_proj"],
bias="none",
)
model = get_peft_model(model, peft_config)
# Create proper audio input using the processor
dummy_audio = np.random.randn(48000) # 3 seconds of audio at 16kHz (1D array)
audio_inputs = processor.feature_extractor(
raw_speech=dummy_audio,
sampling_rate=16000,
return_tensors="pt"
)
# Prepare sample inputs (both audio features and text tokens required for transcription)
# This would typically come from your data collator during training
batch = {
"input_ids": torch.randint(0, 1000, (1, 50)).to(model.device),
"input_features": audio_inputs.input_features.to(model.device), # Use properly generated features
"attention_mask": torch.ones(1, 50).to(model.device),
"labels": torch.randint(0, 1000, (1, 50)).to(model.device)
}
print(f"input_features shape: {batch['input_features'].shape}")
print(f"input_ids shape: {batch['input_ids'].shape}")
# This triggers the error when both input_ids and input_features are provided
outputs = model(**batch)
```
```
510 # replace text-audio token placeholders with audio embeddings
511 audio_token_mask = input_ids == self.config.audio_token_id
--> 512 inputs_embeds[audio_token_mask] = audio_embeds
514 outputs: BaseModelOutputWithPast = self.language_model(
515 attention_mask=attention_mask,
516 position_ids=position_ids,
(...) 523 **kwargs,
524 )
525 return outputs
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
```
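The error can be reproduced in isolation with plain PyTorch (independent of Voxtral or LoRA), which suggests the issue is the in-place write itself rather than anything model-specific:

```python
import torch

emb = torch.randn(2, 4, requires_grad=True)       # a leaf Variable
mask = torch.tensor([True, False])
try:
    emb[mask] = torch.zeros(1, 4)                 # in-place write on the leaf
except RuntimeError as e:
    print(e)  # "a leaf Variable that requires grad is being used in an in-place operation."

# Cloning first makes the write autograd-safe, since the clone is not a leaf:
safe = emb.clone()
safe[mask] = torch.zeros(1, 4)
safe.sum().backward()                             # gradients still reach `emb`
```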
### Expected behavior
**Expected**: Model works with LoRA training
**Actual**: Crashes with RuntimeError | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40488/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40487/comments | https://api.github.com/repos/huggingface/transformers/issues/40487/events | https://github.com/huggingface/transformers/issues/40487 | 3,359,041,528 | I_kwDOCUB6oc7INuf4 | 40,487 | Mutex while building docs | {
"login": "Uvi-12",
"id": 190028082,
"node_id": "U_kgDOC1OZMg",
"avatar_url": "https://avatars.githubusercontent.com/u/190028082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Uvi-12",
"html_url": "https://github.com/Uvi-12",
"followers_url": "https://api.github.com/users/Uvi-12/followers",
"following_url": "https://api.github.com/users/Uvi-12/following{/other_user}",
"gists_url": "https://api.github.com/users/Uvi-12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Uvi-12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Uvi-12/subscriptions",
"organizations_url": "https://api.github.com/users/Uvi-12/orgs",
"repos_url": "https://api.github.com/users/Uvi-12/repos",
"events_url": "https://api.github.com/users/Uvi-12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Uvi-12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T11:41:12 | 2025-08-28T13:03:58 | 2025-08-28T13:03:58 | CONTRIBUTOR | null | null | null | null | @stevhliu I am trying to build docs for the model cards, but while building, I hit a mutex error every time. Please help me resolve it.
System:
- OS: macOS 14.5 (Build 23F79)
- CPU: Apple M1
- RAM: 16 GB
Python:
- Version: 3.13.2
- pip: 23.3.1 (from /opt/miniconda3/lib/python3.12/site-packages)
Node / npm:
- Node: v23.3.0
- npm: 10.9.0
Doc Builder:
- doc-builder: installed
Transformers:
- Version: 4.56.0.dev0
- Editable location: /Users/yuvrajpradhan/Documents/GitHub/transformers
- Dependencies: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Other ML frameworks:
- TensorFlow: 2.20.0
- PyTorch: 2.7.1
- TorchVision: 0.22.1
- Torchaudio: 2.7.1
<img width="1917" height="157" alt="Image" src="https://github.com/user-attachments/assets/958e4163-46dd-4a12-8e4f-d35aa89f1baa" /> | {
"login": "Uvi-12",
"id": 190028082,
"node_id": "U_kgDOC1OZMg",
"avatar_url": "https://avatars.githubusercontent.com/u/190028082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Uvi-12",
"html_url": "https://github.com/Uvi-12",
"followers_url": "https://api.github.com/users/Uvi-12/followers",
"following_url": "https://api.github.com/users/Uvi-12/following{/other_user}",
"gists_url": "https://api.github.com/users/Uvi-12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Uvi-12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Uvi-12/subscriptions",
"organizations_url": "https://api.github.com/users/Uvi-12/orgs",
"repos_url": "https://api.github.com/users/Uvi-12/repos",
"events_url": "https://api.github.com/users/Uvi-12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Uvi-12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40487/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40486/comments | https://api.github.com/repos/huggingface/transformers/issues/40486/events | https://github.com/huggingface/transformers/pull/40486 | 3,359,019,118 | PR_kwDOCUB6oc6llU-m | 40,486 | Benchmarking V2: framework impl | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T11:32:37 | 2025-09-03T20:26:33 | 2025-09-03T20:26:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40486",
"html_url": "https://github.com/huggingface/transformers/pull/40486",
"diff_url": "https://github.com/huggingface/transformers/pull/40486.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40486.patch",
"merged_at": "2025-09-03T20:26:32"
} | # What does this PR do?
This PR is another iteration of reworking the benchmarking flow in Transformers. The goal is to have a flow similar to Diffusers': daily reports to HF Datasets, and visualization in Gradio Spaces.
This PR focuses on the framework fundamentals; export to Datasets, GH actions, and more model support will come in follow-ups.
From the wishlist, the first iteration includes the following:
- JSON output with all the scenarios as an array
- support for different attention packages and SDPA backends
- compiled, kernelized scenarios
- HW info and utilization collection
- abstractions for making the `ModelBenchmark` code leaner and more standardized.
I put everything into a `_v2` folder so we can keep the existing framework intact until this stabilizes a bit.
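To illustrate the intended output shape (a made-up example for discussion, not the actual schema added in this PR), the per-run JSON could group all scenarios into a single array so downstream tooling can diff runs easily:

```python
import json

# Hypothetical report structure: one array entry per benchmark scenario.
report = {
    "model": "meta-llama/Llama-3.2-1B",
    "hardware": {"gpu": "A100", "vram_gb": 80},
    "scenarios": [
        {"attn_impl": "sdpa", "compiled": False, "tokens_per_s": 41.2},
        {"attn_impl": "flash_attention_2", "compiled": True, "tokens_per_s": 57.9},
    ],
}
print(json.dumps(report, indent=2))
```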
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40486/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40485/comments | https://api.github.com/repos/huggingface/transformers/issues/40485/events | https://github.com/huggingface/transformers/issues/40485 | 3,358,940,160 | I_kwDOCUB6oc7INVwA | 40,485 | KeyError: 'architectures' when `trust_remote_code=True` | {
"login": "Twinkle-ce",
"id": 116020945,
"node_id": "U_kgDOBupW0Q",
"avatar_url": "https://avatars.githubusercontent.com/u/116020945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Twinkle-ce",
"html_url": "https://github.com/Twinkle-ce",
"followers_url": "https://api.github.com/users/Twinkle-ce/followers",
"following_url": "https://api.github.com/users/Twinkle-ce/following{/other_user}",
"gists_url": "https://api.github.com/users/Twinkle-ce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Twinkle-ce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Twinkle-ce/subscriptions",
"organizations_url": "https://api.github.com/users/Twinkle-ce/orgs",
"repos_url": "https://api.github.com/users/Twinkle-ce/repos",
"events_url": "https://api.github.com/users/Twinkle-ce/events{/privacy}",
"received_events_url": "https://api.github.com/users/Twinkle-ce/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-27T11:01:15 | 2025-08-27T11:29:26 | 2025-08-27T11:29:26 | NONE | null | null | null | null | ### System Info
Name: transformers
Version: 4.53.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ByteDance/Sa2VA-8B"
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto", trust_remote_code=True
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```
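As a quick diagnostic (a hypothetical snippet, not part of the original report), one can check whether a checkpoint's `config.json` actually declares the `architectures` key that the Auto classes need to resolve the remote-code model class:

```python
import json

# Example config text standing in for a downloaded config.json that is
# missing the "architectures" key, as the KeyError suggests.
config_text = '{"model_type": "sa2va", "hidden_size": 4096}'
config = json.loads(config_text)

architectures = config.get("architectures")
if architectures is None:
    print("config.json has no 'architectures' entry; Auto class resolution will fail")
else:
    print(f"architectures: {architectures}")
```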
### Expected behavior
The model should load successfully. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40485/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40484/comments | https://api.github.com/repos/huggingface/transformers/issues/40484/events | https://github.com/huggingface/transformers/pull/40484 | 3,358,939,849 | PR_kwDOCUB6oc6llEc2 | 40,484 | fix typo | {
"login": "Guo-Chenxu",
"id": 90923304,
"node_id": "MDQ6VXNlcjkwOTIzMzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/90923304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guo-Chenxu",
"html_url": "https://github.com/Guo-Chenxu",
"followers_url": "https://api.github.com/users/Guo-Chenxu/followers",
"following_url": "https://api.github.com/users/Guo-Chenxu/following{/other_user}",
"gists_url": "https://api.github.com/users/Guo-Chenxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guo-Chenxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guo-Chenxu/subscriptions",
"organizations_url": "https://api.github.com/users/Guo-Chenxu/orgs",
"repos_url": "https://api.github.com/users/Guo-Chenxu/repos",
"events_url": "https://api.github.com/users/Guo-Chenxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guo-Chenxu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T11:01:07 | 2025-08-28T08:31:25 | 2025-08-28T08:31:25 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40484",
"html_url": "https://github.com/huggingface/transformers/pull/40484",
"diff_url": "https://github.com/huggingface/transformers/pull/40484.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40484.patch",
"merged_at": "2025-08-28T08:31:25"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #40478
fix a typo in code
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40484/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40483/comments | https://api.github.com/repos/huggingface/transformers/issues/40483/events | https://github.com/huggingface/transformers/pull/40483 | 3,358,906,997 | PR_kwDOCUB6oc6lk9c1 | 40,483 | fix_image_processing_fast_for_glm4v | {
"login": "lambertwjh",
"id": 148857096,
"node_id": "U_kgDOCN9hCA",
"avatar_url": "https://avatars.githubusercontent.com/u/148857096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lambertwjh",
"html_url": "https://github.com/lambertwjh",
"followers_url": "https://api.github.com/users/lambertwjh/followers",
"following_url": "https://api.github.com/users/lambertwjh/following{/other_user}",
"gists_url": "https://api.github.com/users/lambertwjh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lambertwjh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambertwjh/subscriptions",
"organizations_url": "https://api.github.com/users/lambertwjh/orgs",
"repos_url": "https://api.github.com/users/lambertwjh/repos",
"events_url": "https://api.github.com/users/lambertwjh/events{/privacy}",
"received_events_url": "https://api.github.com/users/lambertwjh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T10:49:56 | 2025-09-10T21:06:10 | 2025-09-10T21:05:27 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40483",
"html_url": "https://github.com/huggingface/transformers/pull/40483",
"diff_url": "https://github.com/huggingface/transformers/pull/40483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40483.patch",
"merged_at": "2025-09-10T21:05:27"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40483/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40482/comments | https://api.github.com/repos/huggingface/transformers/issues/40482/events | https://github.com/huggingface/transformers/pull/40482 | 3,358,872,523 | PR_kwDOCUB6oc6lk2Vv | 40,482 | [Whisper] Add rocm expected results to certain tests | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T10:35:36 | 2025-08-27T16:19:12 | 2025-08-27T16:19:11 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40482",
"html_url": "https://github.com/huggingface/transformers/pull/40482",
"diff_url": "https://github.com/huggingface/transformers/pull/40482.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40482.patch",
"merged_at": "2025-08-27T16:19:11"
} | # What does this PR do?
Adds expected results to certain tests that had slightly different results on AMD devices. | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40482/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40481/comments | https://api.github.com/repos/huggingface/transformers/issues/40481/events | https://github.com/huggingface/transformers/pull/40481 | 3,358,810,287 | PR_kwDOCUB6oc6lkpSM | 40,481 | [modular] Use multi-processing + fix model import issue | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T10:10:53 | 2025-08-27T12:51:14 | 2025-08-27T12:51:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40481",
"html_url": "https://github.com/huggingface/transformers/pull/40481",
"diff_url": "https://github.com/huggingface/transformers/pull/40481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40481.patch",
"merged_at": "2025-08-27T12:51:12"
} | # What does this PR do?
As per the title. Let's use multiprocessing by default in the converter. It also fixes relative imports of the form `from ...models.name.modeling_ import ...`, which caused the non-conversion issue in https://github.com/huggingface/transformers/pull/40431
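For illustration, a rough sketch of the idea (hypothetical names, not the real converter API): run one conversion per worker process instead of converting every modular file sequentially.

```python
from multiprocessing import Pool

def convert(modular_file: str) -> str:
    # stand-in for the expensive modular -> modeling conversion
    return modular_file.replace("modular_", "modeling_")

if __name__ == "__main__":
    files = ["modular_llama.py", "modular_gemma.py", "modular_qwen2.py"]
    with Pool(processes=2) as pool:  # one conversion per worker process
        results = pool.map(convert, files)
    print(results)  # ['modeling_llama.py', 'modeling_gemma.py', 'modeling_qwen2.py']
```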
cc @qubvel as well, only the checker was using mp until now -> high time for the converter to do it as well, as running all conversions locally is starting to take quite some time | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40481/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40480/comments | https://api.github.com/repos/huggingface/transformers/issues/40480/events | https://github.com/huggingface/transformers/pull/40480 | 3,358,753,574 | PR_kwDOCUB6oc6lkdq- | 40,480 | Fix custom generate relative imports | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T09:50:47 | 2025-09-01T11:38:56 | 2025-09-01T11:38:56 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40480",
"html_url": "https://github.com/huggingface/transformers/pull/40480",
"diff_url": "https://github.com/huggingface/transformers/pull/40480.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40480.patch",
"merged_at": "2025-09-01T11:38:56"
} | Same as #38916, but for repos on the Hub. This time we don't need to copy anything, since the whole repo is cloned; we just need to handle the paths properly.
This gets tested indirectly by the custom generate tests that will come with group beam search, etc.
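A minimal sketch of the underlying mechanics (hypothetical file names, not the actual loading code): a module using a relative import only resolves when imported as part of its package, so the cloned repo root has to be registered on `sys.path` and the module imported via its package path.

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build a tiny fake "cloned repo" containing a package with a relative import.
repo = Path(tempfile.mkdtemp())
pkg = repo / "custom_generate"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "utils.py").write_text("ANSWER = 42\n")
(pkg / "generate.py").write_text("from .utils import ANSWER\n")

sys.path.insert(0, str(repo))  # register the repo root...
mod = importlib.import_module("custom_generate.generate")  # ...then import via the package
print(mod.ANSWER)  # 42
```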
| {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40480/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40479/comments | https://api.github.com/repos/huggingface/transformers/issues/40479/events | https://github.com/huggingface/transformers/pull/40479 | 3,358,748,697 | PR_kwDOCUB6oc6lkcps | 40,479 | fix: continuous batching in `transformers serve` | {
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-27T09:49:03 | 2025-09-03T05:47:28 | 2025-09-02T08:45:05 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40479",
"html_url": "https://github.com/huggingface/transformers/pull/40479",
"diff_url": "https://github.com/huggingface/transformers/pull/40479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40479.patch",
"merged_at": "2025-09-02T08:45:05"
} | Fixing continuous batching in `transformers serve`.
- added a `--continuous_batching` command-line flag to enable it; open to changing this!
- ~`max_new_tokens` can sometimes be `None`; set a default so it doesn't break CB, which expects it to be set~ couldn't reproduce my previous error; removed the added code and added a test to check that defaults are set correctly
- fix server hang on shutdown, added a `lifespan` to the `FastAPI` instance
- calling continuous batching manager `stop`
- refactored `TimedModel` to make `delete_model` "public" so we cancel the `threading.Timer` that was causing the server to hang on SIGINT
- added `request_id_iter` to iterate only on tokens linked to a given request_id
- refactored the `get_result` to requeue tokens if `request_id is not None && req.request_id != request_id` (before we were losing tokens while iterating directly on all output_queue tokens)
- fix iterator to continue looping even if output_queue is empty
- ~moved the `DecodeStream` object to live in the `RequestState` rather than being a single instance linked to the manager~ removed any trace of the tokenizer within the CB implementation; it didn't make sense to have it here since we already expect encoded tokens. Decoding is left up to the caller (updated the serving code accordingly)
- removed `next_token` from `RequestState` as it wasn't used, in streaming I've used `generated_tokens[-1]` to get latest token
- changed `prepare_next_batch` signature, now returns a bool to short circuit inner generation loop when it didn't prepare anything | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40479/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40478/comments | https://api.github.com/repos/huggingface/transformers/issues/40478/events | https://github.com/huggingface/transformers/issues/40478 | 3,358,712,716 | I_kwDOCUB6oc7IMeOM | 40,478 | Typo: should be `feature_extractor`, not `feature_extracttor` | {
"login": "Guo-Chenxu",
"id": 90923304,
"node_id": "MDQ6VXNlcjkwOTIzMzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/90923304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guo-Chenxu",
"html_url": "https://github.com/Guo-Chenxu",
"followers_url": "https://api.github.com/users/Guo-Chenxu/followers",
"following_url": "https://api.github.com/users/Guo-Chenxu/following{/other_user}",
"gists_url": "https://api.github.com/users/Guo-Chenxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guo-Chenxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guo-Chenxu/subscriptions",
"organizations_url": "https://api.github.com/users/Guo-Chenxu/orgs",
"repos_url": "https://api.github.com/users/Guo-Chenxu/repos",
"events_url": "https://api.github.com/users/Guo-Chenxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guo-Chenxu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-27T09:36:48 | 2025-08-28T08:31:26 | 2025-08-28T08:31:26 | CONTRIBUTOR | null | null | null | null | ### System Info
the latest [main](https://github.com/huggingface/transformers/tree/75d6f17de68f372284ecb5b40db6f83007ffa394) branch at the time of writing
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Well, I'm not sure whether this is necessary. While testing, I came across a typo: I believe it should be `feature_extractor` (not `feature_extracttor`); otherwise some tests are skipped. I'm raising this issue; if needed, I can submit a PR to fix it.
The related code is here:
https://github.com/huggingface/transformers/blob/75d6f17de68f372284ecb5b40db6f83007ffa394/tests/test_processing_common.py#L976
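For context, a minimal illustration (not the actual test code) of why the typo silently skips tests: `getattr` with a misspelled attribute name falls back to its default, so a guard like `if getattr(processor, attr, None) is None: skip()` always takes the skip branch.

```python
class DummyProcessor:
    feature_extractor = object()  # the attribute the test means to check

processor = DummyProcessor()

# typo: the lookup misses, the default None is returned, guarded tests get skipped
assert getattr(processor, "feature_extracttor", None) is None
# fixed spelling: the attribute is found and the tests actually run
assert getattr(processor, "feature_extractor", None) is not None
```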
### Expected behavior
Correct spelling: `feature_extractor`.
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40478/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40477/comments | https://api.github.com/repos/huggingface/transformers/issues/40477/events | https://github.com/huggingface/transformers/issues/40477 | 3,358,585,041 | I_kwDOCUB6oc7IL_DR | 40,477 | AutoModelForVision2Seq.from_config() got an unexpected keyword argument 'torch_dtype' | {
"login": "ljeff97",
"id": 46018447,
"node_id": "MDQ6VXNlcjQ2MDE4NDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/46018447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ljeff97",
"html_url": "https://github.com/ljeff97",
"followers_url": "https://api.github.com/users/ljeff97/followers",
"following_url": "https://api.github.com/users/ljeff97/following{/other_user}",
"gists_url": "https://api.github.com/users/ljeff97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ljeff97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljeff97/subscriptions",
"organizations_url": "https://api.github.com/users/ljeff97/orgs",
"repos_url": "https://api.github.com/users/ljeff97/repos",
"events_url": "https://api.github.com/users/ljeff97/events{/privacy}",
"received_events_url": "https://api.github.com/users/ljeff97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-27T08:55:23 | 2025-09-03T09:59:21 | 2025-09-03T09:59:21 | NONE | null | null | null | null | ### System Info
AutoModelForVision2Seq.from_config() got an unexpected keyword argument 'torch_dtype'
transformers >= 4.54.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
AutoModelForVision2Seq.from_config() got an unexpected keyword argument 'torch_dtype'
transformers >= 4.54.1
### Expected behavior
Fix the error.
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40477/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40476/comments | https://api.github.com/repos/huggingface/transformers/issues/40476/events | https://github.com/huggingface/transformers/issues/40476 | 3,358,120,418 | I_kwDOCUB6oc7IKNni | 40,476 | Qwen2.5Omni Repeatedly Calculates Audio/Video/Image Features | {
"login": "CharlesGong12",
"id": 113589886,
"node_id": "U_kgDOBsU-fg",
"avatar_url": "https://avatars.githubusercontent.com/u/113589886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CharlesGong12",
"html_url": "https://github.com/CharlesGong12",
"followers_url": "https://api.github.com/users/CharlesGong12/followers",
"following_url": "https://api.github.com/users/CharlesGong12/following{/other_user}",
"gists_url": "https://api.github.com/users/CharlesGong12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CharlesGong12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CharlesGong12/subscriptions",
"organizations_url": "https://api.github.com/users/CharlesGong12/orgs",
"repos_url": "https://api.github.com/users/CharlesGong12/repos",
"events_url": "https://api.github.com/users/CharlesGong12/events{/privacy}",
"received_events_url": "https://api.github.com/users/CharlesGong12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-27T06:10:50 | 2025-09-04T07:26:30 | 2025-09-01T12:23:25 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.1
- Platform: Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.39
- Python version: 3.10.15
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: 0.16.3
- PyTorch version (accelerator?): 2.6.0+cu126 (CUDA)
- Tensorflow version (GPU?): 2.16.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My code is:
```python
# Imports and setup added for completeness; `process_mm_info` comes from the
# `qwen-omni-utils` helper package used in the official Qwen2.5-Omni examples.
import torch
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor

from qwen_omni_utils import process_mm_info

USE_AUDIO_IN_VIDEO = True

processor = Qwen2_5OmniProcessor.from_pretrained(args.model_name_or_path)
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
args.model_name_or_path,
torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
attn_implementation="sdpa",
# attn_implementation="eager",
device_map="auto"
)
conversation = [
{
'role': 'system',
'content': [
{'type': 'text', 'text': 'You are a helpful assistant.'}
]
},
{
"role": "user",
"content": [
{"type": "video", "video": "xxx.mp4", 'max_frames': 80, "max_pixels": 50176},
{"type": "text", "text": "xxx"}
],
}
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)
# Inference: Generation of the output text and audio
text_ids = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO, return_audio=False, thinker_max_new_tokens=100)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
```
### Expected behavior
In the Qwen2.5-Omni code, `Qwen2_5OmniThinkerForConditionalGeneration` recomputes the audio/video/image features for every generated text token. Doesn't this cause significant redundancy? As shown [here](https://github.com/huggingface/transformers/blob/ff8b88a948fc2f6aba421ca64ad165291928dcee/src/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py#L1896), this code segment does not check whether it is in the prefill phase. Consequently, every time a text token is generated, the forward function computes the same audio/image/video features as in the previous generation steps.
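For illustration, a minimal sketch (hypothetical names, not the actual modeling code) of the usual guard: encode the multimodal inputs only during prefill, when the cache is still empty, since decode steps reuse the cached keys/values for those positions.

```python
def maybe_encode_multimodal(input_len, cache_len, pixel_values, encoder):
    is_prefill = cache_len == 0
    if is_prefill and pixel_values is not None:
        return encoder(pixel_values)
    return None  # decode step: skip the redundant recomputation

calls = {"n": 0}
def counting_encoder(x):
    calls["n"] += 1
    return x

maybe_encode_multimodal(16, 0, "feats", counting_encoder)  # prefill: encoder runs
maybe_encode_multimodal(1, 16, "feats", counting_encoder)  # decode: encoder skipped
print(calls["n"])  # 1
```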
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40476/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40475/comments | https://api.github.com/repos/huggingface/transformers/issues/40475/events | https://github.com/huggingface/transformers/issues/40475 | 3,357,746,094 | I_kwDOCUB6oc7IIyOu | 40,475 | InternVL 3.5 30B A3B load processor bug | {
"login": "yilian49",
"id": 43861414,
"node_id": "MDQ6VXNlcjQzODYxNDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/43861414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yilian49",
"html_url": "https://github.com/yilian49",
"followers_url": "https://api.github.com/users/yilian49/followers",
"following_url": "https://api.github.com/users/yilian49/following{/other_user}",
"gists_url": "https://api.github.com/users/yilian49/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yilian49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yilian49/subscriptions",
"organizations_url": "https://api.github.com/users/yilian49/orgs",
"repos_url": "https://api.github.com/users/yilian49/repos",
"events_url": "https://api.github.com/users/yilian49/events{/privacy}",
"received_events_url": "https://api.github.com/users/yilian49/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | open | false | null | [] | null | [] | 2025-08-27T01:56:31 | 2025-10-22T08:02:49 | null | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.56.0.dev0
- Platform: Linux-5.15.0-151-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run
```
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
"OpenGVLab/InternVL3_5-30B-A3B",
trust_remote_code=True, # allow custom code if the repo provides it
use_fast=True,
)
```
Raises error
```
Traceback (most recent call last):
File "/sgl-workspace/test_hf.py", line 22, in <module>
test_autoprocessor()
File "/sgl-workspace/test_hf.py", line 6, in test_autoprocessor
processor = AutoProcessor.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/transformers/src/transformers/models/auto/processing_auto.py", line 391, in from_pretrained
return processor_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/transformers/src/transformers/processing_utils.py", line 1304, in from_pretrained
return cls.from_args_and_dict(args, processor_dict, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/transformers/src/transformers/processing_utils.py", line 1105, in from_args_and_dict
processor = cls(*args, **valid_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/transformers/src/transformers/models/internvl/processing_internvl.py", line 81, in __init__
self.start_image_token = tokenizer.start_image_token
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/transformers/src/transformers/tokenization_utils_base.py", line 1099, in __getattr__
raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
AttributeError: Qwen2TokenizerFast has no attribute start_image_token
```
### Expected behavior
Expect the internvl processor to load up properly | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40475/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40474/comments | https://api.github.com/repos/huggingface/transformers/issues/40474/events | https://github.com/huggingface/transformers/pull/40474 | 3,357,492,402 | PR_kwDOCUB6oc6lgbOS | 40,474 | Validate GptOssConfig rope config after it's fully initialized | {
"login": "zifeitong",
"id": 19350890,
"node_id": "MDQ6VXNlcjE5MzUwODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/19350890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zifeitong",
"html_url": "https://github.com/zifeitong",
"followers_url": "https://api.github.com/users/zifeitong/followers",
"following_url": "https://api.github.com/users/zifeitong/following{/other_user}",
"gists_url": "https://api.github.com/users/zifeitong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zifeitong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zifeitong/subscriptions",
"organizations_url": "https://api.github.com/users/zifeitong/orgs",
"repos_url": "https://api.github.com/users/zifeitong/repos",
"events_url": "https://api.github.com/users/zifeitong/events{/privacy}",
"received_events_url": "https://api.github.com/users/zifeitong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T23:59:23 | 2025-09-02T17:32:38 | 2025-08-27T09:16:59 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40474",
"html_url": "https://github.com/huggingface/transformers/pull/40474",
"diff_url": "https://github.com/huggingface/transformers/pull/40474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40474.patch",
"merged_at": "2025-08-27T09:16:59"
} |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
`rope_config_validation()` uses `max_position_embeddings`.
Fixes #40461.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40474/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40473/comments | https://api.github.com/repos/huggingface/transformers/issues/40473/events | https://github.com/huggingface/transformers/pull/40473 | 3,357,482,943 | PR_kwDOCUB6oc6lgZJr | 40,473 | [tests] Unskip DeBERTaV2 tokenizer parity tests; re-enable fast/slow checks | {
"login": "asifh0ssain",
"id": 141849626,
"node_id": "U_kgDOCHR0Gg",
"avatar_url": "https://avatars.githubusercontent.com/u/141849626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asifh0ssain",
"html_url": "https://github.com/asifh0ssain",
"followers_url": "https://api.github.com/users/asifh0ssain/followers",
"following_url": "https://api.github.com/users/asifh0ssain/following{/other_user}",
"gists_url": "https://api.github.com/users/asifh0ssain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asifh0ssain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asifh0ssain/subscriptions",
"organizations_url": "https://api.github.com/users/asifh0ssain/orgs",
"repos_url": "https://api.github.com/users/asifh0ssain/repos",
"events_url": "https://api.github.com/users/asifh0ssain/events{/privacy}",
"received_events_url": "https://api.github.com/users/asifh0ssain/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T23:55:08 | 2025-08-27T11:40:29 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40473",
"html_url": "https://github.com/huggingface/transformers/pull/40473",
"diff_url": "https://github.com/huggingface/transformers/pull/40473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40473.patch",
"merged_at": null
} | This PR re-enables two previously skipped parity tests for the DeBERTaV2 tokenizer:
- `test_sentencepiece_tokenize_and_convert_tokens_to_string`
- `test_sentencepiece_tokenize_and_decode`
Both tests now call the mixin implementations to verify fast and slow tokenizers produce consistent results.
✅ No API changes.
✅ Code passes local lint/format checks.
Hugging Face CI will confirm full suite. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40473/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40472/comments | https://api.github.com/repos/huggingface/transformers/issues/40472/events | https://github.com/huggingface/transformers/pull/40472 | 3,357,278,656 | PR_kwDOCUB6oc6lfxLC | 40,472 | Add GLPNImageProcessorFast with enhanced 4-channel support for #36978 | {
"login": "akacmazz",
"id": 32853513,
"node_id": "MDQ6VXNlcjMyODUzNTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/32853513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akacmazz",
"html_url": "https://github.com/akacmazz",
"followers_url": "https://api.github.com/users/akacmazz/followers",
"following_url": "https://api.github.com/users/akacmazz/following{/other_user}",
"gists_url": "https://api.github.com/users/akacmazz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akacmazz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akacmazz/subscriptions",
"organizations_url": "https://api.github.com/users/akacmazz/orgs",
"repos_url": "https://api.github.com/users/akacmazz/repos",
"events_url": "https://api.github.com/users/akacmazz/events{/privacy}",
"received_events_url": "https://api.github.com/users/akacmazz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T22:14:30 | 2025-10-09T16:05:42 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40472",
"html_url": "https://github.com/huggingface/transformers/pull/40472",
"diff_url": "https://github.com/huggingface/transformers/pull/40472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40472.patch",
"merged_at": null
} | # What does this PR do?
This commit introduces GLPNImageProcessorFast, a PyTorch-optimized image processor for GLPN models with enhanced multi-channel support.
Key improvements:
- Added GLPNImageProcessorFast class with native PyTorch tensor processing
- Enhanced support for 1, 3, and 4-channel images (including RGBA)
- Optimized preprocessing pipeline using torchvision transforms
- Updated GLPNImageProcessor to support 4-channel inference
- Added comprehensive tests for multi-channel image processing
- Added proper documentation for the new processor
The fast processor leverages PyTorch tensors throughout the processing pipeline, providing better performance and memory efficiency compared to the PIL-based approach. Both processors now support variable channel dimensions for improved flexibility.
Technical details:
- Uses torchvision.transforms for efficient tensor-based preprocessing
- Implements proper channel dimension handling with infer_channel_dimension_format(num_channels=(1, 3, 4))
- Maintains API compatibility with existing GLPNImageProcessor
- Provides significant performance improvements for PyTorch workflows
This enhancement enables GLPN models to work seamlessly with RGBA images and other multi-channel inputs, which is particularly useful for computer vision applications involving images with transparency channels.
Fixes #36978 https://github.com/huggingface/transformers/issues/36978
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/
CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting
docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@yonigozlan, @amyeroberts, @qubvel
This PR focuses on vision models (GLPN) and adds a new fast image processor with enhanced channel support. The changes include both the core implementation and comprehensive testing.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40472/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40471/comments | https://api.github.com/repos/huggingface/transformers/issues/40471/events | https://github.com/huggingface/transformers/pull/40471 | 3,357,255,555 | PR_kwDOCUB6oc6lfs_E | 40,471 | DOC: Standardize CodeGen model card (issue #36979) | {
"login": "DHANUSHRAJA22",
"id": 155062318,
"node_id": "U_kgDOCT4QLg",
"avatar_url": "https://avatars.githubusercontent.com/u/155062318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DHANUSHRAJA22",
"html_url": "https://github.com/DHANUSHRAJA22",
"followers_url": "https://api.github.com/users/DHANUSHRAJA22/followers",
"following_url": "https://api.github.com/users/DHANUSHRAJA22/following{/other_user}",
"gists_url": "https://api.github.com/users/DHANUSHRAJA22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DHANUSHRAJA22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DHANUSHRAJA22/subscriptions",
"organizations_url": "https://api.github.com/users/DHANUSHRAJA22/orgs",
"repos_url": "https://api.github.com/users/DHANUSHRAJA22/repos",
"events_url": "https://api.github.com/users/DHANUSHRAJA22/events{/privacy}",
"received_events_url": "https://api.github.com/users/DHANUSHRAJA22/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T22:06:38 | 2025-08-27T17:54:43 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40471",
"html_url": "https://github.com/huggingface/transformers/pull/40471",
"diff_url": "https://github.com/huggingface/transformers/pull/40471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40471.patch",
"merged_at": null
} | Standardized the CodeGen model card to match the project's documentation template (#36979). Added concise overview, badge section, friendly model description, Pipeline and AutoModel usage examples, quantization info, attention visualization, and resource links. Improves ease of use and first-time contributor experience.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
Fixes #36979
@stevhliu
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40471/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40470/comments | https://api.github.com/repos/huggingface/transformers/issues/40470/events | https://github.com/huggingface/transformers/pull/40470 | 3,357,067,971 | PR_kwDOCUB6oc6lfFtu | 40,470 | Add collated reports job to Nvidia CI | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T20:41:34 | 2025-09-02T12:25:22 | 2025-09-02T12:25:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40470",
"html_url": "https://github.com/huggingface/transformers/pull/40470",
"diff_url": "https://github.com/huggingface/transformers/pull/40470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40470.patch",
"merged_at": "2025-09-02T12:25:22"
} | # What does this PR do?
This PR adds the new collated reports job to Nvidia as well that produces reports like this: https://huggingface.co/datasets/optimum-amd/transformers_daily_ci/blob/main/2025-08-25/runs/39-17221003312/ci_results_run_models_gpu/collated_reports_e68146f.json
This is required to compare test results between platforms easily.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40470/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40469/comments | https://api.github.com/repos/huggingface/transformers/issues/40469/events | https://github.com/huggingface/transformers/pull/40469 | 3,356,957,892 | PR_kwDOCUB6oc6leutP | 40,469 | Fix nightly torch CI | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T19:57:38 | 2025-08-26T20:06:57 | 2025-08-26T20:02:16 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40469",
"html_url": "https://github.com/huggingface/transformers/pull/40469",
"diff_url": "https://github.com/huggingface/transformers/pull/40469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40469.patch",
"merged_at": "2025-08-26T20:02:15"
} | # What does this PR do?
Since we started using FA2 in the CI docker image last Sunday, the nightly torch CI hangs at the docker image build job.
Let's not use FA2 (and not run the FA2 tests) in the torch nightly CI workflow.
We also need to skip installing `detectron2` for the same/similar reason.
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40469/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40468/comments | https://api.github.com/repos/huggingface/transformers/issues/40468/events | https://github.com/huggingface/transformers/issues/40468 | 3,356,830,094 | I_kwDOCUB6oc7IFSmO | 40,468 | ImportError: cannot import name 'GptOssForSequenceClassification' from 'transformers' | {
"login": "pervaizniazi",
"id": 41390991,
"node_id": "MDQ6VXNlcjQxMzkwOTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/41390991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pervaizniazi",
"html_url": "https://github.com/pervaizniazi",
"followers_url": "https://api.github.com/users/pervaizniazi/followers",
"following_url": "https://api.github.com/users/pervaizniazi/following{/other_user}",
"gists_url": "https://api.github.com/users/pervaizniazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pervaizniazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pervaizniazi/subscriptions",
"organizations_url": "https://api.github.com/users/pervaizniazi/orgs",
"repos_url": "https://api.github.com/users/pervaizniazi/repos",
"events_url": "https://api.github.com/users/pervaizniazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/pervaizniazi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-26T19:14:53 | 2025-08-28T14:14:36 | 2025-08-28T14:14:36 | NONE | null | null | null | null | ### System Info
Pytorch version: 2.8.0+cu128
Transformers version: 4.55.4
### Who can help?
Hi,
I have installed the latest version of transformers. When accessing `GptOssForSequenceClassification`, I get the error:
`ImportError: cannot import name 'GptOssForSequenceClassification' from 'transformers' (/home/user/miniconda3/envs/working/lib/python3.10/site-packages/transformers/__init__.py)`. @ArthurZucker
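A quick way to check whether a given class is actually exposed by the installed package version is a small availability probe (the helper name here is mine, not part of the report; the stdlib example is used only so the snippet runs anywhere):

```python
import importlib

def has_class(module_name: str, class_name: str) -> bool:
    """Return True if `class_name` can be imported from `module_name`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, class_name)

# e.g. has_class("transformers", "GptOssForSequenceClassification")
# would return False on a version that predates that class.
print(has_class("json", "JSONDecoder"))  # True
```

If the probe returns False, the installed version likely predates the class, so upgrading (or installing from source) is the usual next step.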
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
from transformers import GptOssForSequenceClassification

mdir = '/home/user/gpt-oss-20b/'
model = GptOssForSequenceClassification.from_pretrained(mdir, trust_remote_code=True)
print(model)
### Expected behavior
GptOssForSequenceClassification should be imported successfully. | {
"login": "pervaizniazi",
"id": 41390991,
"node_id": "MDQ6VXNlcjQxMzkwOTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/41390991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pervaizniazi",
"html_url": "https://github.com/pervaizniazi",
"followers_url": "https://api.github.com/users/pervaizniazi/followers",
"following_url": "https://api.github.com/users/pervaizniazi/following{/other_user}",
"gists_url": "https://api.github.com/users/pervaizniazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pervaizniazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pervaizniazi/subscriptions",
"organizations_url": "https://api.github.com/users/pervaizniazi/orgs",
"repos_url": "https://api.github.com/users/pervaizniazi/repos",
"events_url": "https://api.github.com/users/pervaizniazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/pervaizniazi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40468/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40467/comments | https://api.github.com/repos/huggingface/transformers/issues/40467/events | https://github.com/huggingface/transformers/pull/40467 | 3,356,698,876 | PR_kwDOCUB6oc6ld5Ao | 40,467 | Not to shock AMD team by the cancelled workflow run notification ❤️ 💖 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T18:29:30 | 2025-08-26T18:53:26 | 2025-08-26T18:53:24 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40467",
"html_url": "https://github.com/huggingface/transformers/pull/40467",
"diff_url": "https://github.com/huggingface/transformers/pull/40467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40467.patch",
"merged_at": "2025-08-26T18:53:24"
} | # What does this PR do?
If the workflow run was cancelled, don't send the notification.
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40467/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40466/comments | https://api.github.com/repos/huggingface/transformers/issues/40466/events | https://github.com/huggingface/transformers/pull/40466 | 3,356,644,295 | PR_kwDOCUB6oc6ldtgQ | 40,466 | 🌐 [i18n-KO] Translated `jan.md` to Korean | {
"login": "Jwaminju",
"id": 49024958,
"node_id": "MDQ6VXNlcjQ5MDI0OTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/49024958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jwaminju",
"html_url": "https://github.com/Jwaminju",
"followers_url": "https://api.github.com/users/Jwaminju/followers",
"following_url": "https://api.github.com/users/Jwaminju/following{/other_user}",
"gists_url": "https://api.github.com/users/Jwaminju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jwaminju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jwaminju/subscriptions",
"organizations_url": "https://api.github.com/users/Jwaminju/orgs",
"repos_url": "https://api.github.com/users/Jwaminju/repos",
"events_url": "https://api.github.com/users/Jwaminju/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jwaminju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T18:11:52 | 2025-09-06T02:53:29 | 2025-09-06T02:53:25 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40466",
"html_url": "https://github.com/huggingface/transformers/pull/40466",
"diff_url": "https://github.com/huggingface/transformers/pull/40466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40466.patch",
"merged_at": null
} | # What does this PR do?
Translated the `jan.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [X] Check for missing / redundant translations (번역 누락/중복 검사)
- [X] Grammar Check (맞춤법 검사)
- [X] Review or Add new terms to glossary (용어 확인 및 추가)
- [X] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [X] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR?
jungnerd, luckyvickyricky, chelsseeey, skwh54, amo33, maximizemaxwell, D15M4S
<!-- harheem, nsbg, Youngdong2, xhaktm00, ssunbear, ChoHyoungSeo, judy-choi -->
<!-- 4N3MONE, Kim-Ju-won, ahnjj, FacerAin, ssum21, TaskerJang, HyunZ118 -->
<!-- yijun-lee, songi104, chhaewxn, AhnJoonSung, jihyun-0611, seopp, pyapyapya -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. KREW 팀원들의 리뷰가 끝난 후에 아래 주석을 노출해주세요! -->
stevhliu May you please review this PR? | {
"login": "Jwaminju",
"id": 49024958,
"node_id": "MDQ6VXNlcjQ5MDI0OTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/49024958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jwaminju",
"html_url": "https://github.com/Jwaminju",
"followers_url": "https://api.github.com/users/Jwaminju/followers",
"following_url": "https://api.github.com/users/Jwaminju/following{/other_user}",
"gists_url": "https://api.github.com/users/Jwaminju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jwaminju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jwaminju/subscriptions",
"organizations_url": "https://api.github.com/users/Jwaminju/orgs",
"repos_url": "https://api.github.com/users/Jwaminju/repos",
"events_url": "https://api.github.com/users/Jwaminju/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jwaminju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40466/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40465/comments | https://api.github.com/repos/huggingface/transformers/issues/40465/events | https://github.com/huggingface/transformers/pull/40465 | 3,356,581,831 | PR_kwDOCUB6oc6ldgF1 | 40,465 | 🌐 [i18n-KO] Translated `tools.md` to Korean | {
"login": "Jwaminju",
"id": 49024958,
"node_id": "MDQ6VXNlcjQ5MDI0OTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/49024958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jwaminju",
"html_url": "https://github.com/Jwaminju",
"followers_url": "https://api.github.com/users/Jwaminju/followers",
"following_url": "https://api.github.com/users/Jwaminju/following{/other_user}",
"gists_url": "https://api.github.com/users/Jwaminju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jwaminju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jwaminju/subscriptions",
"organizations_url": "https://api.github.com/users/Jwaminju/orgs",
"repos_url": "https://api.github.com/users/Jwaminju/repos",
"events_url": "https://api.github.com/users/Jwaminju/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jwaminju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T17:51:58 | 2025-09-15T14:48:52 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40465",
"html_url": "https://github.com/huggingface/transformers/pull/40465",
"diff_url": "https://github.com/huggingface/transformers/pull/40465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40465.patch",
"merged_at": null
} | # What does this PR do?
Translated the `tools.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [X] Check for missing / redundant translations (번역 누락/중복 검사)
- [X] Grammar Check (맞춤법 검사)
- [X] Review or Add new terms to glossary (용어 확인 및 추가)
- [X] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [X] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR?
@jungnerd @ahnjj @yijun-lee
<!-- harheem, nsbg, Youngdong2, xhaktm00, ssunbear, ChoHyoungSeo, judy-choi -->
<!-- 4N3MONE, Kim-Ju-won, ahnjj, FacerAin, ssum21, TaskerJang, HyunZ118 -->
<!-- yijun-lee, songi104, chhaewxn, AhnJoonSung, jihyun-0611, seopp, pyapyapya -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. KREW 팀원들의 리뷰가 끝난 후에 아래 주석을 노출해주세요! -->
stevhliu May you please review this PR? | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40465/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40464 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40464/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40464/comments | https://api.github.com/repos/huggingface/transformers/issues/40464/events | https://github.com/huggingface/transformers/pull/40464 | 3,356,502,241 | PR_kwDOCUB6oc6ldPH2 | 40,464 | 🌐 [i18n-KO] Translated `agents.md` to Korean | {
"login": "Jwaminju",
"id": 49024958,
"node_id": "MDQ6VXNlcjQ5MDI0OTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/49024958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jwaminju",
"html_url": "https://github.com/Jwaminju",
"followers_url": "https://api.github.com/users/Jwaminju/followers",
"following_url": "https://api.github.com/users/Jwaminju/following{/other_user}",
"gists_url": "https://api.github.com/users/Jwaminju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jwaminju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jwaminju/subscriptions",
"organizations_url": "https://api.github.com/users/Jwaminju/orgs",
"repos_url": "https://api.github.com/users/Jwaminju/repos",
"events_url": "https://api.github.com/users/Jwaminju/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jwaminju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T17:22:17 | 2025-08-26T17:56:23 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40464",
"html_url": "https://github.com/huggingface/transformers/pull/40464",
"diff_url": "https://github.com/huggingface/transformers/pull/40464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40464.patch",
"merged_at": null
} | # What does this PR do?
Translated the `agents.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [X] Check for missing / redundant translations (번역 누락/중복 검사)
- [X] Grammar Check (맞춤법 검사)
- [X] Review or Add new terms to glossary (용어 확인 및 추가)
- [X] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [X] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR?
@jungnerd @ahnjj @yijun-lee
<!-- harheem, nsbg, Youngdong2, xhaktm00, ssunbear, ChoHyoungSeo, judy-choi -->
<!-- 4N3MONE, Kim-Ju-won, ahnjj, FacerAin, ssum21, TaskerJang, HyunZ118 -->
<!-- yijun-lee, songi104, chhaewxn, AhnJoonSung, jihyun-0611, seopp, pyapyapya -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. KREW 팀원들의 리뷰가 끝난 후에 아래 주석을 노출해주세요! -->
stevhliu May you please review this PR? | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40464/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40463/comments | https://api.github.com/repos/huggingface/transformers/issues/40463/events | https://github.com/huggingface/transformers/issues/40463 | 3,356,365,774 | I_kwDOCUB6oc7IDhPO | 40,463 | Number of trainable params is displayed wrong in accelerate logs when using FSDP | {
"login": "Ali-Sayed-Salehi",
"id": 108986671,
"node_id": "U_kgDOBn8BLw",
"avatar_url": "https://avatars.githubusercontent.com/u/108986671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ali-Sayed-Salehi",
"html_url": "https://github.com/Ali-Sayed-Salehi",
"followers_url": "https://api.github.com/users/Ali-Sayed-Salehi/followers",
"following_url": "https://api.github.com/users/Ali-Sayed-Salehi/following{/other_user}",
"gists_url": "https://api.github.com/users/Ali-Sayed-Salehi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ali-Sayed-Salehi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ali-Sayed-Salehi/subscriptions",
"organizations_url": "https://api.github.com/users/Ali-Sayed-Salehi/orgs",
"repos_url": "https://api.github.com/users/Ali-Sayed-Salehi/repos",
"events_url": "https://api.github.com/users/Ali-Sayed-Salehi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ali-Sayed-Salehi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-26T16:35:39 | 2025-10-04T08:02:12 | 2025-10-04T08:02:12 | NONE | null | null | null | null | ### System Info
peft version : 0.17.1
transformers version: 4.55.2
Platform: Linux-5.14.0-570.33.2.el9_6.x86_64-x86_64-with-glibc2.39
Python version: 3.12.3
Huggingface_hub version: 0.34.4
Safetensors version: 0.6.2
Accelerate version: 1.10.0
Accelerate config:
compute_environment: LOCAL_MACHINE
debug: true
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: true
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
DeepSpeed version: 0.17.4
PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?: yes, FSDP
Using GPU in script?: Yes
GPU type: Tesla V100-SXM2-32GB
Deepspeed config:
compute_environment: LOCAL_MACHINE
debug: true
deepspeed_config:
deepspeed_multinode_launcher: standard
offload_optimizer_device: cpu
offload_param_device: cpu
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
### Who can help?
@SunMarc @zach-huggingface @pacman100
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
When using FSDP with QLORA in a multi-GPU environment to finetune Llama3.1-8B, the wrong number of trainable params is printed:
***** Running training *****
Num examples = 501
Num Epochs = 1
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 64
Gradient Accumulation steps = 16
Total optimization steps = 1
Number of trainable parameters = 136,585,216
Should be:
=== Trainable parameter summary ===
Actual trainable (requires_grad): 546,373,632
Breakdown (actual):
• lora_total 20,971,520
• lm_head 525,369,344
• other 32,768
It seems the printed number is divided by the number of GPUs: if we multiply it by 4 (the number of GPUs) and add the parameters made trainable via `trainable_token_indices` (32,768 in this case), we get the correct number. This problem does not exist when I run with DeepSpeed, which prints the correct number (546,373,632).
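The per-shard arithmetic can be checked directly, and the "actual trainable" figure is the usual `requires_grad` sum; a small generic sketch (the helper name is mine, not from the script):

```python
# Sanity check of the hypothesis: the reported per-shard count, times the
# number of GPUs, plus the trainable_token_indices parameters, should
# equal the actual trainable count.
reported_per_shard = 136_585_216
num_gpus = 4
trainable_token_params = 32_768
print(reported_per_shard * num_gpus + trainable_token_params)  # 546373632

# Generic way to count the actually-trainable parameters of a model
# (duck-typed sketch; with a real torch nn.Module this is the usual idiom):
def count_trainable(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```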
My script:
```python
DTYPE = torch.float32
USE_FP16 = False
USE_BF16 = False
TASK_TO_MODEL_CLASS = {
"clm": AutoModelForCausalLM,
"seq_cls": AutoModelForSequenceClassification,
}
# ------------------------- Training arguments -------------------------
training_args = TrainingArguments(
output_dir=output_dir,
learning_rate=1e-4,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
gradient_accumulation_steps=16,
gradient_checkpointing = True,
num_train_epochs=3,
max_steps=1 if DEBUG else -1,
weight_decay=1e-4,
logging_strategy="steps",
logging_steps=1 if DEBUG else 25,
report_to=["tensorboard"],
logging_dir=tensorboard_dir,
save_strategy="steps",
eval_strategy="steps",
eval_steps=1 if DEBUG else 50,
save_steps=1 if DEBUG else 50,
save_total_limit=2,
load_best_model_at_end=True,
metric_for_best_model="eval_loss" if TASK == "clm" else args.selection_metric,
greater_is_better=False if TASK == "clm" else True,
label_names=["labels"],
max_grad_norm=1.0,
bf16=USE_BF16,
fp16=USE_FP16,
log_level="info",
log_level_replica="warning",
remove_unused_columns=False,
eval_accumulation_steps=16 if TASK == "clm" else None,
)
# ------------------------- Load model and quantize -------------------------
optional_kwargs = {}
bnb_4bit_quant_storage_dtype = DTYPE if DTYPE == torch.bfloat16 else torch.float32
model_dtype = DTYPE if not args.quant else bnb_4bit_quant_storage_dtype
if args.quant and LLAMA:
print(" Using 4-bit quantization...")
quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=DTYPE,
bnb_4bit_quant_storage=bnb_4bit_quant_storage_dtype,
)
optional_kwargs["quantization_config"] = quant_config
if TASK == "seq_cls":
optional_kwargs["id2label"] = {0: "NEGATIVE", 1: "POSITIVE"}
optional_kwargs["label2id"] = {"NEGATIVE": 0, "POSITIVE": 1}
optional_kwargs["num_labels"] = 2
model = ModelClass.from_pretrained(
MODEL_PATH,
trust_remote_code=True,
torch_dtype=model_dtype,
**optional_kwargs
)
# ------------------------- Gradient Checkpointing -------------------------
model.config.use_cache = not training_args.gradient_checkpointing
training_args.gradient_checkpointing_kwargs = {"use_reentrant": True}
# ------------------------- Tokenizer -------------------------
SPECIAL_TOKENS = [
"<COMMIT_MESSAGE>", "</COMMIT_MESSAGE>",
"<FILE>", "</FILE>",
"<ADDED>", "</ADDED>",
"<REMOVED>", "</REMOVED>",
]
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
local_files_only=True,
trust_remote_code=True,
use_fast=True,
additional_special_tokens=SPECIAL_TOKENS,
)
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8, mean_resizing=False)
# ------------------------- Load dataset -------------------------
format_func = determine_format_fn(TASK)
dataset = load_and_split_dataset(
    dataset_path=args.dataset_path,
    repo_path=REPO_PATH,
    slurm_tmpdir=slurm_tmpdir,
    debug=DEBUG,
    format_fn=format_func,
)
# ------------------------- tokenize -------------------------
should_truncate, tokenizer_max_len = determine_tokenizer_truncation(
    tokenizer=tokenizer,
    config=config,
    truncation_len=args.truncation_len,
    chunking_len=args.chunking_len if TASK == "clm" else None,
)
def tokenize_data(examples):
    outputs = tokenizer(
        examples["text"],
        truncation=should_truncate,
        max_length=tokenizer_max_len,
    )
    return outputs
tokenized_dataset = dataset.map(tokenize_data, batched=True, remove_columns=["text"])
final_dataset = tokenized_dataset
# ------------------------------ Data Collator ------------------------------
data_collator = determine_data_collator(TASK, tokenizer)
# ------------------------- LORA -------------------------
if args.lora:
    print(" Applying LoRA...")
    modules_to_save = None
    if TASK == "clm":
        modules_to_save = ["lm_head"]
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules="all-linear",
        lora_dropout=0.1,
        bias="none",
        task_type=TaskType.SEQ_CLS if TASK == "seq_cls" else TaskType.CAUSAL_LM,
        modules_to_save=modules_to_save,
    )
    # Make only the new embedding rows trainable
    if hasattr(lora_config, "trainable_token_indices"):
        lora_config.trainable_token_indices = {"embed_tokens": token_info["added_token_ids"]}
    else:
        lora_config.modules_to_save.append("embed_tokens")
    model = prepare_peft_model(model, lora_config, training_args)
# ------------------------- Trainer -------------------------
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=final_dataset["train"],
    eval_dataset=final_dataset["test"],
    data_collator=data_collator,
)
trainer.accelerator.print(f"{trainer.model}")
# ---------------------------- Train ----------------------------
trainer.train(resume_from_checkpoint=bool(args.continue_from_dir))
```
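(Editorial aside: the `resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8, ...)` call in the script rounds the embedding row count up to the next multiple of 8 for hardware efficiency; the extra rows are never emitted by the tokenizer. A minimal standalone sketch of that rounding, with a hypothetical helper name:)

```python
def padded_vocab_size(n_tokens, multiple=8):
    # Round the embedding matrix's row count up to the next multiple,
    # mirroring resize_token_embeddings(pad_to_multiple_of=...).
    return ((n_tokens + multiple - 1) // multiple) * multiple

assert padded_vocab_size(32000) == 32000  # already aligned
assert padded_vocab_size(32008) == 32008
assert padded_vocab_size(32009) == 32016  # rounded up to the next multiple of 8
```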
My `prepare_peft_model` is an exact replication of the source code from SFTTrainer:
```python
def enable_gradient_checkpointing(
    model: PreTrainedModel, gradient_checkpointing_kwargs: Optional[dict]
) -> PreTrainedModel:
    """Enables gradient checkpointing for the model."""
    # Enable gradient checkpointing on the base model for PEFT
    if is_peft_model(model):
        model.base_model.gradient_checkpointing_enable()
    # Enable gradient checkpointing for non-PEFT models
    else:
        model.gradient_checkpointing_enable()
    gradient_checkpointing_kwargs = gradient_checkpointing_kwargs or {}
    use_reentrant = (
        "use_reentrant" not in gradient_checkpointing_kwargs or gradient_checkpointing_kwargs["use_reentrant"]
    )
    if use_reentrant:
        if hasattr(model, "enable_input_require_grads"):
            model.enable_input_require_grads()
        else:

            def make_inputs_require_grad(module, input, output):
                output.requires_grad_(True)

            model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)
    return model
def peft_module_casting_to_bf16(model):
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.LayerNorm) or "norm" in name:
            module = module.to(torch.float32)
        elif any(x in name for x in ["lm_head", "embed_tokens", "wte", "wpe", "score"]):
            if hasattr(module, "weight"):
                if module.weight.dtype == torch.float32:
                    module = module.to(torch.bfloat16)
def prepare_peft_model(
    model: PreTrainedModel, peft_config: Optional["PeftConfig"], args: TrainingArguments
) -> PreTrainedModel:
    """Prepares a model for PEFT training."""
    if not is_peft_available():
        raise ImportError("PEFT is required to use a peft model. Run `pip install peft`.")
    # If the model is already a PeftModel, we need to merge and unload it.
    # Further information here: https://huggingface.co/docs/trl/dpo_trainer#reference-model-considerations-with-peft
    if isinstance(model, PeftModel) and peft_config is not None:
        model = model.merge_and_unload()
    # Handle quantized models (QLoRA)
    is_qlora = getattr(model, "is_loaded_in_4bit", False) or getattr(model, "is_loaded_in_8bit", False)
    is_sharded_qlora = False
    if getattr(model, "is_loaded_in_4bit", False):
        # Check if model is sharded (FSDP/DS-Zero3)
        for _, param in model.named_parameters():
            if param.__class__.__name__ == "Params4bit":
                is_sharded_qlora = param.data.device.type in {"cpu", "meta"}
                break
    # Prepare model for kbit training if needed
    if is_qlora and not is_sharded_qlora:
        model = prepare_model_for_kbit_training(
            model,
            use_gradient_checkpointing=args.gradient_checkpointing,
            gradient_checkpointing_kwargs=args.gradient_checkpointing_kwargs or {},
        )
        # Disable gradient checkpointing as it's handled by prepare_model_for_kbit_training
        args = dataclasses.replace(args, gradient_checkpointing=False)
    elif args.gradient_checkpointing:
        model = enable_gradient_checkpointing(model, args.gradient_checkpointing_kwargs)
    # Create PEFT model
    if peft_config is not None:
        if (
            version.parse(peft.__version__) >= version.parse("0.12")  # autocast_adapter_dtype introduced in 0.12
            and getattr(model, "is_loaded_in_4bit", False)
            and is_sharded_qlora
        ):
            model = get_peft_model(model, peft_config, autocast_adapter_dtype=False)
        else:
            model = get_peft_model(model, peft_config)
    # Handle bf16 casting for 4-bit models
    if args.bf16 and getattr(model, "is_loaded_in_4bit", False) and not is_sharded_qlora:
        peft_module_casting_to_bf16(model)
    return model
```
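(Editorial aside: one subtlety in `enable_gradient_checkpointing` above is that a missing `use_reentrant` key defaults to `True`, while an explicit value wins. A tiny standalone check of that expression, using a hypothetical helper name:)

```python
def resolves_use_reentrant(kwargs):
    # Mirrors the expression in enable_gradient_checkpointing:
    # a missing key defaults to True; an explicit value is respected.
    kwargs = kwargs or {}
    return "use_reentrant" not in kwargs or kwargs["use_reentrant"]

assert resolves_use_reentrant(None) is True            # no kwargs at all
assert resolves_use_reentrant({}) is True              # key absent
assert resolves_use_reentrant({"use_reentrant": True}) is True
assert resolves_use_reentrant({"use_reentrant": False}) is False
```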
How I get the real numbers for trainable params (this is for the same finetuning script and config and the same run):
```python
def _n_params(t): return int(t.numel())

def _is_lora_param_name(name: str) -> bool:
    # PEFT names like: <...>.lora_A.weight / <...>.lora_B.weight
    return (".lora_A." in name) or (".lora_B." in name) or (".lora_embedding_A." in name) or (".lora_embedding_B." in name)

def _is_embed_token_adapter(name: str) -> bool:
    # When using trainable_token_indices, PEFT makes a token adapter under embed_tokens
    return ("embed_tokens" in name) and ("token_adapter" in name) and name.endswith(".weight")

def _is_lm_head(name: str) -> bool:
    tail = name.split(".")[-2:]
    return ("lm_head" in name) and (tail[-1] in {"weight", "bias"})

def _is_cls_head(name: str) -> bool:
    return (("score." in name) or ("classifier." in name)) and name.split(".")[-1] in {"weight", "bias"}
def count_trainable_params(model, tokenizer=None, task="clm", added_token_ids=None, verbose=True):
    """
    Returns a dict with:
    - actual_total: sum of p.numel() for p.requires_grad
    - buckets: lora_total, lm_head, seq_head, embed_token_adapter (nominal),
      embed_token_effective (added_rows * hidden_size, if tokenizer provided)
    - theoretical: derived from LoRA A/B shapes + optional heads + effective embed rows
    """
    buckets = defaultdict(int)
    # 1) Actual counts from requires_grad
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        n = _n_params(p)
        if _is_lora_param_name(name):
            buckets["lora_total"] += n
        elif _is_embed_token_adapter(name):
            buckets["embed_token_adapter_nominal"] += n  # This is usually full VxD, even if only rows are "active"
        elif _is_lm_head(name):
            buckets["lm_head"] += n
        elif _is_cls_head(name):
            buckets["seq_head"] += n
        else:
            buckets["other"] += n
    actual_total = sum(buckets.values())
    # 2) Theoretical LoRA count from A/B shapes (r*(in+out) per layer)
    # Group A/B by the same base prefix
    lora_pairs = {}  # prefix -> {"A": (r, in), "B": (out, r)}
    for name, p in model.named_parameters():
        if not _is_lora_param_name(name):
            continue
        base = name.rsplit(".lora_", 1)[0]  # strip ".lora_A." or ".lora_B."
        if base not in lora_pairs: lora_pairs[base] = {}
        if ".lora_A." in name:
            # lora_A.weight is [r, in_features]
            r, in_f = p.shape[0], p.shape[1]
            lora_pairs[base]["A"] = (r, in_f)
        elif ".lora_B." in name:
            # lora_B.weight is [out_features, r]
            out, r = p.shape[0], p.shape[1]
            lora_pairs[base]["B"] = (out, r)
    theoretical_lora = 0
    for base, pair in lora_pairs.items():
        if "A" in pair and "B" in pair:
            rA, in_f = pair["A"]
            out, rB = pair["B"]
            r = rA  # should equal rB
            theoretical_lora += r * (in_f + out)
    # 3) Theoretical heads
    theoretical_lm_head = 0
    theoretical_seq_head = 0
    # If the head params are trainable, just sum their shapes (same as "actual", but we compute fresh)
    # (We prefer to derive by shape so it's robust if some submodule holds the params.)
    if task == "clm":
        head = getattr(model, "get_output_embeddings", lambda: None)()
        if head is not None and hasattr(head, "weight") and head.weight.requires_grad:
            theoretical_lm_head += _n_params(head.weight)
            if getattr(head, "bias", None) is not None and head.bias.requires_grad:
                theoretical_lm_head += _n_params(head.bias)
    else:  # seq_cls
        head = getattr(model, "score", None)
        if head is None:
            head = getattr(model, "classifier", None)
            if hasattr(head, "out_proj"): head = head.out_proj
        if head is not None and hasattr(head, "weight") and head.weight.requires_grad:
            theoretical_seq_head += _n_params(head.weight)
            if getattr(head, "bias", None) is not None and head.bias.requires_grad:
                theoretical_seq_head += _n_params(head.bias)
    # 4) Effective embed rows (only the added ids) for a more meaningful number
    theoretical_embed_effective = 0
    if tokenizer is not None:
        if added_token_ids is None:
            try:
                added_token_ids = sorted(getattr(tokenizer, "get_added_vocab", lambda: {})().values())
            except Exception:
                added_token_ids = []
        if added_token_ids:
            emb = model.get_input_embeddings()
            hidden = emb.weight.shape[1]
            theoretical_embed_effective = len(added_token_ids) * hidden
    theoretical_total = theoretical_lora + theoretical_lm_head + theoretical_seq_head + theoretical_embed_effective
    if verbose:
        print("\n=== Trainable parameter summary ===")
        print(f"Actual trainable (requires_grad): {actual_total:,}")
        print("Breakdown (actual):")
        for k in ["lora_total", "lm_head", "seq_head", "embed_token_adapter_nominal", "other"]:
            if k in buckets:
                print(f"  • {k:28s} {buckets[k]:,}")
        if tokenizer is not None and theoretical_embed_effective:
            print(f"\nEffective added-token rows (V_add * d_model): {theoretical_embed_effective:,} (more meaningful than nominal V*d for token_adapter)")
        print("\nTheoretical (sanity check):")
        print(f"  • LoRA (sum over layers r*(in+out)) : {theoretical_lora:,}")
        if task == "clm": print(f"  • lm_head (if trainable) : {theoretical_lm_head:,}")
        if task != "clm": print(f"  • seq head (if trainable) : {theoretical_seq_head:,}")
        if theoretical_embed_effective: print(f"  • embed added-rows effective : {theoretical_embed_effective:,}")
        print(f"≈ Expected total (effective view) : {theoretical_total:,}")
        print("====================================\n")
    return {
        "actual_total": actual_total,
        "actual_breakdown": dict(buckets),
        "theoretical": {
            "lora": theoretical_lora,
            "lm_head": theoretical_lm_head,
            "seq_head": theoretical_seq_head,
            "embed_effective": theoretical_embed_effective,
            "expected_total_effective": theoretical_total,
        },
    }
```
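(Editorial aside: as a sanity check on the `r * (in + out)` accounting used above — in PEFT, `lora_A.weight` is `[r, in_features]` and `lora_B.weight` is `[out_features, r]` — here is a standalone sketch with hypothetical layer sizes:)

```python
def lora_param_count(in_features, out_features, r):
    # lora_A contributes r * in_features params, lora_B contributes
    # out_features * r, so each adapted linear layer adds r * (in + out).
    return r * (in_features + out_features)

# e.g. a square 4096x4096 attention projection with r=8:
assert lora_param_count(4096, 4096, 8) == 65_536
# a rectangular 4096 -> 11008 MLP projection with r=8:
assert lora_param_count(4096, 11008, 8) == 8 * (4096 + 11008)
```

Note the two factors only collapse to `2 * r * in` when `in == out`, which is why tracking A and B shapes separately matters for rectangular layers.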
### Expected behavior
The accelerate logs for Number of trainable parameters should print the correct number. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40463/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40462/comments | https://api.github.com/repos/huggingface/transformers/issues/40462/events | https://github.com/huggingface/transformers/issues/40462 | 3,356,358,210 | I_kwDOCUB6oc7IDfZC | 40,462 | Question about RoPE Implementation in modeling_llama: Should torch.cat be repeat_interleave? | {
"login": "abhidipbhattacharyya",
"id": 31606185,
"node_id": "MDQ6VXNlcjMxNjA2MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/31606185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhidipbhattacharyya",
"html_url": "https://github.com/abhidipbhattacharyya",
"followers_url": "https://api.github.com/users/abhidipbhattacharyya/followers",
"following_url": "https://api.github.com/users/abhidipbhattacharyya/following{/other_user}",
"gists_url": "https://api.github.com/users/abhidipbhattacharyya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhidipbhattacharyya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhidipbhattacharyya/subscriptions",
"organizations_url": "https://api.github.com/users/abhidipbhattacharyya/orgs",
"repos_url": "https://api.github.com/users/abhidipbhattacharyya/repos",
"events_url": "https://api.github.com/users/abhidipbhattacharyya/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhidipbhattacharyya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T16:32:41 | 2025-08-27T10:01:11 | 2025-08-27T10:01:11 | NONE | null | null | null | null | Hi,
I was going through the code for `modeling_llama` and the RoPE implementation. I came across the following function:
```python
def forward(self, x, position_ids):
    inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
    position_ids_expanded = position_ids[:, None, :].float()
    device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
    with torch.autocast(device_type=device_type, enabled=False):  # Force float32
        freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
        emb = torch.cat((freqs, freqs), dim=-1)
        cos = emb.cos() * self.attention_scaling
        sin = emb.sin() * self.attention_scaling
    return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
```
I believe the line `emb = torch.cat((freqs, freqs), dim=-1)` should be replaced with `repeat_interleave`. This is because the cosine/sine angles for matrix multiplication should be structured like:
```
[cos(θ₁), cos(θ₁), cos(θ₂), cos(θ₂), cos(θ₃), cos(θ₃), ...]
```
This way, further down the stream when we compute:
```
q_embed = (q * cos) + (rotate_half(q) * sin)
```
...the values are aligned properly for pairwise rotation. However, the current `torch.cat((freqs, freqs), dim=-1)` produces:
```
[cos(θ₁), cos(θ₂), cos(θ₃), cos(θ₁), cos(θ₂), cos(θ₃), ...]
```
which seems incorrect. Am I missing something?
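(Editorial aside: the two layouts are in fact both valid — they differ only by a fixed permutation of the hidden dimensions, and since the same permutation is applied consistently to `q` and `k`, the attention scores are unchanged. A standalone check with hypothetical helper names:)

```python
import math

def rope_half(x, theta):
    # HF convention: emb = cat(freqs, freqs); rotate_half pairs dim i
    # with dim i + d/2, both rotated by theta[i].
    half = len(x) // 2
    out = [0.0] * len(x)
    for i, t in enumerate(theta):
        c, s = math.cos(t), math.sin(t)
        out[i] = x[i] * c - x[i + half] * s
        out[i + half] = x[i + half] * c + x[i] * s
    return out

def rope_interleaved(x, theta):
    # Paper convention: adjacent pairs (x0, x1), (x2, x3), ... are rotated.
    out = [0.0] * len(x)
    for i, t in enumerate(theta):
        c, s = math.cos(t), math.sin(t)
        out[2 * i] = x[2 * i] * c - x[2 * i + 1] * s
        out[2 * i + 1] = x[2 * i + 1] * c + x[2 * i] * s
    return out

d = 8
theta = [0.3 * (i + 1) for i in range(d // 2)]
q = [0.1 * (i + 1) for i in range(d)]
k = [0.2 * (d - i) for i in range(d)]

# Fixed permutation taking the interleaved layout to the half-split layout.
perm = list(range(0, d, 2)) + list(range(1, d, 2))
permute = lambda x: [x[p] for p in perm]
dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))

# The two conventions differ only by this reordering of hidden dims...
lhs = rope_half(permute(q), theta)
rhs = permute(rope_interleaved(q, theta))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))

# ...so, applied consistently to q and k, attention scores are identical.
s_half = dot(rope_half(permute(q), theta), rope_half(permute(k), theta))
s_inter = dot(rope_interleaved(q, theta), rope_interleaved(k, theta))
assert abs(s_half - s_inter) < 1e-12
```

The practical consequence is that a checkpoint trained (or weight-converted) under one layout must be used with the matching rotation, which is what the transformers implementation does.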
Thanks,
Abhidip | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40462/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40461/comments | https://api.github.com/repos/huggingface/transformers/issues/40461/events | https://github.com/huggingface/transformers/issues/40461 | 3,356,197,876 | I_kwDOCUB6oc7IC4P0 | 40,461 | AttributeError: 'GptOssConfig' object has no attribute 'max_position_embeddings' | {
"login": "speedbunny",
"id": 950677,
"node_id": "MDQ6VXNlcjk1MDY3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/950677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/speedbunny",
"html_url": "https://github.com/speedbunny",
"followers_url": "https://api.github.com/users/speedbunny/followers",
"following_url": "https://api.github.com/users/speedbunny/following{/other_user}",
"gists_url": "https://api.github.com/users/speedbunny/gists{/gist_id}",
"starred_url": "https://api.github.com/users/speedbunny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedbunny/subscriptions",
"organizations_url": "https://api.github.com/users/speedbunny/orgs",
"repos_url": "https://api.github.com/users/speedbunny/repos",
"events_url": "https://api.github.com/users/speedbunny/events{/privacy}",
"received_events_url": "https://api.github.com/users/speedbunny/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-26T15:44:30 | 2025-08-27T09:16:59 | 2025-08-27T09:16:59 | NONE | null | null | null | null | ### System Info
Kaggle script running gpt-oss-20b that worked this morning (pulls transformers from GitHub) has just started giving this error:
`'GptOssConfig' object has no attribute 'max_position_embeddings'`
Not a big user and it's a plug and play script so can't elaborate much, sorry.
Here's the env!
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.56.0.dev0
- Platform: Linux-6.6.56+-x86_64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): 2.18.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.6 (gpu)
- Jax version: 0.5.2
- JaxLib version: 0.5.1
- Using distributed or parallel set-up in script?: Yes
- Using GPU in script?: Yes
- GPU type: Tesla T4
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
# Load model & tokenizer (keep it simple for now)
tokenizer = AutoTokenizer.from_pretrained(model_id, device_map="auto")
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```
### Expected behavior
Script ran w/o errors this morning, powers multiple notebooks. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40461/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40460/comments | https://api.github.com/repos/huggingface/transformers/issues/40460/events | https://github.com/huggingface/transformers/issues/40460 | 3,356,083,891 | I_kwDOCUB6oc7ICcaz | 40,460 | Gemma-3-12b-pt high grad_norm during continual pre-training | {
"login": "marseller",
"id": 54594235,
"node_id": "MDQ6VXNlcjU0NTk0MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/54594235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marseller",
"html_url": "https://github.com/marseller",
"followers_url": "https://api.github.com/users/marseller/followers",
"following_url": "https://api.github.com/users/marseller/following{/other_user}",
"gists_url": "https://api.github.com/users/marseller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marseller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marseller/subscriptions",
"organizations_url": "https://api.github.com/users/marseller/orgs",
"repos_url": "https://api.github.com/users/marseller/repos",
"events_url": "https://api.github.com/users/marseller/events{/privacy}",
"received_events_url": "https://api.github.com/users/marseller/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-26T15:14:38 | 2025-10-03T12:29:03 | 2025-10-03T12:29:03 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.3
- Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: 0.17.4
- PyTorch version (accelerator?): 2.7.1+cu126 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?:
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
### Who can help?
@zach-huggingface @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am performing domain-specific continual pre-training on Gemma-3-12B-PT. During the first 100 million tokens, I start encountering issues: the gradient_norm spikes to extremely high values, and the model’s outputs degrade, even when using a very small learning rate of 5e-7 and max_grad_norm 1.0.
I am using exactly the same data and setup with LLaMA-3.1-8B, and in that case the gradient_norm behaves as expected. The training is run with the standard `Trainer` class; the only change between the two runs was the learning rate, which does not directly affect the computed gradients. Both pre-trained models use the tokenizer's bos and eos tokens for tokenization.
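(Editorial aside: the logged `grad_norm` is the global L2 norm over all parameter gradients, measured before clipping — which is why it can spike far above `max_grad_norm=1.0` even though clipping is enabled. A minimal sketch of the clipping arithmetic, with hypothetical helper names:)

```python
import math

def global_grad_norm(grads):
    # Global L2 norm across all gradient tensors (flat lists here),
    # as reported in training logs before clipping is applied.
    return math.sqrt(sum(g * g for grad in grads for g in grad))

def clip_by_global_norm(grads, max_norm, eps=1e-6):
    # Scale all gradients by the same factor so the global norm
    # does not exceed max_norm; gradients below the cap pass through.
    norm = global_grad_norm(grads)
    scale = min(1.0, max_norm / (norm + eps))
    return [[g * scale for g in grad] for grad in grads]

grads = [[3.0, 4.0], [0.0]]  # global norm = sqrt(9 + 16) = 5
assert global_grad_norm(grads) == 5.0
clipped = clip_by_global_norm(grads, 1.0)
assert abs(global_grad_norm(clipped) - 1.0) < 1e-5
```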
I am using accelerate to launch the training script and initialize deepspeed distributed training.
accelerate launch --config_file "deepspeed_config.yaml" train.py
deepspeed_config.yaml:
```yaml
compute_environment: LOCAL_MACHINE
debug: true
deepspeed_config:
  deepspeed_config_file: deepspeed_config.json
  zero3_init_flag: false
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Gemma-3-12B grad_norm
<img width="1314" height="351" alt="Image" src="https://github.com/user-attachments/assets/af7b1e9d-6276-43fd-a3bd-19ae8bff480d" />
Llama-3.1-8B grad_norm
<img width="1300" height="336" alt="Image" src="https://github.com/user-attachments/assets/b40e43af-c29a-4296-a678-d7b7b072e96b" />
```python
if training_args.resume_from_checkpoint is not None:
    checkpoint_path = snapshot_download(
        repo_id=training_args.hub_model_id,
        allow_patterns=[f"{training_args.resume_from_checkpoint}/*"],
        local_dir="/opt/ml/model/checkpoints",
    )
    checkpoint = f"{checkpoint_path}/{training_args.resume_from_checkpoint}"
    model_args.model_name_or_path = f"{checkpoint_path}/{training_args.resume_from_checkpoint}"

model, tokenizer = create_and_prepare_model(model_args)
model.config.use_cache = not training_args.gradient_checkpointing
if training_args.gradient_checkpointing:
    training_args.gradient_checkpointing_kwargs = {"use_reentrant": model_args.use_reentrant}

train_dataset, eval_dataset = create_datasets_with_tokenizer(tokenizer, data_args)
data_collator = DataCollatorWithFlattening()
trainer = Trainer(
    model=model,
    data_collator=data_collator,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.accelerator.print(f"{trainer.model}")
if hasattr(trainer.model, "print_trainable_parameters"):
    trainer.model.print_trainable_parameters()

print(f"Starting training with model: {model_args.model_name_or_path} {checkpoint}")
trainer.train(resume_from_checkpoint=checkpoint)

if trainer.is_fsdp_enabled:
    trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
trainer.save_model()
```
### Expected behavior
lower gradient updates for Gemma-3-12b-pt | {
"login": "marseller",
"id": 54594235,
"node_id": "MDQ6VXNlcjU0NTk0MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/54594235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marseller",
"html_url": "https://github.com/marseller",
"followers_url": "https://api.github.com/users/marseller/followers",
"following_url": "https://api.github.com/users/marseller/following{/other_user}",
"gists_url": "https://api.github.com/users/marseller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marseller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marseller/subscriptions",
"organizations_url": "https://api.github.com/users/marseller/orgs",
"repos_url": "https://api.github.com/users/marseller/repos",
"events_url": "https://api.github.com/users/marseller/events{/privacy}",
"received_events_url": "https://api.github.com/users/marseller/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40460/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40459/comments | https://api.github.com/repos/huggingface/transformers/issues/40459/events | https://github.com/huggingface/transformers/issues/40459 | 3,355,672,276 | I_kwDOCUB6oc7IA37U | 40,459 | `use_kernels=True` does not invoke custom kernels | {
"login": "ariG23498",
"id": 36856589,
"node_id": "MDQ6VXNlcjM2ODU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ariG23498",
"html_url": "https://github.com/ariG23498",
"followers_url": "https://api.github.com/users/ariG23498/followers",
"following_url": "https://api.github.com/users/ariG23498/following{/other_user}",
"gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions",
"organizations_url": "https://api.github.com/users/ariG23498/orgs",
"repos_url": "https://api.github.com/users/ariG23498/repos",
"events_url": "https://api.github.com/users/ariG23498/events{/privacy}",
"received_events_url": "https://api.github.com/users/ariG23498/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-26T13:32:35 | 2025-09-16T08:50:55 | 2025-09-16T08:50:55 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.56.0.dev0
- Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31
- Python version: 3.12.7
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@ArthurZucker
### Reproduction
```python
import logging
logging.basicConfig(level=logging.INFO)
import torch
from transformers import (
AutoTokenizer, AutoModelForCausalLM,
)
model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
    device_map="auto",
use_kernels=True,
).eval()
messages = [
{"role": "system", "content": "What is Tensor Parallelism?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="low",
).to(model.device)
with torch.inference_mode():
generated = model.generate(
**inputs,
do_sample=False,
temperature=None,
max_new_tokens=64,
disable_compile=True,
)
decoded_generation = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
print(decoded_generation)
```
### Expected behavior
Since I have enabled logging, I should be able to see log messages for all the custom kernels being invoked. While `LigerRMSNorm` is invoked, I do not see `MegaBlocksMoeMLP` being used as it should be (as [stated in the modelling file here](https://github.com/huggingface/transformers/blob/263d06fedc17bb28f70dabe2acae562bc617ef9b/src/transformers/models/gpt_oss/modeling_gpt_oss.py#L156)).
I also note that although `LigerRMSNorm` is invoked, it complains that it cannot be used because it is not compatible with compile:
```
INFO:root:Using layer `LigerRMSNorm` from repo `kernels-community/liger_kernels` (revision: main) for layer `LigerRMSNorm`
INFO:root:Layer does not support torch.compile, using fallback
```
I passed `disable_compile=True` to the `.generate()` method, which should have taken care of the issue.
### Solution
The way I could invoke the custom kernels was to swap out these lines:
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L5241-L5243
With the following
```py
from kernels import Device, Mode, kernelize
kernelize(model, device=Device(type=model.device.type), mode=Mode.INFERENCE)
```
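Since the workaround above hard-codes `Mode.INFERENCE`, one way to infer the mode from the model's state could look like the following. This is purely illustrative: the `Mode` enum below is a stdlib stand-in for `kernels.Mode`, and `infer_kernelize_mode` is a hypothetical helper, not existing library code.

```python
from enum import Enum

class Mode(Enum):
    # stand-in for kernels.Mode; the real enum lives in the `kernels` package
    INFERENCE = "inference"
    TRAINING = "training"

def infer_kernelize_mode(model) -> Mode:
    # torch nn.Modules carry a `.training` flag; `.eval()` sets it to False
    return Mode.TRAINING if getattr(model, "training", False) else Mode.INFERENCE

class FakeEvalModel:
    training = False  # mimics a model after .eval()

print(infer_kernelize_mode(FakeEvalModel()).name)  # INFERENCE
```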
While this is not the proper fix (we should infer which mode the model is in), I am listing my current workaround here for ease of ideation. | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40459/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40459/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40458/comments | https://api.github.com/repos/huggingface/transformers/issues/40458/events | https://github.com/huggingface/transformers/pull/40458 | 3,355,612,685 | PR_kwDOCUB6oc6laPA6 | 40,458 | Add bfloat16 support detection for MPS in is_torch_bf16_gpu_available() | {
"login": "andrerom",
"id": 289757,
"node_id": "MDQ6VXNlcjI4OTc1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/289757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andrerom",
"html_url": "https://github.com/andrerom",
"followers_url": "https://api.github.com/users/andrerom/followers",
"following_url": "https://api.github.com/users/andrerom/following{/other_user}",
"gists_url": "https://api.github.com/users/andrerom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andrerom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andrerom/subscriptions",
"organizations_url": "https://api.github.com/users/andrerom/orgs",
"repos_url": "https://api.github.com/users/andrerom/repos",
"events_url": "https://api.github.com/users/andrerom/events{/privacy}",
"received_events_url": "https://api.github.com/users/andrerom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T13:17:06 | 2025-09-01T08:26:15 | 2025-08-29T14:37:16 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40458",
"html_url": "https://github.com/huggingface/transformers/pull/40458",
"diff_url": "https://github.com/huggingface/transformers/pull/40458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40458.patch",
"merged_at": "2025-08-29T14:37:16"
} | Add `bfloat16` support detection for MPS (Apple Silicon) in `is_torch_bf16_gpu_available()`
# What does this PR do?
Makes sure `bf16` usage is allowed on MPS instead of throwing "Your setup doesn't support bf16/gpu." from `TrainingArguments`.
### Why?
`bfloat16` has been supported for a few years now in Metal [1](https://www.reddit.com/r/LocalLLaMA/comments/14pz4v0/apples_metal_is_getting_bfloat16_support/) and in torch.mps [2](https://github.com/pytorch/pytorch/issues/150121) [3](https://github.com/pytorch/pytorch/issues/141864).
**IMPORTANT**! On Apple M1 and M2, bfloat16 is emulated in software _(by Apple's Metal framework)_ on top of float32 hardware, meaning there is no performance benefit, so on an M1 or M2 you should rather use float16 or float32.
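As a minimal sketch of the kind of runtime check involved (illustrative only; the actual change lives in `is_torch_bf16_gpu_available()`, and the tensor-allocation probe here is an assumption, not the PR's code):

```python
import importlib.util

def mps_bf16_available() -> bool:
    """Illustrative probe; returns False when torch or MPS is unavailable."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    mps = getattr(torch.backends, "mps", None)
    if mps is None or not mps.is_available():
        return False
    try:
        # allocating a bf16 tensor on MPS fails on builds without support
        torch.ones(1, dtype=torch.bfloat16, device="mps")
        return True
    except (TypeError, RuntimeError):
        return False

print(mps_bf16_available())
```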
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40458/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40457/comments | https://api.github.com/repos/huggingface/transformers/issues/40457/events | https://github.com/huggingface/transformers/issues/40457 | 3,355,572,588 | I_kwDOCUB6oc7IAfls | 40,457 | Image Embedding Models (Feature extractors) should have a `.hidden_size` | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-08-26T13:06:24 | 2025-09-07T17:08:37 | null | NONE | null | null | null | null | ### Feature request
Image models should have a `.hidden_size` getter that determines the model's output embedding size (given its configuration at that time).
### Motivation
It is currently left as an exercise to the user to figure out the hidden size of image embedding models.
One must read the documentation and then implement their own logic to figure out the sizes.
For example, from:
https://github.com/sign/image-latent-transformer/blob/8c2c4bc5b9a1fb84c0a5fced19acf67fac7f1093/image_latent_transformer/utils.py#L96-L114
```py
config = getattr(image_encoder, 'config', {})
if hasattr(config, 'vision_config'):
config = config.vision_config
if hasattr(config, 'hidden_size'):
return config.hidden_size
# https://huggingface.co/docs/transformers/model_doc/mobilevit#transformers.MobileViTModel
# If expand_output, the model will apply an additional 1x1 convolution to expand the output channels
# from config.neck_hidden_sizes[5] to config.neck_hidden_sizes[6].
if hasattr(config, 'neck_hidden_sizes'):
if getattr(image_encoder, 'expand_output', False):
return config.neck_hidden_sizes[-1]
return config.neck_hidden_sizes[-2]
if hasattr(config, 'hidden_sizes'):
return config.hidden_sizes[-1]
raise UnknownImageEncoderError()
```
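As a self-contained illustration, the heuristics above could be wrapped into a single default helper; the name `default_hidden_size` and the stand-in configs are hypothetical, not the actual transformers API:

```python
from types import SimpleNamespace

def default_hidden_size(image_encoder):
    """Hedged sketch of the proposed default; mirrors the heuristics above."""
    config = getattr(image_encoder, "config", SimpleNamespace())
    # vision-language models nest the relevant sizes in a vision_config
    config = getattr(config, "vision_config", config)
    if hasattr(config, "hidden_size"):
        return config.hidden_size
    if hasattr(config, "neck_hidden_sizes"):
        # MobileViT-style: last entry only when the output is expanded
        idx = -1 if getattr(image_encoder, "expand_output", False) else -2
        return config.neck_hidden_sizes[idx]
    if hasattr(config, "hidden_sizes"):
        return config.hidden_sizes[-1]
    raise ValueError("Unknown image encoder")

enc = SimpleNamespace(config=SimpleNamespace(hidden_size=768))
print(default_hidden_size(enc))  # 768
```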
### Your contribution
I can make a PR that creates the default behavior for models (a la, the above code).
Then, it is on the model developers to correctly extend it if needed, and on Hugging Face maintainers to extend the default to other classes of models, since I clearly did not cover all of them. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40457/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40456/comments | https://api.github.com/repos/huggingface/transformers/issues/40456/events | https://github.com/huggingface/transformers/pull/40456 | 3,355,451,133 | PR_kwDOCUB6oc6lZsdn | 40,456 | [modular] Remove ambiguity in all calls to parent class methods + fix dependency graph | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T12:30:36 | 2025-08-27T12:51:30 | 2025-08-27T12:51:28 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40456",
"html_url": "https://github.com/huggingface/transformers/pull/40456",
"diff_url": "https://github.com/huggingface/transformers/pull/40456.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40456.patch",
"merged_at": "2025-08-27T12:51:28"
} | # What does this PR do?
A lot of modular files had incorrect calls to parent-class methods (in order to skip unravelling the definition), making them non-pythonic files. This is now fixed, and the converter is much more robust about this.
Here are the new rules to make this process much more pythonic, such that modular files are correct python files:
- When we want to skip unravelling the parent's code (i.e. we do NOT want to call `super`), call the method on the actual class whose implementation we want to use
- If it's a grand-parent of the inherited class, no need to add the base again in the MRO: e.g., if we inherit from `LlamaMLP` and we want to call `nn.Module.__init__(...)`, there is no need to re-add `nn.Module` as a new base, as it's already part of the MRO (`LlamaMLP` is a `nn.Module`, so it's the grand-parent)
- The converter will replace such class calls with `super()` to keep Python's best practices when the class called is one of the direct parents of the generated code
Overall, those rules are much more natural and pythonic, and clear the ambiguity that exists currently.
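The grand-parent rule can be illustrated with plain Python classes (simplified stand-ins, not actual transformers code):

```python
class Module:  # stand-in for nn.Module
    def __init__(self):
        self.init_by = "Module"

class LlamaMLP(Module):  # simplified stand-in for the real LlamaMLP
    def __init__(self):
        super().__init__()
        self.init_by = "LlamaMLP"

class NewModelMLP(LlamaMLP):
    def __init__(self):
        # We want to skip LlamaMLP.__init__'s body, so we call the
        # grand-parent's __init__ directly; Module is already in the MRO,
        # so it is NOT re-added as an explicit base.
        Module.__init__(self)

print(NewModelMLP().init_by)  # Module: LlamaMLP.__init__ never ran
```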
Also, fix a bug in how we were creating the dependency graphs (some models could be skipped due to a non-exhaustive match).
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40456/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40455/comments | https://api.github.com/repos/huggingface/transformers/issues/40455/events | https://github.com/huggingface/transformers/pull/40455 | 3,355,399,976 | PR_kwDOCUB6oc6lZhn7 | 40,455 | Fix extra template loading | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T12:14:21 | 2025-08-26T13:01:03 | 2025-08-26T13:01:01 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40455",
"html_url": "https://github.com/huggingface/transformers/pull/40455",
"diff_url": "https://github.com/huggingface/transformers/pull/40455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40455.patch",
"merged_at": "2025-08-26T13:01:01"
} | We had a bug in how we loaded extra templates in the new format (where each template ends up in its own file). This sometimes affected template loading from the Hub. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40455/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40454/comments | https://api.github.com/repos/huggingface/transformers/issues/40454/events | https://github.com/huggingface/transformers/pull/40454 | 3,355,348,491 | PR_kwDOCUB6oc6lZWk0 | 40,454 | Fix 'T5GemmaConfig' object has no attribute 'num_hidden_layers' | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T11:57:51 | 2025-09-01T12:56:32 | 2025-09-01T12:56:32 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40454",
"html_url": "https://github.com/huggingface/transformers/pull/40454",
"diff_url": "https://github.com/huggingface/transformers/pull/40454.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40454.patch",
"merged_at": null
} | # What does this PR do?
Fixes `AttributeError: 'T5GemmaConfig' object has no attribute 'num_hidden_layers'` when training. I wasn't sure if this kind of issue is typically unit tested, so I'm happy to add a regression test if you can point me to where it should go :)
Minimal repro script below:
```sh
echo -e "Question: Why is the sky blue? Answer:" | transformers run --task text2text-generation --model google/t5gemma-b-b-ul2 --device 0
```
Stack trace on `main` (commit `58cebc848baa0af2e4ff159fb11504d94179f376`):
```
Traceback (most recent call last):
File "/fsx/lewis/git/hf/transformers/transformers/bin/transformers", line 8, in <module>
sys.exit(main())
^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/commands/transformers_cli.py", line 59, in main
service.run()
File "/fsx/lewis/git/hf/transformers/src/transformers/commands/run.py", line 99, in run
output = nlp(**entry) if self._reader.is_multi_columns else nlp(entry)
^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/pipelines/text2text_generation.py", line 191, in __call__
result = super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/pipelines/base.py", line 1467, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/pipelines/base.py", line 1474, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/pipelines/base.py", line 1374, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/pipelines/text2text_generation.py", line 220, in _forward
output_ids = self.model.generate(**model_inputs, **generate_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/transformers/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/generation/utils.py", line 2399, in generate
self._prepare_cache_for_generation(
File "/fsx/lewis/git/hf/transformers/src/transformers/generation/utils.py", line 2007, in _prepare_cache_for_generation
else EncoderDecoderCache(DynamicCache(**dynamic_cache_kwargs), DynamicCache(**dynamic_cache_kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/cache_utils.py", line 1019, in __init__
for _ in range(config.num_hidden_layers)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/lewis/git/hf/transformers/src/transformers/configuration_utils.py", line 207, in __getattribute__
return super().__getattribute__(key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'T5GemmaConfig' object has no attribute 'num_hidden_layers'
```
| {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40454/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40453/comments | https://api.github.com/repos/huggingface/transformers/issues/40453/events | https://github.com/huggingface/transformers/pull/40453 | 3,355,059,700 | PR_kwDOCUB6oc6lYYMm | 40,453 | feat(processing_utils): support optional attributes in ProcessorMixin | {
"login": "MengAiDev",
"id": 202287492,
"node_id": "U_kgDODA6phA",
"avatar_url": "https://avatars.githubusercontent.com/u/202287492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MengAiDev",
"html_url": "https://github.com/MengAiDev",
"followers_url": "https://api.github.com/users/MengAiDev/followers",
"following_url": "https://api.github.com/users/MengAiDev/following{/other_user}",
"gists_url": "https://api.github.com/users/MengAiDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MengAiDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MengAiDev/subscriptions",
"organizations_url": "https://api.github.com/users/MengAiDev/orgs",
"repos_url": "https://api.github.com/users/MengAiDev/repos",
"events_url": "https://api.github.com/users/MengAiDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/MengAiDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T10:23:57 | 2025-09-11T07:23:15 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40453",
"html_url": "https://github.com/huggingface/transformers/pull/40453",
"diff_url": "https://github.com/huggingface/transformers/pull/40453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40453.patch",
"merged_at": null
} | - Add functionality to handle optional processor attributes
- Update save_pretrained and from_pretrained methods to include optional attributes
- Modify processor_config.json handling to accommodate optional attributes
- Improve error handling and logging for optional attribute loading
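A toy sketch of the optional-attribute behavior described above (the class and attribute names are hypothetical stand-ins, not the actual `ProcessorMixin` API):

```python
class MiniProcessor:
    attributes = ["tokenizer"]                 # required sub-processors
    optional_attributes = ["audio_tokenizer"]  # hypothetical optional ones

    def __init__(self, tokenizer, audio_tokenizer=None):
        self.tokenizer = tokenizer
        self.audio_tokenizer = audio_tokenizer

    def to_dict(self):
        out = {name: getattr(self, name) for name in self.attributes}
        # only serialize optional attributes that were actually provided
        out.update({name: getattr(self, name)
                    for name in self.optional_attributes
                    if getattr(self, name) is not None})
        return out

print(MiniProcessor("tok").to_dict())  # {'tokenizer': 'tok'}
```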
# What does this PR do?
Fixes #40447
cc @ArthurZucker @amyeroberts, @qubvel @Rocketknight1 | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40453/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40452/comments | https://api.github.com/repos/huggingface/transformers/issues/40452/events | https://github.com/huggingface/transformers/pull/40452 | 3,354,747,841 | PR_kwDOCUB6oc6lXVzu | 40,452 | update aria to remove wrong attention usage | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T08:49:07 | 2025-08-28T13:46:45 | null | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40452",
"html_url": "https://github.com/huggingface/transformers/pull/40452",
"diff_url": "https://github.com/huggingface/transformers/pull/40452.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40452.patch",
"merged_at": null
} | # What does this PR do?
Not sure how this got in here, but this adapts aria cross-attention to our standards. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40452/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40451/comments | https://api.github.com/repos/huggingface/transformers/issues/40451/events | https://github.com/huggingface/transformers/pull/40451 | 3,354,712,743 | PR_kwDOCUB6oc6lXOjO | 40,451 | CI when PR merged to `main` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T08:37:44 | 2025-08-27T08:56:20 | 2025-08-27T08:56:19 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40451",
"html_url": "https://github.com/huggingface/transformers/pull/40451",
"diff_url": "https://github.com/huggingface/transformers/pull/40451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40451.patch",
"merged_at": "2025-08-27T08:56:19"
} | # What does this PR do?
This PR runs CI when a PR is merged to `main`.
We restrict it to the pre-defined list of important models.
A job run: https://github.com/huggingface/transformers/actions/runs/17234576559
The Slack notification: https://huggingface.slack.com/archives/C01QRHS4K3P/p1756202582644859
<img width="993" height="591" alt="Screenshot 2025-08-26 121801" src="https://github.com/user-attachments/assets/0bfaa10f-6224-4fa6-8124-a4594922ff4c" />
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40451/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40450/comments | https://api.github.com/repos/huggingface/transformers/issues/40450/events | https://github.com/huggingface/transformers/issues/40450 | 3,354,705,567 | I_kwDOCUB6oc7H9L6f | 40,450 | Add Seed OSS Model | {
"login": "MengAiDev",
"id": 202287492,
"node_id": "U_kgDODA6phA",
"avatar_url": "https://avatars.githubusercontent.com/u/202287492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MengAiDev",
"html_url": "https://github.com/MengAiDev",
"followers_url": "https://api.github.com/users/MengAiDev/followers",
"following_url": "https://api.github.com/users/MengAiDev/following{/other_user}",
"gists_url": "https://api.github.com/users/MengAiDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MengAiDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MengAiDev/subscriptions",
"organizations_url": "https://api.github.com/users/MengAiDev/orgs",
"repos_url": "https://api.github.com/users/MengAiDev/repos",
"events_url": "https://api.github.com/users/MengAiDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/MengAiDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-08-26T08:35:33 | 2025-08-26T08:56:24 | 2025-08-26T08:56:23 | CONTRIBUTOR | null | null | null | null | ### Model description
Seed-OSS is a series of open-source large language models developed by ByteDance's Seed Team, designed for powerful long-context, reasoning, agent and general capabilities, and versatile developer-friendly features. Although trained with only 12T tokens, Seed-OSS achieves excellent performance on several popular open benchmarks.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Github: https://github.com/ByteDance-Seed/seed-oss
Huggingface: https://huggingface.co/collections/ByteDance-Seed/seed-oss-68a609f4201e788db05b5dcd | {
"login": "MengAiDev",
"id": 202287492,
"node_id": "U_kgDODA6phA",
"avatar_url": "https://avatars.githubusercontent.com/u/202287492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MengAiDev",
"html_url": "https://github.com/MengAiDev",
"followers_url": "https://api.github.com/users/MengAiDev/followers",
"following_url": "https://api.github.com/users/MengAiDev/following{/other_user}",
"gists_url": "https://api.github.com/users/MengAiDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MengAiDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MengAiDev/subscriptions",
"organizations_url": "https://api.github.com/users/MengAiDev/orgs",
"repos_url": "https://api.github.com/users/MengAiDev/repos",
"events_url": "https://api.github.com/users/MengAiDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/MengAiDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40450/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40449/comments | https://api.github.com/repos/huggingface/transformers/issues/40449/events | https://github.com/huggingface/transformers/pull/40449 | 3,354,540,266 | PR_kwDOCUB6oc6lWqHP | 40,449 | Fix Gemma RMSNorm weight init | {
"login": "albertz",
"id": 59132,
"node_id": "MDQ6VXNlcjU5MTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertz",
"html_url": "https://github.com/albertz",
"followers_url": "https://api.github.com/users/albertz/followers",
"following_url": "https://api.github.com/users/albertz/following{/other_user}",
"gists_url": "https://api.github.com/users/albertz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertz/subscriptions",
"organizations_url": "https://api.github.com/users/albertz/orgs",
"repos_url": "https://api.github.com/users/albertz/repos",
"events_url": "https://api.github.com/users/albertz/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T07:44:35 | 2025-09-30T12:24:40 | 2025-09-30T12:16:26 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40449",
"html_url": "https://github.com/huggingface/transformers/pull/40449",
"diff_url": "https://github.com/huggingface/transformers/pull/40449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40449.patch",
"merged_at": null
} | # What does this PR do?
Gemma RMSNorm weight is used additively in `...*(1+weight)`,
thus it should be initialized with zero.
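For context, here is a minimal torch-free sketch of the parameterization quoted above (a simplified illustration, not Gemma's actual module): because the weight enters as `(1 + weight)`, a zero-initialized weight makes the layer a plain RMS normalization, whereas initializing it to one (the usual RMSNorm convention) would scale activations by 2.

```python
import math


def gemma_rms_norm(x, weight, eps=1e-6):
    """Gemma-style RMSNorm over a single vector `x` (plain-Python sketch).

    Unlike standard RMSNorm (`norm * weight`), Gemma applies the learned
    weight additively as `norm * (1 + weight)`, so the neutral
    initialization is 0, not 1.
    """
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [(v / rms) * (1.0 + w) for v, w in zip(x, weight)]
```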
Fixes #40224
## Who can review?
@ArthurZucker
| {
"login": "albertz",
"id": 59132,
"node_id": "MDQ6VXNlcjU5MTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertz",
"html_url": "https://github.com/albertz",
"followers_url": "https://api.github.com/users/albertz/followers",
"following_url": "https://api.github.com/users/albertz/following{/other_user}",
"gists_url": "https://api.github.com/users/albertz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertz/subscriptions",
"organizations_url": "https://api.github.com/users/albertz/orgs",
"repos_url": "https://api.github.com/users/albertz/repos",
"events_url": "https://api.github.com/users/albertz/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40449/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40448/comments | https://api.github.com/repos/huggingface/transformers/issues/40448/events | https://github.com/huggingface/transformers/pull/40448 | 3,354,264,325 | PR_kwDOCUB6oc6lVv9r | 40,448 | [model] Support MiniCPM-V 4.5 | {
"login": "tc-mb",
"id": 157115220,
"node_id": "U_kgDOCV1jVA",
"avatar_url": "https://avatars.githubusercontent.com/u/157115220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tc-mb",
"html_url": "https://github.com/tc-mb",
"followers_url": "https://api.github.com/users/tc-mb/followers",
"following_url": "https://api.github.com/users/tc-mb/following{/other_user}",
"gists_url": "https://api.github.com/users/tc-mb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tc-mb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tc-mb/subscriptions",
"organizations_url": "https://api.github.com/users/tc-mb/orgs",
"repos_url": "https://api.github.com/users/tc-mb/repos",
"events_url": "https://api.github.com/users/tc-mb/events{/privacy}",
"received_events_url": "https://api.github.com/users/tc-mb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T06:13:34 | 2025-08-26T14:50:52 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40448",
"html_url": "https://github.com/huggingface/transformers/pull/40448",
"diff_url": "https://github.com/huggingface/transformers/pull/40448.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40448.patch",
"merged_at": null
} | I’m bringing MiniCPM-V 4.5 in this PR.
Compared to the recently released MiniCPM-V 4.0, this new version puts more emphasis on performance metrics.
I am opening this PR first; some of the writing may still need to be modified. You can review this PR after MiniCPM-V 4.0 is merged into transformers: https://github.com/huggingface/transformers/pull/39899
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40448/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40448/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40447/comments | https://api.github.com/repos/huggingface/transformers/issues/40447/events | https://github.com/huggingface/transformers/issues/40447 | 3,354,043,515 | I_kwDOCUB6oc7H6qR7 | 40,447 | Processor does not load optional attributes | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-08-26T04:31:05 | 2025-10-16T08:19:23 | 2025-10-16T08:19:23 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.55.0.dev0
- Platform: macOS-15.6.1-arm64-arm-64bit
- Python version: 3.12.2
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
maybe @Rocketknight1
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Minimal reproduction:
If `image_processor` is in `attributes`, it works, but if it is in `optional_attributes`, it fails.
```python
import tempfile

from transformers import AutoImageProcessor, ProcessorMixin


class TextImageProcessor(ProcessorMixin):
    name = "text-image-processor"
    attributes = []
    image_processor_class = "AutoImageProcessor"
    optional_attributes = ["image_processor"]

    def __init__(self, image_processor: AutoImageProcessor = None):
        super().__init__(image_processor=image_processor)
        self.image_processor = image_processor
        # Not used in this processor, but necessary for "from_pretrained" compatibility
        self.chat_template = None
        self.audio_tokenizer = None


with tempfile.TemporaryDirectory() as temp_dir:
    image_processor = AutoImageProcessor.from_pretrained("WinKawaks/vit-tiny-patch16-224")
    processor = TextImageProcessor(image_processor=image_processor)
    processor.save_pretrained(save_directory=temp_dir, push_to_hub=False)
    new_processor = TextImageProcessor.from_pretrained(temp_dir)
    assert new_processor.image_processor is not None
```
### Expected behavior
Both attributes AND optional attributes should be loaded and initialized.
Right now it seems like it would only load optional attributes if they are called `audio_tokenizer`. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40447/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40446/comments | https://api.github.com/repos/huggingface/transformers/issues/40446/events | https://github.com/huggingface/transformers/pull/40446 | 3,353,862,343 | PR_kwDOCUB6oc6lUa2N | 40,446 | Add convert_segmentation_map_to_binary_masks_sorted function for hand… | {
"login": "Ahmed-G-ElTaher",
"id": 124341899,
"node_id": "U_kgDOB2lOiw",
"avatar_url": "https://avatars.githubusercontent.com/u/124341899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ahmed-G-ElTaher",
"html_url": "https://github.com/Ahmed-G-ElTaher",
"followers_url": "https://api.github.com/users/Ahmed-G-ElTaher/followers",
"following_url": "https://api.github.com/users/Ahmed-G-ElTaher/following{/other_user}",
"gists_url": "https://api.github.com/users/Ahmed-G-ElTaher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ahmed-G-ElTaher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ahmed-G-ElTaher/subscriptions",
"organizations_url": "https://api.github.com/users/Ahmed-G-ElTaher/orgs",
"repos_url": "https://api.github.com/users/Ahmed-G-ElTaher/repos",
"events_url": "https://api.github.com/users/Ahmed-G-ElTaher/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ahmed-G-ElTaher/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-26T02:50:39 | 2025-09-23T04:36:32 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40446",
"html_url": "https://github.com/huggingface/transformers/pull/40446",
"diff_url": "https://github.com/huggingface/transformers/pull/40446.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40446.patch",
"merged_at": null
} | ## Pull Request: Add Support for Handling Overlapping Annotations in Mask2Former
### Problem
The current image processing pipeline for Mask2Former doesn't handle overlapping annotations correctly. When annotations overlap, the processing order is arbitrary, causing larger objects to sometimes be overwritten by smaller ones. This can lead to information loss and incorrect segmentation masks.
### Solution
This PR introduces a new function, `convert_segmentation_map_to_binary_masks_sorted`, that addresses this issue by processing object instances in descending order of their area. This ensures that smaller objects consistently appear on top of larger objects, preserving all annotation information in overlapping regions.
The implementation draws inspiration from the approach used in the `coco2masks` function, which sorts annotations by area before processing them. This enhancement is particularly beneficial for datasets with significant object overlaps.
### Changes
A new function `convert_segmentation_map_to_binary_masks_sorted` has been added with the following features:
- Accepts an optional `sort_by_area` parameter (default: `True`).
- Calculates the area of each instance when sorting is enabled.
- Processes instances in descending order of their area.
This new function can serve as a direct replacement for the existing conversion function, providing improved handling of overlapping regions.
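The area-sorting idea can be sketched as follows. This is a simplified illustration of the approach described above, not the PR's actual implementation (which operates on the full Mask2Former preprocessing inputs): instances are extracted from an integer segmentation map, sorted by pixel area in descending order, and returned as a stack of binary masks so that painting them back in order leaves smaller objects on top.

```python
import numpy as np


def convert_segmentation_map_to_binary_masks_sorted(segmentation_map, sort_by_area=True):
    """Split an integer instance map into per-instance binary masks.

    With `sort_by_area=True`, masks are ordered largest-first, so that
    repainting them in order keeps smaller overlapping objects visible
    on top of larger ones. Instance id 0 is treated as background.
    """
    instance_ids = np.unique(segmentation_map)
    instance_ids = instance_ids[instance_ids != 0]  # drop background
    pairs = [(int(i), segmentation_map == i) for i in instance_ids]
    if sort_by_area:
        pairs.sort(key=lambda p: p[1].sum(), reverse=True)
    if not pairs:
        return np.zeros((0, *segmentation_map.shape), dtype=bool), np.array([], dtype=int)
    labels = np.array([i for i, _ in pairs])
    masks = np.stack([m for _, m in pairs])
    return masks, labels
```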
### Testing
- Tested with sample COCO annotations containing overlapping objects.
- Verified that all objects are correctly represented in the output masks.
- Confirmed compatibility with existing Mask2Former models.
### Performance Impact
While the area calculation introduces a minor computational overhead, its impact is negligible within the overall processing pipeline. This step occurs during preprocessing rather than during model inference. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40446/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40445/comments | https://api.github.com/repos/huggingface/transformers/issues/40445/events | https://github.com/huggingface/transformers/pull/40445 | 3,353,845,193 | PR_kwDOCUB6oc6lUXLL | 40,445 | [i18n-KO] Translated `big_bird.md` to Korean | {
"login": "ssum21",
"id": 116950962,
"node_id": "U_kgDOBviHsg",
"avatar_url": "https://avatars.githubusercontent.com/u/116950962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ssum21",
"html_url": "https://github.com/ssum21",
"followers_url": "https://api.github.com/users/ssum21/followers",
"following_url": "https://api.github.com/users/ssum21/following{/other_user}",
"gists_url": "https://api.github.com/users/ssum21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ssum21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssum21/subscriptions",
"organizations_url": "https://api.github.com/users/ssum21/orgs",
"repos_url": "https://api.github.com/users/ssum21/repos",
"events_url": "https://api.github.com/users/ssum21/events{/privacy}",
"received_events_url": "https://api.github.com/users/ssum21/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-26T02:42:22 | 2025-10-16T18:23:57 | 2025-10-16T18:23:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40445",
"html_url": "https://github.com/huggingface/transformers/pull/40445",
"diff_url": "https://github.com/huggingface/transformers/pull/40445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40445.patch",
"merged_at": "2025-10-16T18:23:57"
} | # What does this PR do?
Translated the `big_bird.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Only reveal the comment below requesting review from KREW team members after all the checks above are complete! -->
May you please review this PR?
@4N3MONE, @Kim-Ju-won, @ahnjj, @FacerAin, @ssum21, @TaskerJang, @HyunZ118
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Reveal the comment below only after the KREW team members' review is finished! -->
<!--- For transformers and course it's @stevhliu, for agent-course @sergiopanieg, for smol-agents @albertvillanova! --->
@stevhliu May you please review this PR?
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40445/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40444/comments | https://api.github.com/repos/huggingface/transformers/issues/40444/events | https://github.com/huggingface/transformers/issues/40444 | 3,353,286,554 | I_kwDOCUB6oc7H3xea | 40,444 | Finetuning Qwen2.5-VL with an IterableDataset with multiple images per prompt fails | {
"login": "Infernaught",
"id": 72055086,
"node_id": "MDQ6VXNlcjcyMDU1MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/72055086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Infernaught",
"html_url": "https://github.com/Infernaught",
"followers_url": "https://api.github.com/users/Infernaught/followers",
"following_url": "https://api.github.com/users/Infernaught/following{/other_user}",
"gists_url": "https://api.github.com/users/Infernaught/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Infernaught/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Infernaught/subscriptions",
"organizations_url": "https://api.github.com/users/Infernaught/orgs",
"repos_url": "https://api.github.com/users/Infernaught/repos",
"events_url": "https://api.github.com/users/Infernaught/events{/privacy}",
"received_events_url": "https://api.github.com/users/Infernaught/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-25T21:38:56 | 2025-10-04T08:02:15 | 2025-10-04T08:02:15 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.55.3
- Platform: Linux-5.4.0-1113-oracle-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: 0.16.9
- PyTorch version (accelerator?): 2.6.0+cu124 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes, for training
- GPU type: NVIDIA RTX A5000
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here's a script that reproduces this issue:
```python
from datasets import load_dataset
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import get_peft_model, LoraConfig

dataset = load_dataset("unsloth/LaTeX_OCR", split="train[:20]", streaming=False)

model = AutoModelForVision2Seq.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct"
)
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct",
)

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "qkv_proj", "up_proj", "down_proj"],
)
model = get_peft_model(
    model=model,
    peft_config=peft_config,
)

instruction = "Write the LaTeX representation for this image."

def convert_to_conversation(sample):
    conversation = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image", "image": sample["image"]},
                {"type": "image", "image": sample["image"].resize((sample["image"].size[0] // 2, sample["image"].size[1] // 2))},
            ],
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": sample["text"]},
            ],
        },
    ]
    return {"messages": conversation}

def generator():
    for sample in dataset:
        yield convert_to_conversation(sample)

from datasets import IterableDataset

converted_dataset = IterableDataset.from_generator(generator)

from trl import SFTTrainer, SFTConfig
from qwen_vl_utils import process_vision_info

class DataCollator:
    def __init__(self, processor):
        self.processor = processor

    def __call__(self, examples):
        return self.process(examples)

    def process(self, examples):
        texts = [
            processor.apply_chat_template(example["messages"], tokenize=False)
            for example in examples
        ]
        image_inputs = [
            process_vision_info(example["messages"])[0]
            for example in examples
        ]
        model_inputs = processor(
            text=texts,
            images=image_inputs,
            return_tensors="pt",
            padding=True,
        )
        labels = model_inputs["input_ids"].clone()
        # mask padding tokens in labels
        labels[labels == processor.tokenizer.pad_token_id] = -100
        image_tokens = [151652, 151653, 151655]
        # mask image token IDs in the labels
        for image_token_id in image_tokens:
            labels[labels == image_token_id] = -100
        input_ids = model_inputs["input_ids"]
        attention_mask = model_inputs["attention_mask"]
        pixel_values = model_inputs["pixel_values"]
        image_grid_thw = model_inputs["image_grid_thw"]
        pixel_values = pixel_values.unsqueeze(0)
        return {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "pixel_values": pixel_values,
            "image_grid_thw": image_grid_thw,
            "labels": labels,
        }

trainer = SFTTrainer(
    model=model,
    train_dataset=converted_dataset,
    data_collator=DataCollator(processor),
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        warmup_steps=5,
        max_steps=30,
        # num_train_epochs=1,  # Set this instead of max_steps for full training runs
        learning_rate=2e-4,
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="cosine_with_restarts",
        seed=3407,
        # seed=42,
        output_dir="outputs",
        report_to="none",  # For Weights and Biases
        # You MUST put the below items for vision finetuning:
        remove_unused_columns=False,
        dataset_text_field="",
        dataset_kwargs={"skip_prepare_dataset": True},
        # max_seq_length=2048,
    ),
)

trainer_stats = trainer.train()
```
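The script above attaches two images to every sample, so each batch row owns two `(t, h, w)` entries in `image_grid_thw`. As a minimal sketch of the failure mode (this is hypothetical illustration code, not the actual accelerate dispatcher), slicing that tensor by batch size silently drops image rows whenever a sample carries more than one image:

```python
# Hypothetical sketch of a dispatcher that assumes one image per sample:
# it keeps only `batch_size` rows of image_grid_thw.
def slice_like_dispatcher(image_grid_thw, batch_size):
    return image_grid_thw[:batch_size]

# Two samples, each with a full-size and a half-size image -> four rows total.
grid = [(1, 20, 40), (1, 10, 20), (1, 16, 32), (1, 8, 16)]

kept = slice_like_dispatcher(grid, batch_size=2)
print(len(kept))  # 2 rows survive, but the 2 samples actually reference 4 images
```

With one image per sample the slice happens to be correct, which is why single-image training works; with multiple images per prompt the grid rows no longer line up with the image placeholder tokens in the batch.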
### Expected behavior
I would expect that training works as it does with only one image per example.
I wasn't sure whether to post this to the accelerate issues page or here given that this is occurring generally in training. I think I've traced this issue to the DataLoaderDispatcher whenever `self.slice_fn` is called (https://github.com/huggingface/accelerate/blob/5dd3d0b6901983bcb6de1b69687333a324382524/src/accelerate/data_loader.py#L880 and https://github.com/huggingface/accelerate/blob/5dd3d0b6901983bcb6de1b69687333a324382524/src/accelerate/data_loader.py#L911). For Qwen2.5-VL, in particular, it slices the `image_grid_thw` seemingly with the assumption that there is only one image per row (it slices by batch size). If this is the incorrect place to post this, I can move this issue to the accelerate repo. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40444/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40444/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40443/comments | https://api.github.com/repos/huggingface/transformers/issues/40443/events | https://github.com/huggingface/transformers/pull/40443 | 3,353,284,685 | PR_kwDOCUB6oc6lSed8 | 40,443 | Fix duplicate import block and improve import structure in init.py | {
"login": "himaenshuu",
"id": 118502619,
"node_id": "U_kgDOBxA02w",
"avatar_url": "https://avatars.githubusercontent.com/u/118502619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/himaenshuu",
"html_url": "https://github.com/himaenshuu",
"followers_url": "https://api.github.com/users/himaenshuu/followers",
"following_url": "https://api.github.com/users/himaenshuu/following{/other_user}",
"gists_url": "https://api.github.com/users/himaenshuu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/himaenshuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/himaenshuu/subscriptions",
"organizations_url": "https://api.github.com/users/himaenshuu/orgs",
"repos_url": "https://api.github.com/users/himaenshuu/repos",
"events_url": "https://api.github.com/users/himaenshuu/events{/privacy}",
"received_events_url": "https://api.github.com/users/himaenshuu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-25T21:37:58 | 2025-08-26T18:30:32 | 2025-08-26T12:53:08 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40443",
"html_url": "https://github.com/huggingface/transformers/pull/40443",
"diff_url": "https://github.com/huggingface/transformers/pull/40443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40443.patch",
"merged_at": null
} | • Removed the duplicate import block at the end that was importing utility functions already imported at the top.
• Added a TYPE_CHECKING guard for auto model imports to prevent circular dependencies.
• Included proper exception handling for optional PyTorch dependencies using a try/except block.
• Fixed the import order to ensure that utilities load before models. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40443/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40442/comments | https://api.github.com/repos/huggingface/transformers/issues/40442/events | https://github.com/huggingface/transformers/pull/40442 | 3,353,133,694 | PR_kwDOCUB6oc6lR-en | 40,442 | docs(pixtral): Update Pixtral model card to new format | {
"login": "BryanBradfo",
"id": 101939095,
"node_id": "U_kgDOBhN3lw",
"avatar_url": "https://avatars.githubusercontent.com/u/101939095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BryanBradfo",
"html_url": "https://github.com/BryanBradfo",
"followers_url": "https://api.github.com/users/BryanBradfo/followers",
"following_url": "https://api.github.com/users/BryanBradfo/following{/other_user}",
"gists_url": "https://api.github.com/users/BryanBradfo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BryanBradfo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BryanBradfo/subscriptions",
"organizations_url": "https://api.github.com/users/BryanBradfo/orgs",
"repos_url": "https://api.github.com/users/BryanBradfo/repos",
"events_url": "https://api.github.com/users/BryanBradfo/events{/privacy}",
"received_events_url": "https://api.github.com/users/BryanBradfo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-25T20:33:07 | 2025-08-27T18:38:52 | 2025-08-27T18:38:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40442",
"html_url": "https://github.com/huggingface/transformers/pull/40442",
"diff_url": "https://github.com/huggingface/transformers/pull/40442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40442.patch",
"merged_at": "2025-08-27T18:38:51"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This pull request updates the `pixtral.md` model card to align with the new standardized format, as requested in issue #36979.
The main changes include:
- Restructuring the document to follow the new standard layout (header, description, tip, hfoptions).
- Simplifying the model's description to be more accessible.
- Setting up the `hfoptions` block with the `AutoModel` code example from the previous documentation.
### A Note on Code Validation
I have done my best to prepare the `AutoModel` code example based on the existing documentation. However, due to hardware limitations (the Pixtral-12B model is too large for my local machine), I was unable to run and personally verify that this code works as expected with the latest `transformers` version.
Could a reviewer with the necessary computing resources please check if the `AutoModel` example runs correctly? Any help in adding the `Pipeline` and `transformers-cli` examples would also be greatly appreciated.
This PR is a starting point, and I'm eager to learn from your feedback to complete it.
Fixes #36979
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Based on the contribution guide for documentation: cc @stevhliu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40442/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40441/comments | https://api.github.com/repos/huggingface/transformers/issues/40441/events | https://github.com/huggingface/transformers/pull/40441 | 3,353,112,664 | PR_kwDOCUB6oc6lR5_5 | 40,441 | Fix collated reports model name entry | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-25T20:26:12 | 2025-08-25T20:36:32 | 2025-08-25T20:36:01 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40441",
"html_url": "https://github.com/huggingface/transformers/pull/40441",
"diff_url": "https://github.com/huggingface/transformers/pull/40441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40441.patch",
"merged_at": "2025-08-25T20:36:01"
} | null | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40441/timeline | null | null | null | null | true | true |