url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/39135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39135/comments | https://api.github.com/repos/huggingface/transformers/issues/39135/events | https://github.com/huggingface/transformers/pull/39135 | 3,189,297,058 | PR_kwDOCUB6oc6cvT4F | 39,135 | Several fixes for Gemma3n | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-06-30T17:25:46 | 2025-07-01T12:15:58 | 2025-07-01T08:34:53 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39135",
"html_url": "https://github.com/huggingface/transformers/pull/39135",
"diff_url": "https://github.com/huggingface/transformers/pull/39135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39135.patch",
"merged_at": "2025-07-01T08:34:53"
} | # What does this PR do?
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39135/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39134/comments | https://api.github.com/repos/huggingface/transformers/issues/39134/events | https://github.com/huggingface/transformers/pull/39134 | 3,189,061,548 | PR_kwDOCUB6oc6cuhHC | 39,134 | [video processors] Support float fps for precise frame sampling | {
"login": "zrohyun",
"id": 25281432,
"node_id": "MDQ6VXNlcjI1MjgxNDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/25281432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zrohyun",
"html_url": "https://github.com/zrohyun",
"followers_url": "https://api.github.com/users/zrohyun/followers",
"following_url": "https://api.github.com/users/zrohyun/following{/other_user}",
"gists_url": "https://api.github.com/users/zrohyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zrohyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zrohyun/subscriptions",
"organizations_url": "https://api.github.com/users/zrohyun/orgs",
"repos_url": "https://api.github.com/users/zrohyun/repos",
"events_url": "https://api.github.com/users/zrohyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/zrohyun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T16:07:37 | 2025-07-07T03:44:31 | 2025-07-07T03:43:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39134",
"html_url": "https://github.com/huggingface/transformers/pull/39134",
"diff_url": "https://github.com/huggingface/transformers/pull/39134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39134.patch",
"merged_at": "2025-07-07T03:43:43"
} | Enable fractional fps values (e.g., 1.5, 29.97) in video processors for more precise frame sampling control.
- Change fps type from int to float across all video processors
- Maintain backward compatibility with integer values
Extends: #38105
# What does this PR do?
This PR extends the video frame sampling functionality introduced in #38105 to support float fps values, enabling more precise control over video frame sampling rates.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39134/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39133/comments | https://api.github.com/repos/huggingface/transformers/issues/39133/events | https://github.com/huggingface/transformers/pull/39133 | 3,189,047,623 | PR_kwDOCUB6oc6cueLx | 39,133 | [serve] Cursor support, move docs into separate page, add more examples | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T16:03:28 | 2025-07-25T09:54:59 | 2025-07-03T16:04:16 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39133",
"html_url": "https://github.com/huggingface/transformers/pull/39133",
"diff_url": "https://github.com/huggingface/transformers/pull/39133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39133.patch",
"merged_at": "2025-07-03T16:04:16"
} | # What does this PR do?
This PR:
- Adds support for CORS requests (e.g. from Cursor)
- Changes emitted payload to avoid sending empty fields (as expected by Cursor)
- Moves `transformers serve` into the `Serving` docs page
- Adds more details to the existing `transformers serve` docs
- Adds example to use `transformers serve` with Jan and port forwarding 🤩
- Adds example to use `transformers serve` with Cursor with `ngrok` 🤩
- (Removes stray TF docs) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39133/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39132/comments | https://api.github.com/repos/huggingface/transformers/issues/39132/events | https://github.com/huggingface/transformers/pull/39132 | 3,189,033,345 | PR_kwDOCUB6oc6cubTO | 39,132 | Better typing for model.config | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8882772041,
"node_id": "LA_kwDOCUB6oc8AAAACEXRYSQ",
"url": "https://api.github.com/repos/huggingface/transformers/labels/typing",
"name": "typing",
"color": "DBA272",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-30T15:59:13 | 2025-07-16T12:50:36 | 2025-07-16T12:50:35 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39132",
"html_url": "https://github.com/huggingface/transformers/pull/39132",
"diff_url": "https://github.com/huggingface/transformers/pull/39132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39132.patch",
"merged_at": "2025-07-16T12:50:35"
} | # What does this PR do?
Makes the `model.config` type resolve to the model-specific config class
### On `main` (before)
`PretrainedConfig`, no model-specific attributes

### On branch (after)
`LlamaConfig` with specific attributes for Llama model

| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39132/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39131/comments | https://api.github.com/repos/huggingface/transformers/issues/39131/events | https://github.com/huggingface/transformers/pull/39131 | 3,188,736,506 | PR_kwDOCUB6oc6ctbr8 | 39,131 | Test fixes for Aria (and some Expectation for llava_next_video) | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T14:37:31 | 2025-07-02T21:41:14 | 2025-07-02T21:41:14 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39131",
"html_url": "https://github.com/huggingface/transformers/pull/39131",
"diff_url": "https://github.com/huggingface/transformers/pull/39131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39131.patch",
"merged_at": "2025-07-02T21:41:14"
} | This PR aims to fix some issues with the `aria` model tests:
- the image used in the test is fetched from a URL that is no longer up: I replaced the URL with the image used in `llava-vl`, which seems to be the same
- some tests have an outdated image token `<image>`, which I updated to the new `<|img|>` token
- there are some device / dtype issues that were fixed with `.to`
- added Expectations for AMD. Also did that for `llava_next_video`; it's in this PR because it's a little too small for a standalone PR imo.
All in all, this brings the number of failing tests from 21 to 7 in `aria`. The ones left over have to do with a larger issue of FA2 in the `AriaTextModel`, which seems a little out of scope here since it's modeling code.
| {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39131/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39130/comments | https://api.github.com/repos/huggingface/transformers/issues/39130/events | https://github.com/huggingface/transformers/pull/39130 | 3,188,655,791 | PR_kwDOCUB6oc6ctKO_ | 39,130 | add pin memory and block table | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-30T14:13:31 | 2025-08-11T15:26:17 | null | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39130",
"html_url": "https://github.com/huggingface/transformers/pull/39130",
"diff_url": "https://github.com/huggingface/transformers/pull/39130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39130.patch",
"merged_at": null
} | # What does this PR do?
Update CB to support block table | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39130/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39129/comments | https://api.github.com/repos/huggingface/transformers/issues/39129/events | https://github.com/huggingface/transformers/pull/39129 | 3,188,584,433 | PR_kwDOCUB6oc6cs68J | 39,129 | Add EXAONE 4.0 model | {
"login": "lgai-exaone",
"id": 176995546,
"node_id": "U_kgDOCoy82g",
"avatar_url": "https://avatars.githubusercontent.com/u/176995546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lgai-exaone",
"html_url": "https://github.com/lgai-exaone",
"followers_url": "https://api.github.com/users/lgai-exaone/followers",
"following_url": "https://api.github.com/users/lgai-exaone/following{/other_user}",
"gists_url": "https://api.github.com/users/lgai-exaone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lgai-exaone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lgai-exaone/subscriptions",
"organizations_url": "https://api.github.com/users/lgai-exaone/orgs",
"repos_url": "https://api.github.com/users/lgai-exaone/repos",
"events_url": "https://api.github.com/users/lgai-exaone/events{/privacy}",
"received_events_url": "https://api.github.com/users/lgai-exaone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-30T13:53:04 | 2025-09-23T13:35:01 | 2025-07-25T17:58:28 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39129",
"html_url": "https://github.com/huggingface/transformers/pull/39129",
"diff_url": "https://github.com/huggingface/transformers/pull/39129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39129.patch",
"merged_at": "2025-07-25T17:58:28"
} | # What does this PR do?
This PR adds the modeling code for EXAONE 4.0 ahead of the official model release by LG AI Research.
Model weights, test code, and documentation will be updated once the official release is available.
This contribution is licensed under the **MIT License**. Please refer to the attached NOTICE.md for the license notice.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39129/reactions",
"total_count": 9,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39129/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39128/comments | https://api.github.com/repos/huggingface/transformers/issues/39128/events | https://github.com/huggingface/transformers/pull/39128 | 3,188,501,822 | PR_kwDOCUB6oc6cso70 | 39,128 | Fix chat | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T13:30:55 | 2025-06-30T14:02:32 | 2025-06-30T13:47:48 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39128",
"html_url": "https://github.com/huggingface/transformers/pull/39128",
"diff_url": "https://github.com/huggingface/transformers/pull/39128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39128.patch",
"merged_at": "2025-06-30T13:47:48"
} | # What does this PR do?
Fix chat init
| {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39128/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39127/comments | https://api.github.com/repos/huggingface/transformers/issues/39127/events | https://github.com/huggingface/transformers/pull/39127 | 3,188,480,417 | PR_kwDOCUB6oc6cskQ- | 39,127 | Licenses | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T13:25:21 | 2025-06-30T13:38:32 | 2025-06-30T13:25:36 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39127",
"html_url": "https://github.com/huggingface/transformers/pull/39127",
"diff_url": "https://github.com/huggingface/transformers/pull/39127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39127.patch",
"merged_at": "2025-06-30T13:25:36"
} | Arthur's comments | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39127/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39126/comments | https://api.github.com/repos/huggingface/transformers/issues/39126/events | https://github.com/huggingface/transformers/pull/39126 | 3,188,266,252 | PR_kwDOCUB6oc6cr2jC | 39,126 | [Whisper] update token timestamps tests | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T12:15:01 | 2025-07-01T08:43:42 | 2025-06-30T19:55:36 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39126",
"html_url": "https://github.com/huggingface/transformers/pull/39126",
"diff_url": "https://github.com/huggingface/transformers/pull/39126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39126.patch",
"merged_at": "2025-06-30T19:55:36"
} | # What does this PR do?
Includes 3 fixes for token timestamps for Whisper:
1. nit due to `assertEqual` → `torch.testing.assert_close`
2. update values for A10
3. unhandled edge case (see added comments in `generation_whisper.py`)
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39126/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39125/comments | https://api.github.com/repos/huggingface/transformers/issues/39125/events | https://github.com/huggingface/transformers/pull/39125 | 3,188,227,254 | PR_kwDOCUB6oc6cruS- | 39,125 | Fix multimodal processor get duplicate arguments when receive kwargs for initialization | {
"login": "Isotr0py",
"id": 41363108,
"node_id": "MDQ6VXNlcjQxMzYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/41363108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Isotr0py",
"html_url": "https://github.com/Isotr0py",
"followers_url": "https://api.github.com/users/Isotr0py/followers",
"following_url": "https://api.github.com/users/Isotr0py/following{/other_user}",
"gists_url": "https://api.github.com/users/Isotr0py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Isotr0py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Isotr0py/subscriptions",
"organizations_url": "https://api.github.com/users/Isotr0py/orgs",
"repos_url": "https://api.github.com/users/Isotr0py/repos",
"events_url": "https://api.github.com/users/Isotr0py/events{/privacy}",
"received_events_url": "https://api.github.com/users/Isotr0py/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-06-30T12:01:26 | 2025-07-02T11:57:19 | 2025-07-02T11:57:15 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39125",
"html_url": "https://github.com/huggingface/transformers/pull/39125",
"diff_url": "https://github.com/huggingface/transformers/pull/39125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39125.patch",
"merged_at": "2025-07-02T11:57:15"
} | # What does this PR do?
Also reported in https://github.com/vllm-project/vllm/issues/19833
- Fix #38898
The current processor argument-resolving logic can hit `TypeError: Qwen2VLProcessor.__init__() got multiple values for argument 'tokenizer'` if we pass the tokenizer as a kwarg, because it removes unused args, which causes an incorrect argument order and raises `TypeError: Qwen2VLProcessor.__init__() got multiple values for argument 'tokenizer'`.
For example, Qwen2-VL receives `(image_processor, tokenizer, video_processor)` as positional args. If we pass `tokenizer` as a kwarg, it is removed from the positional args, which become `(image_processor, video_processor)`, so `video_processor` is passed as the tokenizer:
https://github.com/huggingface/transformers/blob/4a79bf947d0614d2a023b9137a32cf754ac241fe/src/transformers/models/qwen2_vl/processing_qwen2_vl.py#L78
**Code to reproduce**
```python3
from transformers import Qwen2VLProcessor, Qwen2Tokenizer
processor = Qwen2VLProcessor.from_pretrained("/data/LLM-model/Qwen2-VL-2B-Instruct")
print(type(processor.tokenizer))
tokenizer = Qwen2Tokenizer.from_pretrained("/data/LLM-model/Qwen2.5-3B-Instruct")
processor = Qwen2VLProcessor.from_pretrained("/data/LLM-model/Qwen2-VL-2B-Instruct", tokenizer=tokenizer)
print(type(processor.tokenizer))
```
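The argument shift can be avoided by dropping overlapping positional components by *name* before merging in the kwargs. Below is only an illustrative sketch of that idea — `merge_processor_args` and `fake_init` are hypothetical names, not the actual transformers fix:

```python
# Hypothetical sketch (not the real transformers code): map positional args to
# their parameter names first, then let kwargs override by name. This avoids
# the slot shift where `video_processor` ends up passed as `tokenizer`.
import inspect

def merge_processor_args(init, args, kwargs):
    # Parameter names of `init` after `self`, in declaration order.
    names = list(inspect.signature(init).parameters)[1:]
    merged = dict(zip(names, args))
    # kwargs win; removing by name keeps the remaining positionals in place.
    for name in set(merged) & set(kwargs):
        merged.pop(name)
    merged.update(kwargs)
    return merged

def fake_init(self, image_processor=None, tokenizer=None, video_processor=None):
    pass

merged = merge_processor_args(fake_init, ("ip", "tok", "vp"), {"tokenizer": "new_tok"})
print(merged["tokenizer"])        # -> new_tok
print(merged["video_processor"])  # -> vp
```

With the naive filter-and-reorder approach, the same call would hand `"vp"` to the `tokenizer` slot.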
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "Isotr0py",
"id": 41363108,
"node_id": "MDQ6VXNlcjQxMzYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/41363108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Isotr0py",
"html_url": "https://github.com/Isotr0py",
"followers_url": "https://api.github.com/users/Isotr0py/followers",
"following_url": "https://api.github.com/users/Isotr0py/following{/other_user}",
"gists_url": "https://api.github.com/users/Isotr0py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Isotr0py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Isotr0py/subscriptions",
"organizations_url": "https://api.github.com/users/Isotr0py/orgs",
"repos_url": "https://api.github.com/users/Isotr0py/repos",
"events_url": "https://api.github.com/users/Isotr0py/events{/privacy}",
"received_events_url": "https://api.github.com/users/Isotr0py/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39125/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39124/comments | https://api.github.com/repos/huggingface/transformers/issues/39124/events | https://github.com/huggingface/transformers/issues/39124 | 3,188,205,281 | I_kwDOCUB6oc6-CCbh | 39,124 | AttributeError: 'Resampler' object has no attribute '_initialize_weights' | {
"login": "DarwinYang",
"id": 733525,
"node_id": "MDQ6VXNlcjczMzUyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/733525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DarwinYang",
"html_url": "https://github.com/DarwinYang",
"followers_url": "https://api.github.com/users/DarwinYang/followers",
"following_url": "https://api.github.com/users/DarwinYang/following{/other_user}",
"gists_url": "https://api.github.com/users/DarwinYang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DarwinYang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DarwinYang/subscriptions",
"organizations_url": "https://api.github.com/users/DarwinYang/orgs",
"repos_url": "https://api.github.com/users/DarwinYang/repos",
"events_url": "https://api.github.com/users/DarwinYang/events{/privacy}",
"received_events_url": "https://api.github.com/users/DarwinYang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-30T11:53:33 | 2025-07-31T15:29:46 | 2025-07-02T08:32:44 | NONE | null | null | null | null | ### System Info
transformers version: 4.52.4
Platform: Linux-5.4.282-1.el8.elrepo.x86_64-x86_64-with-glibc2.31
Python version: 3.10.18
Huggingface_hub version: 0.33.0
Safetensors version: 0.5.3
Accelerate version: 1.7.0
Accelerate config: not found
DeepSpeed version: 0.14.4
PyTorch version (GPU?): 2.7.1+cu126 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?:
Using GPU in script?:
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I was training the model using LLaMa-Factory, it went through `transformers/modeling_utils.py` and I encountered the following error:
` File "/home/admin/miniforge3/envs/llama-factory/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2557, in initialize_weights self.smart_apply(self._initialize_weights) File "/home/admin/miniforge3/envs/llama-factory/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2548, in smart_apply module.smart_apply(module._initialize_weights) File "/home/admin/miniforge3/envs/llama-factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1940, in __getattr__ raise AttributeError( AttributeError: 'Resampler' object has no attribute '_initialize_weights'`
### Expected behavior
The problem is with **module.smart_apply(module._initialize_weights)** in the `initialize_weights` function of `modeling_utils.py`:
```
if not hasattr(torch.nn.Module, "smart_apply"):
# This function is equivalent to `torch.nn.Module.apply`, except that it dynamically adjust the function
# to apply as we go down the graph
def smart_apply(self, fn):
for module in self.children():
# We found a sub-model: recursively dispatch its own init function now!
if hasattr(module, "_init_weights"):
module.smart_apply(module._initialize_weights)
else:
module.smart_apply(fn)
fn(self)
return self
```
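The mismatch can be reproduced without transformers or torch at all. The following is a minimal sketch using plain Python stand-ins (the `Module`/`Resampler` classes here are hypothetical, not the real ones): the condition checks for `_init_weights`, but the call then accesses `_initialize_weights`, which a module like `Resampler` may not define.

```python
# Minimal stand-alone reproduction of the attribute mismatch: the hasattr
# check and the attribute actually accessed are different names.
class Module:
    def __init__(self, children=()):
        self._children = list(children)

    def smart_apply(self, fn):
        for module in self._children:
            if hasattr(module, "_init_weights"):
                # Bug: accesses `_initialize_weights`, which may not exist.
                module.smart_apply(module._initialize_weights)
            else:
                module.smart_apply(fn)
        fn(self)
        return self

class Resampler(Module):
    # Defines only `_init_weights`, like the module in the traceback.
    def _init_weights(self, module):
        pass

root = Module(children=[Resampler()])
try:
    root.smart_apply(lambda m: None)
    outcome = "ok"
except AttributeError as e:
    outcome = str(e)
print(outcome)  # mentions '_initialize_weights'
```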
In my local environment, I made the following adjustment to that line of code, and it allowed the Omni large model training to run successfully. Please have the Transformers developers evaluate this.
**module.smart_apply(module._init_weights)** | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39124/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39123/comments | https://api.github.com/repos/huggingface/transformers/issues/39123/events | https://github.com/huggingface/transformers/issues/39123 | 3,187,901,640 | I_kwDOCUB6oc6-A4TI | 39,123 | Gradient accumulation steps for Vision Languge model | {
"login": "khalil-Hennara",
"id": 90086758,
"node_id": "MDQ6VXNlcjkwMDg2NzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/90086758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khalil-Hennara",
"html_url": "https://github.com/khalil-Hennara",
"followers_url": "https://api.github.com/users/khalil-Hennara/followers",
"following_url": "https://api.github.com/users/khalil-Hennara/following{/other_user}",
"gists_url": "https://api.github.com/users/khalil-Hennara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khalil-Hennara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khalil-Hennara/subscriptions",
"organizations_url": "https://api.github.com/users/khalil-Hennara/orgs",
"repos_url": "https://api.github.com/users/khalil-Hennara/repos",
"events_url": "https://api.github.com/users/khalil-Hennara/events{/privacy}",
"received_events_url": "https://api.github.com/users/khalil-Hennara/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T10:05:17 | 2025-08-09T08:03:07 | 2025-08-09T08:03:07 | NONE | null | null | null | null | I am trying to fine-tune Gemma-3-4b. I have my own script for training using DDP; this training loop is used in my custom framework:
```python
for epoch in range(starting_epoch, int(config.training.num_train_epochs)):
# in case using steps_limit avoid multiple evaluation.
if (config.training.max_steps != -1 and completed_steps >= config.training.max_steps) or (
completed_steps >= config.training.max_train_steps):
break
# The MemoryTrace context manager tracks the memory usage.
with (MemoryTrace() if config.training.use_memory_track else nullcontext()) as memtrace:
model.train()
total_loss = 0
# Fix gradient accumulation step update
training_iterator = iter(train_dataloader)
# start training for one epoch
for step in range(resume_step,num_update_steps_per_epoch):
# The gradient accumulation logic
# we can see this https://discuss.pytorch.org/t/why-do-we-need-to-set-the-gradients-manually-to-zero-in-pytorch/4903/85
# In the previous blog I've noticed this as a way to adapt the model to the gradient accumulation step
# We also need to fix the gradient accumulation step as mentioned in the unsloth blog and the huggingface blog
# https://huggingface.co/blog/gradient_accumulation & https://unsloth.ai/blog/gradient
# We looked at the implementations in the Trainer and the unsloth trainer, and the example from accelerate; you can access
# these implementations here
# https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation_for_autoregressive_models.py#L337
# https://github.com/unslothai/unsloth-zoo/blob/main/unsloth_zoo/training_utils.py#L424
# https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3802
# we will follow the same approach as in the previous base code
batch_samples = []
num_batches_in_step = (
config.training.gradient_accumulation_steps if step != (num_update_steps_per_epoch - 1) else remainder
)
for _ in range(num_batches_in_step):
batch_samples += [next(training_iterator)]
# return tensor
local_num_items_in_batch = torch.stack([(batch["labels"].ne(-100)).sum() for batch in batch_samples]).sum()
# logger.warning(f"[Rank]:{rank}, [Local]:{local_rank}, {local_num_items_in_batch}")
# make sum operation over all rank
# ------------------- all_reduc ------------------------------
if config.parallel.enable:
local_num_items_in_batch = local_num_items_in_batch.to(local_rank)
dist.all_reduce(local_num_items_in_batch, op=dist.ReduceOp.SUM)
# -------------------------------------------------------------
# logger.warning(f"[Rank]:{rank}, [Local]:{local_rank}, {local_num_items_in_batch}")
num_items_in_batch = local_num_items_in_batch.item() # the variable should be on the local_rank but
# loss_function do that so I set it as number.
# logger.warning(f"[Rank]:{rank}, [Local]:{local_rank}, {num_items_in_batch}")
losses = []
# This for loop represent one step with gradient accumulation.
for i, batch in enumerate(batch_samples):
# This step is needed for DDP, FSDP
# also for a DDP model we need to do more than usual,
# like this article https://muellerzr.github.io/blog/gradient_accumulation.html
# for gradient accumulation with FSDP we need to be careful, we might get an OOM error, so we can
# TODO disable this with FSDP later; for more info read the following
# https://huggingface.co/docs/accelerate/en/concept_guides/gradient_synchronization#nosync-requires-additional-gpu-memory-when-using-fsdp
sync_context = (
model.no_sync
if (hasattr(model, "no_sync") and i < len(batch_samples) - 1 and world_size > 1 )
else nullcontext
)
with sync_context():
# TODO we need to set, the scaler for float16 training
batch = {key: batch[key].to(local_rank) for key in batch.keys()}
outputs = model(**batch, use_cache=False, num_items_in_batch=num_items_in_batch)
loss = outputs.loss
# We multiply by num_processes because the DDP and FSDP calculates the average gradient
# across all devices whereas dividing by num_items_in_batch already takes into account all devices
loss *= world_size
# compute the gradient using backward.
loss.backward()
losses.append(loss.detach())
if config.training.gradient_clipping and config.training.gradient_clipping_threshold > 0.0:
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), config.training.gradient_clipping_threshold)
# update parameters
optimizer.step()
# update learning rate
lr_scheduler.step()
# make all the gradient zero for all parameters the gradient is an accumulated process,
# so we should make it zero after each step.
optimizer.zero_grad()
# Checks if the accelerator has performed an optimization step behind the scenes
progress_bar.update(1)
completed_steps += 1
train_statue.update()
# match the save strategy to use with the script
# TODO test the peft training saving and loading
checkpoint_manager.save(completed_steps)
# First we sum locally; for the one-GPU scenario this is enough
# for multi-gpu we need to add an all_reduce operation to collect data from the other processes
local_step_losses = sum(losses)
if config.parallel.enable:
dist.all_reduce(local_step_losses, op=dist.ReduceOp.SUM)
global_total_loss = local_step_losses # we add this just for clarity of code
# to distinguish between local and global in case of multi-gpu
else:
global_total_loss = local_step_losses
global_total_loss = global_total_loss / (world_size * config.training.gradient_accumulation_steps)
# we get the right value that we compute the loss on it
global_average_loss = global_total_loss / num_items_in_batch if num_items_in_batch > 0 else global_total_loss
progress_bar.set_description(f"[gl loss: {global_total_loss.item()}, "
f"avg loss: {global_average_loss}, "
f"grad norm: {grad_norm} ]: ")
```
I designed the code for one or multiple GPUs, and it works very well for any model. I've tested it by fine-tuning many LLMs and SLMs and made sure the code has no bugs, especially for the gradient accumulation step, as reported by unsloth https://unsloth.ai/blog/gradient and transformers https://huggingface.co/blog/gradient_accumulation
I've followed the code in the Trainer and the unsloth trainer, and also the provided script from accelerate
https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation_for_autoregressive_models.py#L248
All three scripts have the same implementation, covering one GPU in the unsloth case and multi-GPU in the Trainer and accelerate.
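The normalization issue those blog posts describe can be illustrated with a tiny numeric sketch (synthetic per-token losses, no model involved): summing token losses and dividing once by the global token count gives the same value no matter how the batch is split into micro-batches, while averaging the per-micro-batch means does not when the micro-batches have unequal token counts.

```python
# Synthetic illustration of the gradient accumulation normalization bug:
# two micro-batches with unequal numbers of valid (non -100) tokens.
token_losses = [[1.0, 2.0, 3.0], [4.0]]

num_items_in_batch = sum(len(mb) for mb in token_losses)

# Correct: sum over all tokens, divide once by the global token count
# (what passing `num_items_in_batch` to the model achieves).
correct = sum(sum(mb) for mb in token_losses) / num_items_in_batch

# Naive: mean per micro-batch, then mean of those means.
naive = sum(sum(mb) / len(mb) for mb in token_losses) / len(token_losses)

print(correct)  # 2.5
print(naive)    # 3.0
```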
I got strange behavior when running the code with Gemma3. I am sure there is something related to the gradient accumulation calculation. I thought it might be my code or that I had missed something, so I tried fine-tuning the model using Trainer on one GPU. I get the following training loss and norm for three runs:
Run 1: batch_size 4, gac = 1
Run 2: batch_size 2, gac = 2
Run 3: batch_size 1, gac = 4
the following chart show all cruves together


I use the following code
```python
from transformers import TrainingArguments
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoProcessor
import torch
from transformers import Trainer
from peft import LoraConfig, get_peft_model, TaskType
training_args = TrainingArguments(
output_dir="yelp_review_classifier",
eval_strategy="no",
push_to_hub=True,
per_device_train_batch_size=4,
per_device_eval_batch_size=1,
num_train_epochs=3,
learning_rate=1e-4,
weight_decay=0.01,
gradient_accumulation_steps = 1,
report_to=["wandb"],
run_name="trainer_with_batch_size_4_gac_1",
hub_token= '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',
logging_steps = 5,
max_steps = 100
)
tokenizer = AutoTokenizer.from_pretrained('google/gemma-3-4b-it')
processor = AutoProcessor.from_pretrained("google/gemma-3-4b-it")
model = AutoModelForCausalLM.from_pretrained('google/gemma-3-4b-it', token='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', torch_dtype=torch.bfloat16, device_map='auto',attn_implementation='flash_attention_2')
lora_config = LoraConfig(
lora_alpha=16,
r=8,
lora_dropout=0.0,
bias="none",
use_dora=True,
task_type=TaskType.CAUSAL_LM,
target_modules=[
"q_proj",
"v_proj",
"k_proj",
"o_proj",
],
)
model = get_peft_model(model, lora_config)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=data,
eval_dataset=data,
)
trainer.train()
```
For my custom code I get the following charts; the batch size and training steps are different from the Trainer runs:


As you can see, with my code the loss and the norm are almost twice the value when using gradient accumulation, and the pattern repeats when I increase the gradient accumulation steps and decrease the batch size.
I hope you can help with that, @muellerzr and @ArthurZucker | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39123/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39122/comments | https://api.github.com/repos/huggingface/transformers/issues/39122/events | https://github.com/huggingface/transformers/pull/39122 | 3,187,900,326 | PR_kwDOCUB6oc6cqoeu | 39,122 | [Fix] Make EoMT compatible with pipeline | {
"login": "yaswanth19",
"id": 82788246,
"node_id": "MDQ6VXNlcjgyNzg4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82788246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaswanth19",
"html_url": "https://github.com/yaswanth19",
"followers_url": "https://api.github.com/users/yaswanth19/followers",
"following_url": "https://api.github.com/users/yaswanth19/following{/other_user}",
"gists_url": "https://api.github.com/users/yaswanth19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaswanth19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaswanth19/subscriptions",
"organizations_url": "https://api.github.com/users/yaswanth19/orgs",
"repos_url": "https://api.github.com/users/yaswanth19/repos",
"events_url": "https://api.github.com/users/yaswanth19/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaswanth19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-30T10:04:50 | 2025-07-11T10:44:39 | 2025-07-02T11:25:27 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39122",
"html_url": "https://github.com/huggingface/transformers/pull/39122",
"diff_url": "https://github.com/huggingface/transformers/pull/39122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39122.patch",
"merged_at": "2025-07-02T11:25:27"
} | Renames arg `original_image_sizes` -> `target_sizes` and few more changes so that the model is compatible with Image segmentation Pipeline | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39122/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39121/comments | https://api.github.com/repos/huggingface/transformers/issues/39121/events | https://github.com/huggingface/transformers/pull/39121 | 3,187,837,094 | PR_kwDOCUB6oc6cqbA_ | 39,121 | [qwen2-vl] fix FA2 inference | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-06-30T09:43:34 | 2025-07-01T10:18:38 | 2025-07-01T10:18:38 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39121",
"html_url": "https://github.com/huggingface/transformers/pull/39121",
"diff_url": "https://github.com/huggingface/transformers/pull/39121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39121.patch",
"merged_at": "2025-07-01T10:18:37"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/39095 by setting `mask=None` for FA2; otherwise inference fails because FA2 expects a 2D mask.
Tested with `RUN_SLOW=1 pytest -k flash_attn` for all modified models | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39121/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39120/comments | https://api.github.com/repos/huggingface/transformers/issues/39120/events | https://github.com/huggingface/transformers/pull/39120 | 3,187,790,809 | PR_kwDOCUB6oc6cqRHP | 39,120 | Refactor the way we handle outputs for new llamas and new models | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T09:28:05 | 2025-10-08T11:58:52 | 2025-07-05T09:34:28 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39120",
"html_url": "https://github.com/huggingface/transformers/pull/39120",
"diff_url": "https://github.com/huggingface/transformers/pull/39120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39120.patch",
"merged_at": "2025-07-05T09:34:28"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39120/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39120/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39119/comments | https://api.github.com/repos/huggingface/transformers/issues/39119/events | https://github.com/huggingface/transformers/pull/39119 | 3,187,785,251 | PR_kwDOCUB6oc6cqP5_ | 39,119 | All CI jobs with A10 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T09:26:24 | 2025-06-30T12:23:29 | 2025-06-30T12:23:27 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39119",
"html_url": "https://github.com/huggingface/transformers/pull/39119",
"diff_url": "https://github.com/huggingface/transformers/pull/39119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39119.patch",
"merged_at": "2025-06-30T12:23:27"
} | # What does this PR do?
I updated
https://huggingface.co/datasets/hf-internal-testing/transformers_daily_ci/commit/687b472ebd38c936c34205c0fda34e27bb7fca9a
so all model jobs now use A10. Let's also switch everything else to A10 (CI triggered by comment, the job checking new failures, etc.).
We could probably remove the `runner_map` logic and its usage, but let's keep it for a while (it may be useful at some point). | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39119/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39119/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39118/comments | https://api.github.com/repos/huggingface/transformers/issues/39118/events | https://github.com/huggingface/transformers/issues/39118 | 3,187,346,715 | I_kwDOCUB6oc69-w0b | 39,118 | [Gaudi] the seamless_m4t cannot work on Gaudi. No need to fix. Workaround PR is merged. | {
"login": "yuanwu2017",
"id": 34643241,
"node_id": "MDQ6VXNlcjM0NjQzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanwu2017",
"html_url": "https://github.com/yuanwu2017",
"followers_url": "https://api.github.com/users/yuanwu2017/followers",
"following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions",
"organizations_url": "https://api.github.com/users/yuanwu2017/orgs",
"repos_url": "https://api.github.com/users/yuanwu2017/repos",
"events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanwu2017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-30T06:45:06 | 2025-07-03T13:11:04 | 2025-07-03T13:11:04 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.53.0.dev0
- Platform: Linux-5.15.0-140-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.33.0
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: 0.16.1+hpu.synapse.v1.21.0
- PyTorch version (accelerator?): 2.6.0+hpu_1.21.0-555.gitabf798b (HPU)
- Tensorflow version (GPU?): 2.15.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using distributed or parallel set-up in script?: <fill in>
- Using HPU in script?: <fill in>
- HPU type: GAUDI2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It is a known Gaudi issue: Gaudi's `torch.gather` does not support `int64`/`long` input. The fix on the Gaudi PyTorch side seems slow to land; once it does, I will remove the workaround [PR #38363](https://github.com/huggingface/transformers/pull/38363).
```
git clone https://github.com/yuanwu2017/llm-dbg.git
PT_HPU_LAZY_MODE=1 run_text-to-speech.py
```

### Expected behavior
No crash. | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39118/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39117/comments | https://api.github.com/repos/huggingface/transformers/issues/39117/events | https://github.com/huggingface/transformers/pull/39117 | 3,187,128,258 | PR_kwDOCUB6oc6coDLW | 39,117 | update bnb ground truth | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T04:47:04 | 2025-07-01T18:06:37 | 2025-07-01T18:06:37 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39117",
"html_url": "https://github.com/huggingface/transformers/pull/39117",
"diff_url": "https://github.com/huggingface/transformers/pull/39117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39117.patch",
"merged_at": "2025-07-01T18:06:37"
} | The previous ground truth was generated by the IPEX backend; we should update it since bnb is moving to the SYCL/Triton backend. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39117/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39116/comments | https://api.github.com/repos/huggingface/transformers/issues/39116/events | https://github.com/huggingface/transformers/pull/39116 | 3,186,851,666 | PR_kwDOCUB6oc6cnIVR | 39,116 | fix UT failures on XPU w/ stock PyTorch 2.7 & 2.8 | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T01:49:07 | 2025-07-01T08:41:54 | 2025-06-30T09:49:03 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39116",
"html_url": "https://github.com/huggingface/transformers/pull/39116",
"diff_url": "https://github.com/huggingface/transformers/pull/39116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39116.patch",
"merged_at": "2025-06-30T09:49:03"
} | @ydshieh, please help review, thanks. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39116/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39115/comments | https://api.github.com/repos/huggingface/transformers/issues/39115/events | https://github.com/huggingface/transformers/issues/39115 | 3,186,828,040 | I_kwDOCUB6oc698yMI | 39,115 | `transformers.utils.metrics` sets global `TracerProvider` | {
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"repos_url": "https://api.github.com/users/harupy/repos",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T01:31:31 | 2025-07-21T11:21:55 | 2025-07-15T12:22:13 | CONTRIBUTOR | null | null | null | null | ## Reproduction
```python
import opentelemetry.sdk.trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, SpanExporter
from opentelemetry import trace

import transformers.utils.metrics  # This calls `trace.set_tracer_provider`

provider = trace_sdk.TracerProvider()
processor = SimpleSpanProcessor(SpanExporter())
provider.add_span_processor(processor)

# This does nothing and emits a warning:
# "Overriding of current TracerProvider is not allowed"
trace.set_tracer_provider(provider)
```
Since `import transformers.utils.metrics` calls `trace.set_tracer_provider` at import time, the `trace.set_tracer_provider` call in my own script becomes a no-op (only a warning is emitted).
## Version
transformers >= 4.53.0
## Related files/lines/PRs
- https://github.com/huggingface/transformers/blob/ccf2ca162e33f381e454cdb74bf4b41a51ab976d/src/transformers/utils/metrics.py#L42
- https://github.com/huggingface/transformers/pull/38085 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39115/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39115/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39114/comments | https://api.github.com/repos/huggingface/transformers/issues/39114/events | https://github.com/huggingface/transformers/issues/39114 | 3,186,650,574 | I_kwDOCUB6oc698G3O | 39,114 | Is there a way to force it to use ASCII based progress bar and not the ipython widget one? | {
"login": "weathon",
"id": 41298844,
"node_id": "MDQ6VXNlcjQxMjk4ODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/41298844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weathon",
"html_url": "https://github.com/weathon",
"followers_url": "https://api.github.com/users/weathon/followers",
"following_url": "https://api.github.com/users/weathon/following{/other_user}",
"gists_url": "https://api.github.com/users/weathon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weathon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weathon/subscriptions",
"organizations_url": "https://api.github.com/users/weathon/orgs",
"repos_url": "https://api.github.com/users/weathon/repos",
"events_url": "https://api.github.com/users/weathon/events{/privacy}",
"received_events_url": "https://api.github.com/users/weathon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-06-29T22:41:19 | 2025-07-07T13:20:13 | null | NONE | null | null | null | null | When loading models, I prefer an ASCII-based progress bar rather than an IPython widget one | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39114/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39114/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39113/comments | https://api.github.com/repos/huggingface/transformers/issues/39113/events | https://github.com/huggingface/transformers/pull/39113 | 3,186,631,067 | PR_kwDOCUB6oc6cmbWw | 39,113 | Improve Code Llama documentation with explanations and helpful links | {
"login": "PrakyathMC",
"id": 92665624,
"node_id": "U_kgDOBYX3GA",
"avatar_url": "https://avatars.githubusercontent.com/u/92665624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PrakyathMC",
"html_url": "https://github.com/PrakyathMC",
"followers_url": "https://api.github.com/users/PrakyathMC/followers",
"following_url": "https://api.github.com/users/PrakyathMC/following{/other_user}",
"gists_url": "https://api.github.com/users/PrakyathMC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PrakyathMC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PrakyathMC/subscriptions",
"organizations_url": "https://api.github.com/users/PrakyathMC/orgs",
"repos_url": "https://api.github.com/users/PrakyathMC/repos",
"events_url": "https://api.github.com/users/PrakyathMC/events{/privacy}",
"received_events_url": "https://api.github.com/users/PrakyathMC/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-29T22:23:27 | 2025-06-30T14:38:43 | 2025-06-30T14:38:43 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39113",
"html_url": "https://github.com/huggingface/transformers/pull/39113",
"diff_url": "https://github.com/huggingface/transformers/pull/39113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39113.patch",
"merged_at": null
} | ## What does this PR do?
Improves the Code Llama model documentation following the standardized format outlined in #36979.
## Changes Made
- ✅ Added explanatory comments to code examples explaining imports and parameters
- ✅ Added "Quick Links" section with popular Code Llama models on HuggingFace Hub
- ✅ Included additional resources (paper links, model collection)
- ✅ Improved accessibility for beginners learning to use Code Llama
## Before and After
**Before**: Basic code examples without explanations
**After**: Commented code that teaches users what each line does + easy access to models
Fixes #36979
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39113/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39112/comments | https://api.github.com/repos/huggingface/transformers/issues/39112/events | https://github.com/huggingface/transformers/issues/39112 | 3,186,543,810 | I_kwDOCUB6oc697szC | 39,112 | QWEN2VLProcessor missing video_token_id in mm_token_type_ids | {
"login": "NikeHop",
"id": 38683970,
"node_id": "MDQ6VXNlcjM4NjgzOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/38683970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikeHop",
"html_url": "https://github.com/NikeHop",
"followers_url": "https://api.github.com/users/NikeHop/followers",
"following_url": "https://api.github.com/users/NikeHop/following{/other_user}",
"gists_url": "https://api.github.com/users/NikeHop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikeHop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikeHop/subscriptions",
"organizations_url": "https://api.github.com/users/NikeHop/orgs",
"repos_url": "https://api.github.com/users/NikeHop/repos",
"events_url": "https://api.github.com/users/NikeHop/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikeHop/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-06-29T20:34:21 | 2025-08-18T08:03:33 | 2025-08-18T08:03:33 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.0
- Platform: Linux-5.14.0-427.70.1.el9_4.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.13
- Huggingface_hub version: 0.33.1
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import torch
from qwen_vl_utils import process_vision_info # Needs to be installed via pip install qwen-vl-utils
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
# Load model and processor
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
VIDEO_PATH = "./videos/examples/fps_30.mp4"
FPS = 30
messages = [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What can you see in the video?",
},
{"type": "video", "video": VIDEO_PATH, "fps": FPS},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
result = process_vision_info(
messages,
return_video_kwargs=True,
)
image_input, video_input, video_kwargs = result[0], result[1], result[2]
inputs = processor(
text=[text],
images=image_input,
videos=video_input,
padding=True,
return_tensors="pt",
text_kwargs={"return_mm_token_type_ids": True},
**video_kwargs,
).to("cuda:0")
print(inputs["mm_token_type_ids"]) # All 0's
print(torch.sum(inputs["mm_token_type_ids"]).item()) # 0
```
### Expected behavior
Relevant Code:

I think there should be another line:
```python
mm_token_type_ids[array_ids == self.video_token_id] = 2
```
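With video tokens mapped to 2, the expected mask for a mixed sequence would look like the following plain-Python sketch (the token ids here are illustrative, not the real Qwen2.5-VL vocabulary; the processor does the same masking with tensors):

```python
IMAGE_TOKEN_ID, VIDEO_TOKEN_ID = 151655, 151656  # illustrative ids

input_ids = [1, IMAGE_TOKEN_ID, IMAGE_TOKEN_ID, 2, VIDEO_TOKEN_ID, 3]

# 0 = text, 1 = image token, 2 = video token
mm_token_type_ids = [
    1 if t == IMAGE_TOKEN_ID else 2 if t == VIDEO_TOKEN_ID else 0
    for t in input_ids
]

print(mm_token_type_ids)  # [0, 1, 1, 0, 2, 0]
```

Without the suggested line, the video positions stay 0 and are indistinguishable from text, which is why the tensor in the reproduction sums to 0.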
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39112/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39111/comments | https://api.github.com/repos/huggingface/transformers/issues/39111/events | https://github.com/huggingface/transformers/issues/39111 | 3,186,506,844 | I_kwDOCUB6oc697jxc | 39,111 | New release 4.53.0 breaks HF trainer/model | {
"login": "liujch1998",
"id": 13347962,
"node_id": "MDQ6VXNlcjEzMzQ3OTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/13347962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liujch1998",
"html_url": "https://github.com/liujch1998",
"followers_url": "https://api.github.com/users/liujch1998/followers",
"following_url": "https://api.github.com/users/liujch1998/following{/other_user}",
"gists_url": "https://api.github.com/users/liujch1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liujch1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liujch1998/subscriptions",
"organizations_url": "https://api.github.com/users/liujch1998/orgs",
"repos_url": "https://api.github.com/users/liujch1998/repos",
"events_url": "https://api.github.com/users/liujch1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/liujch1998/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-29T19:45:00 | 2025-08-09T08:03:11 | 2025-08-09T08:03:11 | NONE | null | null | null | null | ### System Info
transformers==4.53.0
linux ubuntu
python==3.12.9
pytorch==2.6.0
accelerate==1.8.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
With 4.53.0, the Trainer throws an error during the model forward pass. I'm training with FSDP (I haven't isolated whether that is related). When I pin transformers==4.52.4, the issue is gone.
Model: allenai/OLMo-2-1124-7B
Code snippet:
```python
tokenizer = AutoTokenizer.from_pretrained(base_model_name, add_bos_token=False, add_eos_token=False)
model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype=torch.float32) # let FSDP handle device placement
training_args = TrainingArguments(
output_dir=save_dir,
num_train_epochs=1,
per_device_train_batch_size=per_device_train_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
save_strategy="no",
# save_steps=1000,
# save_total_limit=1,
logging_steps=100,
learning_rate=5e-5,
lr_scheduler_type="linear",
warmup_ratio=0.1,
weight_decay=0.01,
bf16=True, # Enable bf16 mixed precision training
optim="adamw_torch", # Use PyTorch's AdamW implementation
adam_beta1=0.9,
adam_beta2=0.95,
adam_epsilon=1e-8,
max_grad_norm=1.0, # Gradient clipping
seed=42, # Set random seed for reproducibility
# FSDP configuration
fsdp="full_shard auto_wrap", # Enable full sharding with auto wrapping
fsdp_config={
"transformer_layer_cls_to_wrap": ["Olmo2DecoderLayer"],
"backward_prefetch": "backward_pre",
},
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=hf_dataset,
)
trainer.train()
```
Error log:
```
2025-06-29T19:18:32.802Z
0%| | 0/24 [00:00<?, ?it/s][rank3]: Traceback (most recent call last):
2025-06-29T19:18:32.802Z [rank3]: File "/gantry-runtime/cont-pretrain.py", line 217, in <module>
2025-06-29T19:18:32.802Z [rank3]: main()
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/click/core.py", line 1442, in __call__
2025-06-29T19:18:32.802Z [rank3]: return self.main(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/click/core.py", line 1363, in main
2025-06-29T19:18:32.802Z [rank3]: rv = self.invoke(ctx)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/click/core.py", line 1226, in invoke
2025-06-29T19:18:32.802Z [rank3]: return ctx.invoke(self.callback, **ctx.params)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/click/core.py", line 794, in invoke
2025-06-29T19:18:32.802Z [rank3]: return callback(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/gantry-runtime/cont-pretrain.py", line 159, in main
2025-06-29T19:18:32.802Z [rank3]: trainer.train()
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/trainer.py", line 2207, in train
2025-06-29T19:18:32.802Z [rank3]: return inner_training_loop(
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/trainer.py", line 2549, in _inner_training_loop
2025-06-29T19:18:32.802Z [rank3]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/trainer.py", line 3750, in training_step
2025-06-29T19:18:32.802Z [rank3]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/trainer.py", line 3837, in compute_loss
2025-06-29T19:18:32.802Z [rank3]: outputs = model(**inputs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
2025-06-29T19:18:32.802Z [rank3]: return self._call_impl(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
2025-06-29T19:18:32.802Z [rank3]: return forward_call(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/accelerate/utils/operations.py", line 818, in forward
2025-06-29T19:18:32.802Z [rank3]: return model_forward(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/accelerate/utils/operations.py", line 806, in __call__
2025-06-29T19:18:32.802Z [rank3]: return convert_to_fp32(self.model_forward(*args, **kwargs))
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
2025-06-29T19:18:32.802Z [rank3]: return func(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/utils/generic.py", line 943, in wrapper
2025-06-29T19:18:32.802Z [rank3]: output = func(self, *args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 537, in forward
2025-06-29T19:18:32.802Z [rank3]: outputs: BaseModelOutputWithPast = self.model(
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
2025-06-29T19:18:32.802Z [rank3]: return self._call_impl(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
2025-06-29T19:18:32.802Z [rank3]: return forward_call(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/utils/generic.py", line 943, in wrapper
2025-06-29T19:18:32.802Z [rank3]: output = func(self, *args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 390, in forward
2025-06-29T19:18:32.802Z [rank3]: inputs_embeds = self.embed_tokens(input_ids)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
2025-06-29T19:18:32.802Z [rank3]: return self._call_impl(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
2025-06-29T19:18:32.802Z [rank3]: return forward_call(*args, **kwargs)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/sparse.py", line 190, in forward
2025-06-29T19:18:32.802Z [rank3]: return F.embedding(
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/functional.py", line 2551, in embedding
2025-06-29T19:18:32.802Z [rank3]: return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2025-06-29T19:18:32.802Z [rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.802Z [rank3]: RuntimeError: 'weight' must be 2-D
2025-06-29T19:18:32.806Z [rank5]: Traceback (most recent call last):
2025-06-29T19:18:32.806Z [rank5]: File "/gantry-runtime/cont-pretrain.py", line 217, in <module>
2025-06-29T19:18:32.806Z [rank5]: main()
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/click/core.py", line 1442, in __call__
2025-06-29T19:18:32.806Z [rank5]: return self.main(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/click/core.py", line 1363, in main
2025-06-29T19:18:32.806Z [rank5]: rv = self.invoke(ctx)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/click/core.py", line 1226, in invoke
2025-06-29T19:18:32.806Z [rank5]: return ctx.invoke(self.callback, **ctx.params)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/click/core.py", line 794, in invoke
2025-06-29T19:18:32.806Z [rank5]: return callback(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/gantry-runtime/cont-pretrain.py", line 159, in main
2025-06-29T19:18:32.806Z [rank5]: trainer.train()
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/trainer.py", line 2207, in train
2025-06-29T19:18:32.806Z [rank5]: return inner_training_loop(
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/trainer.py", line 2549, in _inner_training_loop
2025-06-29T19:18:32.806Z [rank5]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/trainer.py", line 3750, in training_step
2025-06-29T19:18:32.806Z [rank5]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/trainer.py", line 3837, in compute_loss
2025-06-29T19:18:32.806Z [rank5]: outputs = model(**inputs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
2025-06-29T19:18:32.806Z [rank5]: return self._call_impl(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
2025-06-29T19:18:32.806Z [rank5]: return forward_call(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/accelerate/utils/operations.py", line 818, in forward
2025-06-29T19:18:32.806Z [rank5]: return model_forward(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/accelerate/utils/operations.py", line 806, in __call__
2025-06-29T19:18:32.806Z [rank5]: return convert_to_fp32(self.model_forward(*args, **kwargs))
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
2025-06-29T19:18:32.806Z [rank5]: return func(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/utils/generic.py", line 943, in wrapper
2025-06-29T19:18:32.806Z [rank5]: output = func(self, *args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 537, in forward
2025-06-29T19:18:32.806Z [rank5]: outputs: BaseModelOutputWithPast = self.model(
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
2025-06-29T19:18:32.806Z [rank5]: return self._call_impl(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
2025-06-29T19:18:32.806Z [rank5]: return forward_call(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/utils/generic.py", line 943, in wrapper
2025-06-29T19:18:32.806Z [rank5]: output = func(self, *args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/transformers/models/olmo2/modeling_olmo2.py", line 390, in forward
2025-06-29T19:18:32.806Z [rank5]: inputs_embeds = self.embed_tokens(input_ids)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
2025-06-29T19:18:32.806Z [rank5]: return self._call_impl(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
2025-06-29T19:18:32.806Z [rank5]: return forward_call(*args, **kwargs)
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/modules/sparse.py", line 190, in forward
2025-06-29T19:18:32.806Z [rank5]: return F.embedding(
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: File "/opt/conda/envs/gantry/lib/python3.12/site-packages/torch/nn/functional.py", line 2529, in embedding
2025-06-29T19:18:32.806Z [rank5]: assert padding_idx < weight.size(
2025-06-29T19:18:32.806Z [rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-06-29T19:18:32.806Z [rank5]: AssertionError: Padding_idx must be within num_embeddings
```
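The assertion at the bottom of the trace comes from the guard in `torch.nn.functional.embedding`. A minimal pure-Python sketch of that check (a simplified stand-in, not the actual PyTorch source) shows how a `padding_idx` outside the embedding table triggers it:

```python
def check_padding_idx(padding_idx, num_embeddings):
    # Simplified stand-in for the guard in torch.nn.functional.embedding:
    # padding_idx must index a row of the weight matrix (negative values
    # count from the end, as in normal Python indexing).
    if padding_idx is not None:
        assert padding_idx < num_embeddings, "Padding_idx must be within num_embeddings"
        assert padding_idx >= -num_embeddings, "Padding_idx must be within num_embeddings"

check_padding_idx(0, 50304)          # fine: pad token id inside the vocab
try:
    check_padding_idx(50304, 50304)  # pad id == vocab size -> out of range
except AssertionError as e:
    print(e)  # Padding_idx must be within num_embeddings
```

In a report like the one above this usually means the pad token id is greater than or equal to the model's `vocab_size`, so fixing the pad token or resizing the embedding table is the usual remedy.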
### Expected behavior
Trainer should run w/o error. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39111/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39110/comments | https://api.github.com/repos/huggingface/transformers/issues/39110/events | https://github.com/huggingface/transformers/pull/39110 | 3,186,500,186 | PR_kwDOCUB6oc6cmBAb | 39,110 | Fixed flash attention 2 crash in qwen vl and omni models | {
"login": "petkokp",
"id": 61232356,
"node_id": "MDQ6VXNlcjYxMjMyMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/61232356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petkokp",
"html_url": "https://github.com/petkokp",
"followers_url": "https://api.github.com/users/petkokp/followers",
"following_url": "https://api.github.com/users/petkokp/following{/other_user}",
"gists_url": "https://api.github.com/users/petkokp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petkokp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petkokp/subscriptions",
"organizations_url": "https://api.github.com/users/petkokp/orgs",
"repos_url": "https://api.github.com/users/petkokp/repos",
"events_url": "https://api.github.com/users/petkokp/events{/privacy}",
"received_events_url": "https://api.github.com/users/petkokp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-29T19:36:44 | 2025-06-29T19:54:04 | 2025-06-29T19:54:04 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39110",
"html_url": "https://github.com/huggingface/transformers/pull/39110",
"diff_url": "https://github.com/huggingface/transformers/pull/39110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39110.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/39095
The issue was introduced by the refactoring in https://github.com/huggingface/transformers/pull/38930.
This PR reverts that refactoring.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "petkokp",
"id": 61232356,
"node_id": "MDQ6VXNlcjYxMjMyMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/61232356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petkokp",
"html_url": "https://github.com/petkokp",
"followers_url": "https://api.github.com/users/petkokp/followers",
"following_url": "https://api.github.com/users/petkokp/following{/other_user}",
"gists_url": "https://api.github.com/users/petkokp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petkokp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petkokp/subscriptions",
"organizations_url": "https://api.github.com/users/petkokp/orgs",
"repos_url": "https://api.github.com/users/petkokp/repos",
"events_url": "https://api.github.com/users/petkokp/events{/privacy}",
"received_events_url": "https://api.github.com/users/petkokp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39110/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39109/comments | https://api.github.com/repos/huggingface/transformers/issues/39109/events | https://github.com/huggingface/transformers/pull/39109 | 3,186,460,604 | PR_kwDOCUB6oc6cl4vj | 39,109 | Fix: rename 'eval_strategy' to 'evaluation_strategy' in TrainingArgum… | {
"login": "trevorhughdavis",
"id": 1328964,
"node_id": "MDQ6VXNlcjEzMjg5NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1328964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trevorhughdavis",
"html_url": "https://github.com/trevorhughdavis",
"followers_url": "https://api.github.com/users/trevorhughdavis/followers",
"following_url": "https://api.github.com/users/trevorhughdavis/following{/other_user}",
"gists_url": "https://api.github.com/users/trevorhughdavis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trevorhughdavis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trevorhughdavis/subscriptions",
"organizations_url": "https://api.github.com/users/trevorhughdavis/orgs",
"repos_url": "https://api.github.com/users/trevorhughdavis/repos",
"events_url": "https://api.github.com/users/trevorhughdavis/events{/privacy}",
"received_events_url": "https://api.github.com/users/trevorhughdavis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-29T18:54:36 | 2025-06-29T18:54:36 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39109",
"html_url": "https://github.com/huggingface/transformers/pull/39109",
"diff_url": "https://github.com/huggingface/transformers/pull/39109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39109.patch",
"merged_at": null
} | ### 🐛 Bugfix: Rename `eval_strategy` → `evaluation_strategy` in `TrainingArguments`
This PR corrects an API mismatch in the `TrainingArguments` dataclass where the constructor expected `eval_strategy`, but `Trainer`, config files, and CLI use `evaluation_strategy`.
### ✅ Changes
- Renames the field `eval_strategy` → `evaluation_strategy`
- Updates all internal logic and references
- Corrects docstrings and argument descriptions
- Validated via `inspect.signature(TrainingArguments.__init__)`
- Tested with full reinstallation and cache invalidation
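The `inspect.signature` validation mentioned above can be reproduced against a minimal stand-in (a hypothetical simplified dataclass, not the real `TrainingArguments`):

```python
import inspect
from dataclasses import dataclass

@dataclass
class TrainingArguments:  # hypothetical stand-in with the renamed field
    output_dir: str = "out"
    evaluation_strategy: str = "no"

params = inspect.signature(TrainingArguments.__init__).parameters
print("evaluation_strategy" in params)  # True
print("eval_strategy" in params)        # False
```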
### 🧪 Reproduction (Before Fix)
```python
TrainingArguments(evaluation_strategy="epoch")
# ❌ TypeError: got an unexpected keyword argument 'evaluation_strategy'
```
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39109/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39108/comments | https://api.github.com/repos/huggingface/transformers/issues/39108/events | https://github.com/huggingface/transformers/pull/39108 | 3,186,458,419 | PR_kwDOCUB6oc6cl4UM | 39,108 | Disable static cache on certain MoE models | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-29T18:51:27 | 2025-07-28T16:26:29 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39108",
"html_url": "https://github.com/huggingface/transformers/pull/39108",
"diff_url": "https://github.com/huggingface/transformers/pull/39108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39108.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39108/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39107/comments | https://api.github.com/repos/huggingface/transformers/issues/39107/events | https://github.com/huggingface/transformers/issues/39107 | 3,186,455,667 | I_kwDOCUB6oc697XRz | 39,107 | Caching of model code in ~/.cache/huggingface/modules/transformers_modules | {
"login": "ira-b-27",
"id": 61705702,
"node_id": "MDQ6VXNlcjYxNzA1NzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/61705702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ira-b-27",
"html_url": "https://github.com/ira-b-27",
"followers_url": "https://api.github.com/users/ira-b-27/followers",
"following_url": "https://api.github.com/users/ira-b-27/following{/other_user}",
"gists_url": "https://api.github.com/users/ira-b-27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ira-b-27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ira-b-27/subscriptions",
"organizations_url": "https://api.github.com/users/ira-b-27/orgs",
"repos_url": "https://api.github.com/users/ira-b-27/repos",
"events_url": "https://api.github.com/users/ira-b-27/events{/privacy}",
"received_events_url": "https://api.github.com/users/ira-b-27/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 7438284845,
"node_id": "LA_kwDOCUB6oc8AAAABu1s4LQ",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Remote%20code",
"name": "Remote code",
"color": "344EDB",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-29T18:47:26 | 2025-08-24T08:03:30 | 2025-08-24T08:03:30 | NONE | null | null | null | null | > @dragen1860 Could you detail the code you're running to reproduce this problem and the output which is indicating that downloading is happening?
>
>
>
> It's possible that some files might be downloaded from the hub e.g. the config, depending wha's being called. However, calling a second time no downloads should happen provided there's no upstream changes.
>
>
>
> Could you try running with `HF_HUB_OFFLINE=1` set in your environment?
_Originally posted by @amyeroberts in [#22260](https://github.com/huggingface/transformers/issues/22260#issuecomment-2144859632)_ | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39107/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39106/comments | https://api.github.com/repos/huggingface/transformers/issues/39106/events | https://github.com/huggingface/transformers/pull/39106 | 3,185,985,881 | PR_kwDOCUB6oc6ckjUt | 39,106 | [cache refactor] Move all the caching logic to a per-layer approach | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-29T09:54:19 | 2025-08-15T09:44:13 | 2025-07-22T14:10:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39106",
"html_url": "https://github.com/huggingface/transformers/pull/39106",
"diff_url": "https://github.com/huggingface/transformers/pull/39106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39106.patch",
"merged_at": "2025-07-22T14:10:26"
} | This PR completes **Part 1** of the cache refactor tracked in #38077.
### Summary:
* Now `Cache` is structured as a list of layers.
* Ports all existing cache types (Static, Dynamic, Offloaded, Quantized, Hybrid, etc.) to use layer composition
* Backward compatibility, tests passing.
* Modification logic (e.g., `reset()`, `crop()`, `batch_split()`) now auto-propagates to layers.
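A minimal sketch of the layer-composed structure (hypothetical names, not the actual implementation in this PR):

```python
class CacheLayer:
    """Holds the key/value states for a single decoder layer."""
    def __init__(self):
        self.keys = []
        self.values = []

    def update(self, key_states, value_states):
        self.keys.append(key_states)
        self.values.append(value_states)
        return self.keys, self.values

    def reset(self):
        self.keys.clear()
        self.values.clear()


class Cache:
    """Composes one CacheLayer per decoder layer."""
    def __init__(self, num_layers):
        self.layers = [CacheLayer() for _ in range(num_layers)]

    def reset(self):
        # modification logic auto-propagates to every layer
        for layer in self.layers:
            layer.reset()


cache = Cache(num_layers=2)
cache.layers[0].update("k0", "v0")
cache.reset()
print(sum(len(layer.keys) for layer in cache.layers))  # 0
```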
Implementation details:
* We emulate the properties `cache.key_cache`and `cache.value_cache` through KVProxy to efficiently return a layer-indexed list of keys or values and keep BC.
* Offloading and Quantizing caches are now defined as a `CacheProcessor`. In the future, it can be expanded to a `CacheProcessorList` if needed.
* Diff will be cleaner once we merge #38086. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39106/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39106/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39105/comments | https://api.github.com/repos/huggingface/transformers/issues/39105/events | https://github.com/huggingface/transformers/issues/39105 | 3,185,889,633 | I_kwDOCUB6oc695NFh | 39,105 | How to use other acceleration apis of npu? | {
"login": "zheliuyu",
"id": 190869220,
"node_id": "U_kgDOC2Bu5A",
"avatar_url": "https://avatars.githubusercontent.com/u/190869220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zheliuyu",
"html_url": "https://github.com/zheliuyu",
"followers_url": "https://api.github.com/users/zheliuyu/followers",
"following_url": "https://api.github.com/users/zheliuyu/following{/other_user}",
"gists_url": "https://api.github.com/users/zheliuyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zheliuyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zheliuyu/subscriptions",
"organizations_url": "https://api.github.com/users/zheliuyu/orgs",
"repos_url": "https://api.github.com/users/zheliuyu/repos",
"events_url": "https://api.github.com/users/zheliuyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zheliuyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-06-29T08:26:29 | 2025-10-01T13:10:48 | null | CONTRIBUTOR | null | null | null | null | ### Feature request
I noticed that transformers now supports using flash attention directly on NPU via [```npu_flash_attention.py```](https://github.com/huggingface/transformers/pull/36696). There are many other acceleration APIs that can be used on NPU, such as those shown in this [doc](https://www.hiascend.com/document/detail/zh/Pytorch/700/ptmoddevg/trainingmigrguide/performance_tuning_0028.html).
How can we use them directly in transformers? How can we switch seamlessly between different devices?
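One possible design (a hypothetical sketch, not an existing transformers API) is a per-device registry that the attention dispatch consults, so the same model code can run on different devices without changes:

```python
# Hypothetical per-device kernel registry; the names are illustrative only.
ATTENTION_REGISTRY = {
    "cuda": "flash_attention_2",
    "npu": "npu_flash_attention",
}

def resolve_attention_impl(device_type, default="eager"):
    """Pick an accelerated attention implementation for the device, if any."""
    return ATTENTION_REGISTRY.get(device_type, default)

print(resolve_attention_impl("npu"))  # npu_flash_attention
print(resolve_attention_impl("cpu"))  # eager
```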
### Motivation
Request to integrate other NPU acceleration APIs into transformers. If this can be done, the ease of using transformers on NPU will be greatly improved.
### Your contribution
Try to design a plan. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39105/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39104/comments | https://api.github.com/repos/huggingface/transformers/issues/39104/events | https://github.com/huggingface/transformers/pull/39104 | 3,185,671,621 | PR_kwDOCUB6oc6cjvd2 | 39,104 | Update BigBirdPegasus model card | {
"login": "dross20",
"id": 73395516,
"node_id": "MDQ6VXNlcjczMzk1NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/73395516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dross20",
"html_url": "https://github.com/dross20",
"followers_url": "https://api.github.com/users/dross20/followers",
"following_url": "https://api.github.com/users/dross20/following{/other_user}",
"gists_url": "https://api.github.com/users/dross20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dross20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dross20/subscriptions",
"organizations_url": "https://api.github.com/users/dross20/orgs",
"repos_url": "https://api.github.com/users/dross20/repos",
"events_url": "https://api.github.com/users/dross20/events{/privacy}",
"received_events_url": "https://api.github.com/users/dross20/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-29T02:28:51 | 2025-06-30T17:42:57 | 2025-06-30T17:42:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39104",
"html_url": "https://github.com/huggingface/transformers/pull/39104",
"diff_url": "https://github.com/huggingface/transformers/pull/39104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39104.patch",
"merged_at": "2025-06-30T17:42:57"
} | # What does this PR do?
This PR replaces the BigBirdPegasus model card with a new model card matching the format introduced in #36979.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu
## Notes
- Omitted `attn_implementation="sdpa"` since `BigBirdPegasusForConditionalGeneration` doesn't support SDPA. | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39104/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39103/comments | https://api.github.com/repos/huggingface/transformers/issues/39103/events | https://github.com/huggingface/transformers/pull/39103 | 3,185,472,744 | PR_kwDOCUB6oc6cjNs5 | 39,103 | Fix audio-related config naming for Gemma3n | {
"login": "ywang96",
"id": 136131678,
"node_id": "U_kgDOCB00Xg",
"avatar_url": "https://avatars.githubusercontent.com/u/136131678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ywang96",
"html_url": "https://github.com/ywang96",
"followers_url": "https://api.github.com/users/ywang96/followers",
"following_url": "https://api.github.com/users/ywang96/following{/other_user}",
"gists_url": "https://api.github.com/users/ywang96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ywang96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ywang96/subscriptions",
"organizations_url": "https://api.github.com/users/ywang96/orgs",
"repos_url": "https://api.github.com/users/ywang96/repos",
"events_url": "https://api.github.com/users/ywang96/events{/privacy}",
"received_events_url": "https://api.github.com/users/ywang96/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-28T20:53:35 | 2025-06-30T07:12:25 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39103",
"html_url": "https://github.com/huggingface/transformers/pull/39103",
"diff_url": "https://github.com/huggingface/transformers/pull/39103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39103.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
While working on adding multimodal support for Gemma3n to vLLM, I noticed that `audio_soft_tokens_per_image` should really be named `audio_soft_tokens_per_audio`. This PR updates the variable naming, but we'll also need to update this field in the model repository.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39103/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39103/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39102/comments | https://api.github.com/repos/huggingface/transformers/issues/39102/events | https://github.com/huggingface/transformers/pull/39102 | 3,185,280,420 | PR_kwDOCUB6oc6ciqjx | 39,102 | docs: correct two typos in awesome-transformers.md | {
"login": "VladimirGutuev",
"id": 203542588,
"node_id": "U_kgDODCHQPA",
"avatar_url": "https://avatars.githubusercontent.com/u/203542588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VladimirGutuev",
"html_url": "https://github.com/VladimirGutuev",
"followers_url": "https://api.github.com/users/VladimirGutuev/followers",
"following_url": "https://api.github.com/users/VladimirGutuev/following{/other_user}",
"gists_url": "https://api.github.com/users/VladimirGutuev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VladimirGutuev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VladimirGutuev/subscriptions",
"organizations_url": "https://api.github.com/users/VladimirGutuev/orgs",
"repos_url": "https://api.github.com/users/VladimirGutuev/repos",
"events_url": "https://api.github.com/users/VladimirGutuev/events{/privacy}",
"received_events_url": "https://api.github.com/users/VladimirGutuev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-28T17:23:34 | 2025-06-30T15:53:44 | 2025-06-30T15:53:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39102",
"html_url": "https://github.com/huggingface/transformers/pull/39102",
"diff_url": "https://github.com/huggingface/transformers/pull/39102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39102.patch",
"merged_at": "2025-06-30T15:53:43"
} | # What does this PR do?
This PR fixes two small typos in the **[awesome-transformers.md](https://github.com/huggingface/transformers/blob/main/awesome-transformers.md?plain=1)** list:
- **DALL·E Flow** section: changes
`Itt leverages DALL·E-Mega…` → `It leverages DALL·E-Mega…`
- **Underthesea** section: changes
`We provides extremely easy API…` → `We provide an extremely easy API…`
Fixes #39101
## Before submitting
- [x] This PR fixes a typo or improves the docs
- [x] I’ve read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request)
- [ ] Tests are not affected (no code changes)
- [ ] I’ve updated the documentation with my changes (N/A – markdown only)
- [ ] I’ve added necessary tests (N/A)
## Who can review?
Documentation maintainers, feel free to take a look: @stevhliu | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39102/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39101/comments | https://api.github.com/repos/huggingface/transformers/issues/39101/events | https://github.com/huggingface/transformers/issues/39101 | 3,185,263,005 | I_kwDOCUB6oc6920Gd | 39,101 | docs: fix typos in awesome-transformers.md WIP | {
"login": "VladimirGutuev",
"id": 203542588,
"node_id": "U_kgDODCHQPA",
"avatar_url": "https://avatars.githubusercontent.com/u/203542588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VladimirGutuev",
"html_url": "https://github.com/VladimirGutuev",
"followers_url": "https://api.github.com/users/VladimirGutuev/followers",
"following_url": "https://api.github.com/users/VladimirGutuev/following{/other_user}",
"gists_url": "https://api.github.com/users/VladimirGutuev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VladimirGutuev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VladimirGutuev/subscriptions",
"organizations_url": "https://api.github.com/users/VladimirGutuev/orgs",
"repos_url": "https://api.github.com/users/VladimirGutuev/repos",
"events_url": "https://api.github.com/users/VladimirGutuev/events{/privacy}",
"received_events_url": "https://api.github.com/users/VladimirGutuev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-28T17:09:11 | 2025-06-30T15:53:44 | 2025-06-30T15:53:44 | CONTRIBUTOR | null | null | null | null | ### System Info
Hello!
There are two small typos in [awesome-transformers.md](https://github.com/huggingface/transformers/blob/main/awesome-transformers.md?plain=1#L289-L294):
1. DALL·E Flow section (**TYPO**)
https://github.com/huggingface/transformers/blob/ccf2ca162e33f381e454cdb74bf4b41a51ab976d/awesome-transformers.md?plain=1#L289-L294
2. Underthesea section (**GRAMMAR**)
https://github.com/huggingface/transformers/blob/ccf2ca162e33f381e454cdb74bf4b41a51ab976d/awesome-transformers.md?plain=1#L527-L531
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the typo in documentation:
1. Open the file `docs/source/awesome-transformers.md`.
2. Scroll down to the **DALL·E Flow** section.
3. Observe the sentence:
> “DALL·E Flow is an interactive workflow … **Itt leverages** DALL·E-Mega…”
4. Scroll further to the **Underthesea** section.
5. Observe the sentence:
> “… **We provides** extremely easy API to quickly apply pretrained NLP models…”
### Expected behavior
The text should read with correct grammar:
- In **DALL·E Flow**:
> “… **It leverages** DALL·E-Mega…”
- In **Underthesea**:
> “… **We provide an extremely easy API** to quickly apply pretrained NLP models…” | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39101/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39100/comments | https://api.github.com/repos/huggingface/transformers/issues/39100/events | https://github.com/huggingface/transformers/pull/39100 | 3,185,114,623 | PR_kwDOCUB6oc6ciKCe | 39,100 | Suggest jobs to use in `run-slow` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-28T15:32:07 | 2025-07-03T08:02:42 | 2025-07-01T18:19:06 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39100",
"html_url": "https://github.com/huggingface/transformers/pull/39100",
"diff_url": "https://github.com/huggingface/transformers/pull/39100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39100.patch",
"merged_at": "2025-07-01T18:19:06"
} | # What does this PR do?
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39100/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39099/comments | https://api.github.com/repos/huggingface/transformers/issues/39099/events | https://github.com/huggingface/transformers/issues/39099 | 3,184,822,798 | I_kwDOCUB6oc691IoO | 39,099 | Pretrainedtokenizerfast Segmentation fault | {
"login": "Facerman-cloud",
"id": 178777071,
"node_id": "U_kgDOCqfr7w",
"avatar_url": "https://avatars.githubusercontent.com/u/178777071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Facerman-cloud",
"html_url": "https://github.com/Facerman-cloud",
"followers_url": "https://api.github.com/users/Facerman-cloud/followers",
"following_url": "https://api.github.com/users/Facerman-cloud/following{/other_user}",
"gists_url": "https://api.github.com/users/Facerman-cloud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Facerman-cloud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Facerman-cloud/subscriptions",
"organizations_url": "https://api.github.com/users/Facerman-cloud/orgs",
"repos_url": "https://api.github.com/users/Facerman-cloud/repos",
"events_url": "https://api.github.com/users/Facerman-cloud/events{/privacy}",
"received_events_url": "https://api.github.com/users/Facerman-cloud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-28T10:33:11 | 2025-08-09T08:03:14 | 2025-08-09T08:03:14 | NONE | null | null | null | null | ### System Info
**Problem:**
```
Fatal Python error: Segmentation fault
Thread 0x00007fd9c172e700 (most recent call first):
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/threading.py", line 316 in wait
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/threading.py", line 574 in wait
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/site-packages/tqdm/_monitor.py", line 60 in run
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/threading.py", line 973 in _bootstrap_inner
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/threading.py", line 930 in _bootstrap
Current thread 0x00007fdb742c1740 (most recent call first):
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/third_party/transformers-CFG/transformers_cfg/tokenization/middle/TokenizerMiddleMapping.py", line 68 in __init__
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/third_party/transformers-CFG/transformers_cfg/tokenization/middle/TokenizerMiddleMapping.py", line 48 in from_hf_tokenizer
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/third_party/transformers-CFG/transformers_cfg/tokenization/byte_trie.py", line 54 in from_tokenizer
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/third_party/transformers-CFG/transformers_cfg/token_grammar_recognizer.py", line 29 in __init__
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/third_party/transformers-CFG/transformers_cfg/token_grammar_recognizer.py", line 102 in __init__
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/llm/llama.py", line 64 in generate_hf_llm
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/llm/hf_model.py", line 129 in generate_hf
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/llm/hf_model.py", line 97 in generate
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/planner/llm_planner.py", line 557 in replan
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/planner/llm_planner.py", line 638 in get_next_action
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/evaluation/decentralized_evaluation_runner.py", line 128 in get_low_level_actions
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/evaluation/evaluation_runner.py", line 618 in run_instruction
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/examples/planner_demo.py", line 347 in run_planner
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/examples/planner_demo.py", line 188 in run_eval
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/site-packages/hydra/core/utils.py", line 186 in run_job
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/site-packages/hydra/_internal/hydra.py", line 119 in run
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 458 in <lambda>
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 220 in run_and_report
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 457 in _run_app
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 394 in _run_hydra
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/site-packages/hydra/main.py", line 94 in decorated_main
  File "/media/taiic/5E08BBF708BBCC71/jiawei/partnr-planner/habitat_llm/examples/planner_demo.py", line 455 in <module>
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/runpy.py", line 87 in _run_code
  File "/home/taiic/anaconda3/envs/habitat-llm/lib/python3.9/runpy.py", line 197 in _run_module_as_main
```
**Device:**
I am running Ubuntu 22.04 with NVIDIA driver 550, using two 48GB NVIDIA GeForce RTX 4090 D graphics cards simultaneously.
**Versions:**
tokenizers 0.20.3 and transformers 4.46.2
### Who can help?
@ArthurZucker and @itazap
I really hope to receive help from all the teachers here.
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Code related links**
[facebookresearch/partnr-planner](https://github.com/facebookresearch/partnr-planner)
### Expected behavior
I encountered the error above while running the QuickStart from the linked repository. While investigating, I found that it is related to the Rust-compiled Hugging Face fast tokenizer (`PreTrainedTokenizerFast`). I have spent a long time on this but have not been able to solve it, and I would greatly appreciate any help in resolving this issue. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39099/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39098/comments | https://api.github.com/repos/huggingface/transformers/issues/39098/events | https://github.com/huggingface/transformers/pull/39098 | 3,184,802,708 | PR_kwDOCUB6oc6chUX4 | 39,098 | Update Dockerfiles to install packages inside a virtual environment | {
"login": "Sai-Suraj-27",
"id": 87087741,
"node_id": "MDQ6VXNlcjg3MDg3NzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/87087741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sai-Suraj-27",
"html_url": "https://github.com/Sai-Suraj-27",
"followers_url": "https://api.github.com/users/Sai-Suraj-27/followers",
"following_url": "https://api.github.com/users/Sai-Suraj-27/following{/other_user}",
"gists_url": "https://api.github.com/users/Sai-Suraj-27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sai-Suraj-27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sai-Suraj-27/subscriptions",
"organizations_url": "https://api.github.com/users/Sai-Suraj-27/orgs",
"repos_url": "https://api.github.com/users/Sai-Suraj-27/repos",
"events_url": "https://api.github.com/users/Sai-Suraj-27/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sai-Suraj-27/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-28T10:02:37 | 2025-08-13T21:51:52 | 2025-08-13T21:51:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39098",
"html_url": "https://github.com/huggingface/transformers/pull/39098",
"diff_url": "https://github.com/huggingface/transformers/pull/39098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39098.patch",
"merged_at": "2025-08-13T21:51:52"
} | # What does this PR do?
Since we are explicitly setting `UV_PYTHON` to use the global Python interpreter in the containers, all packages are installed globally:
```bash
ENV UV_PYTHON=/usr/local/bin/python
```
There is no point in creating a virtual environment, since it is never used anywhere. Right now `uv venv` just creates an extra virtual environment in the container with no packages installed, so it can be removed.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh @Rocketknight1 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39098/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39097/comments | https://api.github.com/repos/huggingface/transformers/issues/39097/events | https://github.com/huggingface/transformers/issues/39097 | 3,184,588,442 | I_kwDOCUB6oc690Paa | 39,097 | [Core] Saving models with multiple shared tensor groups is not supported when model is dispatched | {
"login": "kylesayrs",
"id": 17103692,
"node_id": "MDQ6VXNlcjE3MTAzNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17103692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylesayrs",
"html_url": "https://github.com/kylesayrs",
"followers_url": "https://api.github.com/users/kylesayrs/followers",
"following_url": "https://api.github.com/users/kylesayrs/following{/other_user}",
"gists_url": "https://api.github.com/users/kylesayrs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylesayrs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylesayrs/subscriptions",
"organizations_url": "https://api.github.com/users/kylesayrs/orgs",
"repos_url": "https://api.github.com/users/kylesayrs/repos",
"events_url": "https://api.github.com/users/kylesayrs/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylesayrs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2107554019,
"node_id": "MDU6TGFiZWwyMTA3NTU0MDE5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models",
"name": "Distributed Training / Models",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-28T05:05:19 | 2025-07-10T16:33:31 | 2025-07-10T16:33:31 | CONTRIBUTOR | null | null | null | null | ### System Info
Any transformers version, any python version, any system
### Who can help?
@Rocketknight1 @ArthurZucker
### Reproduction
```python3
import torch
from transformers import AutoModelForCausalLM
device_map = {
"model.embed_tokens": "cuda:0",
"model.layers": "cpu",
"model.norm": "cpu",
"model.rotary_emb": "cpu",
"lm_head": "cuda:0"
}
model = AutoModelForCausalLM.from_pretrained("nm-testing/llama2.c-stories15M", device_map=device_map)
transform_a = torch.nn.Linear(1, 1, bias=False)
transform_a._dynamic_tied_weights_keys = ["weight"]
transform_b = torch.nn.Linear(1, 1, bias=False)
transform_b._dynamic_tied_weights_keys = ["weight"]
model.model.layers[0].self_attn.q_proj.register_module("transform", transform_a)
model.model.layers[1].self_attn.q_proj.register_module("transform", transform_a)
model.model.layers[2].self_attn.q_proj.register_module("transform", transform_b)
model.model.layers[3].self_attn.q_proj.register_module("transform", transform_b)
model.save_pretrained("tmp")
```
### Expected behavior
Downstream user [LLM Compressor](https://github.com/vllm-project/llm-compressor) would like to save models which have multiple shared tensor groups. For example, these two modules share two sets of shared tensors.
```python3
assert module_a.weight is module_b.weight
assert module_a.bias is module_b.bias
assert module_a._dynamic_tied_weights_keys == ["weight", "bias"]
assert module_b._dynamic_tied_weights_keys == ["weight", "bias"]
```
However, attempting to save a model with modules like these raises an error only if `hf_device_map` is present.
```
RuntimeError:
Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [...]
```
The problematic code seems to lie here
```python3
if hasattr(self, "hf_device_map"):
# if the model has offloaded parameters, we must check using find_tied_parameters()
tied_params = find_tied_parameters(self)
if tied_params:
tied_names = tied_params[0]
shared_ptrs = {
ptr: names for ptr, names in ptrs.items() if any(name in tied_names for name in names)
}
else:
shared_ptrs = {}
else:
shared_ptrs = {ptr: names for ptr, names in ptrs.items() if len(names) > 1}
```
As you can see, the line `tied_names = tied_params[0]` assumes that there is only one shared tensor group. Only the first group is therefore used to populate `shared_ptrs`, so only that group's duplicates are removed; the remaining groups are left in place, which results in the downstream error.
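A minimal sketch of the fix this implies (an assumption about the eventual patch, not the actual `transformers` source): flatten *all* tied groups before filtering `ptrs`, instead of indexing only `tied_params[0]`.

```python
# Hypothetical sketch of handling every shared-tensor group (assumed fix,
# not the actual transformers code). `ptrs` maps a storage pointer to the
# parameter names sharing that storage; `tied_params` mimics the list of
# name groups returned by accelerate's find_tied_parameters().

def shared_ptrs_all_groups(ptrs, tied_params):
    # Flatten every tied group into one set of names, so no group is skipped.
    tied_names = {name for group in tied_params for name in group}
    return {
        ptr: names
        for ptr, names in ptrs.items()
        if any(name in tied_names for name in names)
    }

# Two independent shared-tensor groups, as in the reproduction above:
ptrs = {
    1: ["layers.0.self_attn.q_proj.transform.weight",
        "layers.1.self_attn.q_proj.transform.weight"],
    2: ["layers.2.self_attn.q_proj.transform.weight",
        "layers.3.self_attn.q_proj.transform.weight"],
    3: ["model.norm.weight"],  # not shared, must not be collected
}
tied_params = [ptrs[1], ptrs[2]]
shared = shared_ptrs_all_groups(ptrs, tied_params)
assert set(shared) == {1, 2}  # both groups captured, not just the first
```

With `tied_params[0]` alone, only pointer `1` would be collected and group `2` would survive into the duplicate-memory check.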
This assumption was likely made because the most common case is `lm_head.weight == embed_tokens.weight`. However, there's no reason why multiple groups can't be supported, as it's provably possible in the non-offloaded case. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39097/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39097/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39096/comments | https://api.github.com/repos/huggingface/transformers/issues/39096/events | https://github.com/huggingface/transformers/pull/39096 | 3,184,280,140 | PR_kwDOCUB6oc6cf1HW | 39,096 | Fix pos idx v4.52.4 | {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T23:16:08 | 2025-09-22T16:26:29 | 2025-09-22T16:26:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39096",
"html_url": "https://github.com/huggingface/transformers/pull/39096",
"diff_url": "https://github.com/huggingface/transformers/pull/39096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39096.patch",
"merged_at": null
} | # What does this PR do?
This PR fixes the `pos_idx` parameter error in fp32 mode reported in #38843.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
This PR fixes an issue with the pos_idx parameter error in fp32 mode that exists in version v4.52.4 of Hugging Face Transformers.
The bug does not affect the latest main branch, but this patch is intended for users or branches still relying on v4.52.4
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
#38843
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section? Yes.
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Yes.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
No
## Who can review?
@ArthurZucker, @Rocketknight1
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39096/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39095/comments | https://api.github.com/repos/huggingface/transformers/issues/39095/events | https://github.com/huggingface/transformers/issues/39095 | 3,184,266,072 | I_kwDOCUB6oc69zAtY | 39,095 | `Qwen2_5_VLVisionAttention` with flash attention has no `is_causal` attribute | {
"login": "allentsouhuang",
"id": 6635995,
"node_id": "MDQ6VXNlcjY2MzU5OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6635995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allentsouhuang",
"html_url": "https://github.com/allentsouhuang",
"followers_url": "https://api.github.com/users/allentsouhuang/followers",
"following_url": "https://api.github.com/users/allentsouhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/allentsouhuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allentsouhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allentsouhuang/subscriptions",
"organizations_url": "https://api.github.com/users/allentsouhuang/orgs",
"repos_url": "https://api.github.com/users/allentsouhuang/repos",
"events_url": "https://api.github.com/users/allentsouhuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/allentsouhuang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-27T23:01:42 | 2025-07-02T09:29:47 | 2025-07-01T10:18:38 | NONE | null | null | null | null | ### System Info
transformers v4.53.0
### Who can help?
@amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
New in v4.53.0, I see that Qwen2_5_VLVisionAttention can now use flash attention.
https://github.com/huggingface/transformers/blob/v4.53.0/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L252
However, in this line https://github.com/huggingface/transformers/blob/v4.53.0/src/transformers/integrations/flash_attention.py#L71
the code assumes that `Qwen2_5_VLVisionAttention` has an `is_causal` attribute, but by inspection it does not. This crashes inference!
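A minimal sketch of the failure mode and one possible guard (the stub class, the guarded lookup, and the `False` default are illustrative assumptions, not the upstream fix):

```python
# Stand-in for Qwen2_5_VLVisionAttention, which in v4.53.0 defines no
# `is_causal` attribute.
class VisionAttentionStub:
    pass

module = VisionAttentionStub()

# The flash-attention integration effectively reads `module.is_causal`,
# which raises AttributeError here:
try:
    _ = module.is_causal
    crashed = False
except AttributeError:
    crashed = True

# A guarded lookup would avoid the crash; vision attention is
# bidirectional, so False is a plausible default (an assumption).
is_causal = getattr(module, "is_causal", False)
assert crashed is True
assert is_causal is False
```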
### Expected behavior
I would expect it not to crash. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39095/reactions",
"total_count": 12,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 2,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/39095/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39094/comments | https://api.github.com/repos/huggingface/transformers/issues/39094/events | https://github.com/huggingface/transformers/pull/39094 | 3,184,201,207 | PR_kwDOCUB6oc6cflu9 | 39,094 | docs: PyTorch examples (image-classification & image-pretraining) clarity | {
"login": "ethanknights",
"id": 34215814,
"node_id": "MDQ6VXNlcjM0MjE1ODE0",
"avatar_url": "https://avatars.githubusercontent.com/u/34215814?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanknights",
"html_url": "https://github.com/ethanknights",
"followers_url": "https://api.github.com/users/ethanknights/followers",
"following_url": "https://api.github.com/users/ethanknights/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanknights/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanknights/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanknights/subscriptions",
"organizations_url": "https://api.github.com/users/ethanknights/orgs",
"repos_url": "https://api.github.com/users/ethanknights/repos",
"events_url": "https://api.github.com/users/ethanknights/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanknights/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-27T22:12:42 | 2025-07-03T07:53:04 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39094",
"html_url": "https://github.com/huggingface/transformers/pull/39094",
"diff_url": "https://github.com/huggingface/transformers/pull/39094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39094.patch",
"merged_at": null
} | # What does this PR do?
Improves the clarity of the `PyTorch` examples READMEs for image-classification and image-pretraining.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39094/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39093/comments | https://api.github.com/repos/huggingface/transformers/issues/39093/events | https://github.com/huggingface/transformers/pull/39093 | 3,183,811,139 | PR_kwDOCUB6oc6ceVM_ | 39,093 | Change `@lru_cache()` to `@lru_cache` to match styles from #38883. | {
"login": "rasmi",
"id": 2267370,
"node_id": "MDQ6VXNlcjIyNjczNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2267370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rasmi",
"html_url": "https://github.com/rasmi",
"followers_url": "https://api.github.com/users/rasmi/followers",
"following_url": "https://api.github.com/users/rasmi/following{/other_user}",
"gists_url": "https://api.github.com/users/rasmi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rasmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rasmi/subscriptions",
"organizations_url": "https://api.github.com/users/rasmi/orgs",
"repos_url": "https://api.github.com/users/rasmi/repos",
"events_url": "https://api.github.com/users/rasmi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rasmi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T18:56:39 | 2025-07-01T16:29:17 | 2025-07-01T16:29:16 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39093",
"html_url": "https://github.com/huggingface/transformers/pull/39093",
"diff_url": "https://github.com/huggingface/transformers/pull/39093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39093.patch",
"merged_at": "2025-07-01T16:29:16"
} | Change `@lru_cache()` to `@lru_cache` to match styles from #38883. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39093/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39092/comments | https://api.github.com/repos/huggingface/transformers/issues/39092/events | https://github.com/huggingface/transformers/pull/39092 | 3,183,537,705 | PR_kwDOCUB6oc6cdfBz | 39,092 | skip some `test_sdpa_can_dispatch_on_flash` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T17:10:00 | 2025-06-27T21:08:16 | 2025-06-27T21:08:14 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39092",
"html_url": "https://github.com/huggingface/transformers/pull/39092",
"diff_url": "https://github.com/huggingface/transformers/pull/39092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39092.patch",
"merged_at": "2025-06-27T21:08:14"
} | # What does this PR do?
Let's skip these for now; if the community asks for this support for some models, we can allocate time then, or let the community open PRs.
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39092/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39091/comments | https://api.github.com/repos/huggingface/transformers/issues/39091/events | https://github.com/huggingface/transformers/issues/39091 | 3,183,205,009 | I_kwDOCUB6oc69u9qR | 39,091 | `transformers`' dependency on `sentencepiece` blocks use on windows in python 3.13 | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | null | [] | 2025-06-27T15:23:57 | 2025-07-03T16:02:47 | 2025-07-03T06:53:29 | CONTRIBUTOR | null | null | null | null | ### System Info
Due to
* changes in Python 3.13,
* an incompatibility in `sentencepiece`,
* `transformers` dependency on `sentencepiece`,
`transformers` cannot be easily installed under Windows with Python 3.13, and does not work as a dependency of other packages in this environment.
There are multiple issues and a merged PR on sentencepiece (https://github.com/google/sentencepiece/pull/1084) from Feb 26, 2025, but no release has been forthcoming.
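A minimal sketch of a defensive availability probe (a hypothetical workaround for downstream packages, not an official fix): check whether `sentencepiece` both exists and actually imports before taking code paths that require it.

```python
# Hypothetical guard for environments (e.g. Windows + Python 3.13) where
# sentencepiece may have no installable wheel or may fail at import time.
import importlib.util


def sentencepiece_usable() -> bool:
    # False when the package is absent entirely...
    if importlib.util.find_spec("sentencepiece") is None:
        return False
    # ...and also when the import itself blows up (broken build).
    try:
        import sentencepiece  # noqa: F401
    except Exception:
        return False
    return True


usable = sentencepiece_usable()
assert isinstance(usable, bool)
```

A downstream package could branch on this to skip sentencepiece-backed tokenizers instead of raising at import time.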
### Who can help?
* people currently using `sentencepiece` in `transformers` code they own
* people determining what the scope of `transformers`' OS & python support is
* `sentencepiece` pypi maintainers
### Reproduction
1. Be on Windows
2. Be on Python 3.13
3. Try to install current `transformers` from pypi
4. If you get this far, use any function importing `sentencepiece`, e.g. loading an `xlm_roberta` model
### Expected behavior
Code doesn't raise exception | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39091/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39090/comments | https://api.github.com/repos/huggingface/transformers/issues/39090/events | https://github.com/huggingface/transformers/pull/39090 | 3,183,127,308 | PR_kwDOCUB6oc6ccRn8 | 39,090 | Fix some bug for finetune and batch infer For GLM-4.1V | {
"login": "zRzRzRzRzRzRzR",
"id": 93239683,
"node_id": "U_kgDOBY65gw",
"avatar_url": "https://avatars.githubusercontent.com/u/93239683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zRzRzRzRzRzRzR",
"html_url": "https://github.com/zRzRzRzRzRzRzR",
"followers_url": "https://api.github.com/users/zRzRzRzRzRzRzR/followers",
"following_url": "https://api.github.com/users/zRzRzRzRzRzRzR/following{/other_user}",
"gists_url": "https://api.github.com/users/zRzRzRzRzRzRzR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zRzRzRzRzRzRzR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zRzRzRzRzRzRzR/subscriptions",
"organizations_url": "https://api.github.com/users/zRzRzRzRzRzRzR/orgs",
"repos_url": "https://api.github.com/users/zRzRzRzRzRzRzR/repos",
"events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/events{/privacy}",
"received_events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-06-27T14:56:30 | 2025-07-09T09:55:32 | 2025-06-30T10:16:23 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39090",
"html_url": "https://github.com/huggingface/transformers/pull/39090",
"diff_url": "https://github.com/huggingface/transformers/pull/39090.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39090.patch",
"merged_at": "2025-06-30T10:16:23"
} | null | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39090/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39089/comments | https://api.github.com/repos/huggingface/transformers/issues/39089/events | https://github.com/huggingface/transformers/issues/39089 | 3,183,104,510 | I_kwDOCUB6oc69ulH- | 39,089 | Bug in version 4.52.4: LlavaOnevisonConfig Class init the inappropriate (hidden_size,num_attention_heads) pair in vision_config | {
"login": "glgjss960",
"id": 56515101,
"node_id": "MDQ6VXNlcjU2NTE1MTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/56515101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glgjss960",
"html_url": "https://github.com/glgjss960",
"followers_url": "https://api.github.com/users/glgjss960/followers",
"following_url": "https://api.github.com/users/glgjss960/following{/other_user}",
"gists_url": "https://api.github.com/users/glgjss960/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glgjss960/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glgjss960/subscriptions",
"organizations_url": "https://api.github.com/users/glgjss960/orgs",
"repos_url": "https://api.github.com/users/glgjss960/repos",
"events_url": "https://api.github.com/users/glgjss960/events{/privacy}",
"received_events_url": "https://api.github.com/users/glgjss960/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-27T14:49:18 | 2025-07-02T09:45:52 | 2025-07-02T09:45:52 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-5.4.210-4-velinux1-amd64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.30.1
- Safetensors version: 0.4.5
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A800-SXM4-80GB
### Who can help?
@amyeroberts @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
transformers==4.52.4 on a Linux platform, python==3.10.12
The bug is located in
/usr/local/lib/python3.10/dist-packages/transformers/models/llava_onevision/configuration_llava_onevision.py:
# coding=utf-8
# Copyright 2024 HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ...configuration_utils import PretrainedConfig
from ...utils import (
logging,
)
from ..auto import CONFIG_MAPPING, AutoConfig
logger = logging.get_logger(__name__)
class LlavaOnevisionConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`LlavaOnevisionForConditionalGeneration`]. It is used to instantiate an
Llava-NeXT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the [llava-hf/llava-onevision-qwen2-7b-ov-hf](https://huggingface.co/llava-hf/llava-onevision-qwen2-7b-ov-hf)
model.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vision_config (`Union[AutoConfig, dict]`, *optional*, defaults to `SiglipVisionConfig`):
The config object or dictionary of the vision backbone.
text_config (`Union[AutoConfig, dict]`, *optional*, defaults to `Qwen2Config`):
The config object or dictionary of the text backbone.
image_token_index (`int`, *optional*, defaults to 151646):
The image token index to encode the image prompt.
video_token_index (`int`, *optional*, defaults to 151647):
The video token index to encode the video prompt.
projector_hidden_act (`str`, *optional*, defaults to `"gelu"`):
The activation function used by the multimodal projector.
vision_feature_select_strategy (`str`, *optional*, defaults to `"full"`):
The feature selection strategy used to select the vision feature from the vision backbone.
Can be one of `"default"` or `"full"`. If `"default"`, the CLS token is removed from the vision features.
If `"full"`, the full vision features are used.
vision_feature_layer (`Union[int, List[int]]`, *optional*, defaults to -1):
The index of the layer to select the vision feature. If multiple indices are provided,
the vision feature of the corresponding indices will be concatenated to form the
vision features.
vision_aspect_ratio (`str`, *optional*, defaults to `"anyres_max_9"`):
Aspect ratio used when processing image features. The default value is "anyres_max_9".
image_grid_pinpoints (`List`, *optional*):
A list of possible resolutions to use for processing high resolution images. Each item in the list should be a tuple or list
of the form `(height, width)`.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether the model's input and output word embeddings should be tied.
multimodal_projector_bias (`bool`, *optional*, defaults to `True`):
Whether to use bias in the multimodal projector.
Example:
```python
>>> from transformers import LlavaOnevisionForConditionalGeneration, LlavaOnevisionConfig, SiglipVisionConfig, Qwen2Config
>>> # Initializing a CLIP-vision config
>>> vision_config = SiglipVisionConfig()
>>> # Initializing a Llama config
>>> text_config = Qwen2Config()
>>> # Initializing a Llava-Next llava-hf/llava-onevision-qwen2-7b-ov-hf style configuration
>>> configuration = LlavaOnevisionConfig(vision_config, text_config)
>>> # Initializing a model from the llava-hf/llava-onevision-qwen2-7b-ov-hf style configuration
>>> model = LlavaOnevisionForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "llava_onevision"
attribute_map = {
"image_token_id": "image_token_index",
"video_token_id": "video_token_index",
}
sub_configs = {"text_config": AutoConfig, "vision_config": AutoConfig}
def __init__(
self,
vision_config=None,
text_config=None,
image_token_index=151646,
video_token_index=151647,
projector_hidden_act="gelu",
vision_feature_select_strategy="full",
vision_feature_layer=-1,
vision_aspect_ratio="anyres_max_9",
image_grid_pinpoints=None,
tie_word_embeddings=False,
multimodal_projector_bias=True,
**kwargs,
):
self.image_token_index = image_token_index
self.video_token_index = video_token_index
self.projector_hidden_act = projector_hidden_act
self.multimodal_projector_bias = multimodal_projector_bias
if vision_feature_select_strategy not in ["default", "full"]:
raise ValueError(
"vision_feature_select_strategy should be one of 'default', 'full'."
f"Got: {vision_feature_select_strategy}"
)
self.vision_feature_select_strategy = vision_feature_select_strategy
self.vision_feature_layer = vision_feature_layer
self.vision_aspect_ratio = vision_aspect_ratio
image_grid_pinpoints = (
image_grid_pinpoints
if image_grid_pinpoints is not None
else [
[384, 384],
[384, 768],
[384, 1152],
[384, 1536],
[384, 1920],
[384, 2304],
[768, 384],
[768, 768],
[768, 1152],
[768, 1536],
[768, 1920],
[768, 2304],
[1152, 384],
[1152, 768],
[1152, 1152],
[1152, 1536],
[1152, 1920],
[1152, 2304],
[1536, 384],
[1536, 768],
[1536, 1152],
[1536, 1536],
[1536, 1920],
[1536, 2304],
[1920, 384],
[1920, 768],
[1920, 1152],
[1920, 1536],
[1920, 1920],
[1920, 2304],
[2304, 384],
[2304, 768],
[2304, 1152],
[2304, 1536],
[2304, 1920],
[2304, 2304],
]
)
self.image_grid_pinpoints = image_grid_pinpoints
if isinstance(vision_config, dict):
vision_config["model_type"] = (
vision_config["model_type"] if "model_type" in vision_config else "siglip_vision_model"
)
vision_config = CONFIG_MAPPING[vision_config["model_type"]](**vision_config)
elif vision_config is None:
vision_config = CONFIG_MAPPING["siglip_vision_model"](
hidden_size=1152,
intermediate_size=4304,
patch_size=14,
image_size=384,
num_hidden_layers=26,
num_attention_heads=14,
vision_use_head=False,
)
self.vision_config = vision_config
if isinstance(text_config, dict):
text_config["model_type"] = text_config["model_type"] if "model_type" in text_config else "qwen2"
text_config = CONFIG_MAPPING[text_config["model_type"]](**text_config)
elif text_config is None:
text_config = CONFIG_MAPPING["qwen2"]()
self.text_config = text_config
super().__init__(tie_word_embeddings=tie_word_embeddings, **kwargs)
__all__ = ["LlavaOnevisionConfig"]
The bug is at:
```python
elif vision_config is None:
    vision_config = CONFIG_MAPPING["siglip_vision_model"](
        hidden_size=1152,
        intermediate_size=4304,
        patch_size=14,
        image_size=384,
        num_hidden_layers=26,
        num_attention_heads=14,
        vision_use_head=False,
    )
self.vision_config = vision_config
```
When a LlavaOnevisionConfig instance is constructed with the defaults, it sets self.vision_config.hidden_size=1152 and self.vision_config.num_attention_heads=14. However, hidden_size=1152 is not divisible by num_attention_heads=14, which violates the basic requirement of the Transformer multi-head attention mechanism (the dimension of each head must be an integer). Therefore, in many cases it raises a ValueError, for example:
```
Traceback (most recent call last):
  File "/map-vepfs/zhengxuan/MMMU/mmmu-pro/infer/infer_llava-onevision.py", line 164, in <module>
    model = LlavaOnevisionForConditionalGeneration.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 309, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 4509, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava_onevision/modeling_llava_onevision.py", line 707, in __init__
    self.model = LlavaOnevisionModel(config)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llava_onevision/modeling_llava_onevision.py", line 335, in __init__
    self.vision_tower = AutoModel.from_config(config.vision_config)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 440, in from_config
    return model_class._from_config(config, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 309, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2077, in _from_config
    model = cls(config, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/siglip/modeling_siglip.py", line 852, in __init__
    self.vision_model = SiglipVisionTransformer(config)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/siglip/modeling_siglip.py", line 775, in __init__
    self.encoder = SiglipEncoder(config)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/siglip/modeling_siglip.py", line 581, in __init__
    self.layers = nn.ModuleList([SiglipEncoderLayer(config) for _ in range(config.num_hidden_layers)])
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/siglip/modeling_siglip.py", line 581, in <listcomp>
    self.layers = nn.ModuleList([SiglipEncoderLayer(config) for _ in range(config.num_hidden_layers)])
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/siglip/modeling_siglip.py", line 462, in __init__
    self.self_attn = SiglipAttention(config)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/siglip/modeling_siglip.py", line 380, in __init__
    raise ValueError(
ValueError: embed_dim must be divisible by num_heads (got `embed_dim`: 1152 and `num_heads`: 14).
```
As shown above, the ValueError is raised in /usr/local/lib/python3.10/dist-packages/transformers/models/siglip/modeling_siglip.py:
```python
class SiglipAttention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(self, config: Union[SiglipVisionConfig, SiglipTextConfig]):
        super().__init__()
        self.config = config
        self.embed_dim = config.hidden_size
        self.num_heads = config.num_attention_heads
        self.head_dim = self.embed_dim // self.num_heads
        if self.head_dim * self.num_heads != self.embed_dim:
            raise ValueError(
                f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
                f" {self.num_heads})."
            )
```
The `config: Union[SiglipVisionConfig, SiglipTextConfig]` above is actually `preconfig.vision_config`, where `preconfig` (so named here for distinction) is a `LlavaOnevisionConfig` instance initialized by `configuration_llava_onevision.LlavaOnevisionConfig.__init__()`. Thus `self.embed_dim = 1152` is `preconfig.vision_config.hidden_size`, and `self.num_heads = 14` is `preconfig.vision_config.num_attention_heads`.
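The divisibility constraint can be verified with a quick standalone sanity check (plain Python, no transformers import needed; the helper name is just for illustration):

```python
# Standalone check of the multi-head attention constraint:
# hidden_size must be divisible by num_attention_heads so that
# head_dim = hidden_size // num_attention_heads is exact.
def head_dim_is_integer(hidden_size: int, num_attention_heads: int) -> bool:
    return hidden_size % num_attention_heads == 0

print(head_dim_is_integer(1152, 14))  # False -> the default config triggers the ValueError
print(head_dim_is_integer(1152, 16))  # True  -> 1152 / 16 = 72
print(head_dim_is_integer(1152, 12))  # True  -> 1152 / 12 = 96
```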
So I think that in `configuration_llava_onevision.LlavaOnevisionConfig.__init__()` it would be better to change

```python
elif vision_config is None:
    vision_config = CONFIG_MAPPING["siglip_vision_model"](
        hidden_size=1152,
        intermediate_size=4304,
        patch_size=14,
        image_size=384,
        num_hidden_layers=26,
        num_attention_heads=14,
        vision_use_head=False,
    )
self.vision_config = vision_config
```
to:

```python
elif vision_config is None:
    vision_config = CONFIG_MAPPING["siglip_vision_model"](
        hidden_size=1152,
        intermediate_size=4304,
        patch_size=14,
        image_size=384,
        num_hidden_layers=26,
        num_attention_heads=16,  # change num_attention_heads to 16 (or 12)
        vision_use_head=False,
    )
self.vision_config = vision_config
```
Setting num_attention_heads=16 is also consistent with the config.json of google/siglip-so400m-patch14-384.
### Expected behavior
I think that in `models/configuration_llava_onevision.LlavaOnevisionConfig.__init__()` it would be better to change

```python
elif vision_config is None:
    vision_config = CONFIG_MAPPING["siglip_vision_model"](
        hidden_size=1152,
        intermediate_size=4304,
        patch_size=14,
        image_size=384,
        num_hidden_layers=26,
        num_attention_heads=14,
        vision_use_head=False,
    )
self.vision_config = vision_config
```
to:

```python
elif vision_config is None:
    vision_config = CONFIG_MAPPING["siglip_vision_model"](
        hidden_size=1152,
        intermediate_size=4304,
        patch_size=14,
        image_size=384,
        num_hidden_layers=26,
        num_attention_heads=16,  # change num_attention_heads to 16 (or 12)
        vision_use_head=False,
    )
self.vision_config = vision_config
```
Setting num_attention_heads=16 is also consistent with the config.json of google/siglip-so400m-patch14-384. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39089/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39088/comments | https://api.github.com/repos/huggingface/transformers/issues/39088/events | https://github.com/huggingface/transformers/pull/39088 | 3,183,075,777 | PR_kwDOCUB6oc6ccHpN | 39,088 | fix `dots1` tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T14:41:09 | 2025-06-27T14:54:13 | 2025-06-27T14:54:11 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39088",
"html_url": "https://github.com/huggingface/transformers/pull/39088",
"diff_url": "https://github.com/huggingface/transformers/pull/39088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39088.patch",
"merged_at": "2025-06-27T14:54:11"
} | # What does this PR do?
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39088/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39087/comments | https://api.github.com/repos/huggingface/transformers/issues/39087/events | https://github.com/huggingface/transformers/pull/39087 | 3,182,852,297 | PR_kwDOCUB6oc6cbYUh | 39,087 | docs: Gemma 3n audio encoder | {
"login": "RyanMullins",
"id": 868555,
"node_id": "MDQ6VXNlcjg2ODU1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/868555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMullins",
"html_url": "https://github.com/RyanMullins",
"followers_url": "https://api.github.com/users/RyanMullins/followers",
"following_url": "https://api.github.com/users/RyanMullins/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanMullins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanMullins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanMullins/subscriptions",
"organizations_url": "https://api.github.com/users/RyanMullins/orgs",
"repos_url": "https://api.github.com/users/RyanMullins/repos",
"events_url": "https://api.github.com/users/RyanMullins/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanMullins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T13:29:26 | 2025-06-30T12:10:51 | 2025-06-30T12:10:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39087",
"html_url": "https://github.com/huggingface/transformers/pull/39087",
"diff_url": "https://github.com/huggingface/transformers/pull/39087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39087.patch",
"merged_at": "2025-06-30T12:10:51"
} | # What does this PR do?
Updating Gemma 3n docs and docstrings to clarify the relationship between the newly trained audio encoder used in Gemma 3n and the USM model from the original paper.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39087/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39086/comments | https://api.github.com/repos/huggingface/transformers/issues/39086/events | https://github.com/huggingface/transformers/pull/39086 | 3,182,841,902 | PR_kwDOCUB6oc6cbWNc | 39,086 | TST PEFT integration tests with pipeline generate | {
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T13:26:08 | 2025-06-27T14:06:39 | 2025-06-27T13:58:10 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39086",
"html_url": "https://github.com/huggingface/transformers/pull/39086",
"diff_url": "https://github.com/huggingface/transformers/pull/39086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39086.patch",
"merged_at": "2025-06-27T13:58:10"
} | # What does this PR do?
Some PEFT integration tests involving text generation pipelines were failing since #38129 because the base model is too small to generate longer sequences. Setting max_new_tokens fixes this.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. <= internally on slack
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? <= already there
## Who can review? | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39086/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39085/comments | https://api.github.com/repos/huggingface/transformers/issues/39085/events | https://github.com/huggingface/transformers/pull/39085 | 3,182,831,429 | PR_kwDOCUB6oc6cbUXd | 39,085 | docs: github training | {
"login": "Meenal-cloud",
"id": 218230525,
"node_id": "U_kgDODQHu_Q",
"avatar_url": "https://avatars.githubusercontent.com/u/218230525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Meenal-cloud",
"html_url": "https://github.com/Meenal-cloud",
"followers_url": "https://api.github.com/users/Meenal-cloud/followers",
"following_url": "https://api.github.com/users/Meenal-cloud/following{/other_user}",
"gists_url": "https://api.github.com/users/Meenal-cloud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Meenal-cloud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Meenal-cloud/subscriptions",
"organizations_url": "https://api.github.com/users/Meenal-cloud/orgs",
"repos_url": "https://api.github.com/users/Meenal-cloud/repos",
"events_url": "https://api.github.com/users/Meenal-cloud/events{/privacy}",
"received_events_url": "https://api.github.com/users/Meenal-cloud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T13:23:17 | 2025-06-27T13:55:05 | 2025-06-27T13:55:05 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39085",
"html_url": "https://github.com/huggingface/transformers/pull/39085",
"diff_url": "https://github.com/huggingface/transformers/pull/39085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39085.patch",
"merged_at": null
} | # What does this PR do?
MIT Training on Github | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39085/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39084/comments | https://api.github.com/repos/huggingface/transformers/issues/39084/events | https://github.com/huggingface/transformers/pull/39084 | 3,182,729,657 | PR_kwDOCUB6oc6ca-2B | 39,084 | Refactor gemma3n | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-27T12:49:25 | 2025-06-27T13:10:43 | null | COLLABORATOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39084",
"html_url": "https://github.com/huggingface/transformers/pull/39084",
"diff_url": "https://github.com/huggingface/transformers/pull/39084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39084.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39084/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39083/comments | https://api.github.com/repos/huggingface/transformers/issues/39083/events | https://github.com/huggingface/transformers/pull/39083 | 3,182,726,469 | PR_kwDOCUB6oc6ca-IO | 39,083 | Fix: unprotected import of tp plugin | {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-06-27T12:48:25 | 2025-06-27T15:28:07 | 2025-06-27T15:28:06 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39083",
"html_url": "https://github.com/huggingface/transformers/pull/39083",
"diff_url": "https://github.com/huggingface/transformers/pull/39083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39083.patch",
"merged_at": "2025-06-27T15:28:06"
} | Fixes #39077. We only use `TensorParallelPlugin` [here](https://github.com/huggingface/transformers/blob/9c8d3a70b8bf359150c960c4281aaa853498fe8c/src/transformers/trainer.py#L5204) and that is version checked, the matching import that is also version checked is [here](https://github.com/huggingface/transformers/blob/9c8d3a70b8bf359150c960c4281aaa853498fe8c/src/transformers/trainer.py#L242). The import I deleted is non-version checked and not needed.
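As a rough illustration of the version-gated import pattern the remaining code path relies on (a simplified stand-in; the minimum version below is an assumption for illustration, not the exact threshold used in trainer.py):

```python
# The plugin symbol is only imported when the installed accelerate
# version is new enough; otherwise the code path is skipped entirely.

def version_tuple(v):
    # Parse "1.8.1" -> (1, 8, 1) for comparison
    return tuple(int(p) for p in v.split(".")[:3])

def maybe_import_plugin(installed_version, minimum="1.3.0"):
    """Return the plugin name only when the version check passes."""
    if version_tuple(installed_version) >= version_tuple(minimum):
        # from accelerate.utils import TensorParallelPlugin  # gated import
        return "TensorParallelPlugin"
    return None

print(maybe_import_plugin("1.8.1"))   # -> TensorParallelPlugin
print(maybe_import_plugin("0.34.2"))  # -> None
```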
| {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39083/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39082/comments | https://api.github.com/repos/huggingface/transformers/issues/39082/events | https://github.com/huggingface/transformers/pull/39082 | 3,182,724,394 | PR_kwDOCUB6oc6ca9qg | 39,082 | TST: Fix PEFT integration test bitsandbytes config | {
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T12:47:51 | 2025-06-27T16:33:55 | 2025-06-27T16:33:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39082",
"html_url": "https://github.com/huggingface/transformers/pull/39082",
"diff_url": "https://github.com/huggingface/transformers/pull/39082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39082.patch",
"merged_at": "2025-06-27T16:33:11"
} | # What does this PR do?
The PEFT integration tests still used load_in_{4,8}_bit, which is deprecated; this PR moves them to properly setting BitsAndBytesConfig. For 4bit, also ensure that nf4 is being used to prevent
> RuntimeError: quant_type must be nf4 on CPU, got fp4
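For illustration, a toy stand-in for the change described above — the real code would use `transformers.BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`; the class below only mimics the CPU check behind the quoted RuntimeError:

```python
from dataclasses import dataclass

@dataclass
class ToyBnbConfig:
    # Quantization options bundled into a config object, replacing the
    # deprecated load_in_4bit/load_in_8bit keyword arguments.
    load_in_4bit: bool = False
    bnb_4bit_quant_type: str = "fp4"

    def validate_for_device(self, device: str) -> None:
        # Mimic the check that raised "quant_type must be nf4 on CPU, got fp4"
        if self.load_in_4bit and device == "cpu" and self.bnb_4bit_quant_type != "nf4":
            raise RuntimeError(
                f"quant_type must be nf4 on CPU, got {self.bnb_4bit_quant_type}"
            )

cfg = ToyBnbConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
cfg.validate_for_device("cpu")  # passes with nf4
print("ok")
```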
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. <= internal slack
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? <= already there
## Who can review?
| {
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39082/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39081/comments | https://api.github.com/repos/huggingface/transformers/issues/39081/events | https://github.com/huggingface/transformers/issues/39081 | 3,182,724,159 | I_kwDOCUB6oc69tIQ_ | 39,081 | AttributeError: 'HfTrainerDeepSpeedConfig' object has no attribute 'is_zero3' | {
"login": "JaydenChao101",
"id": 109850072,
"node_id": "U_kgDOBowt2A",
"avatar_url": "https://avatars.githubusercontent.com/u/109850072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JaydenChao101",
"html_url": "https://github.com/JaydenChao101",
"followers_url": "https://api.github.com/users/JaydenChao101/followers",
"following_url": "https://api.github.com/users/JaydenChao101/following{/other_user}",
"gists_url": "https://api.github.com/users/JaydenChao101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JaydenChao101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JaydenChao101/subscriptions",
"organizations_url": "https://api.github.com/users/JaydenChao101/orgs",
"repos_url": "https://api.github.com/users/JaydenChao101/repos",
"events_url": "https://api.github.com/users/JaydenChao101/events{/privacy}",
"received_events_url": "https://api.github.com/users/JaydenChao101/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-27T12:47:45 | 2025-08-05T08:02:44 | 2025-08-05T08:02:44 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 0.33.1
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.0+cu126 (CUDA)
- Tensorflow version (GPU?): 2.18.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.6 (gpu)
- Jax version: 0.5.2
- JaxLib version: 0.5.1
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-40GB
### Who can help?
@SunMarc @zach-huggingface
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
# 1. Mount Google Drive (if you are in a Colab environment)
from google.colab import drive
import os
import shutil
import time
import threading
import gc
import torch
import wandb
# Ensure a clean environment
try:
drive.flush_and_unmount()
except Exception as e:
print(f"Drive unmount failed (this is expected if not mounted): {e}")
drive.mount('/content/drive', force_remount=True)
# Clear the CUDA cache
gc.collect()
torch.cuda.empty_cache()
# 2. Set up paths
# During training, checkpoints are saved locally first for speed
local_ckpt_dir = "/content/qwen3-8b-finetuned-local"
# Backup path on Google Drive
drive_ckpt_dir = "/content/drive/MyDrive/TinyQwen3-4B-GC-Math-full"
os.makedirs(local_ckpt_dir, exist_ok=True)
os.makedirs(drive_ckpt_dir, exist_ok=True)
# ==============================================================================
# 2. Sync feature: asynchronously back up checkpoints to Google Drive
# ==============================================================================
def sync_checkpoints_to_drive():
"""
    Copy the latest local checkpoint to Google Drive in full.
    This is a blocking operation: it deletes the old backup first, then copies the new one.
"""
print(f"[{time.strftime('%Y-%m-%d %H:%M:%S')}] Starting sync to Drive...")
try:
        # Use shutil.rmtree to delete the old backup
if os.path.exists(drive_ckpt_dir):
shutil.rmtree(drive_ckpt_dir)
print(f"Removed old Drive backup at: {drive_ckpt_dir}")
        # Use shutil.copytree to copy the entire directory tree
shutil.copytree(local_ckpt_dir, drive_ckpt_dir)
print(f"[{time.strftime('%Y-%m-%d %H:%M:%S')}] Sync to Drive successful!")
except Exception as e:
print(f"[{time.strftime('%Y-%m-%d %H:%M:%S')}] ERROR during sync: {e}")
def async_sync_checkpoints():
"""
    Run the sync task in the background on a separate thread to avoid interrupting training.
"""
sync_thread = threading.Thread(target=sync_checkpoints_to_drive)
    sync_thread.daemon = True  # Ensure the thread exits when the main program ends
sync_thread.start()
print("Background sync process initiated.")
# ==============================================================================
# 3. Load the model and tokenizer (A100-optimized)
# ==============================================================================
from unsloth import FastLanguageModel
# *** A100 optimization ***
# Use the Unsloth-optimized bf16 model, without 4-bit quantization
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "TinyQwen/TinyQwen3-4B-GC", # the Unsloth-optimized bf16 version
max_seq_length = 2048,
    dtype = torch.bfloat16, # specify bfloat16 for the A100
    load_in_4bit = False, # disable 4-bit quantization
    attn_implementation = "flash_attention_2", # use Flash Attention 2 for speed
)
# ==============================================================================
# 4. Set up LoRA (PEFT) (A100-optimized)
# ==============================================================================
# Add special tokens for structured reasoning
tokenizer.add_tokens(["<reasoning>", "</reasoning>", "<answer>", "</answer>"], special_tokens=True)
# Resize the model's token embedding layer to accommodate the new tokens
model.resize_token_embeddings(len(tokenizer))
model = FastLanguageModel.get_peft_model(
model,
    r = 32, # *** A100 optimization: raise the LoRA rank to increase model capacity ***
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
lora_alpha = 32,
lora_dropout = 0.05,
bias = "none",
use_gradient_checkpointing = False,
)
# ==============================================================================
# 5. Load and process the dataset
# ==============================================================================
from datasets import load_dataset
import pandas as pd
from datasets import Dataset
# Download the dataset from the Hugging Face Hub
dataset = load_dataset("open-r1/Mixture-of-Thoughts", "code", split="train[:50000]")
# Define a function that formats all required information into a single 'text' field
def format_prompt(example):
msgs = example.get("messages") or example.get("conversations") or []
text = ""
for m in msgs:
role = m.get("role")
content = m.get("content", "").strip()
if role in ["user", "human"]:
text += f"Human: {content}\n"
elif role == "assistant":
text += f"Assistant: {content}\n"
return {"text": text.strip()}
# Apply the formatting function with .map(), creating the 'text' column and removing the others
dataset = dataset.map(format_prompt, remove_columns=dataset.column_names)
# Split into training and validation sets
dataset = dataset.train_test_split(test_size=0.01, seed=42)
train_ds = dataset["train"]
eval_ds = dataset["test"]
print(f"Data processing complete. Train set size: {len(train_ds)}, validation set size: {len(eval_ds)}")
print("\nExample of one processed record:")
print(train_ds[0]['text'])
# ==============================================================================
# 6. Set up training arguments and callback (A100-optimized)
# ==============================================================================
from transformers import TrainingArguments, TrainerCallback, TrainerState, TrainerControl
from trl import SFTTrainer
class DriveSyncCallback(TrainerCallback):
def on_save(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs):
print(f"\nCheckpoint saved at step {state.global_step}. Triggering background sync to Drive.")
async_sync_checkpoints()
return control
training_args = TrainingArguments(
output_dir=local_ckpt_dir,
overwrite_output_dir=True,
num_train_epochs=3,
    per_device_train_batch_size=16, # *** A100 optimization: significantly increase the batch size ***
    gradient_accumulation_steps=2, # effective batch size = 16 * 2 = 32
learning_rate=2e-4,
save_strategy="steps",
save_steps=200,
eval_steps=100,
save_total_limit=2,
logging_steps=25,
    bf16=True, # enable bfloat16
report_to="wandb",
    #deepspeed="./ds_config.json", # ← enable DeepSpeed ZeRO
remove_unused_columns=False,
)
# ==============================================================================
# 7. Initialize and start training
# ==============================================================================
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
train_dataset=train_ds,
eval_dataset=eval_ds,
dataset_text_field="text",
max_seq_length=2048,
packing=False,
callbacks=[DriveSyncCallback()],
)
# Use try...finally to ensure the final model is saved even if training is interrupted
try:
    print("="*20 + " Starting training (A100-optimized mode) " + "="*20)
latest_checkpoint = None
if os.path.exists(local_ckpt_dir):
checkpoints = [d for d in os.listdir(local_ckpt_dir) if d.startswith("checkpoint-")]
if checkpoints:
latest_checkpoint = os.path.join(local_ckpt_dir, max(checkpoints, key=lambda x: int(x.split('-')[-1])))
            print(f"Found an available checkpoint; resuming training from {latest_checkpoint}.")
trainer.train(resume_from_checkpoint=latest_checkpoint)
except Exception as e:
    print(f"\nAn error occurred during training: {e}")
finally:
    print("\nTraining finished or was interrupted. Saving the final LoRA adapter and running one last sync...")
final_adapter_dir = os.path.join(local_ckpt_dir, "final_adapter")
model.save_pretrained(final_adapter_dir)
tokenizer.save_pretrained(final_adapter_dir)
    print(f"Final adapter saved to: {final_adapter_dir}")
sync_checkpoints_to_drive()
    print("All tasks complete.")
# 10. Cleanup
gc.collect()
torch.cuda.empty_cache()
from google.colab import runtime
runtime.unassign()
```
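A possible defensive workaround while versions are mismatched (an assumption on my part, not an official fix) is to guard the attribute access instead of calling `is_zero3()` directly; sketched here with dummy classes:

```python
# Guarded access pattern: fall back to False when the method is missing.

class OldConfig:
    # Stands in for an HfTrainerDeepSpeedConfig without an is_zero3 method
    pass

class NewConfig:
    def is_zero3(self):
        return True

def safe_is_zero3(cfg) -> bool:
    """Call cfg.is_zero3() if it exists, otherwise assume not ZeRO-3."""
    check = getattr(cfg, "is_zero3", None)
    return bool(check()) if callable(check) else False

print(safe_is_zero3(OldConfig()))  # -> False
print(safe_is_zero3(NewConfig()))  # -> True
```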
### Expected behavior
I expected the TrainingArguments(deepspeed=...) integration with the Hugging Face Trainer to work without raising an AttributeError related to HfTrainerDeepSpeedConfig.
Specifically, the method is_zero3() used inside the integration code should either:
- Still exist and be accessible, or
- Be safely replaced/removed in all internal references when removed in recent versions. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39081/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39080/comments | https://api.github.com/repos/huggingface/transformers/issues/39080/events | https://github.com/huggingface/transformers/pull/39080 | 3,182,696,406 | PR_kwDOCUB6oc6ca3h1 | 39,080 | Blip2 fixes | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T12:38:10 | 2025-07-02T12:39:40 | 2025-07-02T12:39:40 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39080",
"html_url": "https://github.com/huggingface/transformers/pull/39080",
"diff_url": "https://github.com/huggingface/transformers/pull/39080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39080.patch",
"merged_at": "2025-07-02T12:39:40"
} | This PR fixes some tests for the `blip_2` model:
1. there was a device mismatch issue when `getattr(self.config, "image_token_id", None) is not None` was `True`
2. the `Blip2EncoderLayer` could be split, which led to faulty device maps
3. although the `qformer` attribute does not support FA2, some models that inherited from `Blip2Model` did not change the support flag, which led to errors, e.g.:
```
tests/models/blip_2/test_modeling_blip_2.py::Blip2ForConditionalGenerationDecoderOnlyTest::test_flash_attn_2_fp32_ln
ValueError: Blip2QFormerModel does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co//discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new
```
4. added some expectations for AMD
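The fix for (3) amounts to flipping the class-level support flag on the models that contain the unsupported sub-module. A minimal, self-contained sketch of the inheritance pitfall (stub classes; the attribute name mirrors the `_supports_flash_attn_2` convention in transformers, but treat the hierarchy as illustrative):

```python
class PreTrainedStub:
    # base class advertises FA2 support by default (illustrative)
    _supports_flash_attn_2 = True

class ModelWithQFormerStub(PreTrainedStub):
    # contains a sub-module (the qformer) that cannot run FA2, so the class
    # must opt out explicitly; without this line it silently inherits True
    # and the FA2 tests are run against it
    _supports_flash_attn_2 = False

print(ModelWithQFormerStub._supports_flash_attn_2)  # → False
```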
About (3): I'm not sure we want to mark some models as not supporting FA2 when only a single one of their sub-modules lacks support, but the way the test is written makes me think it's the appropriate fix, so comments are welcome on that. | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39080/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39079/comments | https://api.github.com/repos/huggingface/transformers/issues/39079/events | https://github.com/huggingface/transformers/pull/39079 | 3,182,387,557 | PR_kwDOCUB6oc6cZ0Pl | 39,079 | Fixes the failing test `test_is_split_into_words` in `test_pipelines_token_classification.py` | {
"login": "st81",
"id": 58893365,
"node_id": "MDQ6VXNlcjU4ODkzMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58893365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/st81",
"html_url": "https://github.com/st81",
"followers_url": "https://api.github.com/users/st81/followers",
"following_url": "https://api.github.com/users/st81/following{/other_user}",
"gists_url": "https://api.github.com/users/st81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/st81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/st81/subscriptions",
"organizations_url": "https://api.github.com/users/st81/orgs",
"repos_url": "https://api.github.com/users/st81/repos",
"events_url": "https://api.github.com/users/st81/events{/privacy}",
"received_events_url": "https://api.github.com/users/st81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T10:58:18 | 2025-06-27T18:25:33 | 2025-06-27T18:25:33 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39079",
"html_url": "https://github.com/huggingface/transformers/pull/39079",
"diff_url": "https://github.com/huggingface/transformers/pull/39079.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39079.patch",
"merged_at": "2025-06-27T18:25:33"
} | # What does this PR do?
This test was originally added in https://github.com/huggingface/transformers/pull/38818 .
However, running the test currently results in a failure:
```bash
RUN_SLOW=1 python -m pytest -k "test_is_split_into_words" tests/pipelines/test_pipelines_token_classification.py
```
First assertion failure:
```bash
E AssertionError: Lists differ: [[{'entity_group': 'PER', 'score': 0.921, [121 chars]29}]] != [{'entity_group': 'PER', 'score': ANY(floa[129 chars] 29}]
E
E First differing element 0:
E [{'entity_group': 'PER', 'score': 0.921, [120 chars] 29}]
E {'entity_group': 'PER', 'score': ANY(floa[38 chars]: 11}
E
E Second list contains 1 additional elements.
E First extra element 1:
E {'entity_group': 'LOC', 'score': ANY(float), 'word': 'New York', 'start': 21, 'end': 29}
E
E - [[{'end': 11,
E ? -
E
E + [{'end': 11,
E - 'entity_group': 'PER',
E ? -
E
E + 'entity_group': 'PER',
E - 'score': 0.921,
E + 'score': ANY(float),
E - 'start': 6,
E ? -
E
E + 'start': 6,
E - 'word': 'Sarah'},
E ? -
E
E + 'word': 'Sarah'},
E - {'end': 29,
E ? -
E
E + {'end': 29,
E - 'entity_group': 'LOC',
E ? -
E
E + 'entity_group': 'LOC',
E - 'score': 0.999,
E + 'score': ANY(float),
E - 'start': 21,
E ? -
E
E + 'start': 21,
E - 'word': 'New York'}]]
E ? - -
E
E + 'word': 'New York'}]
tests/pipelines/test_pipelines_token_classification.py:331: AssertionError
```
Second assertion failure:
```bash
E AssertionError: Lists differ: [[{'e[25 chars]re': 0.921, 'word': 'Sarah', 'start': 6, 'end'[255 chars]40}]] != [[{'e[25 chars]re': ANY(float), 'word': 'Sarah', 'start': 6, [277 chars]42}]]
E
E First differing element 1:
E [{'en[24 chars]re': 0.999, 'word': 'Wolfgang', 'start': 11, '[86 chars] 40}]
E [{'en[24 chars]re': ANY(float), 'word': 'Wolfgang', 'start': [98 chars] 42}]
E
E Diff is 738 characters long. Set self.maxDiff to None to see it.
tests/pipelines/test_pipelines_token_classification.py:344: AssertionError
```
After applying the fix, the same command passes successfully:
```bash
======================================================================================= test session starts ========================================================================================
platform linux -- Python 3.10.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /home/shutotakahashi/projects/transformers
configfile: pyproject.toml
plugins: asyncio-1.0.0, anyio-4.9.0, xdist-3.7.0, timeout-2.4.0, rerunfailures-15.1, order-1.3.0, rich-0.2.0, hypothesis-6.135.14
asyncio: mode=strict, asyncio_default_fixture_loop_scope=function, asyncio_default_test_loop_scope=function
collected 21 items / 20 deselected / 1 selected
tests/pipelines/test_pipelines_token_classification.py::TokenClassificationPipelineTests::test_is_split_into_words PASSED
```
No changes were made to the core implementation — only the test code was modified to better reflect the intended behavior.
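The first assertion failure above is a pure nesting difference: when the pipeline receives batched, pre-split input it returns one result list per input sequence, so the expected value has to be nested one level deeper, which appears to be what the test fix adjusts. A minimal illustration with plain lists (no pipeline involved; the structure is the point):

```python
def fake_pipeline(batch):
    # stand-in for a token-classification pipeline called on pre-split input:
    # it returns one list of entity dicts per input sequence
    return [[{"word": word} for word in sequence] for sequence in batch]

result = fake_pipeline([["Hello", "Sarah"]])
print(result)  # → [[{'word': 'Hello'}, {'word': 'Sarah'}]]

# comparing this against a flat [{'word': ...}, ...] fails exactly like the
# "Lists differ: [[...]] != [...]" assertion in the log above
```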
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39079/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39078/comments | https://api.github.com/repos/huggingface/transformers/issues/39078/events | https://github.com/huggingface/transformers/pull/39078 | 3,182,342,862 | PR_kwDOCUB6oc6cZraV | 39,078 | Uninstallling Flash attention from quantization docker | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T10:46:25 | 2025-06-27T11:51:48 | 2025-06-27T11:51:47 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39078",
"html_url": "https://github.com/huggingface/transformers/pull/39078",
"diff_url": "https://github.com/huggingface/transformers/pull/39078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39078.patch",
"merged_at": "2025-06-27T11:51:47"
} | # What does this PR do?
Flash Attention (a dependency of autoawq) is failing with a missing-symbol error, which causes all the other quantization tests to fail. Uninstalling it for now to allow the tests to run.
Failing job: https://github.com/huggingface/transformers/actions/runs/15915442841/job/44892146131
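The change itself should amount to a one-line removal in the quantization docker image; a sketch of the Dockerfile fragment (the exact pip distribution name, `flash-attn`, is an assumption here, check the diff for the authoritative spelling):

```Dockerfile
# flash-attn is pulled in as a dependency of autoawq; drop it so the
# remaining quantization tests can run despite its broken build
RUN python3 -m pip uninstall -y flash-attn
```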
New docker job : https://github.com/huggingface/transformers/actions/runs/15924428551/job/44918352936#step:5:3812 | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39078/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39077/comments | https://api.github.com/repos/huggingface/transformers/issues/39077/events | https://github.com/huggingface/transformers/issues/39077 | 3,182,230,541 | I_kwDOCUB6oc69rPwN | 39,077 | ImportError: cannot import name 'TorchTensorParallelPlugin' from 'accelerate.utils' | {
"login": "ibillxia",
"id": 2110361,
"node_id": "MDQ6VXNlcjIxMTAzNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2110361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibillxia",
"html_url": "https://github.com/ibillxia",
"followers_url": "https://api.github.com/users/ibillxia/followers",
"following_url": "https://api.github.com/users/ibillxia/following{/other_user}",
"gists_url": "https://api.github.com/users/ibillxia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibillxia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibillxia/subscriptions",
"organizations_url": "https://api.github.com/users/ibillxia/orgs",
"repos_url": "https://api.github.com/users/ibillxia/repos",
"events_url": "https://api.github.com/users/ibillxia/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibillxia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-27T10:07:33 | 2025-07-07T23:49:11 | 2025-06-27T15:28:07 | NONE | null | null | null | null | ### System Info
Environment info:
- Python 3.11.9
- pip 25.1.1
- Successfully installed hf-xet-1.1.5 huggingface-hub-0.33.1 joblib-1.5.1 opencv-python-4.11.0.86 retrying-1.4.0 scikit-learn-1.7.0 sentence-transformers-4.1.0 threadpoolctl-3.6.0 tokenizers-0.21.2 `transformers-4.53.0`
error log:
```
Traceback (most recent call last):
File "test.py", line 13, in <module>
from sentence_transformers import SentenceTransformer
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/sentence_transformers/__init__.py", line 14, in <module>
from sentence_transformers.cross_encoder import (
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/sentence_transformers/cross_encoder/__init__.py", line 5, in <module>
from .trainer import CrossEncoderTrainer
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/sentence_transformers/cross_encoder/trainer.py", line 22, in <module>
from sentence_transformers.trainer import SentenceTransformerTrainer
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/sentence_transformers/trainer.py", line 14, in <module>
from transformers import EvalPrediction, PreTrainedTokenizerBase, Trainer, TrainerCallback
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 2154, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 2184, in _get_module
raise e
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 2182, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/transformers/trainer.py", line 229, in <module>
from accelerate.utils import (
ImportError: cannot import name 'TorchTensorParallelPlugin' from 'accelerate.utils' (/data/miniforge3/envs/env-3.11.9/lib/python3.11/site-packages/accelerate/utils/__init__.py)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Only `transformers-4.53.0` fails.
When downgrading to `transformers-4.51.0`, there are no errors.
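Since the failure is a missing symbol in `accelerate.utils`, upgrading `accelerate` alongside `transformers` 4.53 (or pinning `transformers` back to 4.51) is the usual way out. A small environment check can make the mismatch explicit before the opaque import-time crash; this is a hedged sketch, and the exact accelerate release that introduced `TorchTensorParallelPlugin` is not pinned here:

```python
import importlib

def accelerate_has_tp_plugin() -> bool:
    """Return True if the installed accelerate exposes TorchTensorParallelPlugin,
    the symbol that transformers 4.53's trainer imports at module load time."""
    try:
        utils = importlib.import_module("accelerate.utils")
    except ImportError:
        # accelerate is not installed at all
        return False
    return hasattr(utils, "TorchTensorParallelPlugin")

print(accelerate_has_tp_plugin())
```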
### Expected behavior
`from sentence_transformers import SentenceTransformer`
This import should complete without errors. | {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39077/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39076/comments | https://api.github.com/repos/huggingface/transformers/issues/39076/events | https://github.com/huggingface/transformers/issues/39076 | 3,182,189,090 | I_kwDOCUB6oc69rFoi | 39,076 | Loading audio in video from video URLs fail with chat template | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-27T09:54:33 | 2025-08-15T08:03:35 | 2025-08-15T08:03:35 | CONTRIBUTOR | null | null | null | null | ### System Info
4.54.0.dev0 but also older versions
### Who can help?
@zucchini-nlp [this line](https://github.com/huggingface/transformers/blob/a52478253bbe522a420e88ea3940d4d98a935300/src/transformers/audio_utils.py#L60) in audio utils tries to load the audio track of the video; it technically expects a URL to a wav or mp4 file, but it fails because it tries to open the video file as if it were plain audio, hence the error. [In this line](https://github.com/huggingface/transformers/blob/a52478253bbe522a420e88ea3940d4d98a935300/src/transformers/processing_utils.py#L1597) the video filename (a URL here) is passed directly to `load_audio`, so the issue is fundamentally with chat templates and not specific to this model. Loading from a path works ([here](https://colab.research.google.com/drive/10u49NTDod48kPc7bf-6YCHAdhRqdoZSL?usp=sharing)). I can take a stab at it, but I want to confirm whether this is in any way intended; the same code exists in the [Qwen-Omni model docs](https://huggingface.co/docs/transformers/main/model_doc/qwen2_5_omni), but with loading from a path.
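The dispatch that goes wrong can be sketched in isolation: `load_audio` branches on the URL scheme and hands the fetched bytes straight to a soundfile-style parser, which rejects mp4 containers, while a local path can fall through to a decoder-backed loader. A stdlib-only sketch of that branch (function name illustrative; the fix direction of downloading to a temp file first is an assumption, not the confirmed plan):

```python
def is_remote(src: str) -> bool:
    # mirrors the scheme check in transformers' audio_utils.load_audio
    return src.startswith(("http://", "https://"))

# a video URL takes the remote branch, so its raw bytes hit a parser that
# cannot read mp4 containers ("Format not recognised"); a local mp4 path
# instead reaches a fallback that can decode it. One possible fix: download
# the remote file to a temporary path, then reuse the local-path branch.
print(is_remote("https://example.com/clip.mp4"))  # → True
print(is_remote("/tmp/clip.mp4"))                 # → False
```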
here's the trace:
```
/usr/local/lib/python3.11/dist-packages/transformers/models/qwen2_5_omni/processing_qwen2_5_omni.py:347: FutureWarning: `video_fps` is deprecated and will be removed in version 4.58 for `Qwen2_5OmniProcessor.apply_chat_template`. Use `fps` instead.
return super().apply_chat_template(conversations, chat_template, **kwargs)
---------------------------------------------------------------------------
LibsndfileError Traceback (most recent call last)
[/tmp/ipython-input-2-2033772580.py](https://localhost:8080/#) in <cell line: 0>()
15 ]
16
---> 17 inputs = processor.apply_chat_template(
18 conversations,
19 load_audio_from_video=True,
8 frames
[/usr/local/lib/python3.11/dist-packages/transformers/models/qwen2_5_omni/processing_qwen2_5_omni.py](https://localhost:8080/#) in apply_chat_template(self, conversations, chat_template, **kwargs)
345 + "Audio output mode only works when using default system prompt 'You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.'"
346 )
--> 347 return super().apply_chat_template(conversations, chat_template, **kwargs)
348
349 @property
[/usr/local/lib/python3.11/dist-packages/transformers/utils/deprecation.py](https://localhost:8080/#) in wrapped_func(*args, **kwargs)
170 warnings.warn(message, FutureWarning, stacklevel=2)
171
--> 172 return func(*args, **kwargs)
173
174 return wrapped_func
[/usr/local/lib/python3.11/dist-packages/transformers/processing_utils.py](https://localhost:8080/#) in apply_chat_template(self, conversation, chat_template, **kwargs)
1595 else:
1596 for fname in video_fnames:
-> 1597 batch_audios.append(load_audio(fname, sampling_rate=mm_load_kwargs["sampling_rate"]))
1598
1599 for fname in video_fnames:
[/usr/local/lib/python3.11/dist-packages/transformers/audio_utils.py](https://localhost:8080/#) in load_audio(audio, sampling_rate, timeout)
58 # Load audio from URL (e.g https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/translate_to_chinese.wav)
59 if audio.startswith("http://") or audio.startswith("https://"):
---> 60 audio = librosa.load(BytesIO(requests.get(audio, timeout=timeout).content), sr=sampling_rate)[0]
61 elif os.path.isfile(audio):
62 audio = librosa.load(audio, sr=sampling_rate)[0]
[/usr/local/lib/python3.11/dist-packages/librosa/core/audio.py](https://localhost:8080/#) in load(path, sr, mono, offset, duration, dtype, res_type)
184 y, sr_native = __audioread_load(path, offset, duration, dtype)
185 else:
--> 186 raise exc
187
188 # Final cleanup for dtype and contiguity
[/usr/local/lib/python3.11/dist-packages/librosa/core/audio.py](https://localhost:8080/#) in load(path, sr, mono, offset, duration, dtype, res_type)
174 # Otherwise try soundfile first, and then fall back if necessary
175 try:
--> 176 y, sr_native = __soundfile_load(path, offset, duration, dtype)
177
178 except sf.SoundFileRuntimeError as exc:
[/usr/local/lib/python3.11/dist-packages/librosa/core/audio.py](https://localhost:8080/#) in __soundfile_load(path, offset, duration, dtype)
207 else:
208 # Otherwise, create the soundfile object
--> 209 context = sf.SoundFile(path)
210
211 with context as sf_desc:
[/usr/local/lib/python3.11/dist-packages/soundfile.py](https://localhost:8080/#) in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd, compression_level, bitrate_mode)
688 self._info = _create_info_struct(file, mode, samplerate, channels,
689 format, subtype, endian)
--> 690 self._file = self._open(file, mode_int, closefd)
691 if set(mode).issuperset('r+') and self.seekable():
692 # Move write position to 0 (like in Python file objects)
[/usr/local/lib/python3.11/dist-packages/soundfile.py](https://localhost:8080/#) in _open(self, file, mode_int, closefd)
1263 # get the actual error code
1264 err = _snd.sf_error(file_ptr)
-> 1265 raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
1266 if mode_int == _snd.SFM_WRITE:
1267 # Due to a bug in libsndfile version <= 1.0.25, frames != 0
LibsndfileError: Error opening <_io.BytesIO object at 0x7b13bd3af600>: Format not recognised.
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here's the link to notebook: https://colab.research.google.com/drive/174ZQ1brFQxMwGZsVpQQLPYGhACuKOYEH?usp=sharing
### Expected behavior
It should work the same as loading from path | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39076/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39075/comments | https://api.github.com/repos/huggingface/transformers/issues/39075/events | https://github.com/huggingface/transformers/issues/39075 | 3,182,133,460 | I_kwDOCUB6oc69q4DU | 39,075 | facebook/dinov2-with-registers-giant does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack. | {
"login": "MengHao666",
"id": 47114466,
"node_id": "MDQ6VXNlcjQ3MTE0NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/47114466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MengHao666",
"html_url": "https://github.com/MengHao666",
"followers_url": "https://api.github.com/users/MengHao666/followers",
"following_url": "https://api.github.com/users/MengHao666/following{/other_user}",
"gists_url": "https://api.github.com/users/MengHao666/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MengHao666/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MengHao666/subscriptions",
"organizations_url": "https://api.github.com/users/MengHao666/orgs",
"repos_url": "https://api.github.com/users/MengHao666/repos",
"events_url": "https://api.github.com/users/MengHao666/events{/privacy}",
"received_events_url": "https://api.github.com/users/MengHao666/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-27T09:39:03 | 2025-08-05T08:02:47 | 2025-08-05T08:02:47 | NONE | null | null | null | null | ### System Info
linux, latest transformers version: 4.53.0
### Who can help?
```
Traceback (most recent call last):
  File "/home/momiao.mh/Projects/debug.py", line 47, in <module>
    model = AutoModel.from_pretrained(model_name)
  File "/home/momiao.mh/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
    return model_class.from_pretrained(
  File "/home/momiao.mh/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 311, in _wrapper
    return func(*args, **kwargs)
  File "/home/momiao.mh/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4674, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
  File "/home/momiao.mh/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1243, in _get_resolved_checkpoint_files
    raise OSError(
OSError: facebook/dinov2-with-registers-giant does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
print(image.size)

model_name = 'facebook/dinov2-with-registers-giant'
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
print(last_hidden_states.shape)
```
### Expected behavior
none | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39075/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39074/comments | https://api.github.com/repos/huggingface/transformers/issues/39074/events | https://github.com/huggingface/transformers/pull/39074 | 3,182,057,503 | PR_kwDOCUB6oc6cYuQ7 | 39,074 | [Whisper] fix shape mismatch in tests | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T09:15:21 | 2025-06-27T09:28:20 | 2025-06-27T09:27:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39074",
"html_url": "https://github.com/huggingface/transformers/pull/39074",
"diff_url": "https://github.com/huggingface/transformers/pull/39074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39074.patch",
"merged_at": "2025-06-27T09:27:43"
} | # What does this PR do?
The switch to `torch.testing.assert_close` surfaced shape mismatches; forgot to update 😅
| {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39074/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39073/comments | https://api.github.com/repos/huggingface/transformers/issues/39073/events | https://github.com/huggingface/transformers/issues/39073 | 3,182,033,912 | I_kwDOCUB6oc69qfv4 | 39,073 | Inefficient default GELU implementation in GPT2 | {
"login": "null-pointer-access",
"id": 210762976,
"node_id": "U_kgDODI_84A",
"avatar_url": "https://avatars.githubusercontent.com/u/210762976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/null-pointer-access",
"html_url": "https://github.com/null-pointer-access",
"followers_url": "https://api.github.com/users/null-pointer-access/followers",
"following_url": "https://api.github.com/users/null-pointer-access/following{/other_user}",
"gists_url": "https://api.github.com/users/null-pointer-access/gists{/gist_id}",
"starred_url": "https://api.github.com/users/null-pointer-access/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/null-pointer-access/subscriptions",
"organizations_url": "https://api.github.com/users/null-pointer-access/orgs",
"repos_url": "https://api.github.com/users/null-pointer-access/repos",
"events_url": "https://api.github.com/users/null-pointer-access/events{/privacy}",
"received_events_url": "https://api.github.com/users/null-pointer-access/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T09:07:39 | 2025-08-12T03:35:13 | 2025-08-05T08:02:48 | CONTRIBUTOR | null | null | null | null | While profiling the HuggingFace GPT2 model, I found that the default GELU backend used is NewGELUActivation, which is inefficient in most cases. Instead of using a fused CUDA kernel, NewGELUActivation executes multiple separate PyTorch-level operators, leading to unnecessary kernel launches and memory overhead.
```python
# activations.py:L46
class NewGELUActivation(nn.Module):
def forward(self, input: Tensor) -> Tensor:
return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
```
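For context, the tanh formula above approximates the exact (erf-based) GELU. A small stdlib-only sketch (illustrative, not transformers code) shows the two agree closely, which is why a fused kernel computing either form is a plausible drop-in candidate:

```python
import math

def new_gelu(x: float) -> float:
    # tanh approximation used by NewGELUActivation (per the snippet above)
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def exact_gelu(x: float) -> float:
    # exact GELU via the Gaussian CDF, what a fused kernel can compute directly
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"{x:+.1f}: approx={new_gelu(x):.6f} exact={exact_gelu(x):.6f}")
```

The numerical gap is small across typical activation ranges; the cost difference comes from the approximation being expressed as many separate PyTorch ops rather than from the math itself.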
Is there a reason why NewGELUActivation is still used as the default for GPT2, rather than switching to nn.functional.gelu or another fused alternative?
I’d be happy to share profiler traces or help test a patch if helpful. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39073/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39072/comments | https://api.github.com/repos/huggingface/transformers/issues/39072/events | https://github.com/huggingface/transformers/issues/39072 | 3,182,016,610 | I_kwDOCUB6oc69qbhi | 39,072 | Inefficient memory resharding in attention layer | {
"login": "null-pointer-access",
"id": 210762976,
"node_id": "U_kgDODI_84A",
"avatar_url": "https://avatars.githubusercontent.com/u/210762976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/null-pointer-access",
"html_url": "https://github.com/null-pointer-access",
"followers_url": "https://api.github.com/users/null-pointer-access/followers",
"following_url": "https://api.github.com/users/null-pointer-access/following{/other_user}",
"gists_url": "https://api.github.com/users/null-pointer-access/gists{/gist_id}",
"starred_url": "https://api.github.com/users/null-pointer-access/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/null-pointer-access/subscriptions",
"organizations_url": "https://api.github.com/users/null-pointer-access/orgs",
"repos_url": "https://api.github.com/users/null-pointer-access/repos",
"events_url": "https://api.github.com/users/null-pointer-access/events{/privacy}",
"received_events_url": "https://api.github.com/users/null-pointer-access/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T09:02:09 | 2025-08-05T08:02:50 | 2025-08-05T08:02:50 | CONTRIBUTOR | null | null | null | null | While analyzing the self-attention implementation in HuggingFace Transformers and comparing it to vLLM, I noticed that after the KQV projection step, the model performs a memory resharding process involving three separate Tensor::contiguous() calls to align with the memory layout of the attention kernel. This introduces significant overhead especially under small batch sizes, where kernel launch and memory movement become relatively more expensive.
In contrast, replacing the multiple contiguous() calls with a single fused operator could significantly reduce latency and improve runtime efficiency. This behavior is easy to reproduce by inspecting the self-attention forward pass on a small batch size (e.g., batch=1, seqlen=128, GPT2 model).
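As a hedged illustration (assuming a local PyTorch install; the shapes are arbitrary GPT2-small-like values, not taken from the library), the `view(...).transpose(1, 2)` pattern produces non-contiguous views, so any later `.contiguous()` call pays for a full copy:

```python
import torch

# Arbitrary shapes: batch=1, seq=128, hidden=768 split into 12 heads of 64
hidden = torch.randn(1, 128, 768)
q = hidden.view(1, 128, 12, 64).transpose(1, 2)  # (batch, heads, seq, head_dim)

# transpose only permutes strides; the underlying layout is unchanged
print(q.is_contiguous())  # False

q_c = q.contiguous()  # materializes a copy into a new contiguous buffer
print(q_c.data_ptr() != q.data_ptr())  # True
```

Doing this three times (for Q, K, and V) means three separate copies per layer, which a single fused reshard could avoid.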
```python
# models/gpt2/modeling_gpt2.py:L298
query_states = query_states.view(shape_q).transpose(1, 2)
key_states = key_states.view(shape_kv).transpose(1, 2)
value_states = value_states.view(shape_kv).transpose(1, 2)
``` | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39072/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39071/comments | https://api.github.com/repos/huggingface/transformers/issues/39071/events | https://github.com/huggingface/transformers/pull/39071 | 3,181,695,667 | PR_kwDOCUB6oc6cXga4 | 39,071 | fixed typo for docstring in prepare_inputs method | {
"login": "JINO-ROHIT",
"id": 63234112,
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JINO-ROHIT",
"html_url": "https://github.com/JINO-ROHIT",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T07:26:08 | 2025-06-27T13:58:32 | 2025-06-27T13:57:56 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39071",
"html_url": "https://github.com/huggingface/transformers/pull/39071",
"diff_url": "https://github.com/huggingface/transformers/pull/39071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39071.patch",
"merged_at": "2025-06-27T13:57:56"
} | fixed typo for docstring in prepare_inputs method
- [x] This PR fixes a typo or improves the docs.
## Who can review?
Documentation: @stevhliu
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39071/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39070/comments | https://api.github.com/repos/huggingface/transformers/issues/39070/events | https://github.com/huggingface/transformers/pull/39070 | 3,181,582,826 | PR_kwDOCUB6oc6cXKNb | 39,070 | fix caching_allocator_warmup with tie weights | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T06:45:57 | 2025-07-01T09:32:20 | 2025-07-01T09:32:20 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39070",
"html_url": "https://github.com/huggingface/transformers/pull/39070",
"diff_url": "https://github.com/huggingface/transformers/pull/39070.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39070.patch",
"merged_at": "2025-07-01T09:32:20"
} | ### Motivation
The test `tests/quantization/finegrained_fp8/test_fp8.py::FP8QuantizerTest::test_quantized_model_multi_accelerator` failed with
```
> self.assertTrue(set(quantized_model.hf_device_map.values()) == {0, 1})tests/quantization/finegrained_fp8/test_fp8.py:206:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.11/dist-packages/transformers/testing_utils.py:609: in wrapper
return test_case(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
E AssertionError: False is not true
```
### RootCause
The `caching_allocator_warmup` didn't consider tie weights, so it reserved more memory than the model actually needed.
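A minimal, stdlib-only sketch of the idea behind the fix (hypothetical helper and data, not the actual transformers code): deduplicate parameters that share the same underlying storage before summing the bytes to reserve.

```python
# Hypothetical sketch: with tied weights, two parameter names point at the
# same underlying storage, so the warmup should reserve that storage once.

def warmup_bytes(named_params: dict) -> int:
    seen = set()
    total = 0
    for tensor in named_params.values():
        key = id(tensor)  # stand-in for a real tensor's data pointer
        if key in seen:   # already counted: tied to an earlier parameter
            continue
        seen.add(key)
        total += tensor["nbytes"]
    return total

embedding = {"nbytes": 1000}
params = {
    "model.embed_tokens.weight": embedding,
    "lm_head.weight": embedding,              # tied to the embedding
    "model.layers.0.mlp.weight": {"nbytes": 500},
}

print(warmup_bytes(params))  # 1500, not 2500: the tied weight counts once
```

Without the deduplication step, the warmup would reserve 2500 bytes here, over-allocating on one device and skewing the resulting `hf_device_map`.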
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39070/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39069/comments | https://api.github.com/repos/huggingface/transformers/issues/39069/events | https://github.com/huggingface/transformers/pull/39069 | 3,181,127,711 | PR_kwDOCUB6oc6cVtHj | 39,069 | fix a bunch of XPU UT failures on stock PyTorch 2.7 and 2.8 | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-27T02:37:12 | 2025-06-29T22:52:49 | 2025-06-27T12:01:54 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39069",
"html_url": "https://github.com/huggingface/transformers/pull/39069",
"diff_url": "https://github.com/huggingface/transformers/pull/39069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39069.patch",
"merged_at": "2025-06-27T12:01:54"
} | @ydshieh, pls help review, thx very much. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39069/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39068/comments | https://api.github.com/repos/huggingface/transformers/issues/39068/events | https://github.com/huggingface/transformers/pull/39068 | 3,180,088,908 | PR_kwDOCUB6oc6cSRLM | 39,068 | fix `Gemma3nProcessorTest` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T19:03:27 | 2025-06-27T10:28:12 | 2025-06-27T10:28:11 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39068",
"html_url": "https://github.com/huggingface/transformers/pull/39068",
"diff_url": "https://github.com/huggingface/transformers/pull/39068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39068.patch",
"merged_at": "2025-06-27T10:28:11"
} | # What does this PR do?
Use [a new repository](https://huggingface.co/hf-internal-testing/namespace-google-repo_name-gemma-3n-E4B-it/tree/main) to run processor tests on CircleCI.
or if @RyanMullins could check these maybe?
There are still 2 failures, could you take a look at these 2 please?
> FAILED tests/models/gemma3n/test_processing_gemma3n.py::Gemma3nProcessorTest::test_save_load_pretrained_additional_features - AssertionError: '{\n [107 chars] "disable_grouping": null,\n "dither": 5.0,\[1044 chars]n}\n' != '{\n [107 chars] "dither": 5.0,\n "do_center_crop": null,\n [1015 chars]n}\n'
> FAILED tests/models/gemma3n/test_processing_gemma3n.py::Gemma3nProcessorTest::test_save_load_pretrained_default - AssertionError: '{\n [107 chars] "disable_grouping": null,\n "dither": 0.0,\[1044 chars]n}\n' != '{\n [107 chars] "dither": 0.0,\n "do_center_crop": null,\n [1015 chars]n}\n'
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39068/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39067/comments | https://api.github.com/repos/huggingface/transformers/issues/39067/events | https://github.com/huggingface/transformers/issues/39067 | 3,179,826,076 | I_kwDOCUB6oc69iEuc | 39,067 | Output logits differ significantly for different attn_implementations on image inputs | {
"login": "zzigak",
"id": 67168743,
"node_id": "MDQ6VXNlcjY3MTY4NzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/67168743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zzigak",
"html_url": "https://github.com/zzigak",
"followers_url": "https://api.github.com/users/zzigak/followers",
"following_url": "https://api.github.com/users/zzigak/following{/other_user}",
"gists_url": "https://api.github.com/users/zzigak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zzigak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zzigak/subscriptions",
"organizations_url": "https://api.github.com/users/zzigak/orgs",
"repos_url": "https://api.github.com/users/zzigak/repos",
"events_url": "https://api.github.com/users/zzigak/events{/privacy}",
"received_events_url": "https://api.github.com/users/zzigak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-26T17:35:53 | 2025-08-30T08:03:21 | 2025-08-30T08:03:21 | NONE | null | null | null | null | ### System Info
Hi everyone,
I've been comparing the fa2, sdpa, and eager attention implementations. My understanding is that these should produce very similar logits.
For textual inputs, the relative mean difference is ~1%. For **image** inputs, the relative mean difference is 25% and the relative max difference is 75%. These are very large. I tried this on:
Qwen 2.5 VL 7B and 3B, and Qwen 2 VL 7B. **See the script below.**
Any thoughts?
If you see an error in my script causing this issue, all the better! Thanks!
My env packages:
```
Name Version Build Channel
───────────────────────────────────────────────────────────────────────────
GitPython 3.1.44 pypi_0 pypi
Jinja2 3.1.4 pypi_0 pypi
MarkupSafe 2.1.5 pypi_0 pypi
PyYAML 6.0.2 pypi_0 pypi
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
accelerate 1.8.1 pypi_0 pypi
aiohappyeyeballs 2.6.1 pypi_0 pypi
aiohttp 3.11.18 pypi_0 pypi
aiosignal 1.3.2 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
asttokens 3.0.0 pyhd8ed1ab_1 conda-forge
attrs 25.3.0 pypi_0 pypi
av 14.3.0 pypi_0 pypi
bitsandbytes 0.45.5 pypi_0 pypi
bzip2 1.0.8 h4bc722e_7 conda-forge
ca-certificates 2025.4.26 hbd8a1cb_0 conda-forge
certifi 2025.4.26 pypi_0 pypi
charset-normalizer 3.4.1 pypi_0 pypi
click 8.1.8 pypi_0 pypi
comm 0.2.2 pyhd8ed1ab_1 conda-forge
datasets 3.5.1 pypi_0 pypi
debugpy 1.8.14 py311hfdbb021_0 conda-forge
decorator 5.2.1 pyhd8ed1ab_0 conda-forge
decord 0.6.0 pypi_0 pypi
deepspeed 0.16.7 pypi_0 pypi
dill 0.3.8 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
einops 0.8.1 pypi_0 pypi
exceptiongroup 1.2.2 pyhd8ed1ab_1 conda-forge
executing 2.2.0 pyhd8ed1ab_0 conda-forge
filelock 3.13.1 pypi_0 pypi
frozenlist 1.6.0 pypi_0 pypi
fsspec 2024.6.1 pypi_0 pypi
gitdb 4.0.12 pypi_0 pypi
hjson 3.1.0 pypi_0 pypi
huggingface-hub 0.30.2 pypi_0 pypi
idna 3.10 pypi_0 pypi
importlib-metadata 8.6.1 pyha770c72_0 conda-forge
ipykernel 6.29.5 pyh3099207_0 conda-forge
ipython 9.2.0 pyhfb0248b_0 conda-forge
ipython_pygments_lexers 1.1.1 pyhd8ed1ab_0 conda-forge
ipywidgets 8.1.6 pypi_0 pypi
jedi 0.19.2 pyhd8ed1ab_1 conda-forge
jupyter_client 8.6.3 pyhd8ed1ab_1 conda-forge
jupyter_core 5.7.2 pyh31011fe_1 conda-forge
jupyterlab_widgets 3.0.14 pypi_0 pypi
keyutils 1.6.1 h166bdaf_0 conda-forge
krb5 1.21.3 h659f571_0 conda-forge
ld_impl_linux-64 2.43 h712a8e2_4 conda-forge
libedit 3.1.20250104 pl5321h7949ede_0 conda-forge
libexpat 2.7.0 h5888daf_0 conda-forge
libffi 3.4.6 h2dba641_1 conda-forge
libgcc 14.2.0 h767d61c_2 conda-forge
libgcc-ng 14.2.0 h69a702a_2 conda-forge
libgomp 14.2.0 h767d61c_2 conda-forge
liblzma 5.8.1 hb9d3cd8_0 conda-forge
libnsl 2.0.1 hd590300_0 conda-forge
libsodium 1.0.20 h4ab18f5_0 conda-forge
libsqlite 3.49.1 hee588c1_2 conda-forge
libstdcxx 14.2.0 h8f9b012_2 conda-forge
libstdcxx-ng 14.2.0 h4852527_2 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libxcrypt 4.4.36 hd590300_1 conda-forge
libzlib 1.3.1 hb9d3cd8_2 conda-forge
liger_kernel 0.5.8 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
matplotlib-inline 0.1.7 pyhd8ed1ab_1 conda-forge
mdurl 0.1.2 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
msgpack 1.1.0 pypi_0 pypi
multidict 6.4.3 pypi_0 pypi
multiprocess 0.70.16 pypi_0 pypi
ncurses 6.5 h2d0b736_3 conda-forge
nest-asyncio 1.6.0 pyhd8ed1ab_1 conda-forge
networkx 3.3 pypi_0 pypi
ninja 1.11.1.4 pypi_0 pypi
numpy 2.1.2 pypi_0 pypi
nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
nvidia-ml-py 12.570.86 pypi_0 pypi
nvidia-nccl-cu12 2.21.5 pypi_0 pypi
nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
opencv-python 4.11.0.86 pypi_0 pypi
openssl 3.5.0 h7b32b05_0 conda-forge
packaging 25.0 pyh29332c3_1 conda-forge
pandas 2.2.3 pypi_0 pypi
parso 0.8.4 pyhd8ed1ab_1 conda-forge
peft 0.15.2 pypi_0 pypi
pexpect 4.9.0 pyhd8ed1ab_1 conda-forge
pickleshare 0.7.5 pyhd8ed1ab_1004 conda-forge
pillow 11.0.0 pypi_0 pypi
pip 25.1.1 pyh8b19718_0 conda-forge
platformdirs 4.3.7 pyh29332c3_0 conda-forge
prompt-toolkit 3.0.51 pyha770c72_0 conda-forge
propcache 0.3.1 pypi_0 pypi
protobuf 6.30.2 pypi_0 pypi
psutil 7.0.0 py311h9ecbd09_0 conda-forge
ptyprocess 0.7.0 pyhd8ed1ab_1 conda-forge
pure_eval 0.2.3 pyhd8ed1ab_1 conda-forge
py-cpuinfo 9.0.0 pypi_0 pypi
pyarrow 20.0.0 pypi_0 pypi
pydantic 2.11.3 pypi_0 pypi
pydantic_core 2.33.1 pypi_0 pypi
pygments 2.19.1 pyhd8ed1ab_0 conda-forge
python 3.11.12 h9e4cc4f_0_cpython conda-forge
python-dateutil 2.9.0.post0 pyhff2d567_1 conda-forge
python_abi 3.11 7_cp311 conda-forge
pytz 2025.2 pypi_0 pypi
pyzmq 26.4.0 py311h7deb3e3_0 conda-forge
qwen-vl-utils 0.0.11 pypi_0 pypi
readline 8.2 h8c095d6_2 conda-forge
regex 2024.11.6 pypi_0 pypi
requests 2.32.3 pypi_0 pypi
rich 14.0.0 pypi_0 pypi
safetensors 0.5.3 pypi_0 pypi
sentry-sdk 2.27.0 pypi_0 pypi
setproctitle 1.3.5 pypi_0 pypi
setuptools 79.0.1 pyhff2d567_0 conda-forge
six 1.17.0 pyhd8ed1ab_0 conda-forge
smmap 5.0.2 pypi_0 pypi
stack_data 0.6.3 pyhd8ed1ab_1 conda-forge
sympy 1.13.1 pypi_0 pypi
tensorboardX 2.6.2.2 pypi_0 pypi
tk 8.6.13 noxft_h4845f30_101 conda-forge
torch 2.6.0 pypi_0 pypi
torchaudio 2.6.0 pypi_0 pypi
torchvision 0.21.0 pypi_0 pypi
tornado 6.4.2 py311h9ecbd09_0 conda-forge
tqdm 4.67.1 pypi_0 pypi
traitlets 5.14.3 pyhd8ed1ab_1 conda-forge
transformers 4.53.0 pypi_0 pypi
triton 3.2.0 pypi_0 pypi
trl 0.17.0 pypi_0 pypi
typing-inspection 0.4.0 pypi_0 pypi
typing_extensions 4.13.2 pyh29332c3_0 conda-forge
tzdata 2025b h78e105d_0 conda-forge
ujson 5.10.0 pypi_0 pypi
urllib3 2.4.0 pypi_0 pypi
wandb 0.19.10 pypi_0 pypi
wcwidth 0.2.13 pyhd8ed1ab_1 conda-forge
wheel 0.45.1 pyhd8ed1ab_1 conda-forge
widgetsnbextension 4.0.14 pypi_0 pypi
xxhash 3.5.0 pypi_0 pypi
yarl 1.20.0 pypi_0 pypi
zeromq 4.3.5 h3b0a872_7 conda-forge
zipp 3.21.0 pyhd8ed1ab_1 conda-forge
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the script below with text / image input (slightly modified from the tutorial https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). With the newest version of the transformers library I was getting an error saying "is_causal is not a property of the model", so I had to downgrade. I see the same problem when installing from source.
```py
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

torch.manual_seed(42)

processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

messages = [{
    "role": "user",
    "content": [
        # {"type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
        {"type": "text", "text": "Describe this image."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)

model_flash = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map={"": 0},
)
model_flash.eval()

model_sdpa = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    # attn_implementation="eager",
    device_map={"": 1},
)
model_sdpa.eval()

# === Run FlashAttention2 ===
with torch.no_grad():
    inputs_flash = {k: v.to("cuda:0") for k, v in inputs.items()}
    out_flash = model_flash(**inputs_flash).logits.cpu()

# === Run SDPA ===
with torch.no_grad():
    inputs_sdpa = {k: v.to("cuda:1") for k, v in inputs.items()}
    out_sdpa = model_sdpa(**inputs_sdpa).logits.cpu()

diff = (out_flash - out_sdpa).abs()
print(f"Max abs diff: {diff.max().item():.6f}")
print(f"Mean abs diff: {diff.mean().item():.6f}")
print(f"Relative max diff: {(diff.max() / out_flash.abs().max()).item():.6f}")
print(f"Relative mean diff: {(diff.mean() / out_flash.abs().mean()).item():.6f}")
```
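For reference, the four metrics printed at the end of the script can be factored into a small helper. This is a minimal pure-Python sketch (a hypothetical `diff_stats` helper, not part of the original script) operating on flat lists of logits instead of torch tensors, so the definitions of the absolute and relative differences are explicit:

```python
# Hypothetical helper mirroring the prints in the script above:
# absolute and relative max/mean differences between two logit sequences.
def diff_stats(a, b):
    diffs = [abs(x - y) for x, y in zip(a, b)]
    max_abs = max(diffs)
    mean_abs = sum(diffs) / len(diffs)
    abs_a = [abs(x) for x in a]
    return {
        "max_abs": max_abs,
        "mean_abs": mean_abs,
        # Relative metrics are normalized by the first input's magnitude,
        # matching the script's division by out_flash.abs().max()/mean().
        "rel_max": max_abs / max(abs_a),
        "rel_mean": mean_abs / (sum(abs_a) / len(abs_a)),
    }

flash = [1.0, -2.0, 3.0]
sdpa = [1.1, -2.0, 2.7]
print(diff_stats(flash, sdpa))
```

Note the relative metrics are ratios of summary statistics (max diff over max magnitude), not a per-element relative error, so outlier logits can dominate them.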
### Expected behavior
Relative error for logits on image inputs should be comparable to that for text inputs, and much lower than 25%. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39067/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39067/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39066/comments | https://api.github.com/repos/huggingface/transformers/issues/39066/events | https://github.com/huggingface/transformers/pull/39066 | 3,179,638,448 | PR_kwDOCUB6oc6cQz-C | 39,066 | Fix many HPU failures in the CI | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T16:16:17 | 2025-07-03T09:17:29 | 2025-07-03T09:17:27 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39066",
"html_url": "https://github.com/huggingface/transformers/pull/39066",
"diff_url": "https://github.com/huggingface/transformers/pull/39066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39066.patch",
"merged_at": "2025-07-03T09:17:27"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39066/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39065/comments | https://api.github.com/repos/huggingface/transformers/issues/39065/events | https://github.com/huggingface/transformers/pull/39065 | 3,179,597,750 | PR_kwDOCUB6oc6cQrJr | 39,065 | Add MambaCache into modeling_mamba and make FalconMamba modular. | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T15:59:55 | 2025-06-26T18:21:19 | 2025-06-26T18:20:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39065",
"html_url": "https://github.com/huggingface/transformers/pull/39065",
"diff_url": "https://github.com/huggingface/transformers/pull/39065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39065.patch",
"merged_at": null
} | This is a sub-PR of #38086. It allows us to reuse the `MambaCache` from `modeling_mamba` in `FalconMamba`, which is now modular. | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39065/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39064/comments | https://api.github.com/repos/huggingface/transformers/issues/39064/events | https://github.com/huggingface/transformers/pull/39064 | 3,179,593,615 | PR_kwDOCUB6oc6cQqQg | 39,064 | Bug/38843 fix pos idx in fp32 parameter error | {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T15:58:15 | 2025-09-22T16:26:29 | 2025-09-22T16:26:29 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39064",
"html_url": "https://github.com/huggingface/transformers/pull/39064",
"diff_url": "https://github.com/huggingface/transformers/pull/39064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39064.patch",
"merged_at": null
} | # What does this PR do?
This PR addresses issue #38843: an error when creating a ModernBert model.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
This PR aims to address the issue that occurs when creating a ModernBert model with flash attention, which raises a type error for the 'pos_idx_in_fp32' parameter.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #38843
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section? Yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Yes
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39064/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39063/comments | https://api.github.com/repos/huggingface/transformers/issues/39063/events | https://github.com/huggingface/transformers/pull/39063 | 3,179,556,775 | PR_kwDOCUB6oc6cQijG | 39,063 | Ignore extra position embeddings weights for ESM | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T15:44:30 | 2025-07-15T11:57:33 | 2025-07-15T11:57:32 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39063",
"html_url": "https://github.com/huggingface/transformers/pull/39063",
"diff_url": "https://github.com/huggingface/transformers/pull/39063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39063.patch",
"merged_at": "2025-07-15T11:57:32"
} | Before #38089, ESM created `self.position_embeddings` even when `config.position_embedding_type == "rotary"`. This weight was not used. The error was fixed in #38089, but most ESM checkpoints still have the unused `self.position_embeddings` weight. This PR adds it to the unexpected-keys ignore list, so users don't get scary warnings for a benign issue. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39063/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39062/comments | https://api.github.com/repos/huggingface/transformers/issues/39062/events | https://github.com/huggingface/transformers/pull/39062 | 3,179,542,267 | PR_kwDOCUB6oc6cQgUK | 39,062 | Add Parakeet | {
"login": "nithinraok",
"id": 19668129,
"node_id": "MDQ6VXNlcjE5NjY4MTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/19668129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nithinraok",
"html_url": "https://github.com/nithinraok",
"followers_url": "https://api.github.com/users/nithinraok/followers",
"following_url": "https://api.github.com/users/nithinraok/following{/other_user}",
"gists_url": "https://api.github.com/users/nithinraok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nithinraok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nithinraok/subscriptions",
"organizations_url": "https://api.github.com/users/nithinraok/orgs",
"repos_url": "https://api.github.com/users/nithinraok/repos",
"events_url": "https://api.github.com/users/nithinraok/events{/privacy}",
"received_events_url": "https://api.github.com/users/nithinraok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-26T15:40:31 | 2025-10-09T06:56:19 | 2025-09-25T13:52:25 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39062",
"html_url": "https://github.com/huggingface/transformers/pull/39062",
"diff_url": "https://github.com/huggingface/transformers/pull/39062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39062.patch",
"merged_at": "2025-09-25T13:52:25"
} | # What does this PR do?
This PR adds support for the FastConformer encoder and ParakeetCTC models within `transformers`, enabling the use of NVIDIA's Parakeet and Canary speech-to-text models.
### FastConformer
- **Architecture**: Linearly scalable Conformer encoder for audio processing
- **Features**: Relative positional encoding, efficient attention, convolutional modules
- **Usage**: Can be used as encoder foundation for various speech tasks
### ParakeetCTC
- **Architecture**: FastConformer encoder + CTC decoder for speech recognition
- **Features**: Complete ASR pipeline with CTC decoding
- **Performance**: State-of-the-art accuracy with computational efficiency
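The CTC decoding step mentioned above can be sketched in isolation. This is an illustrative greedy decoder, not the model's actual implementation; the token ids and the blank id below are assumptions. Greedy CTC decoding collapses consecutive repeated emissions, then drops the blank symbol.

```python
def ctc_greedy_decode(token_ids, blank_id=0):
    """Greedy CTC decode: collapse consecutive repeats, then remove blanks."""
    decoded = []
    previous = None
    for tok in token_ids:
        # Only emit a token when it differs from the previous frame's token
        # (collapse repeats) and is not the CTC blank symbol.
        if tok != previous and tok != blank_id:
            decoded.append(tok)
        previous = tok
    return decoded

# e.g. frame-level argmax ids [0, 7, 7, 0, 3, 3, 3, 0, 7] decode to [7, 3, 7]
```

In practice the CTC head emits per-frame logits and this collapse runs over their argmax before mapping ids back to text.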
Parakeet-TDT support is coming soon. | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39062/reactions",
"total_count": 13,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 11,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39062/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39061/comments | https://api.github.com/repos/huggingface/transformers/issues/39061/events | https://github.com/huggingface/transformers/pull/39061 | 3,179,501,543 | PR_kwDOCUB6oc6cQXlT | 39,061 | Bug/38843 pos idx in fp32 unexpected | {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T15:25:31 | 2025-06-26T15:58:59 | 2025-06-26T15:32:30 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39061",
"html_url": "https://github.com/huggingface/transformers/pull/39061",
"diff_url": "https://github.com/huggingface/transformers/pull/39061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39061.patch",
"merged_at": null
} | # What does this PR do?
This PR addresses issue #38843: an error when creating a ModernBert model.
The type error is raised when creating a ModernBert model with flash attention, which uses `ModernBertUnpaddedRotaryEmbedding(RotaryEmbedding)`.
Fixes #38843
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39061/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39060/comments | https://api.github.com/repos/huggingface/transformers/issues/39060/events | https://github.com/huggingface/transformers/pull/39060 | 3,179,455,204 | PR_kwDOCUB6oc6cQNoM | 39,060 | Make run_object_detection compatible with `datasets` 4.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T15:08:45 | 2025-07-03T09:53:13 | 2025-07-03T09:53:13 | MEMBER | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39060",
"html_url": "https://github.com/huggingface/transformers/pull/39060",
"diff_url": "https://github.com/huggingface/transformers/pull/39060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39060.patch",
"merged_at": null
} | `run_object_detection` isn't compatible with `datasets` 4.0; it fails with this error:
```
examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_object_detection - AttributeError: 'dict' object has no attribute 'feature'
```
This change makes `run_object_detection` compatible with `datasets` 4.0 (out next week). Existing datasets will still work once `datasets` 4.0 is used.
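The compatibility shim can be sketched as a small helper (a hedged sketch, not the actual PR diff; the `inner_feature` name and the exact attribute layout are assumptions): in `datasets` < 4.0 a nested `Sequence` column exposes its element type via `.feature`, while in 4.0 the column may already be a plain dict of features, which is what triggers the `AttributeError` above.

```python
def inner_feature(column_feature):
    """Return the per-element feature of a sequence-like column.

    Works on both datasets < 4.0 (Sequence objects carrying the element type
    in a .feature attribute) and datasets >= 4.0 (plain dict/list of features).
    """
    if hasattr(column_feature, "feature"):
        return column_feature.feature  # datasets < 4.0 Sequence object
    return column_feature  # datasets >= 4.0: already the unwrapped features
```

Accessing the nested features through a helper like this keeps one code path valid across both major versions while `datasets` 4.0 rolls out.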
Keeping this as a draft and we can merge once `datasets` 4.0 is out | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39060/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39059/comments | https://api.github.com/repos/huggingface/transformers/issues/39059/events | https://github.com/huggingface/transformers/pull/39059 | 3,179,404,138 | PR_kwDOCUB6oc6cQCj1 | 39,059 | Gemma 3n | {
"login": "RyanMullins",
"id": 868555,
"node_id": "MDQ6VXNlcjg2ODU1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/868555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMullins",
"html_url": "https://github.com/RyanMullins",
"followers_url": "https://api.github.com/users/RyanMullins/followers",
"following_url": "https://api.github.com/users/RyanMullins/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanMullins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanMullins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanMullins/subscriptions",
"organizations_url": "https://api.github.com/users/RyanMullins/orgs",
"repos_url": "https://api.github.com/users/RyanMullins/repos",
"events_url": "https://api.github.com/users/RyanMullins/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanMullins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T14:51:14 | 2025-06-26T16:05:40 | 2025-06-26T15:55:47 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39059",
"html_url": "https://github.com/huggingface/transformers/pull/39059",
"diff_url": "https://github.com/huggingface/transformers/pull/39059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39059.patch",
"merged_at": "2025-06-26T15:55:47"
} | * initial commit of Gemma 3n scaffold
* Fixing param pass through on Gemma3p5RMSNorm
* Adds Einsum layer to Gemma 3n
* Updating EinsumLayer API
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Adds AltUp to Gemma 3n
* Adding Gemma3p5 overall and text config with vision and audio config placeholders (#3)
* Adding gemma3p5 text configs
* Adding audio config placeholders
* Adding a placeholder for vision configs
* Updating MobileNetVisionConfig, inheriting TimmWrapperConfig
* Updating text configs
* Update src/transformers/models/gemma3p5/modular_gemma3p5.py
* Removing altup configs to accept the suggested configs
* Update src/transformers/models/gemma3p5/modular_gemma3p5.py
* Updating altup config
* Update modular
* Update modular
* Update modular
* Update modular
* Addressing review comments and updating text configs
* Adding a config for activation sparsity
* Updating configs to pass through options to super class init and adjust some name prefixes
* Updating laurel and altup with corrected config values
* Normalizing sub_config initializers
---------
* Updating MLP with activation sparsity (#2)
* Updating DecoderBlock for Gemma 3n (#3)
* Initial Gemma3nTextModel (#4)
NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference.
* Adding KV Cache Sharing
* Adds Einsum layer to Gemma 3n
* Updating EinsumLayer API
* Refactored kv cache sharing in attention
* Adding KVStore for cache sharing
* Update modular
* Update modular
* Update modular
* Update src/transformers/cache_utils.py
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Updating KV Cache Sharing implementation
* Updating the q and k norm definitions in the attention module
* Fixing name error for q,k,v RMS norm to use the right 3n module
* Updating MLP with activation sparsity
* Updating DecoderBlock for Gemma 3.5
* Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code
* Isolating KV Cache logic to relevant components
* Fixing logic error in Gemma3nAttention.forward
* Refactoring caching contributions and fixing kv_store initialization
* Simplifying Configs
* Remove errant self from super init call
* Bug fix in the Attention module - changing self.head_dim to config.head_dim
* Bug fixes in the LaurelBlock and RMS Norm super init call
* removing redundant code from a merge
* Adding per_layer_inputs to TextModel
* Adding preprocess embeddings with altup
* Adds per-layer-to-single output and a host of TODOs
* Integrating altup predict with the model workflow and other minor bug fixes
* Using nn.Embedding temporarily for text model
* It goes forward
* Minor refactor of attention sparsity and RoPE initialization
* Fixing duplicate rope_scaling param bug when loading from pretrained
---------
* Normalizing on altup_num_inputs config option
* regenerating modeling file after syncing to HEAD
* Use torch.std(..., unbiased=False) for activation sparsity (#8)
* Refactoring to a single QVK Norm (#13)
* AltUp: support scale_corrected_output (#14)
* Converts einsums to nn.Linear (#7)
* Converts einsums to nn.Linear
* Removing unused variables
* Aligning SharedKVCache with HybridCache (#11)
* Aligning SharedKVStore with HybridCache
* Remove KVStore. Refactor apply_rotary_pos_emb for sharing
* Addressing review comments
* Supporting split modality embeddings in Gemma3n (#10)
* Adding the Embedder class
* Update modular
* Update modular
* Update modular
* Update modular
* Update modular
* Update modular
* Addressing review comments, adding audio embedding layers, integrating embedder with the remaining architecture, adding a forward method for conditional generation
* Apply suggestions from code review
* Update modular
* Addressing review comments, prop drilling audio and vision configs to the text config
* Removing TODO's that have been addressed
* Simplify Embedder init and add audio embeddings
* Embeddings refactor. Adds Gemma3nAudioEmbedder and Gemma3nVisionEmbedder
* Refactoring vision and audio embeddings into ConditionalGeneration model
---------
* Updating attention mask for Gemma 3.5 (#15)
* xxx_token_index to xxx_token_id
* removing deprecated last_cache_position
* Removing references to SigLIP
* Always init per-layer inputs
* Using torch.finfo().min for epsilon_tensor
* Gemma3nDecoderLayer inherits from Gemma3DecoderLayer. Remove gating lambdas
* fix modular GEMMA3N_INPUTS_DOCSTRING
* Gemma3nAttention inherits from Gemma3Attention
* Modular inheritance fixes
* CausalLM conversion script for 4B model (#16)
* Add Gemma3n Audio Encoder (#6)
* initial commit of Gemma 3.5 scaffold
* Fixing param pass through on Gemma3nRMSNorm
* Adds Einsum layer to Gemma 3.5
* Updating EinsumLayer API
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Adds AltUp to Gemma 3n
* Adding Gemma3n overall and text config with vision and audio config placeholders (#3)
* Adding gemma3n text configs
* Adding audio config placeholders
* Adding a placeholder for vision configs
* Updating MobileNetVisionConfig, inheriting TimmWrapperConfig
* Updating text configs
* Update modular
* Removing altup configs to accept the suggested configs
* Update modular
* Updating altup config
* Update modular
* Update modular
* Update modular
* Update modular
* Addressing review comments and updating text configs
* Adding a config for activation sparsity
* Updating configs to pass through options to super class init and adjust some name prefixes
* Updating laurel and altup with corrected config values
* Normalizing sub_config initializers
---------
* Updating MLP with activation sparsity (#2)
* Updating DecoderBlock for Gemma 3.5 (#3)
* Initial Gemma3nTextModel (#4)
NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference.
* Adding KV Cache Sharing
* Adds Einsum layer to Gemma 3.5
* Updating EinsumLayer API
* Refactored kv cache sharing in attention
* Adding KVStore for cache sharing
* Update modular
* Update modular
* Update modular
* Update src/transformers/cache_utils.py
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Updating KV Cache Sharing implementation
* Updating the q and k norm definitions in the attention module
* Fixing name error for q,k,v RMS norm to use the right Gemma 3n module
* Updating MLP with activation sparsity
* Updating DecoderBlock for Gemma 3.5
* Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code
* Isolating KV Cache logic to relevant components
* Fixing logic error in Gemma3nAttention.forward
* Refactoring caching contributions and fixing kv_store initialization
* Simplifying Configs
* Remove errant self from super init call
* Bug fix in the Attention module - changing self.head_dim to config.head_dim
* Bug fixes in the LaurelBlock and RMS Norm super init call
* removing redundant code from a merge
* Adding per_layer_inputs to TextModel
* Adding preprocess embeddings with altup
* Adds per-layer-to-single output and a host of TODOs
* Integrating altup predict with the model workflow and other minor bug fixes
* Using nn.Embedding temporarily for text model
* It goes forward
* Minor refactor of attention sparsity and RoPE initialization
* Fixing duplicate rope_scaling param bug when loading from pretrained
---------
* Normalizing on altup_num_inputs config option
* Adding audio encoder config
* Adds high-level components for Audio Encoder
* Implement uniform reducer for Audio Encoder
* Adding placeholders for Conformer components in Audio Encoder
* Adding placeholders for SubSampleConvProjection components in Audio Encoder
* Adding SequenceLayer component placeholders
* Implementing Gemma3nAudioEncoder with nn.Sequential
* Implementing Gemma3nAudioSubSampleConvProjection with nn.Sequential
* Implementing Conformer model with SequenceLayers
* Use OrderedDict in nn.Sequential initializers
* Implements sl.Residual in Torch with nn.Sequential and OrderedDict
* Adopting a base SequenceLayer class with default forward() method
* Implementing sl.GatedLinearUnit in Torch
* Implementing sl.Swish in Torch
* Implementing sl.ReLU in Torch
* Implementing sl.Scale in Torch
* Removing sl.Dropout after tree-shaking
* Implementing sl.RMSNorm in Torch with fake shape
* Implementing sl.GroupNorm in Torch
* Implementing sl.Conv2d in Torch
* Implementing sl.Dense in Torch
* Removing sl.Delay layers, which act as pass-throughs
* Connecting shapes to configs in initializers
* Removing sl.Emit
* Implementing sl.ExpandDims in Torch
* Adding sl.GradientClipping to Torch
* Implementing sl.DenseShaped in Torch
* Implementing sl.LDPA in Torch
* Removing unused sl.CombinedQKVProj class
* Fixing erroneous type hint
* Implementing sl.DepthwiseConv1D in Torch
* Implementing sl.MaskInvalid in Torch
* Fixes for initialization
* Fixes for saving weights
* Removing einsums per feedback from HF staff
* Removing Sequence Layers idioms from audio encoder
* Fixes for reviewer comments
* CausalLM conversion script for 4B model
* inv_timescales to non-persistent buffer
* Addressing audio encoder Attention feedback
* Addressing Gemma3nAudioSSCPConvBlock feedback
* Addressing Gemma3nAudioConformerAttention feedback
* Addressing padding feedback
* Weights conversion loads audio state dict
* Always use vision_config so saving works
* Token id updates for configs
* Stubs for interleaving audio embs
* Addressing reviewer feedback
---------
* Fixing cache access error
* Removing duplicate code from a bad merge
* Gemma 3n Text + Vision Part 1 (#17)
* testing utilities for numerics comparisons
* Corrected einsum to nn.Linear weights conversion
* Inherit scaled word embs from Gemma3 not Bart
* Fixing transposes for collapsed linears
* More transpose fixes
* numpy api fix
* RMSNorm: Explicit kwargs, scale_shift=0.0 when with_scale=True
* Force AltUp to float32
* Updating debugging script for AudioEncoder debugging
* Support divide_weight_by_sqrt_fan_in from JAX for per-layer inputs
* Correcting attention einsum conversions
* RMSNorm in type of x
* Fixing duplicate laurel norm/gating
* KV sharing using the right previous indices
* Refactor kv shared index computation. Correct frac_shared_layers
* Use num_shared_layers instead of inferring from a fraction
* fixing a bug for logging
* Fix shared data_ptrs in altup inits
* rope: adjust proj -> norm -> rope to preserve computation (#20)
* rope: adjust proj -> norm -> rope to preserve computation
* Removing some breaking language model fluff in ConditionalGeneration
* Consolidate query_states transforms
---------
* Vectorize the loops in AltUp (#19)
* Vectorize the loops in AltUp
* fix typo
* Expanding to support batched inputs
* remove extra debug script
* Fix AltUp.forward
---------
* Add 'scale_shift=0.0, with_scale=True' to the final norm in TextModel
* Convert norm to 1/sqrt (#21)
* Convert norm to 1/sqrt
* Scale shift change per Phil's rec
* Adding default activation sparsity
* Fixing 2B config in weights conversion script
* Fixing RMSNorm parameters - adding scale_shift and with_scale
* Correcting query pre-attention scaling
* Adding query_rescale_scalar to text config
* Adding layer_idx to MLP
* Permafix for input_layernorm
* Use 1/sqrt instead of rsqrt in DecoderLayer
* Fix o_proj conversion
* Conversion script update for vision encoder
* Removing logging for debugging timm model
* Fixing bugs in Gemma3nForConditionalGeneration for text generation
* Generating the modeling_gemma3n.py file
* Removing the addition of an erroneous line in the modeling file
* Adding gemma3n text model to modeling_auto
* Bugfix: Updating the interleaving of inputs_embeds and vision_embeds
* Updating the modeling file with the latest bugfix changes
* Updating models/auto for Gemma 3n
* using AutoTokenizer in forward test
* Adding processing_gemma3n.py
* Gemma 3n configured for AutoModel. Conversion script updated.
* Removing errant merge artifacts
---------
* Removing errant debugging statements from Gemma 3
* Gemma3n audio model (#18)
* testing utilities for numerics comparisons
* Implement CumulativeGroupNorm and add to SubSampleConvProjection and SSCPConvBlock
* Add audio version of forward script based on RyanMullins' implementation
* Updating to match encoder tests. WIP: config question needs resolving
* Updates to audio classes to enable end-to-end running
* Removing vestigial classes, cleaning up print statements
* Adding SiLU / Swish to audio conformer feed forward block
* Shifted Gemma3p5Audio naming prefix to Gemma3NanoAudio
* Adding outputs to audio test
* Fixes to padding in SSCP and 1D convolution, align RMS Norm with wider model
* Update forward test to load from local weights
* Update conversion to process / output audio layers
* Update __all__ to export audio encoder
* AutoModel registration for Gemma 3n Audio
* Use AutoModel for ConditionalGeneration.audio_tower
* Fixing input_proj_linear transpose
* Fixing Gemma3NanoAudioConformerAttention.post conversion
* Fixing Gemma3NanoAudioSSCPConvBlock.conv weights conversion
* Correcting indentation issue on Gemma3p5RMSNorm
---------
* Text + Vision Part 2 (#23)
* Updates for ConditionalGeneration.get_image_features
* Adding a WIP draft of image_processing_gemma3p5.py
* Update src/transformers/models/gemma3p5/modular_gemma3p5.py
* Modular conversion after github suggested change
* Text + image gives good results
* Fixing image size preset
* Updating configs for the 2B variant in the conversion script
* Using final generation config in conversion script
---------
* Audio Integration (#12)
* initial commit of Gemma 3n scaffold
* Fixing param pass through on Gemma3nRMSNorm
* Adds Einsum layer to Gemma 3n
* Updating EinsumLayer API
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Adds AltUp to Gemma 3n
* Adding Gemma 3n overall and text config with vision and audio config placeholders (#3)
* Adding Gemma 3n text configs
* Adding audio config placeholders
* Adding a placeholder for vision configs
* Updating MobileNetVisionConfig, inheriting TimmWrapperConfig
* Updating text configs
* Update modular
* Removing altup configs to accept the suggested configs
* Update modular
* Updating altup config
* Update modular
* Update modular
* Update modular
* Update modular
* Addressing review comments and updating text configs
* Adding a config for activation sparsity
* Updating configs to pass through options to super class init and adjust some name prefixes
* Updating laurel and altup with corrected config values
* Normalizing sub_config initializers
---------
* Updating MLP with activation sparsity (#2)
* Updating DecoderBlock for Gemma 3n (#3)
* Initial Gemma3nTextModel (#4)
NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference.
* Adding KV Cache Sharing
* Adds Einsum layer to Gemma 3n
* Updating EinsumLayer API
* Refactored kv cache sharing in attention
* Adding KVStore for cache sharing
* Update modular
* Update modular
* Update modular
* Update src/transformers/cache_utils.py
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Updating KV Cache Sharing implementation
* Updating the q and k norm definitions in the attention module
* Fixing name error for q,k,v RMS norm to use the right 3n module
* Updating MLP with activation sparsity
* Updating DecoderBlock for Gemma 3n
* Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code
* Isolating KV Cache logic to relevant components
* Fixing logic error in Gemma3nAttention.forward
* Refactoring caching contributions and fixing kv_store initialization
* Simplifying Configs
* Remove errant self from super init call
* Bug fix in the Attention module - changing self.head_dim to config.head_dim
* Bug fixes in the LaurelBlock and RMS Norm super init call
* removing redundant code from a merge
* Adding per_layer_inputs to TextModel
* Adding preprocess embeddings with altup
* Adds per-layer-to-single output and a host of TODOs
* Integrating altup predict with the model workflow and other minor bug fixes
* Using nn.Embedding temporarily for text model
* It goes forward
* Minor refactor of attention sparsity and RoPE initialization
* Fixing duplicate rope_scaling param bug when loading from pretrained
---------
* Normalizing on altup_num_inputs config option
* Adding audio encoder config
* Adds high-level components for Audio Encoder
* Implement uniform reducer for Audio Encoder
* Adding placeholders for Conformer components in Audio Encoder
* Adding placeholders for SubSampleConvProjection components in Audio Encoder
* Adding SequenceLayer component placeholders
* Implementing Gemma3nAudioEncoder with nn.Sequential
* Implementing Gemma3nAudioSubSampleConvProjection with nn.Sequential
* Implementing Conformer model with SequenceLayers
* Use OrderedDict in nn.Sequential initializers
* Implements sl.Residual in Torch with nn.Sequential and OrderedDict
* Adopting a base SequenceLayer class with default forward() method
* Implementing sl.GatedLinearUnit in Torch
* Implementing sl.Swish in Torch
* Implementing sl.ReLU in Torch
* Implementing sl.Scale in Torch
* Removing sl.Dropout after tree-shaking
* Implementing sl.RMSNorm in Torch with fake shape
* Implementing sl.GroupNorm in Torch
* Implementing sl.Conv2d in Torch
* Implementing sl.Dense in Torch
* Removing sl.Delay layers, which act as pass-throughs
* Connecting shapes to configs in initializers
* Removing sl.Emit
* Implementing sl.ExpandDims in Torch
* Adding sl.GradientClipping to Torch
* Implementing sl.DenseShaped in Torch
* Implementing sl.LDPA in Torch
* Removing unused sl.CombinedQKVProj class
* Fixing erroneous type hint
* Implementing sl.DepthwiseConv1D in Torch
* Implementing sl.MaskInvalid in Torch
* Fixes for initialization
* Fixes for saving weights
* Removing einsums per feedback from HF staff
* Removing Sequence Layers idioms from audio encoder
* Fixes for reviewer comments
* Converting sl.Frontend to FeatureExtractor
* Updates for ConditionalGeneration.get_image_features
* Adding a WIP draft of image_processing_gemma3n.py
* Update modular
* Modular conversion after github suggested change
* Text + image gives good results
* Fixing image size preset
* Draft of audio data in chat template
* Removing image processing. Using SigLIP instead.
* Audio input going end-to-end
* Fixing dtype issues in audio encoder
* x-lib formatting consistency
* Adding example data
* Save preprocessor_config.json from conversion script
* Instrumentation for debugging
* Additional instrumentation for preprocessing debugging
* Updates to preprocessor, padding; produces correct end-to-end results on sample
* Tackling configuration TODOs
* Start of feature extractor refactor
* Adds Numpy version of USM extractor, removes Torch version and dependencies
* Fixing AltUp.correct coef permute
* Supporting batches of single audio segment inputs
* Docstrings updates for config
* In-lining audio feature extraction
* Adjustments to conversion script and smoke test script
---------
* Gemma 3n renaming
* Removing test data and utilities
* Renaming test files
* Gemma 3n refactor
* Fix tokenizer config in conversion script
* Address reviewer feedback
* FeatureExtractor returns float32 by default
* Adding basic tests for audio, and input name for audio encoder
* Audio integration test, updates to model_id for other integration tests
* Use scales for q and k norms (#26)
* Update audio integration test to use HF dataset
* Reviewer feedback
* Expand embedding table to full vocab size in weights conversion
* Mix-n-match MatFormers for Gemma 3n (#25)
* Remove in-place operations (#30)
* chore: removing inplace ops
* remove [tensor] * n pattern
* chore: reviewer feedback in AudioEncoder and AltUp
* More grad clipping
* Dynamo compatibility
* fix: cache slicing error
* chore: simplify shared kv cache slicing
* chore: vision encoder rename in timm
* fix: image processor do_normalize=False
* fixup: style
* chore: model_doc
* fix: docs for code quality
* chore: repo consistency
* fix: RMSNorm in float as in prior Gemmas
* fix: per_layer_inputs = None
* chore: Gemma3nForCausalLM from Gemma3nForConditionalGeneration checkpoint
* chore: repo consistency
* Add initial unit tests for Gemma3nAudioFeatureExtractor (#27)
* Add initial unit tests for Gemma3nAudioFeatureExtractor
* Add basic unit tests for Gemma3nProcessor (#28)
* parameterize tests
---------
* chore: code style
* fix: test cases
* style and consistency
* fix config in the test to be coherent with layer cache sharing
* fix hidden states in tests and code
* inits and mappings
* fix modality prefixes
* test order and prefixes
* fix test exception
* fix class order and reduce model size for faster tests
* restore _checkpoint_conversion_mapping to load Causal from Conditional
* fix config mapping!
* fix: reviewer feedback
---------
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39059/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39059/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39058/comments | https://api.github.com/repos/huggingface/transformers/issues/39058/events | https://github.com/huggingface/transformers/pull/39058 | 3,179,178,816 | PR_kwDOCUB6oc6cPUcQ | 39,058 | add _keep_in_fp32_modules_strict | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T13:41:10 | 2025-06-26T14:35:54 | 2025-06-26T13:55:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39058",
"html_url": "https://github.com/huggingface/transformers/pull/39058",
"diff_url": "https://github.com/huggingface/transformers/pull/39058.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39058.patch",
"merged_at": "2025-06-26T13:55:29"
} | # What does this PR do?
Fix `_keep_in_fp32_modules` behavior and introduce a new `_keep_in_fp32_modules_strict` that has the same behavior and additionally prevents casting from `float32` to `bfloat16` (which `_keep_in_fp32_modules` does not). | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39058/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39057/comments | https://api.github.com/repos/huggingface/transformers/issues/39057/events | https://github.com/huggingface/transformers/pull/39057 | 3,179,054,894 | PR_kwDOCUB6oc6cO5iH | 39,057 | guard torch distributed check | {
"login": "tvukovic-amd",
"id": 127323445,
"node_id": "U_kgDOB5bNNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/127323445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tvukovic-amd",
"html_url": "https://github.com/tvukovic-amd",
"followers_url": "https://api.github.com/users/tvukovic-amd/followers",
"following_url": "https://api.github.com/users/tvukovic-amd/following{/other_user}",
"gists_url": "https://api.github.com/users/tvukovic-amd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tvukovic-amd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tvukovic-amd/subscriptions",
"organizations_url": "https://api.github.com/users/tvukovic-amd/orgs",
"repos_url": "https://api.github.com/users/tvukovic-amd/repos",
"events_url": "https://api.github.com/users/tvukovic-amd/events{/privacy}",
"received_events_url": "https://api.github.com/users/tvukovic-amd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T12:58:59 | 2025-06-27T14:49:47 | 2025-06-27T14:49:47 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39057",
"html_url": "https://github.com/huggingface/transformers/pull/39057",
"diff_url": "https://github.com/huggingface/transformers/pull/39057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39057.patch",
"merged_at": "2025-06-27T14:49:47"
This PR fixes an issue where transformers pipelines invoke `torch.distributed.is_initialized()` in `src/transformers/pipelines/base.py`.
We need to check `torch.distributed.is_available()` before calling `torch.distributed.is_initialized()`; otherwise we hit:
```
File "C:\develop\v\Lib\site-packages\transformers\pipelines\base.py", line 1036, in __init__
if is_torch_available() and torch.distributed.is_initialized():
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
```
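A minimal sketch of the guarded-check pattern. It uses stand-in namespace objects instead of the real `torch.distributed` module so that both the "distributed unavailable" and "distributed initialized" cases can be demonstrated without a PyTorch install; the stand-ins are purely illustrative.

```python
from types import SimpleNamespace

def dist_is_initialized(dist) -> bool:
    """Guarded check: only call is_initialized() once the backend reports it
    is available. `and` short-circuits, so the attribute is never touched on
    builds that lack distributed support."""
    return dist.is_available() and dist.is_initialized()

# Stand-ins for torch.distributed (hypothetical, for illustration only):
no_dist = SimpleNamespace(is_available=lambda: False)  # is_initialized is absent
with_dist = SimpleNamespace(is_available=lambda: True, is_initialized=lambda: True)

print(dist_is_initialized(no_dist))    # False -- short-circuits, no AttributeError
print(dist_is_initialized(with_dist))  # True
```

Calling `no_dist.is_initialized()` directly would raise `AttributeError`, which is exactly the failure mode the unguarded code exhibits on Windows builds without distributed support.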
A similar issue in a different file was reported in [Regression: Need to guard torch.distributed.is_initialized with torch.distributed.is_available #26039](https://github.com/huggingface/transformers/issues/26039) and fixed by [safeguard torch distributed check #26056](https://github.com/huggingface/transformers/pull/26056). | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39057/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39057/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39056/comments | https://api.github.com/repos/huggingface/transformers/issues/39056/events | https://github.com/huggingface/transformers/pull/39056 | 3,178,946,716 | PR_kwDOCUB6oc6cOh87 | 39,056 | Remove double soft-max in load-balancing loss. Fixes #39055 . | {
"login": "rudolfwilliam",
"id": 31388158,
"node_id": "MDQ6VXNlcjMxMzg4MTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/31388158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rudolfwilliam",
"html_url": "https://github.com/rudolfwilliam",
"followers_url": "https://api.github.com/users/rudolfwilliam/followers",
"following_url": "https://api.github.com/users/rudolfwilliam/following{/other_user}",
"gists_url": "https://api.github.com/users/rudolfwilliam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rudolfwilliam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rudolfwilliam/subscriptions",
"organizations_url": "https://api.github.com/users/rudolfwilliam/orgs",
"repos_url": "https://api.github.com/users/rudolfwilliam/repos",
"events_url": "https://api.github.com/users/rudolfwilliam/events{/privacy}",
"received_events_url": "https://api.github.com/users/rudolfwilliam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T12:21:48 | 2025-07-16T09:20:54 | 2025-07-16T09:20:24 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39056",
"html_url": "https://github.com/huggingface/transformers/pull/39056",
"diff_url": "https://github.com/huggingface/transformers/pull/39056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39056.patch",
"merged_at": "2025-07-16T09:20:24"
} | # What does this PR do?
<!-- Remove if not applicable -->
Fixes #39055
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39056/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39055/comments | https://api.github.com/repos/huggingface/transformers/issues/39055/events | https://github.com/huggingface/transformers/issues/39055 | 3,178,751,217 | I_kwDOCUB6oc69d-Tx | 39,055 | DBRX model passes probabilities and not logits to the load balancer | {
"login": "rudolfwilliam",
"id": 31388158,
"node_id": "MDQ6VXNlcjMxMzg4MTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/31388158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rudolfwilliam",
"html_url": "https://github.com/rudolfwilliam",
"followers_url": "https://api.github.com/users/rudolfwilliam/followers",
"following_url": "https://api.github.com/users/rudolfwilliam/following{/other_user}",
"gists_url": "https://api.github.com/users/rudolfwilliam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rudolfwilliam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rudolfwilliam/subscriptions",
"organizations_url": "https://api.github.com/users/rudolfwilliam/orgs",
"repos_url": "https://api.github.com/users/rudolfwilliam/repos",
"events_url": "https://api.github.com/users/rudolfwilliam/events{/privacy}",
"received_events_url": "https://api.github.com/users/rudolfwilliam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-26T11:11:26 | 2025-07-16T09:20:25 | 2025-07-16T09:20:25 | CONTRIBUTOR | null | null | null | null | ### System Info
The problem has nothing to do with my system.
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Look into the code and verify that it is not compliant with the [intended load balancing](https://arxiv.org/pdf/2101.03961):
The signature of the load balancer in [modeling_dbrx](https://github.com/huggingface/transformers/blob/main/src/transformers/models/dbrx/modeling_dbrx.py) looks as follows:
```python
def load_balancing_loss_func(
gate_logits: torch.Tensor,
num_experts: int,
top_k: int,
attention_mask: Optional[torch.Tensor],
) -> torch.Tensor:
```
Moreover, `gate_logits` is treated as logits inside the function: a softmax is applied to it later in the code. However, the values (called `weights`) that are actually passed in are the expert probabilities, not the logits:
```python
weights = self.layer(hidden_states).softmax(dim=-1, dtype=torch.float32)
```
Thus, the load balancer takes the softmax of the logits twice, which is incorrect.
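The distortion is easy to see numerically. The following plain-Python sketch (not the library code; the example logits are made up) shows that applying softmax to already-normalized probabilities flattens the routing distribution, so the auxiliary loss no longer reflects the true expert assignment:

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

gate_logits = [2.0, 0.5, -1.0]
once = softmax(gate_logits)   # correct expert probabilities
twice = softmax(once)         # what the double softmax effectively computes

print([round(p, 3) for p in once])   # [0.786, 0.175, 0.039]
print([round(p, 3) for p in twice])  # [0.496, 0.269, 0.235]
```

The doubly-softmaxed distribution is much closer to uniform, so the load-balancing penalty is systematically understated.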
### Expected behavior
`gate_logits` should be logits and not probabilities. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39055/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39054/comments | https://api.github.com/repos/huggingface/transformers/issues/39054/events | https://github.com/huggingface/transformers/pull/39054 | 3,178,690,785 | PR_kwDOCUB6oc6cNsZH | 39,054 | fix condition where torch_dtype auto collides with model_kwargs. | {
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T10:48:53 | 2025-06-26T12:52:59 | 2025-06-26T12:52:57 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39054",
"html_url": "https://github.com/huggingface/transformers/pull/39054",
"diff_url": "https://github.com/huggingface/transformers/pull/39054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39054.patch",
"merged_at": "2025-06-26T12:52:57"
} | # What does this PR do?
Fixes broken tests on nightly CI | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39054/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39053/comments | https://api.github.com/repos/huggingface/transformers/issues/39053/events | https://github.com/huggingface/transformers/pull/39053 | 3,178,552,257 | PR_kwDOCUB6oc6cNOgG | 39,053 | fix `test_compare_unprocessed_logit_scores` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T09:57:13 | 2025-06-26T16:36:58 | 2025-06-26T16:36:56 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39053",
"html_url": "https://github.com/huggingface/transformers/pull/39053",
"diff_url": "https://github.com/huggingface/transformers/pull/39053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39053.patch",
"merged_at": "2025-06-26T16:36:56"
} | # What does this PR do?
With #39016, this test
> tests/generation/test_utils.py::GenerationIntegrationTests::test_compare_unprocessed_logit_scores
is failing.
However, the difference is in the range of 1e-7
```
(Pdb) torch.amax(torch.abs(logits_fwd - logits_gen))
tensor(5.9605e-08, device='cuda:0')
```
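Exact equality is the wrong tool at this scale. Here is a minimal pure-Python sketch of the failure mode and the tolerance-based fix (illustrative values only — the real test compares torch tensors, where `torch.testing.assert_close` is the analogous check):

```python
import math

# Illustrative values: float arithmetic noise makes exact equality fail.
logits_fwd = [0.1 + 0.2, 1.0]   # first entry is 0.30000000000000004
logits_gen = [0.3, 1.0]

# An assertListEqual-style exact comparison fails on ~1e-17 noise:
exactly_equal = logits_fwd == logits_gen          # False

# A tolerance-based check (what torch.testing.assert_close does) passes:
close = all(math.isclose(a, b, rel_tol=1.3e-6, abs_tol=1e-5)
            for a, b in zip(logits_fwd, logits_gen))

assert not exactly_equal and close
```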
The test was using `assertListEqual`, which is not appropriate for comparing floating-point values. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39053/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39052/comments | https://api.github.com/repos/huggingface/transformers/issues/39052/events | https://github.com/huggingface/transformers/pull/39052 | 3,178,501,826 | PR_kwDOCUB6oc6cNEZR | 39,052 | fix `t5gemma` tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T09:40:35 | 2025-06-26T16:56:03 | 2025-06-26T16:48:14 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39052",
"html_url": "https://github.com/huggingface/transformers/pull/39052",
"diff_url": "https://github.com/huggingface/transformers/pull/39052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39052.patch",
"merged_at": "2025-06-26T16:48:14"
} | # What does this PR do?
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39052/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39051/comments | https://api.github.com/repos/huggingface/transformers/issues/39051/events | https://github.com/huggingface/transformers/pull/39051 | 3,178,352,457 | PR_kwDOCUB6oc6cMlub | 39,051 | [tests] remove tests from libraries with deprecated support (flax, tensorflow_text, ...) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T08:52:38 | 2025-06-26T15:25:06 | 2025-06-26T15:25:01 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39051",
"html_url": "https://github.com/huggingface/transformers/pull/39051",
"diff_url": "https://github.com/huggingface/transformers/pull/39051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39051.patch",
"merged_at": "2025-06-26T15:25:01"
} | # What does this PR do?
Follow-up to #38944
Deprecates a few more `require_XXX` and removes related tests, corresponding to libraries with deprecated support. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39051/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39051/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39050/comments | https://api.github.com/repos/huggingface/transformers/issues/39050/events | https://github.com/huggingface/transformers/pull/39050 | 3,178,269,312 | PR_kwDOCUB6oc6cMTrr | 39,050 | fix `layoutlmv3` tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T08:21:24 | 2025-06-26T18:07:19 | 2025-06-26T18:07:17 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39050",
"html_url": "https://github.com/huggingface/transformers/pull/39050",
"diff_url": "https://github.com/huggingface/transformers/pull/39050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39050.patch",
"merged_at": "2025-06-26T18:07:17"
} | # What does this PR do?
> tests/models/layoutlmv3/test_image_processing_layoutlmv3.py::LayoutLMv3ImageProcessingTest::test_LayoutLMv3_integration_test
This is failing on main. But before I merged #38940, I ran the tests, including this one, and it was passing.
When I restore that PR branch and run it again now, it also fails.
I just updated the expected values, as the difference might come from a 3rd-party library, and the diff is in only one place.
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39050/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39049/comments | https://api.github.com/repos/huggingface/transformers/issues/39049/events | https://github.com/huggingface/transformers/issues/39049 | 3,178,068,761 | I_kwDOCUB6oc69bXsZ | 39,049 | pytorch version 1.8.1 compatibility | {
"login": "Dominiq24",
"id": 138902077,
"node_id": "U_kgDOCEd6PQ",
"avatar_url": "https://avatars.githubusercontent.com/u/138902077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dominiq24",
"html_url": "https://github.com/Dominiq24",
"followers_url": "https://api.github.com/users/Dominiq24/followers",
"following_url": "https://api.github.com/users/Dominiq24/following{/other_user}",
"gists_url": "https://api.github.com/users/Dominiq24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dominiq24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dominiq24/subscriptions",
"organizations_url": "https://api.github.com/users/Dominiq24/orgs",
"repos_url": "https://api.github.com/users/Dominiq24/repos",
"events_url": "https://api.github.com/users/Dominiq24/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dominiq24/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-26T07:11:42 | 2025-08-03T08:02:25 | 2025-08-03T08:02:25 | NONE | null | null | null | null | which transformers version is compatible with pytorch 1.8.1 and also peft 0.4.0 | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39049/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39048/comments | https://api.github.com/repos/huggingface/transformers/issues/39048/events | https://github.com/huggingface/transformers/issues/39048 | 3,177,704,408 | I_kwDOCUB6oc69Z-vY | 39,048 | [BUG] Converting Megatron-LM GPT-2 to hf format raise error: KeyError: 'position_embeddings' | {
"login": "lishuai-97",
"id": 87744419,
"node_id": "MDQ6VXNlcjg3NzQ0NDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/87744419?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lishuai-97",
"html_url": "https://github.com/lishuai-97",
"followers_url": "https://api.github.com/users/lishuai-97/followers",
"following_url": "https://api.github.com/users/lishuai-97/following{/other_user}",
"gists_url": "https://api.github.com/users/lishuai-97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lishuai-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lishuai-97/subscriptions",
"organizations_url": "https://api.github.com/users/lishuai-97/orgs",
"repos_url": "https://api.github.com/users/lishuai-97/repos",
"events_url": "https://api.github.com/users/lishuai-97/events{/privacy}",
"received_events_url": "https://api.github.com/users/lishuai-97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-26T04:39:20 | 2025-06-28T06:39:40 | 2025-06-28T01:23:58 | NONE | null | null | null | null | ### System Info
I trained a GPT-2 345M model with Megatron-LM and tried to convert the model to the Hugging Face Transformers format with `transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py`. Then it raises an error like the following:
```bash
Traceback (most recent call last):
File "tools/checkpoint/convert_megatron_gpt2_checkpoint.py", line 371, in <module>
main()
File "tools/checkpoint/convert_megatron_gpt2_checkpoint.py", line 330, in main
output_state_dict = convert_megatron_checkpoint(args, input_state_dict, config)
File "tools/checkpoint/convert_megatron_gpt2_checkpoint.py", line 138, in convert_megatron_checkpoint
pos_embeddings = embeddings["position_embeddings"]["weight"]
KeyError: 'position_embeddings'
```
It seems that the Megatron-LM checkpoint doesn't have the `position_embeddings` state.
```bash
# model
..# language_model
....# embedding
......# word_embeddings
........# weight : torch.Size([50304, 1024])
....# encoder
......# layers.0.input_norm.weight : torch.Size([1024])
......# layers.0.input_norm.bias : torch.Size([1024])
......# layers.0.self_attention.query_key_value.weight : torch.Size([3072, 1024])
......# layers.0.self_attention.query_key_value.bias : torch.Size([3072])
......# layers.0.self_attention.dense.weight : torch.Size([1024, 1024])
......# layers.0.self_attention.dense.bias : torch.Size([1024])
......# layers.0.post_attention_norm.weight : torch.Size([1024])
......# layers.0.post_attention_norm.bias : torch.Size([1024])
......# layers.0.mlp.dense_h_to_4h.weight : torch.Size([4096, 1024])
......# layers.0.mlp.dense_h_to_4h.bias : torch.Size([4096])
......# layers.0.mlp.dense_4h_to_h.weight : torch.Size([1024, 4096])
......# layers.0.mlp.dense_4h_to_h.bias : torch.Size([1024])
......# layers.1.input_norm.weight : torch.Size([1024])
......# layers.1.input_norm.bias : torch.Size([1024])
......# layers.1.self_attention.query_key_value.weight : torch.Size([3072, 1024])
......# layers.1.self_attention.query_key_value.bias : torch.Size([3072])
......# layers.1.self_attention.dense.weight : torch.Size([1024, 1024])
......# layers.1.self_attention.dense.bias : torch.Size([1024])
......# layers.1.post_attention_norm.weight : torch.Size([1024])
......# layers.1.post_attention_norm.bias : torch.Size([1024])
......# layers.1.mlp.dense_h_to_4h.weight : torch.Size([4096, 1024])
......# layers.1.mlp.dense_h_to_4h.bias : torch.Size([4096])
......# layers.1.mlp.dense_4h_to_h.weight : torch.Size([1024, 4096])
......# layers.1.mlp.dense_4h_to_h.bias : torch.Size([1024])
```
What should I do to correctly convert the Megatron-LM checkpoint to the Hugging Face format? Would commenting out the `position_embeddings` line work? I'm not sure if that is right.
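For what it's worth, a defensive lookup along these lines (a hypothetical helper, not the converter's actual code) would at least surface the situation explicitly — though note that if the checkpoint was trained with rotary or relative position encodings, there are simply no learned position embeddings to copy, and the standard GPT-2 HF architecture expects them:

```python
def get_position_embeddings(embeddings):
    """Return the learned position-embedding weights, or None when the
    checkpoint (e.g. one trained with RoPE/ALiBi) has no such entry.
    Key names follow the checkpoint structure printed above."""
    pos = embeddings.get("position_embeddings")
    return None if pos is None else pos["weight"]

# Mimics the structure dumped above: only word_embeddings is present.
ckpt_embeddings = {"word_embeddings": {"weight": "tensor[50304, 1024]"}}
assert get_position_embeddings(ckpt_embeddings) is None

# A vanilla GPT-2 checkpoint would carry the entry and return its weight.
with_pos = {"position_embeddings": {"weight": "tensor[1024, 1024]"}}
assert get_position_embeddings(with_pos) == "tensor[1024, 1024]"
```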
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
PYTHONPATH=$megatron_path python tools/checkpoint/convert_megatron_gpt2_checkpoint.py \
--print-checkpoint-structure \
--path_to_checkpoint $src_dir
```
### Expected behavior
Convert the Megatron-LM checkpoint to HuggingFace format successfully. | {
"login": "lishuai-97",
"id": 87744419,
"node_id": "MDQ6VXNlcjg3NzQ0NDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/87744419?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lishuai-97",
"html_url": "https://github.com/lishuai-97",
"followers_url": "https://api.github.com/users/lishuai-97/followers",
"following_url": "https://api.github.com/users/lishuai-97/following{/other_user}",
"gists_url": "https://api.github.com/users/lishuai-97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lishuai-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lishuai-97/subscriptions",
"organizations_url": "https://api.github.com/users/lishuai-97/orgs",
"repos_url": "https://api.github.com/users/lishuai-97/repos",
"events_url": "https://api.github.com/users/lishuai-97/events{/privacy}",
"received_events_url": "https://api.github.com/users/lishuai-97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39048/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39047/comments | https://api.github.com/repos/huggingface/transformers/issues/39047/events | https://github.com/huggingface/transformers/pull/39047 | 3,177,365,266 | PR_kwDOCUB6oc6cJWUk | 39,047 | RFC: refactor causal lm loss to handle lm_head in loss function | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-26T01:02:42 | 2025-06-28T16:24:42 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39047",
"html_url": "https://github.com/huggingface/transformers/pull/39047",
"diff_url": "https://github.com/huggingface/transformers/pull/39047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39047.patch",
"merged_at": null
} | # What does this PR do?
The ecosystem has a lot of patches that optimize the loss for causal models. Due to frequent API changes, these patches are hard to maintain and keep up to date.
- liger https://github.com/linkedin/Liger-Kernel/tree/main/src/liger_kernel/transformers/model
- cce https://github.com/apple/ml-cross-entropy/tree/main/cut_cross_entropy/transformers
This RFC proposes changing `ForCausalLMLoss` to accept the `lm_head` weights and the last `hidden_states` as part of the loss function. Moving the loss calculation there makes it much easier to add loss variants that fuse the linear projection with the cross-entropy computation.
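To make the shape concrete, here is a rough pure-Python sketch of the proposed signature (hypothetical names and scalar reference math for illustration — the actual implementation operates on torch tensors, and a fused kernel such as Liger's or cut-cross-entropy's would avoid materializing the logits entirely):

```python
import math

def for_causal_lm_loss(hidden_states, lm_head_weight, labels):
    """Sketch of the proposed entry point: the loss receives the last
    hidden states and the lm_head weights, so a fused linear+CE kernel
    could replace this body without patching the model's forward."""
    total = 0.0
    for h, y in zip(hidden_states, labels):
        # logits = h @ W^T  (the step a fused kernel never materializes)
        logits = [sum(hi * wi for hi, wi in zip(h, w)) for w in lm_head_weight]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[y]            # -log softmax(logits)[y]
    return total / len(labels)

# Tiny smoke run: 2 positions, hidden size 2, vocab size 3.
loss = for_causal_lm_loss(
    hidden_states=[[1.0, 0.0], [0.0, 1.0]],
    lm_head_weight=[[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]],
    labels=[0, 1],
)
```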
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39047/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39046/comments | https://api.github.com/repos/huggingface/transformers/issues/39046/events | https://github.com/huggingface/transformers/pull/39046 | 3,177,279,587 | PR_kwDOCUB6oc6cJD94 | 39,046 | Fix deprecated max_size parameter handling in DETR image processors | {
"login": "nck90",
"id": 162806035,
"node_id": "U_kgDOCbQ5Ew",
"avatar_url": "https://avatars.githubusercontent.com/u/162806035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nck90",
"html_url": "https://github.com/nck90",
"followers_url": "https://api.github.com/users/nck90/followers",
"following_url": "https://api.github.com/users/nck90/following{/other_user}",
"gists_url": "https://api.github.com/users/nck90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nck90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nck90/subscriptions",
"organizations_url": "https://api.github.com/users/nck90/orgs",
"repos_url": "https://api.github.com/users/nck90/repos",
"events_url": "https://api.github.com/users/nck90/events{/privacy}",
"received_events_url": "https://api.github.com/users/nck90/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-26T00:06:34 | 2025-06-27T10:01:22 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39046",
"html_url": "https://github.com/huggingface/transformers/pull/39046",
"diff_url": "https://github.com/huggingface/transformers/pull/39046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39046.patch",
"merged_at": null
} | Fixes #37939
The deprecated `max_size` parameter was incorrectly overriding the `size` parameter completely instead of being handled properly. This affected both `ConditionalDetrImageProcessor` and `DetrImageProcessor`.
Changes:
- Fixed the `__init__` method to properly handle `max_size` by setting `longest_edge` when the `size` dict doesn't have it, instead of replacing `size`
- Fixed the `preprocess` method to handle the `max_size` parameter correctly
- Added `max_size` as an explicit parameter to `__init__` for proper `from_dict` support
- Added comprehensive tests for `max_size` parameter handling
The fix ensures backward compatibility while properly deprecating `max_size` in favor of `size['longest_edge']`.
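A minimal sketch of the merging behavior described above (the helper name `resolve_size` is illustrative, not the actual processor code):

```python
import warnings

def resolve_size(size, max_size=None):
    """Merge the deprecated `max_size` argument into the `size` dict.

    Instead of letting `max_size` replace `size` entirely, only fill in
    `longest_edge` when the caller did not already set one.
    """
    size = dict(size)  # don't mutate the caller's dict
    if max_size is not None:
        warnings.warn(
            "`max_size` is deprecated; use `size['longest_edge']` instead.",
            FutureWarning,
        )
        # Only set longest_edge if the size dict doesn't already have it.
        size.setdefault("longest_edge", max_size)
    return size
```

Backward compatibility follows from the `setdefault`: an explicit `size['longest_edge']` always wins over the deprecated argument.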
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39046/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39045/comments | https://api.github.com/repos/huggingface/transformers/issues/39045/events | https://github.com/huggingface/transformers/issues/39045 | 3,176,841,849 | I_kwDOCUB6oc69WsJ5 | 39,045 | GPTBot AI4BUSINESS | {
"login": "Onay931",
"id": 186447663,
"node_id": "U_kgDOCxz3Lw",
"avatar_url": "https://avatars.githubusercontent.com/u/186447663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Onay931",
"html_url": "https://github.com/Onay931",
"followers_url": "https://api.github.com/users/Onay931/followers",
"following_url": "https://api.github.com/users/Onay931/following{/other_user}",
"gists_url": "https://api.github.com/users/Onay931/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Onay931/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Onay931/subscriptions",
"organizations_url": "https://api.github.com/users/Onay931/orgs",
"repos_url": "https://api.github.com/users/Onay931/repos",
"events_url": "https://api.github.com/users/Onay931/events{/privacy}",
"received_events_url": "https://api.github.com/users/Onay931/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-25T20:15:33 | 2025-06-25T20:57:15 | 2025-06-25T20:57:15 | NONE | null | null | null | null | ### Model description
git clone https://huggingface.co/spaces/Ariell931/AI4BusinessGPT
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"login": "Onay931",
"id": 186447663,
"node_id": "U_kgDOCxz3Lw",
"avatar_url": "https://avatars.githubusercontent.com/u/186447663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Onay931",
"html_url": "https://github.com/Onay931",
"followers_url": "https://api.github.com/users/Onay931/followers",
"following_url": "https://api.github.com/users/Onay931/following{/other_user}",
"gists_url": "https://api.github.com/users/Onay931/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Onay931/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Onay931/subscriptions",
"organizations_url": "https://api.github.com/users/Onay931/orgs",
"repos_url": "https://api.github.com/users/Onay931/repos",
"events_url": "https://api.github.com/users/Onay931/events{/privacy}",
"received_events_url": "https://api.github.com/users/Onay931/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39045/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39044/comments | https://api.github.com/repos/huggingface/transformers/issues/39044/events | https://github.com/huggingface/transformers/issues/39044 | 3,176,840,765 | I_kwDOCUB6oc69Wr49 | 39,044 | Deepseek-ai | {
"login": "Onay931",
"id": 186447663,
"node_id": "U_kgDOCxz3Lw",
"avatar_url": "https://avatars.githubusercontent.com/u/186447663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Onay931",
"html_url": "https://github.com/Onay931",
"followers_url": "https://api.github.com/users/Onay931/followers",
"following_url": "https://api.github.com/users/Onay931/following{/other_user}",
"gists_url": "https://api.github.com/users/Onay931/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Onay931/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Onay931/subscriptions",
"organizations_url": "https://api.github.com/users/Onay931/orgs",
"repos_url": "https://api.github.com/users/Onay931/repos",
"events_url": "https://api.github.com/users/Onay931/events{/privacy}",
"received_events_url": "https://api.github.com/users/Onay931/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-25T20:15:01 | 2025-06-26T11:46:07 | 2025-06-26T11:46:07 | NONE | null | null | null | null | ### Model description
git clone https://huggingface.co/spaces/Ariell931/AI4BusinessGPT
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39044/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39043/comments | https://api.github.com/repos/huggingface/transformers/issues/39043/events | https://github.com/huggingface/transformers/pull/39043 | 3,176,763,923 | PR_kwDOCUB6oc6cHamA | 39,043 | [qwen2-vl] fix vision attention scaling | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-25T19:41:50 | 2025-06-30T10:37:44 | 2025-06-26T12:06:53 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39043",
"html_url": "https://github.com/huggingface/transformers/pull/39043",
"diff_url": "https://github.com/huggingface/transformers/pull/39043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39043.patch",
"merged_at": "2025-06-26T12:06:53"
} | # What does this PR do?
As per title: after the refactor, the scaling was accidentally changed from `1/math.sqrt(head_dim)` to `math.sqrt(head_dim)`.
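To illustrate why the direction of the scaling matters, a standalone sketch (pure Python, not the actual Qwen2-VL code): dividing the logits by `sqrt(head_dim)` tempers the softmax, while multiplying sharpens it by a factor of `head_dim` in the exponent.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(logits, head_dim, scale_correctly=True):
    # Correct scaling divides the logits by sqrt(head_dim); the regression
    # multiplied instead, which makes the attention distribution far more
    # peaked than intended.
    scale = (1 / math.sqrt(head_dim)) if scale_correctly else math.sqrt(head_dim)
    return softmax([l * scale for l in logits])
```

With `head_dim=64`, the erroneous multiplication scales logits by 8 instead of 1/8, so the softmax collapses almost entirely onto the largest logit.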
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39043/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39043/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39042/comments | https://api.github.com/repos/huggingface/transformers/issues/39042/events | https://github.com/huggingface/transformers/pull/39042 | 3,176,746,264 | PR_kwDOCUB6oc6cHW44 | 39,042 | polishing docs: error fixes for clarity | {
"login": "eeemmmmmm",
"id": 155267286,
"node_id": "U_kgDOCUEw1g",
"avatar_url": "https://avatars.githubusercontent.com/u/155267286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eeemmmmmm",
"html_url": "https://github.com/eeemmmmmm",
"followers_url": "https://api.github.com/users/eeemmmmmm/followers",
"following_url": "https://api.github.com/users/eeemmmmmm/following{/other_user}",
"gists_url": "https://api.github.com/users/eeemmmmmm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eeemmmmmm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eeemmmmmm/subscriptions",
"organizations_url": "https://api.github.com/users/eeemmmmmm/orgs",
"repos_url": "https://api.github.com/users/eeemmmmmm/repos",
"events_url": "https://api.github.com/users/eeemmmmmm/events{/privacy}",
"received_events_url": "https://api.github.com/users/eeemmmmmm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-25T19:33:39 | 2025-06-26T11:57:14 | 2025-06-26T11:56:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39042",
"html_url": "https://github.com/huggingface/transformers/pull/39042",
"diff_url": "https://github.com/huggingface/transformers/pull/39042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39042.patch",
"merged_at": "2025-06-26T11:56:32"
Spotted and fixed a few wording hiccups:
- utils/deprecate_models.py: `of of` → `of` (remove duplicated word)
- utils/modular_model_converter.py: `is is` → `is` (remove duplicated word)
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39042/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39041/comments | https://api.github.com/repos/huggingface/transformers/issues/39041/events | https://github.com/huggingface/transformers/pull/39041 | 3,176,436,095 | PR_kwDOCUB6oc6cGWtU | 39,041 | Add owlv2 fast processor | {
"login": "lmarshall12",
"id": 215543211,
"node_id": "U_kgDODNjtqw",
"avatar_url": "https://avatars.githubusercontent.com/u/215543211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lmarshall12",
"html_url": "https://github.com/lmarshall12",
"followers_url": "https://api.github.com/users/lmarshall12/followers",
"following_url": "https://api.github.com/users/lmarshall12/following{/other_user}",
"gists_url": "https://api.github.com/users/lmarshall12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lmarshall12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmarshall12/subscriptions",
"organizations_url": "https://api.github.com/users/lmarshall12/orgs",
"repos_url": "https://api.github.com/users/lmarshall12/repos",
"events_url": "https://api.github.com/users/lmarshall12/events{/privacy}",
"received_events_url": "https://api.github.com/users/lmarshall12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-25T17:30:56 | 2025-07-25T02:40:50 | 2025-07-25T02:40:11 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39041",
"html_url": "https://github.com/huggingface/transformers/pull/39041",
"diff_url": "https://github.com/huggingface/transformers/pull/39041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39041.patch",
"merged_at": "2025-07-25T02:40:11"
} | # What does this PR do?
Adds OWLv2 Fast image processor as identified in issue #36978
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39041/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39040/comments | https://api.github.com/repos/huggingface/transformers/issues/39040/events | https://github.com/huggingface/transformers/pull/39040 | 3,176,404,189 | PR_kwDOCUB6oc6cGPkL | 39,040 | Cleanup Attention class for Siglip and dependent models | {
"login": "yaswanth19",
"id": 82788246,
"node_id": "MDQ6VXNlcjgyNzg4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82788246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaswanth19",
"html_url": "https://github.com/yaswanth19",
"followers_url": "https://api.github.com/users/yaswanth19/followers",
"following_url": "https://api.github.com/users/yaswanth19/following{/other_user}",
"gists_url": "https://api.github.com/users/yaswanth19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaswanth19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaswanth19/subscriptions",
"organizations_url": "https://api.github.com/users/yaswanth19/orgs",
"repos_url": "https://api.github.com/users/yaswanth19/repos",
"events_url": "https://api.github.com/users/yaswanth19/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaswanth19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-25T17:20:19 | 2025-06-27T12:00:30 | 2025-06-27T10:14:10 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39040",
"html_url": "https://github.com/huggingface/transformers/pull/39040",
"diff_url": "https://github.com/huggingface/transformers/pull/39040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39040.patch",
"merged_at": "2025-06-27T10:14:10"
} | Observed in another PR by Arthur where we error out early if `output_attn` is True for SDPA. This PR is in line with that for Siglip and its dependent models. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39040/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39039/comments | https://api.github.com/repos/huggingface/transformers/issues/39039/events | https://github.com/huggingface/transformers/pull/39039 | 3,176,341,634 | PR_kwDOCUB6oc6cGBt6 | 39,039 | Allow compression on meta device | {
"login": "shanjiaz",
"id": 43143795,
"node_id": "MDQ6VXNlcjQzMTQzNzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/43143795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shanjiaz",
"html_url": "https://github.com/shanjiaz",
"followers_url": "https://api.github.com/users/shanjiaz/followers",
"following_url": "https://api.github.com/users/shanjiaz/following{/other_user}",
"gists_url": "https://api.github.com/users/shanjiaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shanjiaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shanjiaz/subscriptions",
"organizations_url": "https://api.github.com/users/shanjiaz/orgs",
"repos_url": "https://api.github.com/users/shanjiaz/repos",
"events_url": "https://api.github.com/users/shanjiaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/shanjiaz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-25T17:00:34 | 2025-08-29T13:49:15 | 2025-08-29T13:49:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39039",
"html_url": "https://github.com/huggingface/transformers/pull/39039",
"diff_url": "https://github.com/huggingface/transformers/pull/39039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39039.patch",
"merged_at": "2025-08-29T13:49:15"
} | # What does this PR do?
This PR depends on the compressed-tensors updates [here](https://github.com/neuralmagic/compressed-tensors/pull/376/files) that allow compression on the meta device.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In `_process_model_before_weight_loading`, always compress the meta-device model when the checkpoint is quantized/sparsified.
In `_process_model_after_weight_loading`, just decompress if needed.
### Rationale
Fixes runtime errors like:
```
Only Tensors of floating point and complex dtype can require gradients
```
Ensures proper behavior when loading and using compressed models.
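For context, the quoted error is reproducible in isolation: `torch.nn.Parameter` defaults to `requires_grad=True`, which is illegal for non-floating dtypes such as packed integer weights (the packed-int trigger is my assumption for illustration; the PR does not state the exact mechanism). A minimal sketch:

```python
import torch

# Wrapping a non-floating tensor in a Parameter (requires_grad=True by
# default) raises the error quoted above. Packed integer weights from
# compression are a typical trigger (assumption for illustration).
packed = torch.zeros(4, 4, dtype=torch.int32)
try:
    torch.nn.Parameter(packed)
except RuntimeError as e:
    print(e)  # Only Tensors of floating point and complex dtype can require gradients

# Declaring the parameter with requires_grad=False is valid for any dtype.
p = torch.nn.Parameter(packed, requires_grad=False)
```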
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39039/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39038/comments | https://api.github.com/repos/huggingface/transformers/issues/39038/events | https://github.com/huggingface/transformers/issues/39038 | 3,176,242,568 | I_kwDOCUB6oc69UZ2I | 39,038 | Only with newest version (4.52.4): from_pretrained() esm.embeddings.position_embeddings.weight missing | {
"login": "hunklinger",
"id": 164493628,
"node_id": "U_kgDOCc35PA",
"avatar_url": "https://avatars.githubusercontent.com/u/164493628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hunklinger",
"html_url": "https://github.com/hunklinger",
"followers_url": "https://api.github.com/users/hunklinger/followers",
"following_url": "https://api.github.com/users/hunklinger/following{/other_user}",
"gists_url": "https://api.github.com/users/hunklinger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hunklinger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hunklinger/subscriptions",
"organizations_url": "https://api.github.com/users/hunklinger/orgs",
"repos_url": "https://api.github.com/users/hunklinger/repos",
"events_url": "https://api.github.com/users/hunklinger/events{/privacy}",
"received_events_url": "https://api.github.com/users/hunklinger/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-25T16:22:07 | 2025-08-03T08:02:27 | 2025-08-03T08:02:27 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-5.14.0-503.23.2.el9_5.x86_64-x86_64-with-glibc2.34
- Python version: 3.12.4
- Huggingface_hub version: 0.33.1
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.1+cu126 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
There is a problem with the `from_pretrained` method when loading ESM2 models. When I compare the keys in the `model.safetensors` file against those of the loaded model, some keys are missing as expected, but the `esm.embeddings.position_embeddings.weight` key is also not loaded properly. With transformers version 4.46.0 I did not have this problem.
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
file_path = hf_hub_download(
repo_id="facebook/esm2_t6_8M_UR50D",
filename="model.safetensors"
)
safetensor = load_file(file_path)
print(f"Amount of keys: {len(safetensor.keys())}")
model = AutoModelForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D")
print(f"Amount of keys: {len(model.state_dict().keys())}")
print("Keys in the safetensor.keys() but not in the model.state_dict().keys()")
for k in safetensor.keys():
if k not in model.state_dict().keys():
print(k)
print("Keys in the model.state_dict().keys() but not in the safetensor.keys()")
for j in model.state_dict().keys():
if j not in safetensor.keys():
print(j)
```
Output:
Amount of keys: 114
Amount of keys: 111
Keys in the safetensor.keys() but not in the model.state_dict().keys()
esm.embeddings.position_embeddings.weight
esm.embeddings.position_ids
lm_head.bias
lm_head.dense.bias
lm_head.dense.weight
lm_head.layer_norm.bias
lm_head.layer_norm.weight
Keys in the model.state_dict().keys() but not in the safetensor.keys()
classifier.dense.weight
classifier.dense.bias
classifier.out_proj.weight
classifier.out_proj.bias
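As a side note on the comparison itself, set differences give the same information as the nested loops in the script above (a cosmetic rewrite with small stand-in key sets; in the real script they would be `set(safetensor)` and `set(model.state_dict())`):

```python
# Stand-in key sets for illustration.
checkpoint_keys = {
    "esm.embeddings.position_embeddings.weight",
    "esm.embeddings.position_ids",
    "classifier.dense.weight",  # shared key, should not be reported
}
model_keys = {
    "classifier.dense.weight",
    "classifier.out_proj.weight",
}

print(sorted(checkpoint_keys - model_keys))  # in checkpoint, not in model
print(sorted(model_keys - checkpoint_keys))  # in model, not in checkpoint
```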
### Expected behavior
Expected outcome (with environment with transformers==4.46.0) which causes no problems downstream:
Amount of keys: 114
Amount of keys: 112
Keys in the safetensor.keys() but not in the model.state_dict().keys()
esm.embeddings.position_ids
lm_head.bias
lm_head.dense.bias
lm_head.dense.weight
lm_head.layer_norm.bias
lm_head.layer_norm.weight
Keys in the model.state_dict().keys() but not in the safetensor.keys()
classifier.dense.weight
classifier.dense.bias
classifier.out_proj.weight
classifier.out_proj.bias | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39038/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39037/comments | https://api.github.com/repos/huggingface/transformers/issues/39037/events | https://github.com/huggingface/transformers/pull/39037 | 3,176,101,734 | PR_kwDOCUB6oc6cFP8c | 39,037 | fix `kosmos2` tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-25T15:35:00 | 2025-06-30T10:07:28 | null | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39037",
"html_url": "https://github.com/huggingface/transformers/pull/39037",
"diff_url": "https://github.com/huggingface/transformers/pull/39037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39037.patch",
"merged_at": null
} | # What does this PR do?
[VLMs] support attention backends (#37576) actually breaks `kosmos2`: `KosmosTextAttention` is used both in the decoder (`Kosmos2TextBlock`) and in `Kosmos2ImageToTextProjection` (which should attend to all image positions).
But without `is_causal`, `sdpa_attention_forward` treats the attention as causal due to
```
if torch.jit.is_tracing() and isinstance(is_causal, torch.Tensor):
is_causal = is_causal.item()
```
Maybe there is a better way to handle this, but rather than spending too much time on it, I just add an `is_causal` argument and pass it through.
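For context (my own illustration, not the Kosmos-2 code): the flag controls the masking of `torch.nn.functional.scaled_dot_product_attention`, so a projection that should attend to all positions produces different outputs if it is silently treated as causal:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
q = torch.randn(1, 2, 4, 8)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 2, 4, 8)
v = torch.randn(1, 2, 4, 8)

# Attend to all positions (what the image-to-text projection needs).
out_full = F.scaled_dot_product_attention(q, k, v, is_causal=False)
# Causal masking: position i only sees positions <= i (decoder behavior).
out_causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)

print(torch.allclose(out_full, out_causal))  # False for seq_len > 1
# Only the last query position sees all keys in both cases.
print(torch.allclose(out_full[:, :, -1], out_causal[:, :, -1], atol=1e-5))  # True
```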
All tests pass on A10 now | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39037/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39036/comments | https://api.github.com/repos/huggingface/transformers/issues/39036/events | https://github.com/huggingface/transformers/issues/39036 | 3,176,021,277 | I_kwDOCUB6oc69Tj0d | 39,036 | AutoModelForCausalLM.from_pretrained(..., device_map=...) ignore `Tensor.retain_grad()` in Multi-GPUs setting | {
"login": "jasonrichdarmawan",
"id": 63768126,
"node_id": "MDQ6VXNlcjYzNzY4MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/63768126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasonrichdarmawan",
"html_url": "https://github.com/jasonrichdarmawan",
"followers_url": "https://api.github.com/users/jasonrichdarmawan/followers",
"following_url": "https://api.github.com/users/jasonrichdarmawan/following{/other_user}",
"gists_url": "https://api.github.com/users/jasonrichdarmawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jasonrichdarmawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jasonrichdarmawan/subscriptions",
"organizations_url": "https://api.github.com/users/jasonrichdarmawan/orgs",
"repos_url": "https://api.github.com/users/jasonrichdarmawan/repos",
"events_url": "https://api.github.com/users/jasonrichdarmawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jasonrichdarmawan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-25T15:09:21 | 2025-10-12T08:03:44 | 2025-10-12T08:03:44 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.51.3
- Platform: Linux-5.15.0-83-generic-x86_64-with-glibc2.31
- Python version: 3.11.13
- Huggingface_hub version: 0.33.0
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.1+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
Sorry the code is long. However, I can guarantee it's standalone and does not require pre-trained weights.
This problem has been bugging me for 2 days already, so I decided to create a dummy version of it. I am also sure this is a `transformers` issue: after manually setting the `device` (without `dispatch_model(model=..., device_map=...)` or `AutoModelForCausalLM.from_pretrained(..., device_map=...)`), the hidden states' grad is there.
1. Create `debugging/in_place_modification/main.py`, `debugging/in_place_modification/ipm_models/recpre/raven_config.py`, `debugging/in_place_modification/ipm_models/recpre/raven_for_causal_lm.py`, `debugging/in_place_modification/ipm_models/recpre/raven_pre_trained_model.py`, `debugging/in_place_modification/ipm_models/recpre/sandwich_block.py`, and `__init__.py` in the `debugging/in_place_modification/ipm_models` and `debugging/in_place_modification/ipm_models/recpre` folders.
2. Run the `debugging/in_place_modification/main.py`
3. Have at least 2 GPUs. Use the provided `device_map` variable.
4. See that `hidden_states[0].grad` is `None` although `x.retain_grad()` is called.
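For reference, the single-device semantics step 4 relies on (a minimal sketch, independent of the model below): `retain_grad()` on a non-leaf tensor populates its `.grad` after `backward()` as long as that exact tensor stays in the autograd graph.

```python
import torch

x = torch.randn(1, 4, requires_grad=True)
hidden = torch.nn.functional.layer_norm(x, (4,))  # stand-in for the block's norm
hidden.retain_grad()                              # keep grad on a non-leaf tensor
logits = hidden @ torch.randn(4, 4)
logits.sum().backward()

print(hidden.grad is not None)  # True on a single device
```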
The code
`debugging/in_place_modification/main.py`
```
# %%
import os
import sys
# To be able to import modules from the utils
project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
if project_root not in sys.path:
print(f"Adding project root to sys.path: {project_root}")
sys.path.insert(0, project_root)
# %%
if True:
print("Reloading modules to ensure the latest code is used.")
import sys
from importlib import reload
reload(sys.modules.get("ipm_models.recpre.raven_config", sys))
reload(sys.modules.get("ipm_models.recpre.raven_for_causal_lm", sys))
reload(sys.modules.get("ipm_models.recpre", sys))
reload(sys.modules.get("ipm_models", sys))
from ipm_models import RavenConfig
from transformers import AutoModelForCausalLM
from accelerate import dispatch_model
import torch
# %%
config = RavenConfig(
tie_word_embeddings=True,
n_layers=8,
n_layers_in_prelude=2,
n_layers_in_recurrent_block=4,
mean_recurrence=32,
n_layers_in_coda=2,
vocab_size=4,
n_embd=16,
norm_eps=1e-6,
)
model = AutoModelForCausalLM.from_config(
config=config,
)
device_map = {
"transformer.wte": 0,
"transformer.prelude": 0,
"transformer.coda": 1,
"transformer.ln_f": 1,
"lm_head": 0,
}
model = dispatch_model(
model=model,
device_map=device_map,
)
# %%
input_ids = torch.randint(
low=0,
high=config.vocab_size,
size=(1, 4),
device="cuda:0"
)
logits, all_hidden_states = model(input_ids)
logits.sum().backward()
for layer_index, hidden_state in all_hidden_states.items():
print(f"Layer {layer_index} hidden state grad:", hidden_state.grad)
print("Logits grad:", logits.grad)
# %%
```
`debugging/in_place_modification/ipm_models/__init__.py`
```
from .recpre import *
```
`debugging/in_place_modification/ipm_models/recpre/__init__.py`
```
from .raven_pre_trained_model import *
from .raven_config import *
from .raven_for_causal_lm import *
AutoConfig.register(
model_type="raven_muginn",
config=RavenConfig,
)
AutoModel.register(
config_class=RavenConfig,
model_class=RavenForCausalLM,
)
AutoModelForCausalLM.register(
config_class=RavenConfig,
model_class=RavenForCausalLM,
)
```
`debugging/in_place_modification/ipm_models/recpre/raven_config.py`
```
from transformers import PretrainedConfig
class RavenConfig(PretrainedConfig):
model_type = "raven_muginn"
keys_to_ignore_at_inference = []
attribute_map = {
"num_attention_heads": "n_heads",
"hidden_size": "n_embd",
"num_hidden_layers": "n_layers",
}
def __init__(
self,
tie_word_embeddings: bool,
n_layers: int,
n_layers_in_prelude: int,
n_layers_in_recurrent_block: int,
mean_recurrence: int,
n_layers_in_coda: int,
vocab_size: int,
n_embd: int,
norm_eps: float,
**kwargs,
):
super().__init__(
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)
self.n_layers = n_layers
self.n_layers_in_prelude = n_layers_in_prelude
self.n_layers_in_recurrent_block = n_layers_in_recurrent_block
self.n_layers_in_coda = n_layers_in_coda
self.mean_recurrence = mean_recurrence
self.vocab_size = self.padded_vocab_size = vocab_size
self.n_embd = n_embd
self.norm_eps = norm_eps
```
`debugging/in_place_modification/ipm_models/recpre/raven_for_causal_lm.py`
```
from .raven_pre_trained_model import RavenPreTrainedModel
from .raven_config import RavenConfig
from .sandwich_block import SandwichBlock
import torch
from torch import Tensor
from torch.nn import RMSNorm
from jaxtyping import Float
from transformers import AutoConfig
from transformers import AutoModel
from transformers import AutoModelForCausalLM
class RavenForCausalLM(RavenPreTrainedModel):
def __init__(
self,
config: RavenConfig,
):
super().__init__(config)
self.config = config
prelude = torch.nn.ModuleList(
SandwichBlock(config=config, layer_id=i)
for i in range(config.n_layers_in_prelude)
)
o = config.n_layers_in_prelude + config.n_layers_in_recurrent_block * config.mean_recurrence
coda = torch.nn.ModuleList(
SandwichBlock(config=config, layer_id=i + o)
for i in range(config.n_layers_in_coda)
)
self.transformer = torch.nn.ModuleDict(dict(
wte=torch.nn.Embedding(
num_embeddings=config.padded_vocab_size,
embedding_dim=config.n_embd,
),
prelude=prelude,
coda=coda,
ln_f=RMSNorm(
normalized_shape=config.n_embd,
eps=config.norm_eps
)
))
self.lm_head = torch.nn.Linear(
in_features=config.n_embd,
out_features=config.padded_vocab_size,
bias=False,
)
self.tie_weights()
def get_input_embeddings(self):
return self.transformer.wte
def get_output_embeddings(self):
return self.lm_head
def forward(self, input_ids: Float[Tensor, "batch seq_len"]):
all_hidden_states = {}
input_embeds = self.transformer.wte(input_ids)
for block in self.transformer.prelude:
input_embeds, all_hidden_states = block(
x=input_embeds,
depth_index=block.layer_id,
all_hidden_states=all_hidden_states,
)
x = input_embeds
for block in self.transformer.coda:
x, all_hidden_states = block(
x=x,
depth_index=block.layer_id,
all_hidden_states=all_hidden_states,
)
x = self.transformer.ln_f(x)
if x.requires_grad:
x.retain_grad()
all_hidden_states[132] = x
logits = self.lm_head(x)
return logits, all_hidden_states
```
`debugging/in_place_modification/ipm_models/recpre/raven_pre_trained_model.py`
```
from .raven_config import RavenConfig
from transformers import PreTrainedModel
class RavenPreTrainedModel(PreTrainedModel):
config_class = RavenConfig
```
`debugging/in_place_modification/ipm_models/recpre/sandwich_block.py`
```
from .raven_config import RavenConfig
from torch import nn
from torch import Tensor
from torch.nn import RMSNorm
from jaxtyping import Float
class SandwichBlock(nn.Module):
def __init__(
self,
config: RavenConfig,
layer_id: int,
):
super().__init__()
self.layer_id = layer_id
self.norm_1 = RMSNorm(
normalized_shape=config.n_embd,
eps=config.norm_eps
)
def forward(
self,
x: Float[Tensor, "size"],
depth_index: int,
all_hidden_states: dict[int, Float[Tensor, "size"]]
) -> tuple[
Float[Tensor, "size"],
dict[int, Float[Tensor, "size"]]
]:
x = self.norm_1(x)
if x.requires_grad:
x.retain_grad()
all_hidden_states[depth_index] = x
return x, all_hidden_states
```
### Expected behavior
Expected behavior: Use the provided `device_map` variable and the `hidden_states[0].grad` is not None. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39036/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |