url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/39536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39536/comments | https://api.github.com/repos/huggingface/transformers/issues/39536/events | https://github.com/huggingface/transformers/pull/39536 | 3,246,415,721 | PR_kwDOCUB6oc6fvN-w | 39,536 | 🌐 [i18n-KO] Translated `how_to_hack_models.md` to Korean | {
"login": "skwh54",
"id": 108786184,
"node_id": "U_kgDOBnvyCA",
"avatar_url": "https://avatars.githubusercontent.com/u/108786184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skwh54",
"html_url": "https://github.com/skwh54",
"followers_url": "https://api.github.com/users/skwh54/followers",
"following_url": "https://api.github.com/users/skwh54/following{/other_user}",
"gists_url": "https://api.github.com/users/skwh54/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skwh54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skwh54/subscriptions",
"organizations_url": "https://api.github.com/users/skwh54/orgs",
"repos_url": "https://api.github.com/users/skwh54/repos",
"events_url": "https://api.github.com/users/skwh54/events{/privacy}",
"received_events_url": "https://api.github.com/users/skwh54/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-20T14:31:42 | 2025-07-29T15:09:16 | 2025-07-29T15:09:16 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39536",
"html_url": "https://github.com/huggingface/transformers/pull/39536",
"diff_url": "https://github.com/huggingface/transformers/pull/39536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39536.patch",
"merged_at": "2025-07-29T15:09:16"
} | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `how_to_hack_models.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar check
- [x] Review or add new terms to the glossary
- [x] Check inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check the live preview for gotchas
## Who can review? (Initial)
<!-- 1. Only reveal the comment below, requesting a review from the KREW members, after all of the checks above are complete! -->
Could you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Reveal the comment below after the KREW members' review is finished! -->
@stevhliu Could you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39536/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39551/comments | https://api.github.com/repos/huggingface/transformers/issues/39551/events | https://github.com/huggingface/transformers/issues/39551 | 3,248,232,979 | I_kwDOCUB6oc7BnBoT | 39,551 | InformerForPrediction: how can I set the dynamic real features for prediction? | {
"login": "2004learner",
"id": 117647339,
"node_id": "U_kgDOBwMn6w",
"avatar_url": "https://avatars.githubusercontent.com/u/117647339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/2004learner",
"html_url": "https://github.com/2004learner",
"followers_url": "https://api.github.com/users/2004learner/followers",
"following_url": "https://api.github.com/users/2004learner/following{/other_user}",
"gists_url": "https://api.github.com/users/2004learner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/2004learner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2004learner/subscriptions",
"organizations_url": "https://api.github.com/users/2004learner/orgs",
"repos_url": "https://api.github.com/users/2004learner/repos",
"events_url": "https://api.github.com/users/2004learner/events{/privacy}",
"received_events_url": "https://api.github.com/users/2004learner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-20T11:38:50 | 2025-08-28T08:03:20 | 2025-08-28T08:03:20 | NONE | null | null | null | null | Here is the description cited from the InformerForPrediction docs:
> future_time_features (`torch.FloatTensor` of shape `(batch_size, prediction_length, num_features)`) – Required time features for the prediction window, which the model internally will add to `future_values`. These could be things like "month of year", "day of the month", etc., encoded as vectors (for instance as Fourier features). These could also be so-called "age" features, which basically help the model know "at which point in life" a time series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step. Holiday features are also a good example of time features.
> These features serve as the "positional encodings" of the inputs. So, contrary to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires you to provide additional time features. The Time Series Transformer only learns additional embeddings for `static_categorical_features`.
> Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features must be known at prediction time.
> The `num_features` here is equal to `config.num_time_features` + `config.num_dynamic_real_features`.
Hi, I have a question regarding inference in time series forecasting models.
When making predictions, how can I obtain or construct the dynamic_real_features for the future steps (i.e., for the prediction_length)?
More specifically, how should I concatenate the corresponding dynamic_real_features and time_features during inference?
Is it appropriate to use all-zero placeholders for the future dynamic_real_features?
Will this affect prediction performance, considering that during training the model has access to real values for these features over the full context + prediction window?
On a related note:
In time series forecasting, is it necessary for all timestamps in the input window to be equally spaced (e.g., every x minutes)?
Or can I use sequences with irregular time intervals, as long as the time order is preserved?
Thanks for your help!
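To make the question concrete, here is a minimal, framework-free sketch of how such a combined future feature tensor could be assembled. All names here, and the zero-placeholder choice for the unknown dynamic real covariates, are my own assumptions for illustration, not the Informer API; whether zero placeholders hurt accuracy depends on how the model was trained.

```python
# Illustrative sketch (plain Python): build a nested list with the shape
# (batch_size, prediction_length, num_time_features + num_dynamic_real_features).

def build_future_features(future_timestamps, num_dynamic_real_features=1):
    """future_timestamps: per-batch lists of (month, day) pairs, one pair
    per future step; returns a nested list with the shape described above."""
    batch = []
    for series in future_timestamps:
        steps = []
        for month, day in series:
            time_feats = [month / 12.0, day / 31.0]  # known calendar features
            # Placeholder for covariates unknown at prediction time:
            dyn_feats = [0.0] * num_dynamic_real_features
            steps.append(time_feats + dyn_feats)
        batch.append(steps)
    return batch

feats = build_future_features([[(7, 20), (7, 21), (7, 22)]])
# batch_size=1, prediction_length=3, num_features = 2 + 1 = 3
```

In practice the nested list would be converted to a `torch.FloatTensor` before being passed as `future_time_features`.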
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39551/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39535/comments | https://api.github.com/repos/huggingface/transformers/issues/39535/events | https://github.com/huggingface/transformers/pull/39535 | 3,246,154,317 | PR_kwDOCUB6oc6fuZ-B | 39,535 | 🌐 [i18n-KO] Translated `cache_explanation.md` to Korean | {
"login": "pyapyapya",
"id": 42240862,
"node_id": "MDQ6VXNlcjQyMjQwODYy",
"avatar_url": "https://avatars.githubusercontent.com/u/42240862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pyapyapya",
"html_url": "https://github.com/pyapyapya",
"followers_url": "https://api.github.com/users/pyapyapya/followers",
"following_url": "https://api.github.com/users/pyapyapya/following{/other_user}",
"gists_url": "https://api.github.com/users/pyapyapya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pyapyapya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pyapyapya/subscriptions",
"organizations_url": "https://api.github.com/users/pyapyapya/orgs",
"repos_url": "https://api.github.com/users/pyapyapya/repos",
"events_url": "https://api.github.com/users/pyapyapya/events{/privacy}",
"received_events_url": "https://api.github.com/users/pyapyapya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-20T08:39:03 | 2025-08-05T15:20:14 | 2025-08-05T15:20:14 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39535",
"html_url": "https://github.com/huggingface/transformers/pull/39535",
"diff_url": "https://github.com/huggingface/transformers/pull/39535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39535.patch",
"merged_at": "2025-08-05T15:20:14"
} | <!-- Please title the PR "🌐 [i18n-KO] Translated `cache_explanation.md` to Korean" -->
# What does this PR do?
Translated the `cache_explanation.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar check
- [x] Review or add new terms to the glossary
- [x] Check inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check the live preview for gotchas
## Who can review? (Initial)
<!-- 1. Only reveal the comment below, requesting a review from the KREW members, after all of the checks above are complete! -->
Could you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Reveal the comment below after the KREW members' review is finished! -->
<!-- @stevhliu Could you please review this PR? --> | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39535/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39535/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39534/comments | https://api.github.com/repos/huggingface/transformers/issues/39534/events | https://github.com/huggingface/transformers/pull/39534 | 3,246,117,265 | PR_kwDOCUB6oc6fuSlD | 39,534 | Add Beit3 model | {
"login": "Leon0402",
"id": 33933142,
"node_id": "MDQ6VXNlcjMzOTMzMTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/33933142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Leon0402",
"html_url": "https://github.com/Leon0402",
"followers_url": "https://api.github.com/users/Leon0402/followers",
"following_url": "https://api.github.com/users/Leon0402/following{/other_user}",
"gists_url": "https://api.github.com/users/Leon0402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Leon0402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leon0402/subscriptions",
"organizations_url": "https://api.github.com/users/Leon0402/orgs",
"repos_url": "https://api.github.com/users/Leon0402/repos",
"events_url": "https://api.github.com/users/Leon0402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Leon0402/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-20T07:42:17 | 2025-10-09T08:14:25 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39534",
"html_url": "https://github.com/huggingface/transformers/pull/39534",
"diff_url": "https://github.com/huggingface/transformers/pull/39534.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39534.patch",
"merged_at": null
} | Fixes https://github.com/huggingface/transformers/issues/22178
This is a rebase of the original PR: https://github.com/huggingface/transformers/pull/22289
Potentially Missing:
- Using Scaled Dot Product Attention (SDPA)
- BeitImageProcessorFast Support
- Beit3Backbone | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39534/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39534/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39533/comments | https://api.github.com/repos/huggingface/transformers/issues/39533/events | https://github.com/huggingface/transformers/issues/39533 | 3,246,051,325 | I_kwDOCUB6oc7Bes_9 | 39,533 | KeyError: 'llava_qwen2' | {
"login": "fengyuentau",
"id": 17219438,
"node_id": "MDQ6VXNlcjE3MjE5NDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/17219438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fengyuentau",
"html_url": "https://github.com/fengyuentau",
"followers_url": "https://api.github.com/users/fengyuentau/followers",
"following_url": "https://api.github.com/users/fengyuentau/following{/other_user}",
"gists_url": "https://api.github.com/users/fengyuentau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fengyuentau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fengyuentau/subscriptions",
"organizations_url": "https://api.github.com/users/fengyuentau/orgs",
"repos_url": "https://api.github.com/users/fengyuentau/repos",
"events_url": "https://api.github.com/users/fengyuentau/events{/privacy}",
"received_events_url": "https://api.github.com/users/fengyuentau/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-20T06:25:55 | 2025-07-22T12:51:29 | 2025-07-21T09:12:44 | NONE | null | null | null | null | ### System Info
transformers: 4.53.0
os: wsl2
python: 3.12.4
### Who can help?
Hello @amyeroberts @qubvel, I was trying to set up FastVLM inference with SGLang, and it reported that `llava_qwen2` is not supported by Transformers. The log is attached below:
```
Traceback (most recent call last):
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1218, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 914, in __getitem__
raise KeyError(key)
KeyError: 'llava_qwen2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/sglang/launch_server.py", line 11, in <module>
server_args = prepare_server_args(sys.argv[1:])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/sglang/srt/server_args.py", line 1748, in prepare_server_args
server_args = ServerArgs.from_cli_args(raw_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/sglang/srt/server_args.py", line 1691, in from_cli_args
return cls(**{attr: getattr(args, attr) for attr in attrs})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 171, in __init__
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/sglang/srt/server_args.py", line 329, in __post_init__
model_config = ModelConfig.from_server_args(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/sglang/srt/configs/model_config.py", line 278, in from_server_args
return ModelConfig(
^^^^^^^^^^^^
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/sglang/srt/configs/model_config.py", line 79, in __init__
self.hf_config = get_config(
^^^^^^^^^^^
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/sglang/srt/utils.py", line 2775, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/sglang/srt/hf_transformers_utils.py", line 121, in get_config
config = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/sglang/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1220, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type `llava_qwen2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install sglang https://docs.sglang.ai/start/install.html
2. Download models from https://github.com/apple/ml-fastvlm?tab=readme-ov-file#model-zoo
3. Run with the following command:
```
python3 -m sglang.launch_server \
--model-path /path/to/ml-fastvlm/checkpoints/llava-fastvithd_7b_stage2 \
--host 0.0.0.0 \
--port 30000
```
### Expected behavior
It should just work. I can contribute a fix if given some guidance. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39533/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39532/comments | https://api.github.com/repos/huggingface/transformers/issues/39532/events | https://github.com/huggingface/transformers/pull/39532 | 3,245,996,097 | PR_kwDOCUB6oc6ft5AM | 39,532 | 🌐 [i18n-KO] Translated `tokenizer.md` to Korean | {
"login": "seopp",
"id": 100005890,
"node_id": "U_kgDOBfX4Ag",
"avatar_url": "https://avatars.githubusercontent.com/u/100005890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seopp",
"html_url": "https://github.com/seopp",
"followers_url": "https://api.github.com/users/seopp/followers",
"following_url": "https://api.github.com/users/seopp/following{/other_user}",
"gists_url": "https://api.github.com/users/seopp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seopp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seopp/subscriptions",
"organizations_url": "https://api.github.com/users/seopp/orgs",
"repos_url": "https://api.github.com/users/seopp/repos",
"events_url": "https://api.github.com/users/seopp/events{/privacy}",
"received_events_url": "https://api.github.com/users/seopp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-20T05:04:14 | 2025-07-29T15:04:15 | 2025-07-29T15:04:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39532",
"html_url": "https://github.com/huggingface/transformers/pull/39532",
"diff_url": "https://github.com/huggingface/transformers/pull/39532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39532.patch",
"merged_at": "2025-07-29T15:04:14"
} | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `tokenizer.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar check
- [x] Review or add new terms to the glossary
- [x] Check inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check the live preview for gotchas
## Who can review? (Initial)
<!-- 1. Only reveal the comment below, requesting a review from the KREW members, after all of the checks above are complete! -->
Could you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Reveal the comment below after the KREW members' review is finished! -->
@stevhliu Could you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39532/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/39532/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39531/comments | https://api.github.com/repos/huggingface/transformers/issues/39531/events | https://github.com/huggingface/transformers/pull/39531 | 3,245,988,474 | PR_kwDOCUB6oc6ft3dQ | 39,531 | updated mistral3 model card | {
"login": "cassiasamp",
"id": 4005687,
"node_id": "MDQ6VXNlcjQwMDU2ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4005687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cassiasamp",
"html_url": "https://github.com/cassiasamp",
"followers_url": "https://api.github.com/users/cassiasamp/followers",
"following_url": "https://api.github.com/users/cassiasamp/following{/other_user}",
"gists_url": "https://api.github.com/users/cassiasamp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cassiasamp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cassiasamp/subscriptions",
"organizations_url": "https://api.github.com/users/cassiasamp/orgs",
"repos_url": "https://api.github.com/users/cassiasamp/repos",
"events_url": "https://api.github.com/users/cassiasamp/events{/privacy}",
"received_events_url": "https://api.github.com/users/cassiasamp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-20T04:49:54 | 2025-07-22T16:01:55 | 2025-07-22T16:01:55 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39531",
"html_url": "https://github.com/huggingface/transformers/pull/39531",
"diff_url": "https://github.com/huggingface/transformers/pull/39531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39531.patch",
"merged_at": "2025-07-22T16:01:55"
} | # What does this PR do?
Updates `Mistral3` model card as per #36979
* updated mistral3 model card
* applying suggestions from code review
* made all changes to mistral3.md
* adding space between paragraphs in docs/source/en/model_doc/mistral3.md
* removing duplicate in mistral3.md
* opens PR in correct main branch
---------
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu Please check the PR and see if it's alright ๐
The examples need to be run to be double-checked, if that's still ok. I will try to run them for possible next contributions.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39531/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39530/comments | https://api.github.com/repos/huggingface/transformers/issues/39530/events | https://github.com/huggingface/transformers/issues/39530 | 3,245,983,352 | I_kwDOCUB6oc7BecZ4 | 39,530 | Typo in `apply_transcrition_request` method name | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-20T04:41:05 | 2025-07-25T12:09:45 | 2025-07-25T12:09:45 | CONTRIBUTOR | null | null | null | null | https://github.com/huggingface/transformers/blob/34133d0a790787739bfc9a42603985de3728ede4/src/transformers/models/voxtral/processing_voxtral.py#L287
should most likely be `apply_transcription_request` :)
cc @eustlb
I would open a PR, but this will also require some updates to the various model cards (which use this method name). | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39530/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39530/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39529/comments | https://api.github.com/repos/huggingface/transformers/issues/39529/events | https://github.com/huggingface/transformers/pull/39529 | 3,245,888,825 | PR_kwDOCUB6oc6ftm5M | 39,529 | build: Add fast image processor tvp | {
"login": "adutchengineer",
"id": 24729684,
"node_id": "MDQ6VXNlcjI0NzI5Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/24729684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adutchengineer",
"html_url": "https://github.com/adutchengineer",
"followers_url": "https://api.github.com/users/adutchengineer/followers",
"following_url": "https://api.github.com/users/adutchengineer/following{/other_user}",
"gists_url": "https://api.github.com/users/adutchengineer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adutchengineer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adutchengineer/subscriptions",
"organizations_url": "https://api.github.com/users/adutchengineer/orgs",
"repos_url": "https://api.github.com/users/adutchengineer/repos",
"events_url": "https://api.github.com/users/adutchengineer/events{/privacy}",
"received_events_url": "https://api.github.com/users/adutchengineer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-20T03:06:41 | 2025-08-14T15:48:57 | 2025-08-14T15:48:19 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39529",
"html_url": "https://github.com/huggingface/transformers/pull/39529",
"diff_url": "https://github.com/huggingface/transformers/pull/39529.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39529.patch",
"merged_at": "2025-08-14T15:48:19"
} | Adds TVP from https://github.com/huggingface/transformers/issues/36978
cc: @yonigozlan
| {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39529/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39528/comments | https://api.github.com/repos/huggingface/transformers/issues/39528/events | https://github.com/huggingface/transformers/pull/39528 | 3,245,680,706 | PR_kwDOCUB6oc6fs8Q- | 39,528 | standardized YOLOS model card according to template in #36979 | {
"login": "EthanV431",
"id": 113210015,
"node_id": "U_kgDOBr9ynw",
"avatar_url": "https://avatars.githubusercontent.com/u/113210015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EthanV431",
"html_url": "https://github.com/EthanV431",
"followers_url": "https://api.github.com/users/EthanV431/followers",
"following_url": "https://api.github.com/users/EthanV431/following{/other_user}",
"gists_url": "https://api.github.com/users/EthanV431/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EthanV431/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EthanV431/subscriptions",
"organizations_url": "https://api.github.com/users/EthanV431/orgs",
"repos_url": "https://api.github.com/users/EthanV431/repos",
"events_url": "https://api.github.com/users/EthanV431/events{/privacy}",
"received_events_url": "https://api.github.com/users/EthanV431/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T21:25:18 | 2025-07-23T21:25:21 | 2025-07-23T18:00:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39528",
"html_url": "https://github.com/huggingface/transformers/pull/39528",
"diff_url": "https://github.com/huggingface/transformers/pull/39528.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39528.patch",
"merged_at": "2025-07-23T18:00:26"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This PR standardizes the YOLOS model card in the docs. I've followed the template in #36979 as best I can, but I'm not sure if I've done it correctly since the information on this model didn't exactly fit 1-to-1 with the template. I got rid of the table and the images since the other PRs for this did that. Let me know if I need to make any changes.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39528/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39527/comments | https://api.github.com/repos/huggingface/transformers/issues/39527/events | https://github.com/huggingface/transformers/issues/39527 | 3,245,629,848 | I_kwDOCUB6oc7BdGGY | 39,527 | training google colab error | {
"login": "unoroberto12",
"id": 221759458,
"node_id": "U_kgDODTfH4g",
"avatar_url": "https://avatars.githubusercontent.com/u/221759458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unoroberto12",
"html_url": "https://github.com/unoroberto12",
"followers_url": "https://api.github.com/users/unoroberto12/followers",
"following_url": "https://api.github.com/users/unoroberto12/following{/other_user}",
"gists_url": "https://api.github.com/users/unoroberto12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unoroberto12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unoroberto12/subscriptions",
"organizations_url": "https://api.github.com/users/unoroberto12/orgs",
"repos_url": "https://api.github.com/users/unoroberto12/repos",
"events_url": "https://api.github.com/users/unoroberto12/events{/privacy}",
"received_events_url": "https://api.github.com/users/unoroberto12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-19T20:33:33 | 2025-08-18T14:06:03 | 2025-08-18T14:06:03 | NONE | null | null | null | null | ### System Info
when I started training a LoRA in Google Colab, I got this error. Please help me if you can
Checking dataset...
MyDrive/Loras/Kamixxx_style_lora/dataset
Found 11 images with 10 repeats, equaling 110 steps.
Divide 110 steps by 2 batch size to get 55.0 steps per epoch.
There will be 10 epochs, for around 550 total training steps.
Dependencies already installed.
Model already downloaded.
Config saved to /content/drive/MyDrive/Loras/Kamixxx_style_lora/training_config.toml
Dataset config saved to /content/drive/MyDrive/Loras/Kamixxx_style_lora/dataset_config.toml
Starting trainer...
ipex flag is deprecated, will be removed in Accelerate v1.10. From 2.7.0, PyTorch has all needed optimizations for Intel CPU and XPU.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1752956937.419400 2914 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1752956937.429199 2914 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
Traceback (most recent call last):
File "/content/kohya-trainer/train_network.py", line 17, in <module>
import library.train_util as train_util
File "/content/kohya-trainer/library/train_util.py", line 1768, in <module>
def replace_unet_modules(unet: diffusers.models.unet_2d_condition.UNet2DConditionModel, mem_eff_attn, xformers):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/diffusers/utils/import_utils.py", line 876, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module diffusers.models has no attribute unet_2d_condition. Did you mean: 'unets.unet_2d_condition'?
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 10, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/accelerate_cli.py", line 50, in main
args.func(args)
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 1199, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 785, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_network.py', '--dataset_config=/content/drive/MyDrive/Loras/Kamixxx_style_lora/dataset_config.toml', '--config_file=/content/drive/MyDrive/Loras/Kamixxx_style_lora/training_config.toml']' returned non-zero exit status 1.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39527/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39526/comments | https://api.github.com/repos/huggingface/transformers/issues/39526/events | https://github.com/huggingface/transformers/pull/39526 | 3,245,622,339 | PR_kwDOCUB6oc6fsv3I | 39,526 | build: add TvpImageProcessorFast | {
"login": "adutchengineer",
"id": 24729684,
"node_id": "MDQ6VXNlcjI0NzI5Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/24729684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adutchengineer",
"html_url": "https://github.com/adutchengineer",
"followers_url": "https://api.github.com/users/adutchengineer/followers",
"following_url": "https://api.github.com/users/adutchengineer/following{/other_user}",
"gists_url": "https://api.github.com/users/adutchengineer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adutchengineer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adutchengineer/subscriptions",
"organizations_url": "https://api.github.com/users/adutchengineer/orgs",
"repos_url": "https://api.github.com/users/adutchengineer/repos",
"events_url": "https://api.github.com/users/adutchengineer/events{/privacy}",
"received_events_url": "https://api.github.com/users/adutchengineer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T20:20:53 | 2025-07-19T21:54:07 | 2025-07-19T21:54:07 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39526",
"html_url": "https://github.com/huggingface/transformers/pull/39526",
"diff_url": "https://github.com/huggingface/transformers/pull/39526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39526.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds TVP from [# (issue 36978)](https://github.com/huggingface/transformers/issues/36978)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "adutchengineer",
"id": 24729684,
"node_id": "MDQ6VXNlcjI0NzI5Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/24729684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adutchengineer",
"html_url": "https://github.com/adutchengineer",
"followers_url": "https://api.github.com/users/adutchengineer/followers",
"following_url": "https://api.github.com/users/adutchengineer/following{/other_user}",
"gists_url": "https://api.github.com/users/adutchengineer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adutchengineer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adutchengineer/subscriptions",
"organizations_url": "https://api.github.com/users/adutchengineer/orgs",
"repos_url": "https://api.github.com/users/adutchengineer/repos",
"events_url": "https://api.github.com/users/adutchengineer/events{/privacy}",
"received_events_url": "https://api.github.com/users/adutchengineer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39526/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39525/comments | https://api.github.com/repos/huggingface/transformers/issues/39525/events | https://github.com/huggingface/transformers/issues/39525 | 3,245,402,342 | I_kwDOCUB6oc7BcOjm | 39,525 | paged attention NOT working with Qwen Models | {
"login": "NickNickGo",
"id": 66033489,
"node_id": "MDQ6VXNlcjY2MDMzNDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/66033489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NickNickGo",
"html_url": "https://github.com/NickNickGo",
"followers_url": "https://api.github.com/users/NickNickGo/followers",
"following_url": "https://api.github.com/users/NickNickGo/following{/other_user}",
"gists_url": "https://api.github.com/users/NickNickGo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NickNickGo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NickNickGo/subscriptions",
"organizations_url": "https://api.github.com/users/NickNickGo/orgs",
"repos_url": "https://api.github.com/users/NickNickGo/repos",
"events_url": "https://api.github.com/users/NickNickGo/events{/privacy}",
"received_events_url": "https://api.github.com/users/NickNickGo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-19T15:52:00 | 2025-08-29T16:13:47 | 2025-08-28T08:03:22 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.2
- Platform: Linux-5.10.225-213.878.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: 0.17.2
- PyTorch version (accelerator?): 2.6.0 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
I'm trying to train a Qwen model with a long context (4096) using paged attention. When I switch the attention implementation to paged attention, it immediately errors out. Here is the traceback:

```
Traceback (most recent call last):
  File "/mnt/task_runtime/mango/train/grpo/train_grpo_token_space_efficient_reasoning.py", line 251, in <module>
    main()
  File "/mnt/task_runtime/mango/train/grpo/train_grpo_token_space_efficient_reasoning.py", line 247, in main
    trainer.train()
  File "/miniconda/lib/python3.10/site-packages/transformers/trainer.py", line 2206, in train
    return inner_training_loop(
  File "/miniconda/lib/python3.10/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
  File "/miniconda/lib/python3.10/site-packages/transformers/trainer.py", line 3743, in training_step
    inputs = self._prepare_inputs(inputs)
  File "/miniconda/lib/python3.10/site-packages/trl/extras/profiling.py", line 98, in wrapper
    return func(self, *args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/trl/trainer/grpo_trainer.py", line 990, in _prepare_inputs
    generation_batch = self._generate_and_score_completions(generation_batch)
  File "/miniconda/lib/python3.10/site-packages/trl/trainer/grpo_trainer.py", line 1174, in _generate_and_score_completions
    prompt_completion_ids = unwrapped_model.generate(
  File "/miniconda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/transformers/generation/utils.py", line 2625, in generate
    result = self._sample(
  File "/miniconda/lib/python3.10/site-packages/transformers/generation/utils.py", line 3606, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/accelerate/utils/operations.py", line 819, in forward
    return model_forward(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/accelerate/utils/operations.py", line 807, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/miniconda/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
    return func(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/transformers/utils/generic.py", line 943, in wrapper
    output = func(self, *args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/transformers/models/qwen3/modeling_qwen3.py", line 570, in forward
    outputs: BaseModelOutputWithPast = self.model(
  File "/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/transformers/utils/generic.py", line 943, in wrapper
    output = func(self, *args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/transformers/models/qwen3/modeling_qwen3.py", line 458, in forward
    layer_outputs = decoder_layer(
  File "/miniconda/lib/python3.10/site-packages/transformers/modeling_layers.py", line 83, in __call__
    return super().__call__(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/transformers/models/qwen3/modeling_qwen3.py", line 262, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/miniconda/lib/python3.10/site-packages/transformers/models/qwen3/modeling_qwen3.py", line 217, in forward
    attn_output, attn_weights = attention_interface(
  File "/miniconda/lib/python3.10/site-packages/transformers/integrations/flash_paged.py", line 47, in paged_attention_forward
    k, v = cache.update(k, v, module.layer_idx, cumulative_seqlens_k=cumulative_seqlens_k, **kwargs)
AttributeError: 'NoneType' object has no attribute 'update'
```
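The failing call can be reproduced in isolation with plain Python (a toy stand-in, not the transformers code itself): `paged_attention_forward` calls `.update()` on whatever cache object it receives, so when no paged cache has been allocated the `cache` argument is `None` and the call raises exactly this `AttributeError`.

```python
# Toy stand-in for the failure mode: the kernel-side function assumes a
# paged cache was allocated and passed in, and calls .update() on it
# unconditionally, with no None-check.

class PagedCache:
    """Toy cache exposing the .update() interface the attention code expects."""
    def __init__(self):
        self.store = {}

    def update(self, k, v, layer_idx, **kwargs):
        self.store[layer_idx] = (k, v)
        return k, v

def paged_attention_sketch(cache, k, v, layer_idx=0):
    # Mirrors the failing line: cache.update(...) with no guard on cache
    return cache.update(k, v, layer_idx)

try:
    paged_attention_sketch(None, k=[1.0], v=[2.0])
except AttributeError as e:
    print(f"AttributeError: {e}")  # same class of error as in the report
```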
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
# train_grpo.py
import sys
import argparse
import wandb
import torch
import torch.distributed as dist  # needed by get_world_size() below; missing in the original paste
import os

sys.path.append("/mnt/task_runtime/")
# NOTE: init_distributed, get_model, get_grpo_dataset, rank0_print,
# REWARD_FUNC_MAP, and custom_wrap_policy_qwen are project helpers loaded
# from /mnt/task_runtime and are not shown in this snippet.
from datasets import load_dataset

try:
    from trl import GRPOConfig, GRPOTrainer
except ImportError:
    from trl.trl import GRPOConfig, GRPOTrainer


def get_world_size():
    if dist.is_available() and dist.is_initialized():
        return dist.get_world_size()
    else:
        return 1  # fallback for non-distributed run


def sanitize_for_wandb(name):
    return name.replace("/", "-").replace(":", "-")


def get_args():
    parser = argparse.ArgumentParser(description="Train with GRPO")
    parser.add_argument(
        "--model_name",
        type=str,
        default="Qwen/Qwen2-0.5B-Instruct",
        required=True,
        help="Model name or path",
    )
    parser.add_argument(
        "--ckpt_path", type=str, default=None, help="Optional checkpoint to resume from"
    )
    parser.add_argument(
        "--rewards",
        type=str,
        default="efficient_reasoning",
        help="Comma-separated reward functions",
    )
    # add reward weights argument
    parser.add_argument(
        "--reward_weights",
        type=str,
        default=None,
        help="Comma-separated weights for each reward function",
    )
    parser.add_argument(
        "--output_dir",
        type=str,
        required=True,
        default="output_token_space_efficient_reasoning",
        help="Output directory for training artifacts",
    )
    parser.add_argument(
        "--dataset_name",
        type=str,
        default=None,
        help="Dataset name (e.g., gsm8k, trl-lib/tldr)",
    )
    parser.add_argument(
        "--dataset_split",
        type=int,
        default=1024,
        help="Subset of training data to use (optional)",
    )
    parser.add_argument(
        "--use_peft",
        action="store_true",
        help="only train Lora",
    )
    parser.add_argument(
        "--use_quant",
        action="store_true",
        help="load base model in 4/8 bits",
    )
    parser.add_argument(
        "--use_fsdp",
        action="store_true",
        help="Use Fully Sharded Data Parallel (FSDP) for distributed training",
    )
    parser.add_argument(
        "--use_attention",
        type=str,
        default="flash_attention_2",
        help="Attention implementation to use (e.g., flash_attention_2, default)",
    )
    parser.add_argument(
        "--max_context_length",
        type=int,
        default=1024,
        help="Maximum context length",
    )
    parser.add_argument(
        "--dataset_difficulty",
        type=str,
        default=None,
        help="Difficulty level of the dataset (e.g., easy, medium, hard)",
    )
    parser.add_argument(
        "--expt_tag",
        type=str,
        default="",
        help="Experiment tag for wandb",
    )
    parser.add_argument(
        "--per_device_train_batch_size",
        type=int,
        default=4,
        help="Batch size per device for training",
    )
    parser.add_argument(
        "--use_unsloth",
        action="store_true",
        help="Use Unsloth for fast inference",
    )
    return parser.parse_args()


def main():
    args = get_args()
    init_distributed()
    torch.backends.cuda.enable_mem_efficient_sdp(True)
    model = get_model(
        args.model_name,
        use_peft=args.use_peft,
        use_quant=args.use_quant,
        use_attention=args.use_attention,
        ckpt_path=args.ckpt_path,
        use_unsloth=args.use_unsloth,
    )
    train_dataset, val_datasets, eval_datasets = get_grpo_dataset(
        dataset_name=args.dataset_name,
        n_shot=0,  # Set to 0 for zero-shot training
    )
    rank0_print(f"Using datasets {train_dataset} and {val_datasets}")
    # if torch.distributed.get_rank() == 0:
    if not dist.is_available() or not dist.is_initialized() or dist.get_rank() == 0:
        project_name = sanitize_for_wandb(
            f"{args.expt_tag}_GRPO_{'PEFT' if args.use_peft else ''}_{args.model_name}_{args.dataset_name if args.dataset_name else args.dataset_difficulty}"
        )
        wandb.init(
            project=project_name,  # change to your wandb project
            name=args.expt_tag + args.output_dir,  # optional run name
        )
    per_device_train_batch_size = args.per_device_train_batch_size
    num_processes = get_world_size()
    gradient_accumulation_steps = 1
    effective_batch_size = (
        per_device_train_batch_size * num_processes * gradient_accumulation_steps
    )
    for name, param in model.named_parameters():
        if param.requires_grad:
            rank0_print(f"Trainable parameter: {name}, shape: {param.shape}")
    fsdp_config = dict(
        fsdp="full_shard auto_wrap",
        auto_wrap_policy=custom_wrap_policy_qwen,
    )
    training_args = GRPOConfig(
        output_dir=args.output_dir,
        logging_steps=10,
        num_train_epochs=3,
        per_device_train_batch_size=per_device_train_batch_size,
        save_steps=100,
        report_to="wandb",
        eval_steps=1000 if val_datasets else None,
        eval_strategy="steps" if val_datasets else "no",
        num_generations=effective_batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        per_device_eval_batch_size=per_device_train_batch_size,
        max_completion_length=args.max_context_length,
        fsdp_config=fsdp_config if args.use_fsdp else None,
        # bf16=True,  # Enable bf16 if supported
        # fp16=True,  # Enable fp16 for training
    )
    rewards = [REWARD_FUNC_MAP[reward] for reward in args.rewards.split(",")]
    if args.reward_weights:
        reward_weights = list(map(float, args.reward_weights.split(",")))
        training_args.reward_weights = reward_weights
    # model.gradient_checkpointing_enable()
    torch.backends.cuda.enable_mem_efficient_sdp(True)
    trainer = GRPOTrainer(
        model=model,
        reward_funcs=rewards,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=val_datasets[0] if val_datasets else None,
    )
    trainer.train()


if __name__ == "__main__":
    main()
```
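The batch-size arithmetic from `main()` can be checked in isolation (a minimal sketch with example values, not figures from the report): the script sets `num_generations` equal to the effective batch size, which trivially satisfies GRPO's requirement that the global generation batch be divisible by `num_generations`.

```python
# Isolated sketch of the batch-size arithmetic used in main() above.
def effective_batch_size(per_device, num_processes, grad_accum=1):
    return per_device * num_processes * grad_accum

per_device, world_size = 4, 2          # example values, not from the report
ebs = effective_batch_size(per_device, world_size)
num_generations = ebs                  # as in the script
assert ebs % num_generations == 0      # divisibility requirement holds
print(ebs)  # 8
```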
### Expected behavior
Specifying the attention implementation as paged attention should not change anything. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39525/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39524/comments | https://api.github.com/repos/huggingface/transformers/issues/39524/events | https://github.com/huggingface/transformers/pull/39524 | 3,245,363,147 | PR_kwDOCUB6oc6fr7Sb | 39,524 | 🌐 [i18n-KO] Translated albert.md to Korean | {
"login": "ahnjj",
"id": 23564581,
"node_id": "MDQ6VXNlcjIzNTY0NTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/23564581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahnjj",
"html_url": "https://github.com/ahnjj",
"followers_url": "https://api.github.com/users/ahnjj/followers",
"following_url": "https://api.github.com/users/ahnjj/following{/other_user}",
"gists_url": "https://api.github.com/users/ahnjj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahnjj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahnjj/subscriptions",
"organizations_url": "https://api.github.com/users/ahnjj/orgs",
"repos_url": "https://api.github.com/users/ahnjj/repos",
"events_url": "https://api.github.com/users/ahnjj/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahnjj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T14:57:25 | 2025-07-29T15:03:41 | 2025-07-29T15:03:41 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39524",
"html_url": "https://github.com/huggingface/transformers/pull/39524",
"diff_url": "https://github.com/huggingface/transformers/pull/39524.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39524.patch",
"merged_at": "2025-07-29T15:03:41"
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `albert.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar Check
- [x] Review or Add new terms to glossary
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas
## Who can review? (Initial)
<!-- 1. Please request a review from KREW members only after all the checks above are complete! -->
Could you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. After the KREW members have finished their review, please uncomment the line below! -->
<!-- @stevhliu May you please review this PR? --> | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39524/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39523/comments | https://api.github.com/repos/huggingface/transformers/issues/39523/events | https://github.com/huggingface/transformers/pull/39523 | 3,245,312,484 | PR_kwDOCUB6oc6frxeI | 39,523 | Update attention modules of `SeamlessM4T` | {
"login": "bvantuan",
"id": 37981884,
"node_id": "MDQ6VXNlcjM3OTgxODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37981884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bvantuan",
"html_url": "https://github.com/bvantuan",
"followers_url": "https://api.github.com/users/bvantuan/followers",
"following_url": "https://api.github.com/users/bvantuan/following{/other_user}",
"gists_url": "https://api.github.com/users/bvantuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bvantuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bvantuan/subscriptions",
"organizations_url": "https://api.github.com/users/bvantuan/orgs",
"repos_url": "https://api.github.com/users/bvantuan/repos",
"events_url": "https://api.github.com/users/bvantuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/bvantuan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T13:36:36 | 2025-07-19T13:39:00 | 2025-07-19T13:39:00 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39523",
"html_url": "https://github.com/huggingface/transformers/pull/39523",
"diff_url": "https://github.com/huggingface/transformers/pull/39523.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39523.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When loading a model using this line:
```python
model = SeamlessM4TForSpeechToText.from_pretrained("facebook/hf-seamless-m4t-medium")
```
a warning message is returned:
```
Instantiating a decoder SeamlessM4TAttention without passing `layer_idx` is not recommended and will lead to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` when creating this class.
```
I think we need to update the attention modules of `SeamlessM4T`. I've added `layer_idx`, `layer_head_mask`, and `cache_position` to the attention classes and also incorporated the new cache classes (`DynamicCache`, `EncoderDecoderCache`). Most of the code was copied from `Whisper`. The models now support all the latest attention functions (SDPA, Flash Attention, and others).
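The `layer_idx` plumbing described above can be illustrated with a toy stand-in (purely hypothetical code, not the transformers implementation): when a single cache object is shared across all decoder layers, each attention layer must index its own slot, which is why constructing a layer without a `layer_idx` triggers the warning.

```python
# Hypothetical sketch of why each attention layer needs a layer_idx when
# one cache is shared across layers: update() indexes per-layer storage.
class ToyDynamicCache:
    def __init__(self):
        self.keys, self.values = [], []

    def update(self, k, v, layer_idx):
        if layer_idx == len(self.keys):        # first token for this layer
            self.keys.append(list(k))
            self.values.append(list(v))
        else:                                  # append along the sequence dim
            self.keys[layer_idx].extend(k)
            self.values[layer_idx].extend(v)
        return self.keys[layer_idx], self.values[layer_idx]

cache = ToyDynamicCache()
for step in range(2):                          # two decoding steps
    for layer_idx in range(3):                 # three decoder layers
        k, v = cache.update([step], [step], layer_idx)
print(len(cache.keys), len(cache.keys[0]))     # 3 layers, 2 cached steps
```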
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@vasqu @eustlb @ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "bvantuan",
"id": 37981884,
"node_id": "MDQ6VXNlcjM3OTgxODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37981884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bvantuan",
"html_url": "https://github.com/bvantuan",
"followers_url": "https://api.github.com/users/bvantuan/followers",
"following_url": "https://api.github.com/users/bvantuan/following{/other_user}",
"gists_url": "https://api.github.com/users/bvantuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bvantuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bvantuan/subscriptions",
"organizations_url": "https://api.github.com/users/bvantuan/orgs",
"repos_url": "https://api.github.com/users/bvantuan/repos",
"events_url": "https://api.github.com/users/bvantuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/bvantuan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39523/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39522/comments | https://api.github.com/repos/huggingface/transformers/issues/39522/events | https://github.com/huggingface/transformers/issues/39522 | 3,245,223,120 | I_kwDOCUB6oc7BbizQ | 39,522 | T5Gemma failing on provided example | {
"login": "jadermcs",
"id": 7156771,
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jadermcs",
"html_url": "https://github.com/jadermcs",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-19T11:07:26 | 2025-08-27T07:51:08 | 2025-08-12T11:35:25 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.53.2
- Platform: Linux-6.14.0-23-generic-x86_64-with-glibc2.41
- Python version: 3.13.3
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 5060 Ti
### Who can help?
@ArthurZucker and @itazap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the example from the T5Gemma docs page.
```
echo -e "Question: Why is the sky blue? Answer:" | transformers run --task text2text-generation --model google/t5gemma-s-s-ul2 --device 0
```
### Expected behavior
When I run it, I get:
```
File ".venv/lib/python3.13/site-packages/transformers/configuration_utils.py", line 209, in __getattribute__
return super().__getattribute__(key)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
AttributeError: 'T5GemmaConfig' object has no attribute **'vocab_size'**
```
Indeed, `vocab_size` is a sub-attribute of the encoder/decoder sub-configs, not a direct attribute of `T5GemmaConfig`.
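The nested layout described here can be sketched with plain classes (an illustrative simplification, not the real `T5GemmaConfig`): the composite config exposes encoder/decoder sub-configs, so the flat `config.vocab_size` lookup in the traceback fails.

```python
# Minimal sketch (assumed simplification, not the real transformers classes):
# a composite config keeps vocab_size on its sub-configs, not at the top
# level, which is why `config.vocab_size` raises AttributeError above.

class SubConfig:
    def __init__(self, vocab_size):
        self.vocab_size = vocab_size

class CompositeConfig:
    def __init__(self, encoder, decoder):
        self.encoder = encoder
        self.decoder = decoder

# 256000 is an illustrative vocab size, not the real checkpoint value.
config = CompositeConfig(SubConfig(256000), SubConfig(256000))

assert not hasattr(config, "vocab_size")    # top-level attribute is missing
assert config.encoder.vocab_size == 256000  # the nested attribute exists
```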
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39522/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39521/comments | https://api.github.com/repos/huggingface/transformers/issues/39521/events | https://github.com/huggingface/transformers/issues/39521 | 3,245,206,298 | I_kwDOCUB6oc7Bbesa | 39,521 | T5Gemma problem with tokenizer(?) | {
"login": "jadermcs",
"id": 7156771,
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jadermcs",
"html_url": "https://github.com/jadermcs",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-19T10:43:18 | 2025-07-21T11:10:31 | 2025-07-21T11:10:31 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.53.2
- Platform: Linux-6.14.0-23-generic-x86_64-with-glibc2.41
- Python version: 3.13.3
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 5060 Ti
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import EvalPrediction
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM,
Seq2SeqTrainer,
Seq2SeqTrainingArguments,
)
# Convert to Hugging Face Dataset
dataset = load_dataset("super_glue", "wic")
# Initialize tokenizer and model
model_name = "google/t5gemma-b-b-ul2-it"
# model_name = "google-t5/t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, attn_implementation="eager")
def compute_metrics(eval_pred: EvalPrediction):
predictions, labels = eval_pred
# Decode predicted token IDs to strings
pred_str = tokenizer.batch_decode(predictions, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
label_str = tokenizer.batch_decode(labels, skip_special_tokens=True)
print(pred_str)
print(label_str)
# Convert "true"/"false" strings to 1/0
pred_labels = [1 if p.strip().lower() == "true" else 0 for p in pred_str]
true_labels = [1 if l.strip().lower() == "true" else 0 for l in label_str]
# compute precision, recall, f1
precision, recall, f1_score, _ = precision_recall_fscore_support(
true_labels, pred_labels, average="binary"
)
accuracy = accuracy_score(true_labels, pred_labels)
return {
"accuracy": accuracy,
"precision": precision,
"recall": recall,
"f1_score": f1_score,
}
# Preprocessing function
def preprocess(example):
input_text = f"sentence1: {example['sentence1']} sentence2: {example['sentence2']} word: {example['word']}"
target_text = "true" if example["label"] == 1 else "false"
target_text = target_text + tokenizer.eos_token
# Tokenize inputs and targets
model_inputs = tokenizer(
input_text, max_length=128, truncation=True, padding="max_length"
)
labels = tokenizer(target_text, max_length=5, truncation=True, padding="max_length")
# Replace pad token id's in labels with -100 so they're ignored by loss
labels_ids = labels["input_ids"]
labels_ids = [
label if label != tokenizer.pad_token_id else -100 for label in labels_ids
]
model_inputs["labels"] = labels_ids
return model_inputs
# Tokenize dataset
tokenized_dataset = dataset.map(
preprocess, remove_columns=dataset["train"].column_names
)
# Training arguments
training_args = Seq2SeqTrainingArguments(
output_dir="./t5-wic",
eval_strategy="epoch",
per_device_train_batch_size=32,
num_train_epochs=10,
save_strategy="epoch",
save_total_limit=1,
load_best_model_at_end=True,
metric_for_best_model="accuracy",
predict_with_generate=True,
bf16=True,
)
print(tokenized_dataset["train"][0])
print(tokenizer.decode(tokenized_dataset["train"][0]["input_ids"]))
# remove -100
labels = [
label if label != -100 else tokenizer.pad_token_id
for label in tokenized_dataset["train"][0]["labels"]
]
print(tokenizer.decode(labels))
# Initialize Trainer
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["validation"],
compute_metrics=compute_metrics,
)
# Train the model
trainer.train()
```
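The `-100` masking done in `preprocess` and undone in `compute_metrics` above boils down to a simple round trip, sketched here with illustrative ids (`PAD_ID = 0` is an assumption, not the real tokenizer's pad id):

```python
# Hedged sketch of the label-masking round trip used in the script above:
# pad token ids in the labels are replaced with -100 so the loss ignores
# them, then mapped back to the pad id before decoding.
PAD_ID = 0  # illustrative pad id

def mask_labels(label_ids, pad_id=PAD_ID):
    # Replace pad ids with -100 (ignored by the cross-entropy loss).
    return [l if l != pad_id else -100 for l in label_ids]

def unmask_labels(label_ids, pad_id=PAD_ID):
    # Restore pad ids so the tokenizer can decode the sequence.
    return [l if l != -100 else pad_id for l in label_ids]

labels = [42, 7, PAD_ID, PAD_ID]
masked = mask_labels(labels)
assert masked == [42, 7, -100, -100]
assert unmask_labels(masked) == labels
```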
### Expected behavior
I am training T5Gemma for Word-in-Context binary classification as a sentence-to-sentence problem (the same setup as the original T5 paper). However, the model keeps predicting the same label. Initially, I noticed that the tokenizer does not add the end-of-string token, so I appended it manually in my code. Before that, the model generated "falsetruetruetruetrue" until reaching the maximum number of tokens. Now, after adding eos, it predicts only "true".
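The manual eos workaround mentioned above can be sketched as follows (the `"<eos>"` string and the helper name are illustrative, not the real Gemma eos token):

```python
# Hedged sketch of the eos workaround described above: if a tokenizer does
# not append an end-of-string token to the targets, append it manually so
# the model can learn when to stop generating.

def add_eos(target_text, eos_token):
    # Only append when the text does not already end with the eos token.
    if not target_text.endswith(eos_token):
        target_text = target_text + eos_token
    return target_text

assert add_eos("true", "<eos>") == "true<eos>"
assert add_eos("false<eos>", "<eos>") == "false<eos>"
```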
Any help here? The code above works with "google-t5/t5-small" | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39521/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39520/comments | https://api.github.com/repos/huggingface/transformers/issues/39520/events | https://github.com/huggingface/transformers/pull/39520 | 3,245,116,408 | PR_kwDOCUB6oc6frKpQ | 39,520 | 🌐 [i18n-KO] Translated `pipeline_gradio.md` to Korean | {
"login": "AhnJoonSung",
"id": 53860803,
"node_id": "MDQ6VXNlcjUzODYwODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/53860803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AhnJoonSung",
"html_url": "https://github.com/AhnJoonSung",
"followers_url": "https://api.github.com/users/AhnJoonSung/followers",
"following_url": "https://api.github.com/users/AhnJoonSung/following{/other_user}",
"gists_url": "https://api.github.com/users/AhnJoonSung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AhnJoonSung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AhnJoonSung/subscriptions",
"organizations_url": "https://api.github.com/users/AhnJoonSung/orgs",
"repos_url": "https://api.github.com/users/AhnJoonSung/repos",
"events_url": "https://api.github.com/users/AhnJoonSung/events{/privacy}",
"received_events_url": "https://api.github.com/users/AhnJoonSung/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T08:38:43 | 2025-07-29T15:04:31 | 2025-07-29T15:04:31 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39520",
"html_url": "https://github.com/huggingface/transformers/pull/39520",
"diff_url": "https://github.com/huggingface/transformers/pull/39520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39520.patch",
"merged_at": "2025-07-29T15:04:31"
} | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `pipeline_gradio.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (translation omission/duplication check)
- [x] Grammar Check (spelling check)
- [x] Review or Add new terms to glossary (term check and addition)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (verify correct rendering in live-preview)
## Who can review? (Initial)
<!-- 1. Only reveal the comment below, which requests a review from KREW members, after all the checks above are complete! -->
May you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the comment below after the KREW members' review is finished! -->
@stevhliu May you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39520/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39519/comments | https://api.github.com/repos/huggingface/transformers/issues/39519/events | https://github.com/huggingface/transformers/pull/39519 | 3,245,057,318 | PR_kwDOCUB6oc6fq-Jx | 39,519 | 🌐 [i18n-KO] Translated `main_classes/processors.md` to Korean | {
"login": "TaskerJang",
"id": 124780552,
"node_id": "U_kgDOB3AACA",
"avatar_url": "https://avatars.githubusercontent.com/u/124780552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TaskerJang",
"html_url": "https://github.com/TaskerJang",
"followers_url": "https://api.github.com/users/TaskerJang/followers",
"following_url": "https://api.github.com/users/TaskerJang/following{/other_user}",
"gists_url": "https://api.github.com/users/TaskerJang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TaskerJang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TaskerJang/subscriptions",
"organizations_url": "https://api.github.com/users/TaskerJang/orgs",
"repos_url": "https://api.github.com/users/TaskerJang/repos",
"events_url": "https://api.github.com/users/TaskerJang/events{/privacy}",
"received_events_url": "https://api.github.com/users/TaskerJang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T07:36:36 | 2025-08-13T15:21:38 | 2025-08-13T15:21:38 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39519",
"html_url": "https://github.com/huggingface/transformers/pull/39519",
"diff_url": "https://github.com/huggingface/transformers/pull/39519.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39519.patch",
"merged_at": "2025-08-13T15:21:38"
} | <!-- Please title the PR "🌐 [i18n-KO] Translated processors.md to Korean" -->
# What does this PR do?
Translated the `main_classes/processors.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (translation omission/duplication check)
- [x] Grammar Check (spelling check)
- [x] Review or Add new terms to glossary (term check and addition)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (verify correct rendering in live-preview)
## Who can review? (Initial)
<!-- 1. Only reveal the comment below, which requests a review from KREW members, after all the checks above are complete! -->
May you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the comment below after the KREW members' review is finished! -->
@stevhliu May you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39519/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39518/comments | https://api.github.com/repos/huggingface/transformers/issues/39518/events | https://github.com/huggingface/transformers/pull/39518 | 3,245,053,596 | PR_kwDOCUB6oc6fq9WB | 39,518 | 🌐 [i18n-KO] Translated `models.md` to Korean | {
"login": "Judy-Choi",
"id": 53294075,
"node_id": "MDQ6VXNlcjUzMjk0MDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/53294075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Judy-Choi",
"html_url": "https://github.com/Judy-Choi",
"followers_url": "https://api.github.com/users/Judy-Choi/followers",
"following_url": "https://api.github.com/users/Judy-Choi/following{/other_user}",
"gists_url": "https://api.github.com/users/Judy-Choi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Judy-Choi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Judy-Choi/subscriptions",
"organizations_url": "https://api.github.com/users/Judy-Choi/orgs",
"repos_url": "https://api.github.com/users/Judy-Choi/repos",
"events_url": "https://api.github.com/users/Judy-Choi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Judy-Choi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T07:33:40 | 2025-08-25T16:17:09 | 2025-08-25T16:17:09 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39518",
"html_url": "https://github.com/huggingface/transformers/pull/39518",
"diff_url": "https://github.com/huggingface/transformers/pull/39518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39518.patch",
"merged_at": "2025-08-25T16:17:09"
} | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `models.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (translation omission/duplication check)
- [x] Grammar Check (spelling check)
- [x] Review or Add new terms to glossary (term check and addition)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (verify correct rendering in live-preview)
## Who can review? (Initial)
<!-- 1. Only reveal the comment below, which requests a review from KREW members, after all the checks above are complete! -->
May you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the comment below after the KREW members' review is finished! -->
<!-- @stevhliu May you please review this PR? --> | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39518/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39517/comments | https://api.github.com/repos/huggingface/transformers/issues/39517/events | https://github.com/huggingface/transformers/pull/39517 | 3,245,041,028 | PR_kwDOCUB6oc6fq6p_ | 39,517 | 🌐 [i18n-KO] Translated `compressed_tensor.md` to Korean | {
"login": "maximizemaxwell",
"id": 138701551,
"node_id": "U_kgDOCERq7w",
"avatar_url": "https://avatars.githubusercontent.com/u/138701551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximizemaxwell",
"html_url": "https://github.com/maximizemaxwell",
"followers_url": "https://api.github.com/users/maximizemaxwell/followers",
"following_url": "https://api.github.com/users/maximizemaxwell/following{/other_user}",
"gists_url": "https://api.github.com/users/maximizemaxwell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximizemaxwell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximizemaxwell/subscriptions",
"organizations_url": "https://api.github.com/users/maximizemaxwell/orgs",
"repos_url": "https://api.github.com/users/maximizemaxwell/repos",
"events_url": "https://api.github.com/users/maximizemaxwell/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximizemaxwell/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-19T07:20:47 | 2025-10-11T05:47:59 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39517",
"html_url": "https://github.com/huggingface/transformers/pull/39517",
"diff_url": "https://github.com/huggingface/transformers/pull/39517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39517.patch",
"merged_at": null
} | # What does this PR do?
Translated the `compressed_tensor.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [X] Check for missing / redundant translations (translation omission/duplication check)
- [X] Grammar Check (spelling check)
- [X] Review or Add new terms to glossary (term check and addition)
- [X] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [X] Check live-preview for gotchas (verify correct rendering in live-preview)
## Who can review? (Initial)
<!-- 1. Only reveal the comment below, which requests a review from KREW members, after all the checks above are complete! -->
May you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Only reveal the comment below after the KREW members' review is finished! -->
<!-- @stevhliu May you please review this PR? --> | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39517/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39516/comments | https://api.github.com/repos/huggingface/transformers/issues/39516/events | https://github.com/huggingface/transformers/pull/39516 | 3,245,039,577 | PR_kwDOCUB6oc6fq6Uu | 39,516 | Update `docs/source/ko/_toctree.yml` | {
"login": "jungnerd",
"id": 46880056,
"node_id": "MDQ6VXNlcjQ2ODgwMDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/46880056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungnerd",
"html_url": "https://github.com/jungnerd",
"followers_url": "https://api.github.com/users/jungnerd/followers",
"following_url": "https://api.github.com/users/jungnerd/following{/other_user}",
"gists_url": "https://api.github.com/users/jungnerd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungnerd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungnerd/subscriptions",
"organizations_url": "https://api.github.com/users/jungnerd/orgs",
"repos_url": "https://api.github.com/users/jungnerd/repos",
"events_url": "https://api.github.com/users/jungnerd/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungnerd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T07:19:51 | 2025-07-22T16:00:43 | 2025-07-22T16:00:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39516",
"html_url": "https://github.com/huggingface/transformers/pull/39516",
"diff_url": "https://github.com/huggingface/transformers/pull/39516.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39516.patch",
"merged_at": "2025-07-22T16:00:43"
} | # What does this PR do?
Updated the outdated `docs/source/ko/_toctree.yml` and reorganized the file locations accordingly.
<!-- Remove if not applicable -->
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Hi @stevhliu ๐
I modified `docs/source/ko/_toctree.yml` because the table of contents of the Korean documentation was outdated.
Could you please review this PR?
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39516/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39516/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39515/comments | https://api.github.com/repos/huggingface/transformers/issues/39515/events | https://github.com/huggingface/transformers/pull/39515 | 3,245,009,591 | PR_kwDOCUB6oc6fq0Jl | 39,515 | 🌐 [i18n-KO] Translated `main_classes/peft.md` | {
"login": "luckyvickyricky",
"id": 75977640,
"node_id": "MDQ6VXNlcjc1OTc3NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/75977640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckyvickyricky",
"html_url": "https://github.com/luckyvickyricky",
"followers_url": "https://api.github.com/users/luckyvickyricky/followers",
"following_url": "https://api.github.com/users/luckyvickyricky/following{/other_user}",
"gists_url": "https://api.github.com/users/luckyvickyricky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckyvickyricky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckyvickyricky/subscriptions",
"organizations_url": "https://api.github.com/users/luckyvickyricky/orgs",
"repos_url": "https://api.github.com/users/luckyvickyricky/repos",
"events_url": "https://api.github.com/users/luckyvickyricky/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckyvickyricky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-19T06:48:41 | 2025-07-29T15:03:17 | 2025-07-29T15:03:17 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39515",
"html_url": "https://github.com/huggingface/transformers/pull/39515",
"diff_url": "https://github.com/huggingface/transformers/pull/39515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39515.patch",
"merged_at": "2025-07-29T15:03:17"
} | # What does this PR do?
Translated the `main_classes/peft.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar check
- [x] Review or add new terms to the glossary
- [x] Check inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (verify correct rendering in the live preview)
## Who can review? (Initial)
Could you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd, @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@stevhliu Could you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39515/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39514/comments | https://api.github.com/repos/huggingface/transformers/issues/39514/events | https://github.com/huggingface/transformers/issues/39514 | 3,244,740,835 | I_kwDOCUB6oc7BZtDj | 39,514 | T5Gemma returning 0 loss for s2s training | {
"login": "jadermcs",
"id": 7156771,
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jadermcs",
"html_url": "https://github.com/jadermcs",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-19T01:37:29 | 2025-07-19T02:25:35 | 2025-07-19T02:25:35 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.53.0
- Platform: Linux-6.14.0-23-generic-x86_64-with-glibc2.41
- Python version: 3.13.3
- Huggingface_hub version: 0.33.1
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 5060 Ti
### Who can help?
@zach-huggingface @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset
import numpy as np
from sklearn.metrics import accuracy_score
from transformers import EvalPrediction
from transformers import (
AutoTokenizer,
T5GemmaForConditionalGeneration,
Seq2SeqTrainer,
Seq2SeqTrainingArguments,
)
# Convert to Hugging Face Dataset
dataset = load_dataset("super_glue", "wic")
# Initialize tokenizer and model
model_name = "google/t5gemma-s-s-ul2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5GemmaForConditionalGeneration.from_pretrained(
model_name, attn_implementation="eager"
)
def compute_metrics(eval_pred: EvalPrediction):
predictions, labels = eval_pred
# Decode predicted token IDs to strings
pred_str = tokenizer.batch_decode(predictions, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
label_str = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Convert "true"/"false" strings to 1/0
pred_labels = [1 if p.strip().lower() == "true" else 0 for p in pred_str]
true_labels = [1 if l.strip().lower() == "true" else 0 for l in label_str]
return {"accuracy": accuracy_score(true_labels, pred_labels)}
# Preprocessing function
def preprocess(example):
input_text = f"sentence1: {example['sentence1']} sentence2: {example['sentence2']} word: {example['word']}"
target_text = "true" if example["label"] == 1 else "false"
# Tokenize inputs and targets
model_inputs = tokenizer(
input_text, max_length=128, truncation=True, padding="max_length"
)
labels = tokenizer(target_text, max_length=5, truncation=True, padding="max_length")
# Replace pad token id's in labels with -100 so they're ignored by loss
labels_ids = labels["input_ids"]
labels_ids = [
label if label != tokenizer.pad_token_id else -100 for label in labels_ids
]
model_inputs["labels"] = labels_ids
return model_inputs
# Tokenize dataset
tokenized_dataset = dataset.map(
preprocess, remove_columns=dataset["train"].column_names
)
# Training arguments
training_args = Seq2SeqTrainingArguments(
output_dir="./t5-wic",
eval_strategy="epoch",
per_device_train_batch_size=16,
num_train_epochs=10,
save_strategy="epoch",
save_total_limit=1,
load_best_model_at_end=True,
metric_for_best_model="accuracy",
predict_with_generate=True,
bf16=True,
)
print(tokenized_dataset["train"][0])
# Initialize Trainer
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["validation"],
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
# Train the model
trainer.train()
metrics = trainer.evaluate(tokenized_dataset["test"])
print("Final metrics:")
print(metrics)
```
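As an aside on the `preprocess` function above: replacing pad token ids with `-100` works because PyTorch's cross-entropy loss treats `-100` as `ignore_index`. A minimal sketch (toy shapes, unrelated to the bug's root cause) also shows why an evaluation batch where every label is masked produces `nan`, the same symptom reported below:

```python
import torch
import torch.nn.functional as F

# Toy logits: batch of 2 sequences, 3 positions, vocab size 5.
logits = torch.randn(2, 3, 5)

# Normal labels: the loss is a finite scalar.
labels = torch.tensor([[1, 2, 3], [0, 4, 2]])
loss_ok = F.cross_entropy(logits.view(-1, 5), labels.view(-1), ignore_index=-100)
assert torch.isfinite(loss_ok)

# If every label is -100, the mean is taken over zero unmasked
# positions, so the result is nan.
all_masked = torch.full((2, 3), -100)
loss_nan = F.cross_entropy(logits.view(-1, 5), all_masked.view(-1), ignore_index=-100)
assert torch.isnan(loss_nan)
```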
### Expected behavior
When I run this code I first get a:
```
`loss_type=ForMaskedLMLoss` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
```
Then I get this for every epoch:
```bash
{'loss': 0.0, 'grad_norm': 0.0, 'learning_rate': 4.2661764705882354e-05, 'epoch': 1.47}
{'eval_loss': nan, 'eval_accuracy': 0.5, 'eval_runtime': 19.3888, 'eval_samples_per_second': 32.906, 'eval_steps_per_second': 4.126, 'epoch': 2.0}
``` | {
"login": "jadermcs",
"id": 7156771,
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jadermcs",
"html_url": "https://github.com/jadermcs",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39514/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39513/comments | https://api.github.com/repos/huggingface/transformers/issues/39513/events | https://github.com/huggingface/transformers/pull/39513 | 3,244,579,751 | PR_kwDOCUB6oc6fpchv | 39,513 | [Docs] Translate audio_classification.md from English to Spanish | {
"login": "weezymatt",
"id": 85853890,
"node_id": "MDQ6VXNlcjg1ODUzODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/85853890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weezymatt",
"html_url": "https://github.com/weezymatt",
"followers_url": "https://api.github.com/users/weezymatt/followers",
"following_url": "https://api.github.com/users/weezymatt/following{/other_user}",
"gists_url": "https://api.github.com/users/weezymatt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weezymatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weezymatt/subscriptions",
"organizations_url": "https://api.github.com/users/weezymatt/orgs",
"repos_url": "https://api.github.com/users/weezymatt/repos",
"events_url": "https://api.github.com/users/weezymatt/events{/privacy}",
"received_events_url": "https://api.github.com/users/weezymatt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T23:07:03 | 2025-07-23T22:55:13 | 2025-07-23T22:55:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39513",
"html_url": "https://github.com/huggingface/transformers/pull/39513",
"diff_url": "https://github.com/huggingface/transformers/pull/39513.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39513.patch",
"merged_at": "2025-07-23T22:55:13"
} | # What does this PR do?
Translates `audio_classification.md` from English to Spanish.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
#28936
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu
cc: @tadeodonegana @gisturiz
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39513/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39512/comments | https://api.github.com/repos/huggingface/transformers/issues/39512/events | https://github.com/huggingface/transformers/issues/39512 | 3,244,514,725 | I_kwDOCUB6oc7BY12l | 39,512 | text-generation extremely slow with large `bad_words_ids` list | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 7211844573,
"node_id": "LA_kwDOCUB6oc8AAAABrdwD3Q",
"url": "https://api.github.com/repos/huggingface/transformers/labels/mps",
"name": "mps",
"color": "2A73C2",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-18T22:33:09 | 2025-07-25T17:45:43 | 2025-07-25T17:45:43 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.54.0.dev0
- Platform: macOS-15.5-arm64-arm-64bit-Mach-O
- Python version: 3.13.2
- Huggingface_hub version: 0.32.4
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
### Who can help?
@gante @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
import time
from transformers import pipeline
pipe = pipeline("text-generation", "hf-internal-testing/tiny-random-Gemma3ForCausalLM")
# Without bad_words_ids
start = time.time()
output_no_bad = pipe("hello", max_new_tokens=5, do_sample=False)
time_no_bad = time.time() - start
print(f"Without bad_words_ids: {time_no_bad:.4f} seconds")
# With bad_words_ids
bad_words_ids = [[2 * i] for i in range(pipe.tokenizer.vocab_size // 2)]
start = time.time()
output_bad = pipe("hello", max_new_tokens=5, do_sample=False, bad_words_ids=bad_words_ids)
time_bad = time.time() - start
print(f"With bad_words_ids: {time_bad:.4f} seconds")
```
outputs the following
```
Without bad_words_ids: 0.4049 seconds
With bad_words_ids: 36.2829 seconds
```
This indicates a significant overhead due to the `bad_words_ids`, most likely due to inefficient looping & tensor access or slow regex patterns.
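For illustration only (this is not how `NoBadWordsLogitsProcessor` is implemented), the single-token bans used in the repro above can in principle be collapsed into one boolean mask and applied with a single vectorized op per step, rather than iterating over every banned word:

```python
import torch

vocab_size = 8
# Ban all even token ids, mirroring the single-token words in the repro.
bad_words_ids = [[2 * i] for i in range(vocab_size // 2)]

# Precompute one boolean mask for all length-1 bans.
single_token_bans = torch.tensor([w[0] for w in bad_words_ids if len(w) == 1])
mask = torch.zeros(vocab_size, dtype=torch.bool)
mask[single_token_bans] = True

# Fake next-token logits for a batch of 2; one masked_fill bans everything.
scores = torch.zeros(2, vocab_size)
scores = scores.masked_fill(mask, float("-inf"))

assert torch.isinf(scores[0, 0]) and not torch.isinf(scores[0, 1])
```

Multi-token banned sequences still need a prefix check against the generated ids, but the per-vocabulary work above stays O(1) in the number of banned words.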
### Expected behavior
The generation should only run marginally slower when using a block list, but not this significantly. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39512/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39511/comments | https://api.github.com/repos/huggingface/transformers/issues/39511/events | https://github.com/huggingface/transformers/issues/39511 | 3,244,433,056 | I_kwDOCUB6oc7BYh6g | 39,511 | Export voxtral to ExecuTorch | {
"login": "mergennachin",
"id": 1409555,
"node_id": "MDQ6VXNlcjE0MDk1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1409555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mergennachin",
"html_url": "https://github.com/mergennachin",
"followers_url": "https://api.github.com/users/mergennachin/followers",
"following_url": "https://api.github.com/users/mergennachin/following{/other_user}",
"gists_url": "https://api.github.com/users/mergennachin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mergennachin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mergennachin/subscriptions",
"organizations_url": "https://api.github.com/users/mergennachin/orgs",
"repos_url": "https://api.github.com/users/mergennachin/repos",
"events_url": "https://api.github.com/users/mergennachin/events{/privacy}",
"received_events_url": "https://api.github.com/users/mergennachin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-18T21:44:43 | 2025-09-22T15:19:01 | 2025-09-22T15:19:00 | NONE | null | null | null | null | ### Feature request
- [ ] Add an ExecuTorch exportability test for Voxtral in transformers. See if we need to extend the `integrations/executorch.py` capability.
- [ ] Add a test in optimum-executorch for performant lowering
- [ ] Add a demo C++ app
Looking at Voxtral, it looks fairly straightforward to export. In particular, for
```
model = VoxtralForConditionalGeneration.from_pretrained("mistralai/Voxtral-Mini-3B-2507")
```
it has three parts:
1. A Whisper-based audio encoder (which should already be exportable)
2. A language model based on `LlamaForCausalLM`, which is already exportable
3. `VoxtralMultiModalProjector`, which is a fairly simple projection
Reference:
https://huggingface.co/docs/transformers/en/executorch
https://huggingface.co/mistralai/Voxtral-Mini-3B-2507
https://github.com/huggingface/transformers/blob/main/src/transformers/models/voxtral/modular_voxtral.py
https://github.com/huggingface/transformers/blob/main/src/transformers/models/voxtral/modeling_voxtral.py
### Motivation
So that we can easily run voxtral in C++ directly. Voxtral was added recently in https://github.com/huggingface/transformers/pull/39429
### Your contribution
N/A | {
"login": "mergennachin",
"id": 1409555,
"node_id": "MDQ6VXNlcjE0MDk1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1409555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mergennachin",
"html_url": "https://github.com/mergennachin",
"followers_url": "https://api.github.com/users/mergennachin/followers",
"following_url": "https://api.github.com/users/mergennachin/following{/other_user}",
"gists_url": "https://api.github.com/users/mergennachin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mergennachin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mergennachin/subscriptions",
"organizations_url": "https://api.github.com/users/mergennachin/orgs",
"repos_url": "https://api.github.com/users/mergennachin/repos",
"events_url": "https://api.github.com/users/mergennachin/events{/privacy}",
"received_events_url": "https://api.github.com/users/mergennachin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39511/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39511/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39510/comments | https://api.github.com/repos/huggingface/transformers/issues/39510/events | https://github.com/huggingface/transformers/issues/39510 | 3,244,328,499 | I_kwDOCUB6oc7BYIYz | 39,510 | "ValueError: Predictions and/or references don't match the expected format." error | {
"login": "ashokmon-aws",
"id": 191915435,
"node_id": "U_kgDOC3Blqw",
"avatar_url": "https://avatars.githubusercontent.com/u/191915435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashokmon-aws",
"html_url": "https://github.com/ashokmon-aws",
"followers_url": "https://api.github.com/users/ashokmon-aws/followers",
"following_url": "https://api.github.com/users/ashokmon-aws/following{/other_user}",
"gists_url": "https://api.github.com/users/ashokmon-aws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashokmon-aws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashokmon-aws/subscriptions",
"organizations_url": "https://api.github.com/users/ashokmon-aws/orgs",
"repos_url": "https://api.github.com/users/ashokmon-aws/repos",
"events_url": "https://api.github.com/users/ashokmon-aws/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashokmon-aws/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-18T20:39:49 | 2025-07-22T20:54:50 | 2025-07-22T20:54:49 | NONE | null | null | null | null | ### System Info
Ran on trn1.2xlarge instance with Ubuntu 22
transformers version: 4.44.0
Platform: Linux/UNIX
Python version: 3.10.12
Accelerate version: 1.7.0
PyTorch version (GPU?): 2.7.0.2.8.6734+ac864f72
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install dependencies:
```
env TOKENIZERS_PARALLELISM=True
pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
pip install -U "protobuf<4" "transformers==4.53.*" "datasets<=3.6.0" "accelerate==1.7.*" scikit-learn evaluate
git clone https://github.com/huggingface/transformers --branch v4.52.3
```
2. Set the parameters:
```
model_name = "camembert-base"
env_var_options = "XLA_USE_BF16=1 NEURON_CC_FLAGS=\"--model-type=transformer\""
num_workers = 2
task_name = "sst2"
batch_size = 8
max_seq_length = 128
learning_rate = 2e-05
dataset_name = "glue"
num_train_epochs = 5
model_base_name = model_name
```
3. Compile model with neuron_parallel_compile along with `--eval_do_concat_batches=False`
```
import subprocess
print("Compile model")
COMPILE_CMD = f"""{env_var_options} neuron_parallel_compile \
torchrun --nproc_per_node={num_workers} \
transformers/examples/pytorch/text-classification/run_glue.py \
--model_name_or_path {model_name} \
--task_name {task_name} \
--dataset_name {dataset_name} \
--do_train \
--max_seq_length {max_seq_length} \
--per_device_train_batch_size {batch_size} \
--learning_rate {learning_rate} \
--max_train_samples 128 \
--overwrite_output_dir \
--eval_do_concat_batches=False \
--output_dir {model_base_name}-{task_name}-{batch_size}bs"""
print(f'Running command: \n{COMPILE_CMD}')
if subprocess.check_call(COMPILE_CMD,shell=True):
print("There was an error with the compilation command")
else:
print("Compilation Success!!!")
```
4. Fine tune the model
```
print("Train model")
RUN_CMD = f"""{env_var_options} torchrun --nproc_per_node={num_workers} \
transformers/examples/pytorch/text-classification/run_glue.py \
--model_name_or_path {model_name} \
--task_name {task_name} \
--do_train \
--do_eval \
--max_seq_length {max_seq_length} \
--dataset_name {dataset_name} \
--per_device_train_batch_size {batch_size} \
--learning_rate {learning_rate} \
--num_train_epochs {num_train_epochs} \
--overwrite_output_dir \
--eval_do_concat_batches=False \
--output_dir {model_base_name}-{task_name}-{num_workers}w-{batch_size}bs"""
print(f'Running command: \n{RUN_CMD}')
if subprocess.check_call(RUN_CMD,shell=True):
print("There was an error with the fine-tune command")
else:
print("Fine-tune Successful!!!")
```
Error:
```
Traceback (most recent call last):
[rank1]: File "/home/ubuntu/aws-neuron-samples/torch-neuronx/training/hf_text_classification/transformers/examples/pytorch/text-classification/run_glue.py", line 626, in <module>
[rank1]: main()
[rank1]: File "/home/ubuntu/aws-neuron-samples/torch-neuronx/training/hf_text_classification/transformers/examples/pytorch/text-classification/run_glue.py", line 564, in main
[rank1]: metrics = trainer.evaluate(eval_dataset=eval_dataset)
[rank1]: File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/transformers/trainer.py", line 4199, in evaluate
[rank1]: output = eval_loop(
[rank1]: File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/transformers/trainer.py", line 4489, in evaluation_loop
[rank1]: metrics = self.compute_metrics(
[rank1]: File "/home/ubuntu/aws-neuron-samples/torch-neuronx/training/hf_text_classification/transformers/examples/pytorch/text-classification/run_glue.py", line 498, in compute_metrics
[rank1]: result = metric.compute(predictions=preds, references=p.label_ids)
[rank1]: File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/evaluate/module.py", line 455, in compute
[rank1]: self.add_batch(**inputs)
[rank1]: File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/evaluate/module.py", line 546, in add_batch
[rank1]: raise ValueError(error_msg) from None
[rank1]: ValueError: Predictions and/or references don't match the expected format.
[rank1]: Expected format: {'predictions': Value(dtype='int64', id=None), 'references': Value(dtype='int64', id=None)},
[rank1]: Input predictions: [[ 1 0]
[rank1]: [ 3 7]
[rank1]: [ 2 0]
[rank1]: [ 6 3]
[rank1]: [10 3]
[rank1]: [ 6 14]
[rank1]: [14 7]
[rank1]: [ 0 5]
[rank1]: [ 2 1]
[rank1]: [11 7]
[rank1]: [ 0 10]
[rank1]: [ 6 15]
[rank1]: [11 14]
[rank1]: [13 14]
[rank1]: [ 2 7]
[rank1]: [ 2 10]
[rank1]: [ 2 7]
[rank1]: [ 0 3]
[rank1]: [ 8 2]
[rank1]: [15 0]
[rank1]: [ 9 0]
[rank1]: [ 1 0]
[rank1]: [ 9 1]
[rank1]: [12 1]
[rank1]: [ 8 12]
[rank1]: [ 8 15]
[rank1]: [ 3 11]
[rank1]: [ 4 11]
[rank1]: [14 7]
[rank1]: [ 5 15]
[rank1]: [13 12]
[rank1]: [ 6 8]
[rank1]: [ 0 2]
[rank1]: [10 7]
[rank1]: [ 5 13]
[rank1]: [ 2 5]
[rank1]: [14 1]
[rank1]: [ 6 1]
[rank1]: [ 7 8]
[rank1]: [ 5 4]
[rank1]: [ 2 15]
[rank1]: [ 6 2]
[rank1]: [ 4 2]
[rank1]: [ 5 3]
[rank1]: [11 10]
[rank1]: [ 2 0]
[rank1]: [12 8]
[rank1]: [13 6]
[rank1]: [ 8 4]
[rank1]: [11 15]
[rank1]: [14 7]
[rank1]: [12 1]
[rank1]: [ 1 5]
[rank1]: [ 6 0]
[rank1]: [ 0 8]],
[rank1]: Input references: [array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1]), array([1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]), array([1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]), ..., array([0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]), array([1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]), array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0])]
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/ubuntu/aws-neuron-samples/torch-neuronx/training/hf_text_classification/transformers/examples/pytorch/text-classification/run_glue.py", line 626, in <module>
[rank0]: main()
[rank0]: File "/home/ubuntu/aws-neuron-samples/torch-neuronx/training/hf_text_classification/transformers/examples/pytorch/text-classification/run_glue.py", line 564, in main
[rank0]: metrics = trainer.evaluate(eval_dataset=eval_dataset)
[rank0]: File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/transformers/trainer.py", line 4199, in evaluate
[rank0]: output = eval_loop(
[rank0]: File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/transformers/trainer.py", line 4489, in evaluation_loop
[rank0]: metrics = self.compute_metrics(
[rank0]: File "/home/ubuntu/aws-neuron-samples/torch-neuronx/training/hf_text_classification/transformers/examples/pytorch/text-classification/run_glue.py", line 498, in compute_metrics
[rank0]: result = metric.compute(predictions=preds, references=p.label_ids)
[rank0]: File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/evaluate/module.py", line 455, in compute
[rank0]: self.add_batch(**inputs)
[rank0]: File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/evaluate/module.py", line 546, in add_batch
[rank0]: raise ValueError(error_msg) from None
[rank0]: ValueError: Predictions and/or references don't match the expected format.
[rank0]: Expected format: {'predictions': Value(dtype='int64', id=None), 'references': Value(dtype='int64', id=None)},
[rank0]: Input predictions: [[ 1 0]
[rank0]: [ 3 7]
[rank0]: [ 2 0]
[rank0]: [ 6 3]
[rank0]: [10 3]
[rank0]: [ 6 14]
[rank0]: [14 7]
[rank0]: [ 0 5]
[rank0]: [ 2 1]
[rank0]: [11 7]
[rank0]: [ 0 10]
[rank0]: [ 6 15]
[rank0]: [11 14]
[rank0]: [13 14]
[rank0]: [ 2 7]
[rank0]: [ 2 10]
[rank0]: [ 2 7]
[rank0]: [ 0 3]
[rank0]: [ 8 2]
[rank0]: [15 0]
[rank0]: [ 9 0]
[rank0]: [ 1 0]
[rank0]: [ 9 1]
[rank0]: [12 1]
[rank0]: [ 8 12]
[rank0]: [ 8 15]
[rank0]: [ 3 11]
[rank0]: [ 4 11]
[rank0]: [14 7]
[rank0]: [ 5 15]
[rank0]: [13 12]
[rank0]: [ 6 8]
[rank0]: [ 0 2]
[rank0]: [10 7]
[rank0]: [ 5 13]
[rank0]: [ 2 5]
[rank0]: [14 1]
[rank0]: [ 6 1]
[rank0]: [ 7 8]
[rank0]: [ 5 4]
[rank0]: [ 2 15]
[rank0]: [ 6 2]
[rank0]: [ 4 2]
[rank0]: [ 5 3]
[rank0]: [11 10]
[rank0]: [ 2 0]
[rank0]: [12 8]
[rank0]: [13 6]
[rank0]: [ 8 4]
[rank0]: [11 15]
[rank0]: [14 7]
[rank0]: [12 1]
[rank0]: [ 1 5]
[rank0]: [ 6 0]
[rank0]: [ 0 8]],
[rank0]: Input references: [array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1]), array([1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]), array([1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]), ..., array([0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]), array([1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]), array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0])]
W0718 19:39:23.555000 107154 torch/distributed/elastic/multiprocessing/api.py:900] Sending process 107163 closing signal SIGTERM
E0718 19:39:24.170000 107154 torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 1 (pid: 107164) of binary: /home/ubuntu/aws_neuron_venv_pytorch/bin/python
Traceback (most recent call last):
File "/home/ubuntu/aws_neuron_venv_pytorch/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in main
run(args)
File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
transformers/examples/pytorch/text-classification/run_glue.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-07-18_19:39:23
host : ip-172-31-43-136.us-west-2.compute.internal
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 107164)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
Cell In[7], line 18
2 RUN_CMD = f"""{env_var_options} torchrun --nproc_per_node={num_workers} \
3 transformers/examples/pytorch/text-classification/run_glue.py \
4 --model_name_or_path {model_name} \
(...)
14 --eval_do_concat_batches=False \
15 --output_dir {model_base_name}-{task_name}-{num_workers}w-{batch_size}bs"""
17 print(f'Running command: \n{RUN_CMD}')
---> 18 if subprocess.check_call(RUN_CMD,shell=True):
19 print("There was an error with the fine-tune command")
20 else:
File /usr/lib/python3.10/subprocess.py:369, in check_call(*popenargs, **kwargs)
367 if cmd is None:
368 cmd = popenargs[0]
--> 369 raise CalledProcessError(retcode, cmd)
370 return 0
CalledProcessError: Command 'XLA_USE_BF16=1 NEURON_CC_FLAGS="--model-type=transformer" torchrun --nproc_per_node=2 transformers/examples/pytorch/text-classification/run_glue.py --model_name_or_path camembert-base --task_name sst2 --do_train --do_eval --max_seq_length 128 --dataset_name glue --per_device_train_batch_size 8 --learning_rate 2e-05 --num_train_epochs 5 --overwrite_output_dir --eval_do_concat_batches=False --output_dir camembert-base-sst2-2w-8bs' returned non-zero exit status 1.
```
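For context on the error shape above: the metric expects a single flat int64 array, but with `--eval_do_concat_batches=False` the references arrive as a list of per-batch arrays. A minimal sketch of flattening such output before calling `metric.compute` (variable names are illustrative, not from `run_glue.py`):

```python
import numpy as np

# References as they appear in the error log: a list of per-batch arrays.
per_batch_refs = [np.array([1, 0, 1, 1]), np.array([0, 0, 1, 0])]

# Flatten to the single int64 vector the evaluate metric expects.
flat_refs = np.concatenate(per_batch_refs).astype(np.int64)
print(flat_refs.tolist())  # [1, 0, 1, 1, 0, 0, 1, 0]
```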
### Expected behavior
Training should complete successfully | {
"login": "ashokmon-aws",
"id": 191915435,
"node_id": "U_kgDOC3Blqw",
"avatar_url": "https://avatars.githubusercontent.com/u/191915435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashokmon-aws",
"html_url": "https://github.com/ashokmon-aws",
"followers_url": "https://api.github.com/users/ashokmon-aws/followers",
"following_url": "https://api.github.com/users/ashokmon-aws/following{/other_user}",
"gists_url": "https://api.github.com/users/ashokmon-aws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashokmon-aws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashokmon-aws/subscriptions",
"organizations_url": "https://api.github.com/users/ashokmon-aws/orgs",
"repos_url": "https://api.github.com/users/ashokmon-aws/repos",
"events_url": "https://api.github.com/users/ashokmon-aws/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashokmon-aws/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39510/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39509/comments | https://api.github.com/repos/huggingface/transformers/issues/39509/events | https://github.com/huggingface/transformers/pull/39509 | 3,244,029,542 | PR_kwDOCUB6oc6fnmvI | 39,509 | :broom: :broom: :broom: Get set decoder cleanup | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T18:29:45 | 2025-08-25T08:57:58 | 2025-08-25T08:57:56 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39509",
"html_url": "https://github.com/huggingface/transformers/pull/39509",
"diff_url": "https://github.com/huggingface/transformers/pull/39509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39509.patch",
"merged_at": "2025-08-25T08:57:56"
} | # What does this PR do?
Refactor of boilerplate one-liners like `get_decoder` and `set_decoder`, which are prevalent across the codebase while not being part of the core modeling code. They are now pulled from `PreTrainedModel` more reliably.
Continued from #39339.
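A toy illustration of the idea (not the actual `PreTrainedModel` code — attribute names here are made up): the base class provides a generic implementation, so subclasses no longer need to repeat the one-liners.

```python
# Hypothetical sketch of hoisting get_decoder/set_decoder into a base class;
# the attribute lookup order is illustrative, not the real transformers logic.
class BaseModelSketch:
    def get_decoder(self):
        # Fall back to conventional attribute names defined by subclasses.
        return getattr(self, "decoder", None) or getattr(self, "model", None)

    def set_decoder(self, decoder):
        if hasattr(self, "decoder"):
            self.decoder = decoder
        else:
            self.model = decoder

class ToyCausalLM(BaseModelSketch):
    def __init__(self):
        self.model = "decoder-stack"  # stand-in for the decoder module

lm = ToyCausalLM()
print(lm.get_decoder())  # decoder-stack
lm.set_decoder("new-stack")
print(lm.get_decoder())  # new-stack
```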
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39509/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39509/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39508/comments | https://api.github.com/repos/huggingface/transformers/issues/39508/events | https://github.com/huggingface/transformers/issues/39508 | 3,243,869,090 | I_kwDOCUB6oc7BWYOi | 39,508 | Whisper transcription is 2x slower between 4.51.3 -> 4.52.1 | {
"login": "Anjum48",
"id": 13783303,
"node_id": "MDQ6VXNlcjEzNzgzMzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/13783303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anjum48",
"html_url": "https://github.com/Anjum48",
"followers_url": "https://api.github.com/users/Anjum48/followers",
"following_url": "https://api.github.com/users/Anjum48/following{/other_user}",
"gists_url": "https://api.github.com/users/Anjum48/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anjum48/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anjum48/subscriptions",
"organizations_url": "https://api.github.com/users/Anjum48/orgs",
"repos_url": "https://api.github.com/users/Anjum48/repos",
"events_url": "https://api.github.com/users/Anjum48/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anjum48/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
},
{
"id": 7377881103,
"node_id": "LA_kwDOCUB6oc8AAAABt8GIDw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Whisper",
"name": "Whisper",
"color": "83303E",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-18T17:21:46 | 2025-09-08T13:47:05 | 2025-09-08T13:47:05 | NONE | null | null | null | null | ### System Info
transformers version: 4.52.1
Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
Python version: 3.12.3
Huggingface_hub version: 0.32.4
Safetensors version: 0.4.5
Accelerate version: 1.7.0
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (GPU?): 2.7.1+cu128 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?: No
Using GPU in script?: Yes
GPU type: NVIDIA GeForce RTX 5090
flash-attn: Built from main
### Who can help?
@eustlb @vasqu
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am benchmarking with 125 mins of audio from [here](https://traffic.libsyn.com/secure/skepticsguide/skepticast2025-05-31.mp3)
Using this code:
```python
import torch
from whisper.utils import get_writer
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from pathlib import Path
audio_path = Path("/home/anjum/Downloads/skepticast2025-05-31.mp3") # 124 mins
output_path = Path("/home/anjum/Documents")
device_id = 1
device = f"cuda:{device_id}" if torch.cuda.is_available() else "cpu"
torch_dtype = (
torch.float16 if torch.cuda.is_available() else torch.float32
)
torch.set_float32_matmul_precision("high")
torch.cuda.empty_cache()
def format_hf_to_whisper(result):
results_formatted = {"text": result["text"], "segments": [], "language": None}
for chunk in result["chunks"]:
if chunk["timestamp"][0] is None:
start = 0
else:
start = chunk["timestamp"][0]
if chunk["timestamp"][1] is None:
end = start
else:
end = chunk["timestamp"][1]
segment = {
"start": start,
"end": end,
"text": chunk["text"],
}
results_formatted["segments"].append(segment)
return results_formatted
def run_hf_model(model_id, params={}):
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
use_safetensors=True,
# attn_implementation="flash_attention_2",
attn_implementation="sdpa",
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
torch_dtype=torch_dtype,
device=device,
return_timestamps=True,
**params,
)
generate_kwargs = {
"max_new_tokens": 128,
"return_timestamps": True,
}
result = pipe(str(audio_path), generate_kwargs=generate_kwargs)
result = format_hf_to_whisper(result)
writer = get_writer("all", str(output_path))
writer_args = {
"highlight_words": False,
"max_line_count": None,
"max_line_width": None,
}
writer(result, f"hf_{model_id.replace('/', '_')}", writer_args)
torch.cuda.empty_cache()
batch_size = 16
run_hf_model("distil-whisper/distil-large-v3", {"chunk_length_s": 25, "batch_size": batch_size, "ignore_warning": True})
```
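For anyone reproducing the regression, a simple way to isolate the time of the pipeline call itself (run once per pinned `transformers` version) is a small wall-clock wrapper; the usage line below assumes the `pipe` and `generate_kwargs` from the script above:

```python
import time

def timed(fn, *args, **kwargs):
    # Wall-clock timer around a callable, returning (result, seconds).
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Usage in the script above (illustrative):
# result, seconds = timed(pipe, str(audio_path), generate_kwargs=generate_kwargs)
result, seconds = timed(sum, range(1000))
print(result)  # 499500
```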
### Expected behavior
Time taken to benchmark the audio file linked above in M:SS (2 repeats):
- **4.51.3**: 1:30, 1:36
- **4.52.1**: 3:08, 3:11
- **4.53.2**: 3:07, 3:09
This is about 2x slower on my hardware
Note this is a new issue on the back of the discussion in #38662 | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39508/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39508/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39507/comments | https://api.github.com/repos/huggingface/transformers/issues/39507/events | https://github.com/huggingface/transformers/pull/39507 | 3,243,774,342 | PR_kwDOCUB6oc6fmu0u | 39,507 | feat(tokenization): add encode_message to tokenize messages one by one | {
"login": "pco111",
"id": 56655972,
"node_id": "MDQ6VXNlcjU2NjU1OTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/56655972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pco111",
"html_url": "https://github.com/pco111",
"followers_url": "https://api.github.com/users/pco111/followers",
"following_url": "https://api.github.com/users/pco111/following{/other_user}",
"gists_url": "https://api.github.com/users/pco111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pco111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pco111/subscriptions",
"organizations_url": "https://api.github.com/users/pco111/orgs",
"repos_url": "https://api.github.com/users/pco111/repos",
"events_url": "https://api.github.com/users/pco111/events{/privacy}",
"received_events_url": "https://api.github.com/users/pco111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6900377379,
"node_id": "LA_kwDOCUB6oc8AAAABm0tnIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Chat%20Template",
"name": "Chat Template",
"color": "EDDB5D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-18T16:41:21 | 2025-07-31T08:55:45 | 2025-07-31T08:55:45 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39507",
"html_url": "https://github.com/huggingface/transformers/pull/39507",
"diff_url": "https://github.com/huggingface/transformers/pull/39507.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39507.patch",
"merged_at": "2025-07-31T08:55:45"
} | # What does this PR do?
This PR introduces a new method, tokenizer.encode_message, to the base tokenizer class. This method allows for tokenizing a single chat message at a time while correctly handling the conversational context provided by conversation_history. This is particularly useful for token-by-token streaming applications where re-tokenizing the entire conversation history for each new token is inefficient.
The new method works by applying the chat template to the full conversation (history + new message) and then programmatically isolating the tokens that correspond to the new message. This ensures that all special tokens, roles, and formatting are applied correctly according to the model's chat template, maintaining consistency with apply_chat_template.
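A rough sketch of the isolation approach described above, using a toy whitespace tokenizer and template (stand-ins only, not the actual transformers implementation): render the history, render history plus the new message, and keep only the tokens the new message adds.

```python
# Toy illustration of "tokenize the full conversation, then slice off the
# rendered-history prefix"; the template and tokenization are stand-ins.
def toy_apply_chat_template(messages):
    # Render each message as "<role>: <content>" and whitespace-tokenize.
    text = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    return text.split()

def encode_message_sketch(message, conversation_history):
    full = toy_apply_chat_template(conversation_history + [message])
    prefix = toy_apply_chat_template(conversation_history) if conversation_history else []
    # The new message's tokens are whatever the full rendering adds on top
    # of the rendered history.
    return full[len(prefix):]

history = [{"role": "user", "content": "hi there"}]
new = {"role": "assistant", "content": "hello"}
print(encode_message_sketch(new, history))  # ['assistant:', 'hello']
```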
Fixes #39417
# Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the contributor guideline, Pull Request section?
- [x] Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [x] Did you write any new necessary tests?
# Who can review?
@ArthurZucker @Rocketknight1
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39507/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39506/comments | https://api.github.com/repos/huggingface/transformers/issues/39506/events | https://github.com/huggingface/transformers/pull/39506 | 3,243,751,441 | PR_kwDOCUB6oc6fmp2i | 39,506 | Make sure Moshi is exportable with static cache | {
"login": "mergennachin",
"id": 1409555,
"node_id": "MDQ6VXNlcjE0MDk1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1409555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mergennachin",
"html_url": "https://github.com/mergennachin",
"followers_url": "https://api.github.com/users/mergennachin/followers",
"following_url": "https://api.github.com/users/mergennachin/following{/other_user}",
"gists_url": "https://api.github.com/users/mergennachin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mergennachin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mergennachin/subscriptions",
"organizations_url": "https://api.github.com/users/mergennachin/orgs",
"repos_url": "https://api.github.com/users/mergennachin/repos",
"events_url": "https://api.github.com/users/mergennachin/events{/privacy}",
"received_events_url": "https://api.github.com/users/mergennachin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T16:31:19 | 2025-09-02T20:11:13 | 2025-09-02T20:11:13 | NONE | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39506",
"html_url": "https://github.com/huggingface/transformers/pull/39506",
"diff_url": "https://github.com/huggingface/transformers/pull/39506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39506.patch",
"merged_at": null
} | null | {
"login": "mergennachin",
"id": 1409555,
"node_id": "MDQ6VXNlcjE0MDk1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1409555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mergennachin",
"html_url": "https://github.com/mergennachin",
"followers_url": "https://api.github.com/users/mergennachin/followers",
"following_url": "https://api.github.com/users/mergennachin/following{/other_user}",
"gists_url": "https://api.github.com/users/mergennachin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mergennachin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mergennachin/subscriptions",
"organizations_url": "https://api.github.com/users/mergennachin/orgs",
"repos_url": "https://api.github.com/users/mergennachin/repos",
"events_url": "https://api.github.com/users/mergennachin/events{/privacy}",
"received_events_url": "https://api.github.com/users/mergennachin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39506/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39505/comments | https://api.github.com/repos/huggingface/transformers/issues/39505/events | https://github.com/huggingface/transformers/pull/39505 | 3,243,608,277 | PR_kwDOCUB6oc6fmK8L | 39,505 | Rename `supports_static_cache` to `can_compile_fullgraph` | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T15:33:39 | 2025-07-23T09:35:18 | 2025-07-23T09:35:18 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39505",
"html_url": "https://github.com/huggingface/transformers/pull/39505",
"diff_url": "https://github.com/huggingface/transformers/pull/39505.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39505.patch",
"merged_at": "2025-07-23T09:35:18"
} | # What does this PR do?
As per title, let's be specific about how the attribute is used. I will see if we can entirely get rid of the flag; maybe there's another way to infer which models support compile. At least from what I know, all MoE and non-transformer-arch models don't support it
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39505/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39504/comments | https://api.github.com/repos/huggingface/transformers/issues/39504/events | https://github.com/huggingface/transformers/pull/39504 | 3,243,527,898 | PR_kwDOCUB6oc6fl5XQ | 39,504 | [ASR pipline] fix with datasets 4.0 | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T15:03:41 | 2025-07-30T08:13:40 | 2025-07-30T08:13:40 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39504",
"html_url": "https://github.com/huggingface/transformers/pull/39504",
"diff_url": "https://github.com/huggingface/transformers/pull/39504.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39504.patch",
"merged_at": "2025-07-30T08:13:40"
} | # What does this PR do?
Cf code comments! | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39504/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39503/comments | https://api.github.com/repos/huggingface/transformers/issues/39503/events | https://github.com/huggingface/transformers/pull/39503 | 3,243,477,792 | PR_kwDOCUB6oc6fluY0 | 39,503 | Fix `Qwen2AudioForConditionalGeneration.forward()` and `test_flash_attn_kernels_inference_equivalence` | {
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-18T14:46:08 | 2025-07-28T14:35:14 | 2025-07-28T14:35:08 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39503",
"html_url": "https://github.com/huggingface/transformers/pull/39503",
"diff_url": "https://github.com/huggingface/transformers/pull/39503.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39503.patch",
"merged_at": "2025-07-28T14:35:08"
} | # What does this PR do?
Fixes breaking Qwen2Audio tests: https://github.com/huggingface/transformers/actions/runs/16361063842/job/46229139255
Errors don't display in the Model CI run above, but this is what I got:
```
FAILED tests/models/qwen2_audio/test_modeling_qwen2_audio.py::Qwen2AudioForConditionalGenerationIntegrationTest::test_small_model_integration_test_batch - TypeError: Qwen2AudioForConditionalGeneration.forward() got an unexpected keyword argument 'cache_position'
FAILED tests/models/qwen2_audio/test_modeling_qwen2_audio.py::Qwen2AudioForConditionalGenerationIntegrationTest::test_small_model_integration_test_multiturn - TypeError: Qwen2AudioForConditionalGeneration.forward() got an unexpected keyword argument 'cache_position'
FAILED tests/models/qwen2_audio/test_modeling_qwen2_audio.py::Qwen2AudioForConditionalGenerationIntegrationTest::test_small_model_integration_test_single - TypeError: Qwen2AudioForConditionalGeneration.forward() got an unexpected keyword argument 'cache_position'
== 3 failed, 77 passed, 43 skipped, 4 warnings in 82.32s (0:01:22)
```
In short, `cache_position` was missing in [`forward`](https://github.com/huggingface/transformers/blob/561a79a2f4b5929f71266fb0ec1e2493c419d808/src/transformers/models/qwen2_audio/modeling_qwen2_audio.py#L716) of `Qwen2AudioForConditionalGeneration`, and adding it resolves the tests.
However, it probably needs to be processed in some way? Like in [`Qwen2VLForConditionalGeneration`](https://github.com/huggingface/transformers/blob/561a79a2f4b5929f71266fb0ec1e2493c419d808/src/transformers/models/qwen2_vl/modeling_qwen2_vl.py#L1283)?
If it's similar to `Qwen2VLForConditionalGeneration`, happy to do it!
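A minimal sketch of the kind of failure and fix being described, with deliberately simplified, hypothetical signatures (the real `forward` takes many more arguments and threads `cache_position` into the decoder):

```python
def forward_before_fix(input_ids, attention_mask=None):
    # No cache_position parameter: calling with the kwarg raises TypeError,
    # which is exactly the CI failure quoted above.
    return len(input_ids)

try:
    forward_before_fix([1, 2, 3], cache_position=[0, 1, 2])
except TypeError as e:
    print("reproduces the CI failure:", e)

def forward_after_fix(input_ids, attention_mask=None, cache_position=None):
    # Accepting the kwarg (and passing it through to the decoder in the
    # real model) resolves the error.
    return {"n_tokens": len(input_ids), "cache_position": cache_position}

print(forward_after_fix([1, 2, 3], cache_position=[0, 1, 2]))
# {'n_tokens': 3, 'cache_position': [0, 1, 2]}
```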
cc @eustlb, @gante as it seems you were aware of cache related issues [here](https://github.com/huggingface/transformers/blob/561a79a2f4b5929f71266fb0ec1e2493c419d808/tests/models/qwen2_audio/test_modeling_qwen2_audio.py#L141) | {
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39503/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39503/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39502/comments | https://api.github.com/repos/huggingface/transformers/issues/39502/events | https://github.com/huggingface/transformers/pull/39502 | 3,243,312,947 | PR_kwDOCUB6oc6flLLb | 39,502 | Fix bad tensor shape in failing Hubert test. | {
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T13:55:23 | 2025-07-21T11:25:53 | 2025-07-21T11:25:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39502",
"html_url": "https://github.com/huggingface/transformers/pull/39502",
"diff_url": "https://github.com/huggingface/transformers/pull/39502.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39502.patch",
"merged_at": "2025-07-21T11:25:52"
} | # What does this PR do?
Fixes breaking Hubert test: https://github.com/huggingface/transformers/actions/runs/16361063842/job/46229140250
cc @eustlb
Redoing the integration tests would be nice, but the [original implementation](https://github.com/facebookresearch/textlesslib) is archived, and installation/setup doesn't work because of configuration issues on their side that would take too much time to resolve (see traceback below).
Otherwise fixed test is consistent with another one, see [here](https://github.com/huggingface/transformers/blob/561a79a2f4b5929f71266fb0ec1e2493c419d808/tests/models/hubert/test_modeling_hubert.py#L907)
```
Traceback (most recent call last):
File "scripts/test_hubert.py", line 32, in <module>
model = SpeechEncoder.by_name(dense_model_name='mhubert-base-25hz', quantizer_model_name='kmeans',
File "/home/eric_bezzam/transformers/textlesslib/textless/data/speech_encoder.py", line 134, in by_name
dense_model = dispatch_dense_model(dense_model_name)
File "/home/eric_bezzam/transformers/textlesslib/textless/__init__.py", line 38, in dispatch_dense_model
return model_class(checkpoint_path, layer=model_layer, **kwargs)
File "/home/eric_bezzam/transformers/textlesslib/textless/data/hubert_feature_reader.py", line 28, in __init__
self.load_checkpoint_()
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/eric_bezzam/transformers/textlesslib/textless/data/hubert_feature_reader.py", line 32, in load_checkpoint_
model, _, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/fairseq/checkpoint_utils.py", line 421, in load_model_ensemble_and_task
task = tasks.setup_task(cfg.task)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/fairseq/tasks/__init__.py", line 39, in setup_task
cfg = merge_with_parent(dc(), cfg)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/fairseq/dataclass/utils.py", line 483, in merge_with_parent
merged_cfg = OmegaConf.merge(dc, cfg)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/omegaconf.py", line 321, in merge
target.merge_with(*others[1:])
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 331, in merge_with
self._format_and_raise(key=None, value=None, cause=e)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
format_and_raise(
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 329, in merge_with
self._merge_with(*others)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 347, in _merge_with
BaseContainer._map_merge(self, other)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 305, in _map_merge
dest._format_and_raise(key=key, value=src_value, cause=e)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
format_and_raise(
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/_utils.py", line 694, in format_and_raise
_raise(ex, cause)
File "/home/eric_bezzam/transformers/textlesslib/textless_env/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.ValidationError: Value '50.0' could not be converted to Integer
full_key: label_rate
reference_type=Optional[HubertPretrainingConfig]
object_type=HubertPretrainingConfig
```
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39502/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39501/comments | https://api.github.com/repos/huggingface/transformers/issues/39501/events | https://github.com/huggingface/transformers/pull/39501 | 3,243,271,601 | PR_kwDOCUB6oc6flCKn | 39,501 | Add ep | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T13:41:08 | 2025-07-25T17:46:19 | 2025-07-25T17:46:17 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39501",
"html_url": "https://github.com/huggingface/transformers/pull/39501",
"diff_url": "https://github.com/huggingface/transformers/pull/39501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39501.patch",
"merged_at": "2025-07-25T17:46:17"
} | # What does this PR do?
Add support for expert parallelism (EP)!
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39501/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39501/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39500/comments | https://api.github.com/repos/huggingface/transformers/issues/39500/events | https://github.com/huggingface/transformers/pull/39500 | 3,242,893,935 | PR_kwDOCUB6oc6fjvYw | 39,500 | [dependencies] Update `datasets` pin | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T11:19:07 | 2025-07-18T12:07:35 | 2025-07-18T12:05:29 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39500",
"html_url": "https://github.com/huggingface/transformers/pull/39500",
"diff_url": "https://github.com/huggingface/transformers/pull/39500.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39500.patch",
"merged_at": "2025-07-18T12:05:29"
} | # What does this PR do?
(follow-up to #39496 with a more thorough fix)
With the release of `pyarrow==21.0.0`, which is now the version installed by default alongside `datasets`, `datasets<2.15.0` starts throwing exceptions.
This means we need `datasets>=2.15.0` or `pyarrow<21.0.0`. Maximum version pins may limit other packages, so I'm adding a minimum version pin.
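The compatibility constraint described above can be sketched as a small version check (illustrative only; the actual pin lives in the package's dependency list):

```python
# Either datasets >= 2.15.0, or pyarrow < 21.0.0, must hold.
def is_compatible(datasets_version, pyarrow_version):
    def parse(v):
        # Naive parse for plain X.Y.Z strings (no pre-release handling).
        return tuple(int(x) for x in v.split("."))
    return parse(datasets_version) >= (2, 15, 0) or parse(pyarrow_version) < (21, 0, 0)

print(is_compatible("2.14.0", "21.0.0"))  # False: old datasets + new pyarrow breaks
print(is_compatible("2.15.0", "21.0.0"))  # True: the minimum pin fixes it
print(is_compatible("2.14.0", "20.0.0"))  # True: capping pyarrow would also work
```

The last line shows why the PR prefers the minimum pin: both constraints solve the incompatibility, but a `pyarrow` cap would restrict other packages.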
Relevant commit in `datasets`: https://github.com/huggingface/datasets/commit/c096bd288d07ed86f340ae090e5d4d9c5351f76f (`pyarrow==21.0.0` isn't compatible before this commit) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39500/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39499/comments | https://api.github.com/repos/huggingface/transformers/issues/39499/events | https://github.com/huggingface/transformers/pull/39499 | 3,242,854,902 | PR_kwDOCUB6oc6fjmw0 | 39,499 | Slack CI bot: set default result for non-existing artifacts | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T11:04:46 | 2025-07-18T11:45:48 | 2025-07-18T11:45:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39499",
"html_url": "https://github.com/huggingface/transformers/pull/39499",
"diff_url": "https://github.com/huggingface/transformers/pull/39499.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39499.patch",
"merged_at": "2025-07-18T11:45:48"
} | # What does this PR do?
Since https://github.com/huggingface/transformers/pull/39198, the notification service works from the downloaded artifacts instead of a pre-compiled list. This caused `KeyError`s in the AMD CI, where some test pipelines are not run and their artifacts are never produced. This PR sets a default result for those cases.
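The fix boils down to falling back to a neutral default when an artifact key is missing. A rough sketch of the pattern (the names and default shape below are illustrative, not the actual notification-service code):

```python
# Illustrative only: the real logic lives in the Slack notification script;
# these names and the default entry are made up for the sketch.
DEFAULT_RESULT = {"failed": 0, "success": 0, "job_link": None}

def result_for(artifacts, job_name):
    # Pipelines that never ran (e.g. on the AMD CI) produce no artifact,
    # so return a default instead of raising KeyError.
    return artifacts.get(job_name, DEFAULT_RESULT)

artifacts = {"run_models_gpu": {"failed": 2, "success": 98, "job_link": "..."}}
print(result_for(artifacts, "run_models_gpu")["failed"])        # 2
print(result_for(artifacts, "run_quantization_gpu")["failed"])  # 0
```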
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39499/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39498/comments | https://api.github.com/repos/huggingface/transformers/issues/39498/events | https://github.com/huggingface/transformers/issues/39498 | 3,242,741,583 | I_kwDOCUB6oc7BSE9P | 39,498 | Gemma3n doesn't support chat with history | {
"login": "weedge",
"id": 1203957,
"node_id": "MDQ6VXNlcjEyMDM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1203957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weedge",
"html_url": "https://github.com/weedge",
"followers_url": "https://api.github.com/users/weedge/followers",
"following_url": "https://api.github.com/users/weedge/following{/other_user}",
"gists_url": "https://api.github.com/users/weedge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weedge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weedge/subscriptions",
"organizations_url": "https://api.github.com/users/weedge/orgs",
"repos_url": "https://api.github.com/users/weedge/repos",
"events_url": "https://api.github.com/users/weedge/events{/privacy}",
"received_events_url": "https://api.github.com/users/weedge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-18T10:27:28 | 2025-07-18T14:15:45 | 2025-07-18T12:51:20 | NONE | null | null | null | null | ### System Info
transformers==4.54.0.dev0
torch==2.6.0
torchaudio==2.6.0
torchvision==0.21.0
python: 3.10.13
os: nvidia/cuda:12.6.1-cudnn-devel-ubuntu22.04
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
# Imports needed for this repro (not shown in the original snippet)
from threading import Thread
from time import perf_counter

import torch
from PIL import Image
from transformers import (
    AutoProcessor,
    Gemma3nForConditionalGeneration,
    TextIteratorStreamer,
)
def chat():
MODEL_PATH = "google/gemma-3n-E2B-it"
processor = AutoProcessor.from_pretrained(MODEL_PATH, use_fast=True)
model = Gemma3nForConditionalGeneration.from_pretrained(
MODEL_PATH,
torch_dtype=torch.bfloat16,
# device_map="auto",
# attn_implementation="flash_attention_2" if gpu_prop.major >= 8 else None,
).to("cuda")
model = model.eval()
image_1 = Image.new("RGB", (100, 100), color="white")
image_2 = Image.new("RGB", (100, 100), color="black")
image_3 = Image.new("RGB", (100, 100), color="red")
messages = [
{
"role": "system",
"content": [
{
"type": "text",
"text": "你是一个中文语音智能助手，不要使用特殊字符回复，请使用中文回复。",  # "You are a Chinese voice assistant; do not reply with special characters, reply in Chinese."
}
],
},
{
"role": "user",
"content": [
{"type": "text", "text": "图片是什么颜色"},  # "What color is the image?"
{"type": "image", "image": image_1},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": "这张图片是纯白色的，没有任何内容。"}],  # "This image is pure white, with no content."
},
{
"role": "user",
"content": [
{"type": "text", "text": "描述下图片"},  # "Describe the image"
{"type": "image", "image": image_2},
],
},
{
"role": "assistant",
"content": [{"type": "text", "text": "这张图片是纯白色的，没有任何内容。"}],  # "This image is pure white, with no content."
},
{
"role": "user",
"content": [
{"type": "text", "text": "你叫什么名字"},  # "What's your name?"
{"type": "image", "image": image_3},
],
},
]
for i in range(3):
inputs = processor.apply_chat_template(
messages[: (i + 1) * 2],
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)
for key, value in inputs.items():
print(f"{key}: {value.shape=}")
input_ids = inputs["input_ids"]
prompt = processor.decode(input_ids[0])
print(f"{prompt=}")
streamer = TextIteratorStreamer(
tokenizer=processor, skip_prompt=True, skip_special_tokens=True
)
generation_kwargs = dict(
**inputs,
# do_sample=False,
do_sample=True,
temperature=0.2,
top_k=10,
top_p=0.9,
# num_beams=1,
repetition_penalty=1.1,
max_new_tokens=1024,
use_cache=True,
streamer=streamer,
)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
generated_text = ""
start = perf_counter()
times = []
with torch.inference_mode():
for new_text in streamer:
times.append(perf_counter() - start)
print(new_text, end="", flush=True)
generated_text += new_text
start = perf_counter()
print(f"\n{i}. {generated_text=} TTFT: {times[0]:.2f}s total time: {sum(times):.2f}s")
```
The 2nd turn hits a bug:
```
Exception in thread Thread-6 (generate):
Traceback (most recent call last):
File "/usr/local/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py", line 2616, in generate
result = self._sample(
File "/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py", line 3600, in _sample
outputs = model_forward(**model_inputs, return_dict=True)
File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/utils/generic.py", line 955, in wrapper
@wraps(func)
File "/usr/local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1184, in forward
return compiled_fn(full_args)
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 323, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 672, in inner_fn
outs = compiled_fn(args)
File "/usr/local/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 490, in wrapper
return compiled_fn(runtime_args)
File "/usr/local/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 466, in __call__
return self.current_callable(inputs)
File "/usr/local/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1208, in run
return compiled_fn(new_inputs)
File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 398, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 420, in cudagraphify
manager = get_container(device_index).get_tree_manager()
File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 341, in get_container
container_dict = get_obj(local, "tree_manager_containers")
File "/usr/local/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 336, in get_obj
assert torch._C._is_key_in_tls(attr_name)
AssertionError
```
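The assertion originates in torch.compile's CUDA graph machinery, whose bookkeeping is thread-local, while `generate()` here runs inside a worker `Thread`. As a hedged workaround sketch (not a confirmed fix for this exact crash), recent transformers versions accept a `disable_compile` generation flag that skips the compiled path entirely:

```python
# Hypothesis, not a confirmed fix: generate() runs in a worker Thread, but
# CUDA graph trees keep thread-local state, hence the _is_key_in_tls assert.
# If your transformers version supports it, `disable_compile=True` asks
# generate() to skip the torch.compile/cudagraph forward.
generation_kwargs = dict(
    do_sample=True,
    temperature=0.2,
    top_k=10,
    top_p=0.9,
    max_new_tokens=1024,
    disable_compile=True,  # skip the compiled/cudagraph forward in generate
)
print(generation_kwargs["disable_compile"])  # True
```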
### Expected behavior
Support chat with history messages. | {
"login": "weedge",
"id": 1203957,
"node_id": "MDQ6VXNlcjEyMDM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1203957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weedge",
"html_url": "https://github.com/weedge",
"followers_url": "https://api.github.com/users/weedge/followers",
"following_url": "https://api.github.com/users/weedge/following{/other_user}",
"gists_url": "https://api.github.com/users/weedge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weedge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weedge/subscriptions",
"organizations_url": "https://api.github.com/users/weedge/orgs",
"repos_url": "https://api.github.com/users/weedge/repos",
"events_url": "https://api.github.com/users/weedge/events{/privacy}",
"received_events_url": "https://api.github.com/users/weedge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39498/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39498/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39497/comments | https://api.github.com/repos/huggingface/transformers/issues/39497/events | https://github.com/huggingface/transformers/issues/39497 | 3,242,521,604 | I_kwDOCUB6oc7BRPQE | 39,497 | datasets 4.0.0: issue with load_dataset loading an audio dataset | {
"login": "nanyi545",
"id": 14959437,
"node_id": "MDQ6VXNlcjE0OTU5NDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/14959437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nanyi545",
"html_url": "https://github.com/nanyi545",
"followers_url": "https://api.github.com/users/nanyi545/followers",
"following_url": "https://api.github.com/users/nanyi545/following{/other_user}",
"gists_url": "https://api.github.com/users/nanyi545/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nanyi545/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nanyi545/subscriptions",
"organizations_url": "https://api.github.com/users/nanyi545/orgs",
"repos_url": "https://api.github.com/users/nanyi545/repos",
"events_url": "https://api.github.com/users/nanyi545/events{/privacy}",
"received_events_url": "https://api.github.com/users/nanyi545/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-18T09:12:52 | 2025-07-30T17:23:03 | 2025-07-18T12:56:53 | NONE | null | null | null | null | ### System Info
dataset: 4.0.0
pytorch: 2.7.1+cu126
system: Ubuntu 22.04.5 LTS
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was playing around with the newest version, following
https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Audio
I tried the snippet below:
```python
import datasets
print(datasets.__version__)

import torch
print(torch.__version__)

from datasets import load_dataset, Audio

ds = load_dataset("PolyAI/minds14", name="en-US", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=44100))
print(ds[0]["audio"])
```
but I got this error:
```
Traceback (most recent call last):
  File "/home/xh/ww/hug5/audio1.py", line 8, in <module>
    ds = load_dataset("PolyAI/minds14", name="en-US", split="train")
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xh/miniconda3/envs/hug5/lib/python3.11/site-packages/datasets/load.py", line 1392, in load_dataset
    builder_instance = load_dataset_builder(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/xh/miniconda3/envs/hug5/lib/python3.11/site-packages/datasets/load.py", line 1132, in load_dataset_builder
    dataset_module = dataset_module_factory(
                     ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xh/miniconda3/envs/hug5/lib/python3.11/site-packages/datasets/load.py", line 1031, in dataset_module_factory
    raise e1 from None
  File "/home/xh/miniconda3/envs/hug5/lib/python3.11/site-packages/datasets/load.py", line 989, in dataset_module_factory
    raise RuntimeError(f"Dataset scripts are no longer supported, but found (unknown)")
RuntimeError: Dataset scripts are no longer supported, but found minds14.py
```
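For context, the error message itself states the cause: datasets 4.0 removed support for script-based loaders. Until the repo ships plain data files, one workaround (assuming no 4.0-only features are needed) is to pin the previous major version:

```shell
# Script-based datasets (like PolyAI/minds14's minds14.py) only load on
# datasets < 4.0, so pin the previous major as a stopgap.
pip install "datasets<4.0.0"
```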
### Expected behavior
I expected the audio dataset to load successfully. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39497/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39496/comments | https://api.github.com/repos/huggingface/transformers/issues/39496/events | https://github.com/huggingface/transformers/pull/39496 | 3,242,513,470 | PR_kwDOCUB6oc6fibgA | 39,496 | [doc builder job] temporary pyarrow pin | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T09:10:43 | 2025-07-18T10:09:15 | 2025-07-18T10:05:40 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39496",
"html_url": "https://github.com/huggingface/transformers/pull/39496",
"diff_url": "https://github.com/huggingface/transformers/pull/39496.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39496.patch",
"merged_at": "2025-07-18T10:05:40"
} | # What does this PR do?
Adds a temporary pin (pyarrow==20.0.0) to the doc builder CI job to work around a datasets exception triggered by a newer pyarrow release.
[Failing job](https://github.com/huggingface/transformers/actions/runs/16365952006/job/46243081358?pr=38545)
```
(...)
File "/home/runner/work/transformers/transformers/.venv/lib/python3.10/site-packages/datasets/features/__init__.py", line 18, in <module>
from .features import Array2D, Array3D, Array4D, Array5D, ClassLabel, Features, Sequence, Value
File "/home/runner/work/transformers/transformers/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 634, in <module>
class _ArrayXDExtensionType(pa.PyExtensionType):
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'. Did you mean: 'ExtensionType'?
``` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39496/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39495/comments | https://api.github.com/repos/huggingface/transformers/issues/39495/events | https://github.com/huggingface/transformers/issues/39495 | 3,242,180,116 | I_kwDOCUB6oc7BP74U | 39,495 | Add Muon Optimiser for 2x faster convergence | {
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-18T07:11:59 | 2025-08-28T12:20:47 | null | NONE | null | null | null | null | ### Feature request
Paper: https://arxiv.org/abs/2502.16982
Used in Moonshot AI's Kimi models.
### Motivation
Speeds up convergence; the paper reports roughly 2x computational efficiency over AdamW.
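Since Muon's orthogonalized update is defined only for 2-D weight matrices, an integration would route matrix parameters to Muon and everything else to AdamW. A minimal sketch of that split, with a hypothetical helper (nothing below exists in transformers today):

```python
# Hypothetical sketch: transformers has no Muon optimizer; only the
# parameter-grouping logic such an integration would need is shown.
def split_for_muon(named_shapes):
    """named_shapes: mapping of parameter name -> shape tuple."""
    muon, adamw = [], []
    for name, shape in named_shapes.items():
        # Muon handles 2-D matrices; embeddings and the LM head are
        # conventionally left to AdamW even though they are 2-D.
        if len(shape) == 2 and "embed" not in name and "lm_head" not in name:
            muon.append(name)
        else:
            adamw.append(name)
    return muon, adamw

shapes = {
    "model.embed_tokens.weight": (32000, 4096),
    "model.layers.0.self_attn.q_proj.weight": (4096, 4096),
    "model.layers.0.input_layernorm.weight": (4096,),
    "lm_head.weight": (32000, 4096),
}
muon_params, adamw_params = split_for_muon(shapes)
print(muon_params)        # ['model.layers.0.self_attn.q_proj.weight']
print(len(adamw_params))  # 3
```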
### Your contribution
Muon applies only to 2D parameters (weight matrices), so it would need to be combined with AdamW for the remaining parameters (embeddings, norms, biases). | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39495/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39494/comments | https://api.github.com/repos/huggingface/transformers/issues/39494/events | https://github.com/huggingface/transformers/pull/39494 | 3,241,696,312 | PR_kwDOCUB6oc6ffn2W | 39,494 | Add support for including in-memory videos (not just files/urls) in apply_chat_template | {
"login": "akibjawad",
"id": 16791487,
"node_id": "MDQ6VXNlcjE2NzkxNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/16791487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akibjawad",
"html_url": "https://github.com/akibjawad",
"followers_url": "https://api.github.com/users/akibjawad/followers",
"following_url": "https://api.github.com/users/akibjawad/following{/other_user}",
"gists_url": "https://api.github.com/users/akibjawad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akibjawad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akibjawad/subscriptions",
"organizations_url": "https://api.github.com/users/akibjawad/orgs",
"repos_url": "https://api.github.com/users/akibjawad/repos",
"events_url": "https://api.github.com/users/akibjawad/events{/privacy}",
"received_events_url": "https://api.github.com/users/akibjawad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T03:14:35 | 2025-08-04T09:49:43 | 2025-08-04T09:49:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39494",
"html_url": "https://github.com/huggingface/transformers/pull/39494",
"diff_url": "https://github.com/huggingface/transformers/pull/39494.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39494.patch",
"merged_at": "2025-08-04T09:49:43"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/36560. This PR allows in-memory video objects (a dictionary of frames and metadata) to be included in the chat template.
**Previously:**
The chat template accepted only file paths or URLs. If a user (a developer using the transformers library) collected video from a continuous stream or an input device, they had to save the video to a file and pass the file path in the chat messages.
**Now (after this PR):**
Users can collect video frames from streams or devices, provide metadata (such as fps), and pass them directly to the chat template as a dictionary. This frees the user from saving the video to a file and avoids the extra IO of reloading it from disk.
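For illustration, an in-memory video message might look like the following. The exact keys (`video`, `metadata`, `fps`) are this sketch's assumption; see the PR diff for the accepted schema, and placeholder bytes stand in for decoded RGB frames:

```python
# Sketch of the in-memory video message shape; the key names are
# illustrative and the frames are blank placeholders.
width, height, channels = 224, 224, 3
frames = [bytes(width * height * channels) for _ in range(8)]  # 8 blank frames

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this clip."},
        {"type": "video", "video": frames, "metadata": {"fps": 4}},
    ],
}
print(len(message["content"][1]["video"]))  # 8
```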
**Notes:**
Additionally, this PR fixes hardcoded values used in the apply_chat_template video test assertions for models such as internvl, qwen2_vl, qwen2_5_vl, and qwen2_5_omni.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). No
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section? Yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Yes. Issue link: https://github.com/huggingface/transformers/issues/36560
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
- in tests/test_processing_common.py file: I added a new type of video input which will be included in the chat messages while testing functionality of apply_chat_template.
- Added a new test with batchsize 3 for testing in-memory video objects in chat_template. Additionally updated hardcoded assertion (video_len check) for testing with increased batch_size in 4 models:
- tests/models/internvl/test_processor_internvl.py
- tests/models/qwen2_vl/test_processor_qwen2_vl.py
- tests/models/qwen2_5_vl/test_processor_qwen2_5_vl.py
- tests/models/qwen2_5_omni/test_processor_qwen2_5_omni.py
- tests/models/smolvlm/test_processor_smolvlm.py (skip testing smolvlm with list of frames)
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Specifically mentioning @zucchini-nlp for review. Feel free to tag other members/contributors who may be interested to review this PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39494/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39493/comments | https://api.github.com/repos/huggingface/transformers/issues/39493/events | https://github.com/huggingface/transformers/pull/39493 | 3,241,354,789 | PR_kwDOCUB6oc6fecTD | 39,493 | [Voxtral] nit + pin correct mistral common version | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-18T00:17:29 | 2025-07-28T08:23:00 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39493",
"html_url": "https://github.com/huggingface/transformers/pull/39493",
"diff_url": "https://github.com/huggingface/transformers/pull/39493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39493.patch",
"merged_at": null
} | # What does this PR do?
See title :) | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39493/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39492/comments | https://api.github.com/repos/huggingface/transformers/issues/39492/events | https://github.com/huggingface/transformers/issues/39492 | 3,241,319,027 | I_kwDOCUB6oc7BMppz | 39,492 | modeling_flax_gemma.FlaxGemmaModule failed with incompatible shapes when running with GemmaConfig | {
"login": "nhatleSummer22",
"id": 105756286,
"node_id": "U_kgDOBk22fg",
"avatar_url": "https://avatars.githubusercontent.com/u/105756286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nhatleSummer22",
"html_url": "https://github.com/nhatleSummer22",
"followers_url": "https://api.github.com/users/nhatleSummer22/followers",
"following_url": "https://api.github.com/users/nhatleSummer22/following{/other_user}",
"gists_url": "https://api.github.com/users/nhatleSummer22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nhatleSummer22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nhatleSummer22/subscriptions",
"organizations_url": "https://api.github.com/users/nhatleSummer22/orgs",
"repos_url": "https://api.github.com/users/nhatleSummer22/repos",
"events_url": "https://api.github.com/users/nhatleSummer22/events{/privacy}",
"received_events_url": "https://api.github.com/users/nhatleSummer22/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-18T00:01:49 | 2025-07-18T12:29:33 | 2025-07-18T12:29:33 | NONE | null | null | null | null | Hi, I got this error when running the above model with GemmaConfig:
```
"/python3.11/site-packages/jax/_src/numpy/ufuncs.py", line 1280, in multiply
    return lax.mul(x, y) if x.dtype != bool else lax.bitwise_and(x, y)
```
Reproduction:
```python
import jax
import jax.numpy as jnp
from transformers.models.gemma import modeling_flax_gemma
from transformers import GemmaConfig
config = GemmaConfig()
model = modeling_flax_gemma.FlaxGemmaModule(config, dtype=jnp.float32)
input_ids = jnp.zeros((32, 128), dtype=jnp.int32)
variables = model.init(
jax.random.key(0),
input_ids=input_ids,
)
def model_apply(input_ids):
return model.apply(variables, input_ids=input_ids)
model_apply(input_ids)
```
I am using transformers 4.53.2 and jax 3.10. Could you please take a look? Thanks! | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39492/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39491/comments | https://api.github.com/repos/huggingface/transformers/issues/39491/events | https://github.com/huggingface/transformers/pull/39491 | 3,241,099,763 | PR_kwDOCUB6oc6fdi1h | 39,491 | Fix: Skip weight initialization for quantized int8 models | {
"login": "imjbassi",
"id": 41026221,
"node_id": "MDQ6VXNlcjQxMDI2MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/41026221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imjbassi",
"html_url": "https://github.com/imjbassi",
"followers_url": "https://api.github.com/users/imjbassi/followers",
"following_url": "https://api.github.com/users/imjbassi/following{/other_user}",
"gists_url": "https://api.github.com/users/imjbassi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imjbassi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imjbassi/subscriptions",
"organizations_url": "https://api.github.com/users/imjbassi/orgs",
"repos_url": "https://api.github.com/users/imjbassi/repos",
"events_url": "https://api.github.com/users/imjbassi/events{/privacy}",
"received_events_url": "https://api.github.com/users/imjbassi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-17T22:25:41 | 2025-07-18T12:28:51 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39491",
"html_url": "https://github.com/huggingface/transformers/pull/39491",
"diff_url": "https://github.com/huggingface/transformers/pull/39491.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39491.patch",
"merged_at": null
} | Fixes #39366 by skipping weight initialization when all model parameters are non-floating-point types (e.g., int8 from W8A8 quantized models). This avoids a RuntimeError from PyTorch's `normal_()` function, which cannot handle integer dtypes.
Adds a conditional check inside `_load_pretrained_model` to skip `self.initialize_weights()` when appropriate.
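A minimal sketch of the kind of guard described above (the helper name is hypothetical; this is not the actual diff):

```python
import torch

def should_skip_weight_init(parameters):
    # If every parameter has a non-floating dtype (e.g. int8 tensors
    # loaded from a W8A8-quantized checkpoint), weight initialization
    # must be skipped: normal_() raises a RuntimeError on integer dtypes.
    params = list(parameters)
    return bool(params) and all(not p.is_floating_point() for p in params)

int8_params = [torch.zeros(4, dtype=torch.int8)]
float_params = [torch.zeros(4, dtype=torch.float32)]
```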
Tested to ensure model loads without crashing for quantized cases. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39491/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39490/comments | https://api.github.com/repos/huggingface/transformers/issues/39490/events | https://github.com/huggingface/transformers/pull/39490 | 3,241,052,116 | PR_kwDOCUB6oc6fdYEQ | 39,490 | [Fast image processor] refactor fast image processor glm4v | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T22:04:58 | 2025-07-21T15:18:46 | 2025-07-21T15:18:46 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39490",
"html_url": "https://github.com/huggingface/transformers/pull/39490",
"diff_url": "https://github.com/huggingface/transformers/pull/39490.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39490.patch",
"merged_at": "2025-07-21T15:18:46"
} | # What does this PR do?
Refactor fast image processor of glm4v to be consistent with the library standards, and use grouping by shape
cc @Cyrilvallez
| {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39490/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39489/comments | https://api.github.com/repos/huggingface/transformers/issues/39489/events | https://github.com/huggingface/transformers/pull/39489 | 3,240,930,321 | PR_kwDOCUB6oc6fc9P_ | 39,489 | [Fast image processors] Improve handling of image-like inputs other than images (segmentation_maps) | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T21:12:55 | 2025-07-21T18:12:14 | 2025-07-21T18:12:14 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39489",
"html_url": "https://github.com/huggingface/transformers/pull/39489",
"diff_url": "https://github.com/huggingface/transformers/pull/39489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39489.patch",
"merged_at": "2025-07-21T18:12:14"
} | # What does this PR do?
As the title says, **unbloats** a lot of the fast processing code for models needing to process segmentation maps, trimaps, depth_maps, etc. | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39489/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39488/comments | https://api.github.com/repos/huggingface/transformers/issues/39488/events | https://github.com/huggingface/transformers/pull/39488 | 3,240,780,289 | PR_kwDOCUB6oc6fccAv | 39,488 | Avoid aliasing in cond's branches for torch 2.8 | {
"login": "ydwu4",
"id": 11565780,
"node_id": "MDQ6VXNlcjExNTY1Nzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/11565780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydwu4",
"html_url": "https://github.com/ydwu4",
"followers_url": "https://api.github.com/users/ydwu4/followers",
"following_url": "https://api.github.com/users/ydwu4/following{/other_user}",
"gists_url": "https://api.github.com/users/ydwu4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydwu4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydwu4/subscriptions",
"organizations_url": "https://api.github.com/users/ydwu4/orgs",
"repos_url": "https://api.github.com/users/ydwu4/repos",
"events_url": "https://api.github.com/users/ydwu4/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydwu4/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T20:11:29 | 2025-08-05T09:18:12 | 2025-08-05T09:18:12 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39488",
"html_url": "https://github.com/huggingface/transformers/pull/39488",
"diff_url": "https://github.com/huggingface/transformers/pull/39488.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39488.patch",
"merged_at": "2025-08-05T09:18:11"
torch 2.8 enforces a stricter check for the aliasing relationship for correctness reasons. Fixes https://github.com/pytorch/pytorch/issues/158375 by cloning the outputs that alias inputs.
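As a minimal illustration of the aliasing relationship in question (a sketch, not the actual patch): a branch that returns its input unchanged aliases that input, while cloning produces an independent tensor with the same values.

```python
import torch

x = torch.ones(3)

# Returning the input directly from a branch aliases it:
# both names share the same underlying storage.
aliased = x
assert aliased.data_ptr() == x.data_ptr()

# Cloning the branch output, as this PR does, breaks the alias
# while preserving the values.
cloned = x.clone()
assert cloned.data_ptr() != x.data_ptr()
assert torch.equal(cloned, x)
```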
cc @ydshieh
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39488/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39487/comments | https://api.github.com/repos/huggingface/transformers/issues/39487/events | https://github.com/huggingface/transformers/pull/39487 | 3,240,770,961 | PR_kwDOCUB6oc6fcZ9j | 39,487 | Update CTRL model card with improved usage examples and documentation notes | {
"login": "Ishubhammohole",
"id": 95623059,
"node_id": "U_kgDOBbMXkw",
"avatar_url": "https://avatars.githubusercontent.com/u/95623059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ishubhammohole",
"html_url": "https://github.com/Ishubhammohole",
"followers_url": "https://api.github.com/users/Ishubhammohole/followers",
"following_url": "https://api.github.com/users/Ishubhammohole/following{/other_user}",
"gists_url": "https://api.github.com/users/Ishubhammohole/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ishubhammohole/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ishubhammohole/subscriptions",
"organizations_url": "https://api.github.com/users/Ishubhammohole/orgs",
"repos_url": "https://api.github.com/users/Ishubhammohole/repos",
"events_url": "https://api.github.com/users/Ishubhammohole/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ishubhammohole/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T20:06:50 | 2025-08-13T23:46:36 | 2025-08-13T23:46:21 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39487",
"html_url": "https://github.com/huggingface/transformers/pull/39487",
"diff_url": "https://github.com/huggingface/transformers/pull/39487.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39487.patch",
"merged_at": null
} | Description:
This PR improves the ctrl model card to align with Hugging Face documentation standards and improve clarity for users. Specifically, it:
- Updates metadata fields (e.g., license, tags, library name, model type)
- Improves model description for better readability
- Adds usage tips and clearer links to paper and code
- Enhances tokenizer and limitations sections
- Fixes minor formatting inconsistencies
These changes ensure consistency with other official model cards and improve user understanding when exploring or deploying the ctrl model.
What does this PR do?
- ✔️ Updates and standardizes the ctrl model card
- ✔️ Improves clarity, formatting, and metadata fields
- ✔️ Enhances user experience and aligns with documentation best practices
Before submitting
- I've read the [Pull Request section](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#pull-requests) of the contributing guide
- Documentation has been updated accordingly
- No new tests are necessary (doc-only PR)
Who can review?
Anyone in the community is welcome to review.
Recommended reviewers: @stevhliu | {
"login": "Ishubhammohole",
"id": 95623059,
"node_id": "U_kgDOBbMXkw",
"avatar_url": "https://avatars.githubusercontent.com/u/95623059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ishubhammohole",
"html_url": "https://github.com/Ishubhammohole",
"followers_url": "https://api.github.com/users/Ishubhammohole/followers",
"following_url": "https://api.github.com/users/Ishubhammohole/following{/other_user}",
"gists_url": "https://api.github.com/users/Ishubhammohole/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ishubhammohole/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ishubhammohole/subscriptions",
"organizations_url": "https://api.github.com/users/Ishubhammohole/orgs",
"repos_url": "https://api.github.com/users/Ishubhammohole/repos",
"events_url": "https://api.github.com/users/Ishubhammohole/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ishubhammohole/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39487/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39486/comments | https://api.github.com/repos/huggingface/transformers/issues/39486/events | https://github.com/huggingface/transformers/pull/39486 | 3,240,536,077 | PR_kwDOCUB6oc6fblto | 39,486 | Add AMD GPU expectations for LLaVA tests | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T18:39:07 | 2025-07-22T14:01:55 | 2025-07-22T14:01:54 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39486",
"html_url": "https://github.com/huggingface/transformers/pull/39486",
"diff_url": "https://github.com/huggingface/transformers/pull/39486.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39486.patch",
"merged_at": "2025-07-22T14:01:54"
} | # What does this PR do?
This PR adds the correct output expectations for AMD GPUs to the LLaVA tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39486/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39485/comments | https://api.github.com/repos/huggingface/transformers/issues/39485/events | https://github.com/huggingface/transformers/pull/39485 | 3,240,284,373 | PR_kwDOCUB6oc6faunj | 39,485 | Add Whole Word Masking and Padding Strategy to DataCollatorForLanguageModeling | {
"login": "rjgleaton",
"id": 70818603,
"node_id": "MDQ6VXNlcjcwODE4NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/70818603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rjgleaton",
"html_url": "https://github.com/rjgleaton",
"followers_url": "https://api.github.com/users/rjgleaton/followers",
"following_url": "https://api.github.com/users/rjgleaton/following{/other_user}",
"gists_url": "https://api.github.com/users/rjgleaton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rjgleaton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rjgleaton/subscriptions",
"organizations_url": "https://api.github.com/users/rjgleaton/orgs",
"repos_url": "https://api.github.com/users/rjgleaton/repos",
"events_url": "https://api.github.com/users/rjgleaton/events{/privacy}",
"received_events_url": "https://api.github.com/users/rjgleaton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T17:00:34 | 2025-09-22T12:42:35 | 2025-09-22T12:42:34 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39485",
"html_url": "https://github.com/huggingface/transformers/pull/39485",
"diff_url": "https://github.com/huggingface/transformers/pull/39485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39485.patch",
"merged_at": "2025-09-22T12:42:34"
} | Sorry about the previous PR. I made a git error when trying to amend my commits.
# What does this PR do?
Main contribution is adding Whole Word Masking to DataCollatorForLanguageModeling. This fixes the issues mentioned in #12998, where the previous implementation only returned input_ids and labels; this implementation passes through all values from the input mappings. Because I feel it's more intuitive and easier to maintain, I've added an arg to DataCollatorForLanguageModeling that enables whole word masking. The DataCollatorForWholeWordMasking collator has been largely deprecated and is now just a wrapper around the language modeling collator where mlm and whole_word_mask are forced to true.
This implementation depends on offset mappings for determining if a given pair of tokens belong to the same word. As such, it only natively works for FastTokenizers. I'm not sure there's a tokenizer agnostic way to do this with non-fast tokenizers atm.
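To make the offset-mapping idea concrete, here is a minimal, self-contained sketch (not the PR's actual implementation) of grouping sub-word tokens into whole words: two adjacent tokens belong to the same word when the second one starts exactly where the first one ended.

```python
# Minimal sketch of whole-word grouping from fast-tokenizer offset mappings.
# Two adjacent tokens belong to the same word when the second token's start
# offset equals the previous token's end offset.
import random

def whole_word_spans(offsets):
    """Group token indices into word spans using (start, end) offsets.
    Special tokens are encoded as (0, 0) offsets and are skipped."""
    spans = []
    for i, (start, end) in enumerate(offsets):
        if start == end:  # special token such as [CLS]/[SEP]
            continue
        if spans and start == offsets[spans[-1][-1]][1]:
            spans[-1].append(i)  # continues the previous word
        else:
            spans.append([i])  # starts a new word
    return spans

def mask_whole_words(input_ids, offsets, mask_token_id, prob=0.15, seed=0):
    """Mask every token of a word together, with per-word probability."""
    random.seed(seed)
    ids = list(input_ids)
    for span in whole_word_spans(offsets):
        if random.random() < prob:
            for i in span:
                ids[i] = mask_token_id
    return ids

# "unhappiness" -> ["un", "##happi", "##ness"]: offsets (0,2), (2,7), (7,11)
offsets = [(0, 0), (0, 2), (2, 7), (7, 11), (0, 0)]
print(whole_word_spans(offsets))  # [[1, 2, 3]] — one three-token word
```

With the spans in hand, masking a word means masking every token index in its span, as `mask_whole_words` shows; the real collator additionally builds the labels tensor.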
~~Additionally I've added an argument for specifying PaddingStrategy to DataCollatorForLanguageModeling. This is a minor QOL improvement that makes it more consistent with other collators.~~
Fixes #12998
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @Rocketknight1
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39485/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39484/comments | https://api.github.com/repos/huggingface/transformers/issues/39484/events | https://github.com/huggingface/transformers/issues/39484 | 3,240,228,444 | I_kwDOCUB6oc7BIfZc | 39,484 | Transformers still tries to use apex.amp which is no longer a thing in apex. | {
"login": "yselivonchyk",
"id": 4716569,
"node_id": "MDQ6VXNlcjQ3MTY1Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4716569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yselivonchyk",
"html_url": "https://github.com/yselivonchyk",
"followers_url": "https://api.github.com/users/yselivonchyk/followers",
"following_url": "https://api.github.com/users/yselivonchyk/following{/other_user}",
"gists_url": "https://api.github.com/users/yselivonchyk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yselivonchyk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yselivonchyk/subscriptions",
"organizations_url": "https://api.github.com/users/yselivonchyk/orgs",
"repos_url": "https://api.github.com/users/yselivonchyk/repos",
"events_url": "https://api.github.com/users/yselivonchyk/events{/privacy}",
"received_events_url": "https://api.github.com/users/yselivonchyk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-17T16:43:14 | 2025-08-25T08:03:03 | 2025-08-25T08:03:03 | NONE | null | null | null | null | ### System Info
```
root@12bb27e08b1b:/# pip show transformers
Name: transformers
Version: 4.52.3
```
trainer.py contains this:
```
if is_apex_available():
from apex import amp
```
Apex (built from source, as they recommend) no longer comes with amp.
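Until the import is fixed upstream, a defensive import of the following shape (a sketch, not the actual trainer.py code) avoids crashing on modern apex builds that no longer ship the `amp` submodule:

```python
# Sketch of a guarded import: treat apex.amp as optional, since recent
# apex builds no longer ship the amp submodule.
import importlib.util

def get_apex_amp():
    """Return the apex.amp module if it exists, otherwise None."""
    if importlib.util.find_spec("apex") is None:
        return None  # apex itself is not installed
    try:
        from apex import amp  # raises ImportError on modern apex builds
        return amp
    except ImportError:
        return None

amp = get_apex_amp()
print("apex.amp available:", amp is not None)
```

Callers can then branch on `amp is None` instead of assuming the legacy API exists.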
How to reproduce?
1. install transformers
2. install apex
3. in Python, run `from trl import SFTTrainer`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
How to reproduce?
1. install transformers
2. install apex
3. in Python, run `from trl import SFTTrainer`
### Expected behavior
There should not be `from apex import amp` in the code base | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39484/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39483/comments | https://api.github.com/repos/huggingface/transformers/issues/39483/events | https://github.com/huggingface/transformers/pull/39483 | 3,240,090,796 | PR_kwDOCUB6oc6faG55 | 39,483 | Bye bye env vars, keep everything as configs | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T15:55:41 | 2025-10-23T16:02:06 | 2025-10-23T16:01:54 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39483",
"html_url": "https://github.com/huggingface/transformers/pull/39483",
"diff_url": "https://github.com/huggingface/transformers/pull/39483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39483.patch",
"merged_at": null
} | # What does this PR do?
Supersedes https://github.com/huggingface/transformers/pull/37259
As @BenjaminBossan found (https://github.com/huggingface/accelerate/pull/3252), the TrainingArguments will set environment variables automatically when using Accelerate, because before it wouldn't work otherwise. Nowadays the only env variables required for things to run smoothly are the ones for model init (fsdp cpu eff ram).
This PR does a few things:
- We completely remove the need for environmental variables, creating the proper configs (dynamo relies on https://github.com/huggingface/accelerate/pull/3251)
- I've refactored how mixed_precision gets set, to simplify the training arguments and combine 7 args into 2.
- Removes old references/updates the logic in Trainer to reflect the choices | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39483/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39483/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39482/comments | https://api.github.com/repos/huggingface/transformers/issues/39482/events | https://github.com/huggingface/transformers/pull/39482 | 3,240,050,923 | PR_kwDOCUB6oc6fZ-Ng | 39,482 | Add Whole Word Masking and Padding Strategy to DataCollatorForLanguageModeling | {
"login": "rjgleaton",
"id": 70818603,
"node_id": "MDQ6VXNlcjcwODE4NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/70818603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rjgleaton",
"html_url": "https://github.com/rjgleaton",
"followers_url": "https://api.github.com/users/rjgleaton/followers",
"following_url": "https://api.github.com/users/rjgleaton/following{/other_user}",
"gists_url": "https://api.github.com/users/rjgleaton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rjgleaton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rjgleaton/subscriptions",
"organizations_url": "https://api.github.com/users/rjgleaton/orgs",
"repos_url": "https://api.github.com/users/rjgleaton/repos",
"events_url": "https://api.github.com/users/rjgleaton/events{/privacy}",
"received_events_url": "https://api.github.com/users/rjgleaton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T15:42:36 | 2025-07-17T16:21:55 | 2025-07-17T16:21:55 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39482",
"html_url": "https://github.com/huggingface/transformers/pull/39482",
"diff_url": "https://github.com/huggingface/transformers/pull/39482.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39482.patch",
"merged_at": null
} | # What does this PR do?
Main contribution is adding Whole Word Masking to DataCollatorForLanguageModeling. This fixes the issues mentioned in #12998, where the previous implementation only returned input_ids and labels; this implementation passes through all values from the input mappings. Because I feel it's more intuitive and easier to maintain, I've added an arg to DataCollatorForLanguageModeling that enables whole word masking. The DataCollatorForWholeWordMasking collator has been largely deprecated and is now just a wrapper around the language modeling collator where mlm and whole_word_mask are forced to true.
This implementation depends on offset mappings for determining if a given pair of tokens belong to the same word. As such, it only natively works for FastTokenizers. I'm not sure there's a tokenizer agnostic way to do this with non-fast tokenizers atm.
Additionally I've added an argument for specifying PaddingStrategy to DataCollatorForLanguageModeling. This is a minor QOL improvement that makes it more consistent with other collators.
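As a rough illustration of what a padding-strategy knob controls (a generic sketch, not the collator's actual code — `pad_batch` is a made-up helper):

```python
# Generic sketch of padding strategies for a batch of token sequences:
# "longest" pads to the longest sequence in the batch, and
# pad_to_multiple_of rounds that length up, which helps tensor-core kernels.
def pad_batch(sequences, pad_id=0, pad_to_multiple_of=None):
    target = max(len(s) for s in sequences)
    if pad_to_multiple_of:
        # -(-a // b) is ceiling division: round target up to a multiple of b
        target = -(-target // pad_to_multiple_of) * pad_to_multiple_of
    return [s + [pad_id] * (target - len(s)) for s in sequences]

batch = [[101, 7592, 102], [101, 7592, 2088, 999, 102]]
print(pad_batch(batch))                        # pad to length 5 ("longest")
print(pad_batch(batch, pad_to_multiple_of=8))  # pad to length 8
```

Exposing the strategy as an argument lets callers pick the trade-off per use case instead of the collator hard-coding one.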
Fixes #12998
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @Rocketknight1
| {
"login": "rjgleaton",
"id": 70818603,
"node_id": "MDQ6VXNlcjcwODE4NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/70818603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rjgleaton",
"html_url": "https://github.com/rjgleaton",
"followers_url": "https://api.github.com/users/rjgleaton/followers",
"following_url": "https://api.github.com/users/rjgleaton/following{/other_user}",
"gists_url": "https://api.github.com/users/rjgleaton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rjgleaton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rjgleaton/subscriptions",
"organizations_url": "https://api.github.com/users/rjgleaton/orgs",
"repos_url": "https://api.github.com/users/rjgleaton/repos",
"events_url": "https://api.github.com/users/rjgleaton/events{/privacy}",
"received_events_url": "https://api.github.com/users/rjgleaton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39482/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39481/comments | https://api.github.com/repos/huggingface/transformers/issues/39481/events | https://github.com/huggingface/transformers/pull/39481 | 3,239,904,714 | PR_kwDOCUB6oc6fZeyj | 39,481 | Add AMD expectations to Mistral3 tests | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T14:57:36 | 2025-07-22T13:40:17 | 2025-07-22T13:40:16 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39481",
"html_url": "https://github.com/huggingface/transformers/pull/39481",
"diff_url": "https://github.com/huggingface/transformers/pull/39481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39481.patch",
"merged_at": "2025-07-22T13:40:16"
} | # What does this PR do?
This PR adds the correct expectations for AMD devices in the Mistral3 tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39481/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39480/comments | https://api.github.com/repos/huggingface/transformers/issues/39480/events | https://github.com/huggingface/transformers/pull/39480 | 3,239,895,847 | PR_kwDOCUB6oc6fZc7L | 39,480 | Add model arcinstitute state | {
"login": "drbh",
"id": 9896130,
"node_id": "MDQ6VXNlcjk4OTYxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9896130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drbh",
"html_url": "https://github.com/drbh",
"followers_url": "https://api.github.com/users/drbh/followers",
"following_url": "https://api.github.com/users/drbh/following{/other_user}",
"gists_url": "https://api.github.com/users/drbh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drbh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drbh/subscriptions",
"organizations_url": "https://api.github.com/users/drbh/orgs",
"repos_url": "https://api.github.com/users/drbh/repos",
"events_url": "https://api.github.com/users/drbh/events{/privacy}",
"received_events_url": "https://api.github.com/users/drbh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-07-17T14:55:18 | 2025-08-13T21:33:32 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39480",
"html_url": "https://github.com/huggingface/transformers/pull/39480",
"diff_url": "https://github.com/huggingface/transformers/pull/39480.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39480.patch",
"merged_at": null
} | This PR adds the Arc Institute State embedding model.
# Run embedding model via transformers
```bash
git clone https://github.com/huggingface/transformers
cd transformers
git checkout add-model-arcinstitute-state
uv run sanity.py
```
sanity.py
```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "torch",
# "transformers"
# ]
#
# [tool.uv.sources]
# transformers = { path = ".", editable = true }
# ///
import torch
from transformers import StateEmbeddingModel
model_name = "arcinstitute/SE-600M"
model = StateEmbeddingModel.from_pretrained(model_name)
torch.manual_seed(0)
input_ids = torch.randn((1, 1, 5120), dtype=torch.float32)
mask = torch.ones((1, 1, 5120), dtype=torch.bool)
mask[:, :, 2560:] = False # simulate half masking
print("Input sum:\t", input_ids.sum())
print("Mask sum:\t", mask.sum())
outputs = model(input_ids, mask)
print("Output sum:\t", outputs["gene_output"].sum())
```
outputs
```txt
Input sum: tensor(-38.6611)
Mask sum: tensor(2560)
Output sum: tensor(-19.6819, grad_fn=<SumBackward0>)
```
# Compare to reference
```
git clone https://github.com/ArcInstitute/state.git
cd state
curl -OL https://huggingface.co/arcinstitute/SE-600M/resolve/main/se600m_epoch16.ckpt
```
next, apply this small patch so we can run the model file directly with a fixed input to compare with the impl above
<details>
<summary>file `compare.patch`</summary>
```patch
diff --git a/src/state/emb/nn/model.py b/src/state/emb/nn/model.py
index dbbefb3..42167a1 100644
--- a/src/state/emb/nn/model.py
+++ b/src/state/emb/nn/model.py
@@ -23,20 +23,20 @@ from torch.nn import TransformerEncoder, TransformerEncoderLayer, BCEWithLogitsL
from tqdm.auto import tqdm
from torch.optim.lr_scheduler import ChainedScheduler, LinearLR, CosineAnnealingLR, ReduceLROnPlateau
-from ..data import create_dataloader
-from ..utils import (
+from state.emb.data import create_dataloader
+from state.emb.utils import (
compute_gene_overlap_cross_pert,
get_embedding_cfg,
get_dataset_cfg,
compute_pearson_delta,
compute_perturbation_ranking_score,
)
-from ..eval.emb import cluster_embedding
-from .loss import WassersteinLoss, KLDivergenceLoss, MMDLoss, TabularLoss
+from state.emb.eval.emb import cluster_embedding
+from loss import WassersteinLoss, KLDivergenceLoss, MMDLoss, TabularLoss
-from .flash_transformer import FlashTransformerEncoderLayer
-from .flash_transformer import FlashTransformerEncoder
+from flash_transformer import FlashTransformerEncoderLayer
+from flash_transformer import FlashTransformerEncoder
class SkipBlock(nn.Module):
@@ -196,7 +196,8 @@ class StateEmbeddingModel(L.LightningModule):
self.dataset_embedder = nn.Linear(output_dim, 10)
# Assume self.cfg.model.num_datasets is set to the number of unique datasets.
- num_dataset = get_dataset_cfg(self.cfg).num_datasets
+ # num_dataset = get_dataset_cfg(self.cfg).num_datasets
+ num_dataset = 14420
self.dataset_encoder = nn.Sequential(
nn.Linear(output_dim, d_model),
nn.SiLU(),
@@ -686,3 +687,18 @@ class StateEmbeddingModel(L.LightningModule):
"optimizer": optimizer,
"lr_scheduler": {"scheduler": scheduler, "monitor": "train_loss", "interval": "step", "frequency": 1},
}
+
+if __name__ == "__main__":
+ checkpoint = "/Users/drbh/Projects/state/se600m_epoch16.ckpt"
+ model = StateEmbeddingModel.load_from_checkpoint(checkpoint, dropout=0.0, strict=False)
+
+ torch.manual_seed(0)
+
+ input_ids = torch.randn((1, 1, 5120), dtype=torch.float32)
+ mask = torch.ones((1, 1, 5120), dtype=torch.bool)
+ mask[:, :, 2560:] = False
+ print("Input sum:\t", input_ids.sum())
+ print("Mask sum:\t", mask.sum())
+
+ output, embedding, dataset_emb = model(input_ids, mask)
+ print("Output shape:\t", output.sum())
```
</details>
can be applied like
```bash
# save above as compare.patch
git apply compare.patch
```
run the model
```bash
.venv/bin/python src/state/emb/nn/model.py
```
output
```txt
!!! Using Flash Attention !!!
Input sum: tensor(-38.6611)
Mask sum: tensor(2560)
Output shape: tensor(-19.6819, grad_fn=<SumBackward0>)
``` | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39480/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39480/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39479/comments | https://api.github.com/repos/huggingface/transformers/issues/39479/events | https://github.com/huggingface/transformers/issues/39479 | 3,239,882,578 | I_kwDOCUB6oc7BHK9S | 39,479 | Adding Space-Time-MiniLM-v0 | {
"login": "haidarjomaa",
"id": 130698588,
"node_id": "U_kgDOB8pNXA",
"avatar_url": "https://avatars.githubusercontent.com/u/130698588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haidarjomaa",
"html_url": "https://github.com/haidarjomaa",
"followers_url": "https://api.github.com/users/haidarjomaa/followers",
"following_url": "https://api.github.com/users/haidarjomaa/following{/other_user}",
"gists_url": "https://api.github.com/users/haidarjomaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haidarjomaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haidarjomaa/subscriptions",
"organizations_url": "https://api.github.com/users/haidarjomaa/orgs",
"repos_url": "https://api.github.com/users/haidarjomaa/repos",
"events_url": "https://api.github.com/users/haidarjomaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/haidarjomaa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-07-17T14:51:15 | 2025-07-18T09:22:13 | null | NONE | null | null | null | null | ### Model description
The model is an encoder used for sentence embedding. It uses a custom attention mechanism to incorporate space-time information into the embeddings, which would be especially helpful in retrieval scenarios.
This has not been done before, and no such model is available yet. We aim for this to be the start of a series of models for this task.
The closest existing model is all-MiniLM-L6-v2, since our model was built on top of its knowledge.
It uses the same tokenizer as MiniLM.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39479/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39478/comments | https://api.github.com/repos/huggingface/transformers/issues/39478/events | https://github.com/huggingface/transformers/pull/39478 | 3,239,741,168 | PR_kwDOCUB6oc6fY7As | 39,478 | Fix Bark failing tests | {
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-17T14:09:03 | 2025-09-08T18:24:55 | 2025-09-08T18:24:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39478",
"html_url": "https://github.com/huggingface/transformers/pull/39478",
"diff_url": "https://github.com/huggingface/transformers/pull/39478.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39478.patch",
"merged_at": "2025-09-08T18:24:51"
} | # Goal of this PR
There are three failing tests for Bark; see [here](https://github.com/huggingface/transformers/actions/runs/16282903241/job/45976157467) for the modeling ones.
---
## 1) `tests/models/bark/test_modeling_bark.py::BarkModelIntegrationTests::test_generate_end_to_end_with_args`
```bash
FAILED tests/models/bark/test_modeling_bark.py::BarkModelIntegrationTests::test_generate_end_to_end_with_args - RuntimeError: shape '[1, 518400]' is invalid for input of size 40192
```
Fails because the vocab size is wrongly configured. By default it takes [this value](https://github.com/huggingface/transformers/blob/565dd0bad74a46d85c41e2d870f803d9e7a1a94e/src/transformers/generation/utils.py#L3992), but this is the _input vocab size_ for Bark (instead of output vocab size), which causes failure when reshaping [here](https://github.com/huggingface/transformers/blob/565dd0bad74a46d85c41e2d870f803d9e7a1a94e/src/transformers/generation/utils.py#L4128).
So a new condition should be added to extract the correct value.
@gante I see a [TODO](https://github.com/huggingface/transformers/blob/2b819ba4e383dd5b82d6d251039e30c3e94761b1/src/transformers/generation/utils.py#L3986) about standardizing the special cases. Is the proposed solution acceptable for now?
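For illustration, one possible shape for such a condition as a standalone sketch (the attribute name `output_vocab_size` and the dummy numbers are assumptions for illustration, not taken from this PR's diff):

```python
def resolve_generation_vocab_size(config):
    """Prefer an explicit output vocab size over the generic `vocab_size`.

    Generic models define a single `vocab_size`; Bark-style sub-model configs
    distinguish input and output vocabularies, and generation must use the
    output one when reshaping logits.
    """
    output_size = getattr(config, "output_vocab_size", None)
    if output_size is not None:
        return output_size
    return config.vocab_size


class DummyBarkSubConfig:
    vocab_size = 10_048         # illustrative input vocab size
    output_vocab_size = 1_056   # illustrative output vocab size


class DummyPlainConfig:
    vocab_size = 32_000


print(resolve_generation_vocab_size(DummyBarkSubConfig()))  # 1056
print(resolve_generation_vocab_size(DummyPlainConfig()))    # 32000
```

The point is only that the lookup must branch on the presence of a dedicated output vocab size instead of always using `config.vocab_size`.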
## 2) RESOLVED ~~`tests/models/bark/test_modeling_bark.py::BarkModelIntegrationTests::test_generate_batching`~~
```bash
FAILED tests/models/bark/test_modeling_bark.py::BarkModelIntegrationTests::test_generate_batching - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
Related to https://github.com/huggingface/transformers/issues/34634 and fixed by https://github.com/huggingface/transformers/pull/38985
## 3) `tests/models/bark/test_processor_bark.py::BarkProcessorTest::test_save_load_pretrained_additional_features`
This one is very subtle. Fails on first run with no cached models:
```bash
src/transformers/models/bark/processing_bark.py:208: ValueError
---------------------------------- Captured stderr call -------------------------------
HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
Retrying in 1s [Retry 1/5].
HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
Retrying in 2s [Retry 2/5].
HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
Retrying in 4s [Retry 3/5].
HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
Retrying in 8s [Retry 4/5].
HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
Retrying in 8s [Retry 5/5].
HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
----------------------------------- Captured log call -----------------------------------
WARNING huggingface_hub.utils._http:_http.py:315 HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
WARNING huggingface_hub.utils._http:_http.py:332 Retrying in 1s [Retry 1/5].
WARNING huggingface_hub.utils._http:_http.py:315 HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
WARNING huggingface_hub.utils._http:_http.py:332 Retrying in 2s [Retry 2/5].
WARNING huggingface_hub.utils._http:_http.py:315 HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
WARNING huggingface_hub.utils._http:_http.py:332 Retrying in 4s [Retry 3/5].
WARNING huggingface_hub.utils._http:_http.py:315 HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
WARNING huggingface_hub.utils._http:_http.py:332 Retrying in 8s [Retry 4/5].
WARNING huggingface_hub.utils._http:_http.py:315 HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
WARNING huggingface_hub.utils._http:_http.py:332 Retrying in 8s [Retry 5/5].
WARNING huggingface_hub.utils._http:_http.py:315 HTTP Error 429 thrown while requesting HEAD https://huggingface.co/ylacombe/bark-small/resolve/main/speaker_embeddings/pl_speaker_3_semantic_prompt.npy
=============================================================================== short test summary info ===============================================================================
FAILED tests/models/bark/test_processor_bark.py::BarkProcessorTest::test_save_load_pretrained_additional_features - ValueError: `ylacombe/bark-small/speaker_embeddings/pl_speaker_3_semantic_prompt.npy` does not exists
```
But it may pass after multiple tries (perhaps because the missing files get downloaded along the way).
This could explain issues like the OP of https://github.com/huggingface/transformers/issues/34634, where a voice preset may not be available on the first try.
Also, this error helped me realize that the official checkpoints still point to `ylacombe`'s checkpoints. I opened PRs on the Hub to fix this. | {
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39478/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39477/comments | https://api.github.com/repos/huggingface/transformers/issues/39477/events | https://github.com/huggingface/transformers/pull/39477 | 3,239,710,408 | PR_kwDOCUB6oc6fY0U_ | 39,477 | Fix pylint warnings | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T14:00:18 | 2025-08-02T15:04:58 | 2025-07-21T12:38:06 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39477",
"html_url": "https://github.com/huggingface/transformers/pull/39477",
"diff_url": "https://github.com/huggingface/transformers/pull/39477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39477.patch",
"merged_at": "2025-07-21T12:38:06"
} | # What does this PR do?
A general clean-up to fix various pylint warnings, as a prerequisite for enabling most of the pylint-derived ruff rules. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39477/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39476/comments | https://api.github.com/repos/huggingface/transformers/issues/39476/events | https://github.com/huggingface/transformers/issues/39476 | 3,239,675,467 | I_kwDOCUB6oc7BGYZL | 39,476 | Allow `load_best_model_at_end=True` to work when `save_steps < eval_steps` and best model is saved | {
"login": "KrisTHL181",
"id": 93204914,
"node_id": "U_kgDOBY4xsg",
"avatar_url": "https://avatars.githubusercontent.com/u/93204914?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KrisTHL181",
"html_url": "https://github.com/KrisTHL181",
"followers_url": "https://api.github.com/users/KrisTHL181/followers",
"following_url": "https://api.github.com/users/KrisTHL181/following{/other_user}",
"gists_url": "https://api.github.com/users/KrisTHL181/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KrisTHL181/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KrisTHL181/subscriptions",
"organizations_url": "https://api.github.com/users/KrisTHL181/orgs",
"repos_url": "https://api.github.com/users/KrisTHL181/repos",
"events_url": "https://api.github.com/users/KrisTHL181/events{/privacy}",
"received_events_url": "https://api.github.com/users/KrisTHL181/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-17T13:49:53 | 2025-07-24T11:52:55 | null | NONE | null | null | null | null | ### Feature request
Allow `load_best_model_at_end=True` to work even when `save_steps` is not a round multiple of `eval_steps`, and optionally preserve the best model even when reaching `save_total_limit`.
This change would remove the current restriction that enforces `save_steps` to be a multiple of `eval_steps` when `load_best_model_at_end=True`. Additionally, it proposes an optional flag to prevent deletion of the best model when the total number of saved checkpoints exceeds the limit.
No specific paper is associated with this feature. This is a usability improvement based on common user workflows and constraints.
### Motivation
Users with limited disk space (e.g., Colab users) often want to:
- Save more frequently (e.g., `save_steps=100`) to avoid losing progress
- Evaluate less frequently (e.g., `eval_steps=200`) to save compute
- Still be able to load the best model at the end using `load_best_model_at_end=True`
Currently, this is not possible unless `save_steps` is a multiple of `eval_steps`, which is unnecessarily restrictive. The restriction could be lifted by simply ensuring that the best model is saved at least once during training, regardless of the save/eval frequency ratio.
Additionally, users may want to keep the best model even when reaching the `save_total_limit`, which currently may cause the best model to be deleted.
This request is related to the discussion in Hugging Face Transformers GitHub issues, where users have reported frustration over this limitation.
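For illustration, the restriction can be mirrored in a standalone sketch (this only mimics the validation in `TrainingArguments`; the real check may differ in detail):

```python
def config_is_accepted(save_steps: int, eval_steps: int,
                       load_best_model_at_end: bool) -> bool:
    """Mimic the current rule: with `load_best_model_at_end=True`,
    `save_steps` must be a round multiple of `eval_steps`."""
    if not load_best_model_at_end:
        return True
    return save_steps % eval_steps == 0


# Saving less often than evaluating is accepted today...
print(config_is_accepted(save_steps=200, eval_steps=100,
                         load_best_model_at_end=True))  # True
# ...but the workflow described above (save often, evaluate rarely) is rejected.
print(config_is_accepted(save_steps=100, eval_steps=200,
                         load_best_model_at_end=True))  # False
```

Lifting the restriction would mean accepting the second configuration and simply ensuring the best model gets saved at least once.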
### Your contribution
Although I currently lack the resources to submit a PR myself, I'm happy to support the discussion and help refine the proposal. I believe contributions go beyond code: asking questions, sharing feedback, and helping others in the community are also valuable ways to contribute.
I encourage others who are interested in this feature to join the discussion or take up the implementation. I'm also happy to test or provide input if someone decides to work on it.
In the meantime, I'll continue to support the project by spreading the word and showing appreciation for the library's impact. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39476/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39475/comments | https://api.github.com/repos/huggingface/transformers/issues/39475/events | https://github.com/huggingface/transformers/pull/39475 | 3,239,395,352 | PR_kwDOCUB6oc6fXvYR | 39,475 | Update the auto batch size we expect to see in transformers tests | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T12:24:55 | 2025-07-17T12:37:54 | 2025-07-17T12:29:41 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39475",
"html_url": "https://github.com/huggingface/transformers/pull/39475",
"diff_url": "https://github.com/huggingface/transformers/pull/39475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39475.patch",
"merged_at": null
} | https://github.com/huggingface/accelerate/pull/3684 changed accelerate's behaviour to back off to a 90% batch size instead of 50% when a batch is too large for memory. We need to update our tests so they know about that! | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39475/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39474/comments | https://api.github.com/repos/huggingface/transformers/issues/39474/events | https://github.com/huggingface/transformers/pull/39474 | 3,239,155,519 | PR_kwDOCUB6oc6fW55m | 39,474 | Kernels flash attn | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6202871275,
"node_id": "LA_kwDOCUB6oc8AAAABcbhN6w",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flash%20Attention",
"name": "Flash Attention",
"color": "201FF8",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-17T11:09:00 | 2025-08-02T04:06:25 | 2025-07-22T13:41:07 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39474",
"html_url": "https://github.com/huggingface/transformers/pull/39474",
"diff_url": "https://github.com/huggingface/transformers/pull/39474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39474.patch",
"merged_at": "2025-07-22T13:41:06"
} | # What does this PR do?
```bash
pip install transformers[torch] kernels
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import time
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-3.2-3B-Instruct",
device_map="auto",
torch_dtype="auto",
attn_implementation="flash_attention_2",
).eval()
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(
["Hello, how are you?", "is this life?"],
padding=True,
padding_side="left",
return_tensors="pt",
).to(model.device)
start = time.time()
outputs = model.generate(**inputs, max_new_tokens=50)
print(f"Generation time: {time.time() - start:.2f} seconds")
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
model.set_attn_implementation("kernels-community/flash-attn3")
start = time.time()
outputs = model.generate(**inputs, max_new_tokens=50)
print(f"Generation time: {time.time() - start:.2f} seconds")
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39474/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39474/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39473/comments | https://api.github.com/repos/huggingface/transformers/issues/39473/events | https://github.com/huggingface/transformers/issues/39473 | 3,239,113,467 | I_kwDOCUB6oc7BEPL7 | 39,473 | Unexpected behaviour with transformers versions above 4.28 for Donut | {
"login": "mdavudov",
"id": 184212753,
"node_id": "U_kgDOCvrdEQ",
"avatar_url": "https://avatars.githubusercontent.com/u/184212753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mdavudov",
"html_url": "https://github.com/mdavudov",
"followers_url": "https://api.github.com/users/mdavudov/followers",
"following_url": "https://api.github.com/users/mdavudov/following{/other_user}",
"gists_url": "https://api.github.com/users/mdavudov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mdavudov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mdavudov/subscriptions",
"organizations_url": "https://api.github.com/users/mdavudov/orgs",
"repos_url": "https://api.github.com/users/mdavudov/repos",
"events_url": "https://api.github.com/users/mdavudov/events{/privacy}",
"received_events_url": "https://api.github.com/users/mdavudov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-17T10:54:33 | 2025-10-15T08:03:54 | 2025-10-15T08:03:54 | NONE | null | null | null | null | ### System Info
Hello,
Big thanks to all the contributors on this repo!
I would like to raise an issue that was initially encountered when running the example notebooks for Donut in Transformers Tutorials (https://github.com/NielsRogge/Transformers-Tutorials) by @NielsRogge. This issue was previously raised on that repo, but the author advised re-raising it here. Original issue: https://github.com/NielsRogge/Transformers-Tutorials/issues/496#issuecomment-2955991546
**Bug**:
The bug was encountered when trying to reproduce results from this notebook: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Donut/CORD/Fine_tune_Donut_on_a_custom_dataset_(CORD)_with_PyTorch_Lightning.ipynb
When using newer versions of `transformers`, there is strange behaviour during training: the model shows much higher validation edit distance values than expected. This is fixed by downgrading to version `4.28.1` or `4.25`.
Reference code uses the following classes from `transformers`:
- `DonutProcessor`
- `VisionEncoderDecoderModel`
- `VisionEncoderDecoderConfig`
The difference can be seen in the attached screenshot, where the red line shows the validation edit distance metric when running on `4.28.1` and the orange one when running on `4.36.0`.
Were any changes introduced after `4.28.1` that could be causing this, and are there any known ways of fixing it?
<img width="3026" height="356" alt="Image" src="https://github.com/user-attachments/assets/6b209a3c-3dd9-4a3a-9b45-a339d381cac7" />
**Environment**
Output of `transformers env` for `4.28.1`:
```
- `transformers` version: 4.28.1
- Platform: Linux-6.1.134-152.225.amzn2023.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.12
- Huggingface_hub version: 0.32.4
- Safetensors version: 0.5.3
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
```
for `4.36.0` (version where issue is encountered):
```
- `transformers` version: 4.36.0
- Platform: Linux-6.1.134-152.225.amzn2023.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.12
- Huggingface_hub version: 0.32.4
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
```
Thank you for your time, and please let me know what I can do on my end to make it easier to diagnose the issue more precisely.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The bug was encountered when trying to reproduce results from this notebook:
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Donut/CORD/Fine_tune_Donut_on_a_custom_dataset_(CORD)_with_PyTorch_Lightning.ipynb
To reproduce:
1. Follow the notebook as-is; this will install the latest version of transformers
2. Continue until the training step and run the training
3. Observe unexpectedly high validation edit distance metrics
To fix:
1. Pin the transformers version to `4.28.1`
2. Run the notebook again
3. You should observe a much lower validation edit distance metric
### Expected behavior
I expect the training behaviour to be similar on newer versions of `transformers` and the performance not to degrade so drastically. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39473/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39473/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39472/comments | https://api.github.com/repos/huggingface/transformers/issues/39472/events | https://github.com/huggingface/transformers/pull/39472 | 3,239,059,128 | PR_kwDOCUB6oc6fWkoa | 39,472 | Add unified logits_to_keep support to LLMClass | {
"login": "hellopahe",
"id": 34388421,
"node_id": "MDQ6VXNlcjM0Mzg4NDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/34388421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hellopahe",
"html_url": "https://github.com/hellopahe",
"followers_url": "https://api.github.com/users/hellopahe/followers",
"following_url": "https://api.github.com/users/hellopahe/following{/other_user}",
"gists_url": "https://api.github.com/users/hellopahe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hellopahe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hellopahe/subscriptions",
"organizations_url": "https://api.github.com/users/hellopahe/orgs",
"repos_url": "https://api.github.com/users/hellopahe/repos",
"events_url": "https://api.github.com/users/hellopahe/events{/privacy}",
"received_events_url": "https://api.github.com/users/hellopahe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T10:37:07 | 2025-07-17T15:07:12 | 2025-07-17T15:07:12 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39472",
"html_url": "https://github.com/huggingface/transformers/pull/39472",
"diff_url": "https://github.com/huggingface/transformers/pull/39472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39472.patch",
"merged_at": "2025-07-17T15:07:12"
} | # What does this PR do?
Add `logits_to_keep` arguments to `Glm4vForConditionalGeneration` and `Qwen2_5_VLForConditionalGeneration` class.
Fixes # (issue)
The `trl` repository now assumes that the Transformers library supports the `logits_to_keep` parameter starting from v4.49.0, since the required slicing step was removed in `grpo_trainer`. See the changes in [`grpo_trainer.py`](https://github.com/huggingface/trl/commit/b9572737b4796c6a72bacfd9ed78c686763725ea).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39472/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39471/comments | https://api.github.com/repos/huggingface/transformers/issues/39471/events | https://github.com/huggingface/transformers/pull/39471 | 3,238,902,116 | PR_kwDOCUB6oc6fWCCH | 39,471 | Rename `_supports_flash_attn_2` in examples and tests | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T09:47:15 | 2025-07-21T12:02:58 | 2025-07-21T12:02:57 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39471",
"html_url": "https://github.com/huggingface/transformers/pull/39471",
"diff_url": "https://github.com/huggingface/transformers/pull/39471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39471.patch",
"merged_at": "2025-07-21T12:02:57"
} | # What does this PR do?
As per the title; I forgot to rename the attribute in the tests. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39471/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39470/comments | https://api.github.com/repos/huggingface/transformers/issues/39470/events | https://github.com/huggingface/transformers/pull/39470 | 3,238,874,517 | PR_kwDOCUB6oc6fV74o | 39,470 | [idefics3] fix for vLLM | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T09:39:10 | 2025-07-23T12:00:44 | 2025-07-23T12:00:43 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39470",
"html_url": "https://github.com/huggingface/transformers/pull/39470",
"diff_url": "https://github.com/huggingface/transformers/pull/39470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39470.patch",
"merged_at": "2025-07-23T12:00:43"
} | # What does this PR do?
As per the title, the recent integration tests are failing because Idefics has an edge case: we can't assume `num_cols == num_rows`, because Idefics can output non-square images.
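As an illustration of why the square assumption breaks, the row and column counts of the image grid are derived independently from height and width (the patch size and image dimensions below are made-up values, not Idefics3's actual processing parameters):

```python
import math

# Illustrative only: patch size and image dimensions are assumptions,
# not the actual Idefics3 processing values.
def grid_shape(height, width, patch=364):
    # Rows and columns are computed independently from each dimension.
    return math.ceil(height / patch), math.ceil(width / patch)

rows, cols = grid_shape(728, 1092)
print(rows, cols)  # 2 3 -- a non-square grid, so num_cols == num_rows cannot be assumed
```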
See https://github.com/vllm-project/vllm/pull/20543#discussion_r2212501398 for more | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39470/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39469/comments | https://api.github.com/repos/huggingface/transformers/issues/39469/events | https://github.com/huggingface/transformers/pull/39469 | 3,238,777,293 | PR_kwDOCUB6oc6fVme8 | 39,469 | Update integration_utils.py | {
"login": "zhaiji0727",
"id": 74641156,
"node_id": "MDQ6VXNlcjc0NjQxMTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/74641156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaiji0727",
"html_url": "https://github.com/zhaiji0727",
"followers_url": "https://api.github.com/users/zhaiji0727/followers",
"following_url": "https://api.github.com/users/zhaiji0727/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaiji0727/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaiji0727/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaiji0727/subscriptions",
"organizations_url": "https://api.github.com/users/zhaiji0727/orgs",
"repos_url": "https://api.github.com/users/zhaiji0727/repos",
"events_url": "https://api.github.com/users/zhaiji0727/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaiji0727/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T09:09:13 | 2025-07-17T13:58:24 | 2025-07-17T13:57:50 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39469",
"html_url": "https://github.com/huggingface/transformers/pull/39469",
"diff_url": "https://github.com/huggingface/transformers/pull/39469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39469.patch",
"merged_at": "2025-07-17T13:57:50"
} | sanitize mlflow uploaded metrics
# What does this PR do?
Sanitize metrics into MLflow-supported values before uploading.
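A minimal sketch of what such sanitization could look like — MLflow metric values must be numeric, so non-numeric or non-finite entries are filtered out (the exact rules applied by this PR may differ):

```python
import math

def sanitize_metrics(metrics):
    # Keep only finite numeric values, which MLflow accepts as metrics;
    # booleans are excluded since they are not meaningful metric values.
    clean = {}
    for key, value in metrics.items():
        if isinstance(value, (int, float)) and not isinstance(value, bool) and math.isfinite(value):
            clean[key] = float(value)
    return clean

print(sanitize_metrics({"loss": 0.5, "run_name": "exp-1", "bad": float("nan")}))
# {'loss': 0.5}
```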
## Before submitting
- [N] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Y] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [N] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [Y] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [N] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39469/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39468/comments | https://api.github.com/repos/huggingface/transformers/issues/39468/events | https://github.com/huggingface/transformers/pull/39468 | 3,238,770,036 | PR_kwDOCUB6oc6fVk3u | 39,468 | Fix quantized model dispatch with device_map='auto' | {
"login": "Krish0909",
"id": 134591243,
"node_id": "U_kgDOCAWzCw",
"avatar_url": "https://avatars.githubusercontent.com/u/134591243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Krish0909",
"html_url": "https://github.com/Krish0909",
"followers_url": "https://api.github.com/users/Krish0909/followers",
"following_url": "https://api.github.com/users/Krish0909/following{/other_user}",
"gists_url": "https://api.github.com/users/Krish0909/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Krish0909/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Krish0909/subscriptions",
"organizations_url": "https://api.github.com/users/Krish0909/orgs",
"repos_url": "https://api.github.com/users/Krish0909/repos",
"events_url": "https://api.github.com/users/Krish0909/events{/privacy}",
"received_events_url": "https://api.github.com/users/Krish0909/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-17T09:06:48 | 2025-07-17T09:20:47 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39468",
"html_url": "https://github.com/huggingface/transformers/pull/39468",
"diff_url": "https://github.com/huggingface/transformers/pull/39468.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39468.patch",
"merged_at": null
} | # What does this PR do?
Fixes issue #39461 where quantized models fail to load with `device_map="auto"` on Linux but work on Windows.
**Root cause:** `dispatch_model` calls `model.to(device)` on bitsandbytes quantized models, which isn't supported. Platform differences in device detection/handling cause this code path to be hit on Linux but not Windows.
**Fix:** Skip `dispatch_model` for bitsandbytes quantized models since they're already correctly placed during quantization.
Fixes #39461
please verify quantization experts (bitsandbytes, autogpt): @SunMarc @MekkCyber | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39468/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39467/comments | https://api.github.com/repos/huggingface/transformers/issues/39467/events | https://github.com/huggingface/transformers/pull/39467 | 3,238,610,920 | PR_kwDOCUB6oc6fVByj | 39,467 | Fix typing order | {
"login": "Tavish9",
"id": 73541181,
"node_id": "MDQ6VXNlcjczNTQxMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/73541181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tavish9",
"html_url": "https://github.com/Tavish9",
"followers_url": "https://api.github.com/users/Tavish9/followers",
"following_url": "https://api.github.com/users/Tavish9/following{/other_user}",
"gists_url": "https://api.github.com/users/Tavish9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tavish9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tavish9/subscriptions",
"organizations_url": "https://api.github.com/users/Tavish9/orgs",
"repos_url": "https://api.github.com/users/Tavish9/repos",
"events_url": "https://api.github.com/users/Tavish9/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tavish9/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T08:16:08 | 2025-07-18T00:49:02 | 2025-07-17T15:47:31 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39467",
"html_url": "https://github.com/huggingface/transformers/pull/39467",
"diff_url": "https://github.com/huggingface/transformers/pull/39467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39467.patch",
"merged_at": "2025-07-17T15:47:31"
} | # What does this PR do?
- This PR fixes the typing order changed by #38797
Fixes #39462
**Importing order matters**
Before #38797, `Union[str, Dict]` was accepted for the type `Union[dict, str]`.
When changing `Dict` to `dict`, `Union[str, dict]` would take precedence over `Union[dict, str]` if it is imported earlier than `TrainingArguments`.
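The order-sensitivity can be probed directly — a minimal sketch of the underlying typing behavior (the actual failure involves `HfArgumentParser` inspecting `TrainingArguments` annotations):

```python
from typing import Union, get_args

# Unions compare equal regardless of argument order...
assert Union[str, dict] == Union[dict, str]

# ...but the tuple returned by get_args() preserves a concrete order, so
# code that inspects the args positionally (e.g. treating args[0] as the
# "primary" type) behaves differently depending on which ordering it sees.
print(get_args(Union[dict, str]))
```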
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@zach-huggingface, @SunMarc and @qgallouedec | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39467/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39467/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39466/comments | https://api.github.com/repos/huggingface/transformers/issues/39466/events | https://github.com/huggingface/transformers/pull/39466 | 3,238,432,508 | PR_kwDOCUB6oc6fUamD | 39,466 | README: Update Bert Japanese model card | {
"login": "KeshavSingh29",
"id": 130352102,
"node_id": "U_kgDOB8UD5g",
"avatar_url": "https://avatars.githubusercontent.com/u/130352102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KeshavSingh29",
"html_url": "https://github.com/KeshavSingh29",
"followers_url": "https://api.github.com/users/KeshavSingh29/followers",
"following_url": "https://api.github.com/users/KeshavSingh29/following{/other_user}",
"gists_url": "https://api.github.com/users/KeshavSingh29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KeshavSingh29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeshavSingh29/subscriptions",
"organizations_url": "https://api.github.com/users/KeshavSingh29/orgs",
"repos_url": "https://api.github.com/users/KeshavSingh29/repos",
"events_url": "https://api.github.com/users/KeshavSingh29/events{/privacy}",
"received_events_url": "https://api.github.com/users/KeshavSingh29/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-17T07:17:25 | 2025-08-06T19:01:01 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39466",
"html_url": "https://github.com/huggingface/transformers/pull/39466",
"diff_url": "https://github.com/huggingface/transformers/pull/39466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39466.patch",
"merged_at": null
} | # What does this PR do?
As mentioned in #36979, this contributes to updating HF model cards, specifically the BERT Japanese model card.
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39466/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39465/comments | https://api.github.com/repos/huggingface/transformers/issues/39465/events | https://github.com/huggingface/transformers/pull/39465 | 3,238,410,762 | PR_kwDOCUB6oc6fUV6N | 39,465 | [gemma3] support sequence classification task | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-17T07:08:29 | 2025-07-21T09:03:20 | 2025-07-21T09:03:20 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39465",
"html_url": "https://github.com/huggingface/transformers/pull/39465",
"diff_url": "https://github.com/huggingface/transformers/pull/39465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39465.patch",
"merged_at": "2025-07-21T09:03:20"
} | # What does this PR do?
As per the title. We can't copy from llama or any other LLM because Gemma3 needs to read its params from `text_config` and needs to accept extra vision kwargs in `forward`. The code was therefore adapted from llama, and the tests are green.
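For context on what such a head does: llama-style sequence-classification heads pool the hidden state of the last non-padding token in each sequence. A framework-free sketch of that index computation (names are illustrative, not the actual transformers internals):

```python
# Sketch: how llama-style sequence-classification heads locate the token
# to pool. For right-padded inputs, the last non-pad position in each row
# of a 1/0 attention mask is sum(mask_row) - 1.

def last_nonpad_index(attention_mask_row):
    """Return the index of the last non-padding token in a 1/0 mask row."""
    return sum(attention_mask_row) - 1

batch_mask = [
    [1, 1, 1, 0, 0],  # 3 real tokens -> pool index 2
    [1, 1, 1, 1, 1],  # no padding    -> pool index 4
]
pool_indices = [last_nonpad_index(row) for row in batch_mask]
print(pool_indices)  # [2, 4]
```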
Fixes https://github.com/huggingface/transformers/issues/36755 | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39465/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39464 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39464/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39464/comments | https://api.github.com/repos/huggingface/transformers/issues/39464/events | https://github.com/huggingface/transformers/pull/39464 | 3,238,159,588 | PR_kwDOCUB6oc6fTesl | 39,464 | Skipping `initialize_weights` when model is quantized | {
"login": "DWarez",
"id": 10366381,
"node_id": "MDQ6VXNlcjEwMzY2Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/10366381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DWarez",
"html_url": "https://github.com/DWarez",
"followers_url": "https://api.github.com/users/DWarez/followers",
"following_url": "https://api.github.com/users/DWarez/following{/other_user}",
"gists_url": "https://api.github.com/users/DWarez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DWarez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DWarez/subscriptions",
"organizations_url": "https://api.github.com/users/DWarez/orgs",
"repos_url": "https://api.github.com/users/DWarez/repos",
"events_url": "https://api.github.com/users/DWarez/events{/privacy}",
"received_events_url": "https://api.github.com/users/DWarez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-17T05:20:49 | 2025-07-26T10:06:43 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39464",
"html_url": "https://github.com/huggingface/transformers/pull/39464",
"diff_url": "https://github.com/huggingface/transformers/pull/39464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39464.patch",
"merged_at": null
} | # What does this PR do?
Avoids performing weight initialization when loading a quantized model. The previous implementation checked the `is_quantized` condition only when initializing weights with DeepSpeed.
Fixes #39366
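The guard being described can be sketched like this (dummy names for illustration, not the actual transformers internals):

```python
# Sketch of the guard this PR adds: skip weight initialization entirely
# when the model was loaded quantized, instead of checking is_quantized
# only in the deepspeed branch.

class DummyModel:
    def __init__(self, is_quantized):
        self.is_quantized = is_quantized
        self.initialized = False

    def initialize_weights(self):
        self.initialized = True

def maybe_initialize(model):
    # Quantized weights are already materialized; re-initializing
    # them would corrupt the loaded checkpoint.
    if getattr(model, "is_quantized", False):
        return model
    model.initialize_weights()
    return model

print(maybe_initialize(DummyModel(is_quantized=True)).initialized)   # False
print(maybe_initialize(DummyModel(is_quantized=False)).initialized)  # True
```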
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc @zach-huggingface
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39464/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39463/comments | https://api.github.com/repos/huggingface/transformers/issues/39463/events | https://github.com/huggingface/transformers/issues/39463 | 3,237,879,858 | I_kwDOCUB6oc7A_iAy | 39,463 | can't torch.export.export tinyllama model | {
"login": "heshuju",
"id": 30929784,
"node_id": "MDQ6VXNlcjMwOTI5Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/30929784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heshuju",
"html_url": "https://github.com/heshuju",
"followers_url": "https://api.github.com/users/heshuju/followers",
"following_url": "https://api.github.com/users/heshuju/following{/other_user}",
"gists_url": "https://api.github.com/users/heshuju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heshuju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heshuju/subscriptions",
"organizations_url": "https://api.github.com/users/heshuju/orgs",
"repos_url": "https://api.github.com/users/heshuju/repos",
"events_url": "https://api.github.com/users/heshuju/events{/privacy}",
"received_events_url": "https://api.github.com/users/heshuju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-17T02:33:19 | 2025-07-17T06:04:35 | 2025-07-17T06:04:35 | NONE | null | null | null | null | ### System Info
I need to convert the model into IR for further processing, but I've hit a problem.
I'm encountering several similar failures where torch.dynamo is unable to trace certain model objects and methods. Could this be due to incorrect usage on my part? What would be the recommended approach to resolve this?
Versions:
- PyTorch: 2.4 (CPU)
- transformers: 4.53.2

Error message:
```
/bin/python /home/x/iree/tinyllama/demo.py
Traceback (most recent call last):
File "/home/x/iree/tinyllama/demo.py", line 55, in <module>
prefill_ep = export(prefill_wrapper,
File "/home/x/.local/lib/python3.10/site-packages/torch/export/__init__.py", line 174, in export
return _export(
File "/home/x/.local/lib/python3.10/site-packages/torch/export/_trace.py", line 945, in wrapper
raise e
File "/home/x/.local/lib/python3.10/site-packages/torch/export/_trace.py", line 928, in wrapper
ep = fn(*args, **kwargs)
......
torch._dynamo.exc.Unsupported: call_method GetAttrVariable(UserDefinedObjectVariable(AttentionMaskInterface), _global_mapping) __contains__ [ConstantVariable()] {}
from user code:
File "/home/x/iree/tinyllama/demo.py", line 17, in forward
out = self.model(input_ids=input_ids,
File "/home/x/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/x/.local/lib/python3.10/site-packages/transformers/utils/generic.py", line 943, in wrapper
output = func(self, *args, **kwargs)
File "/home/x/.local/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 553, in forward
outputs: BaseModelOutputWithPast = self.model(
File "/home/x/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/x/.local/lib/python3.10/site-packages/transformers/utils/generic.py", line 943, in wrapper
output = func(self, *args, **kwargs)
File "/home/x/.local/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 419, in forward
causal_mask = create_causal_mask(
File "/home/x/.local/lib/python3.10/site-packages/transformers/masking_utils.py", line 753, in create_causal_mask
early_exit, attention_mask, packed_sequence_mask, kv_length, kv_offset = _preprocess_mask_arguments(
File "/home/x/.local/lib/python3.10/site-packages/transformers/masking_utils.py", line 683, in _preprocess_mask_arguments
if config._attn_implementation not in ALL_MASK_ATTENTION_FUNCTIONS._global_mapping:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoModelForCausalLM
from torch.export import export, Dim
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.eval()
class PrefillWrapper(torch.nn.Module):
def __init__(self, model):
super().__init__()
self.model = model
def forward(self, input_ids, attention_mask):
out = self.model(input_ids=input_ids,
attention_mask=attention_mask,
use_cache=True)
return (out.logits, *out.past_key_values)
class DecodeWrapper(torch.nn.Module):
def __init__(self, model):
super().__init__()
self.model = model
def forward(self, input_ids, attention_mask, *flat_kvs):
n_layers = len(flat_kvs) // 2
pkv = tuple((flat_kvs[2*i], flat_kvs[2*i+1]) for i in range(n_layers))
out = self.model(input_ids=input_ids,
attention_mask=attention_mask,
past_key_values=pkv,
use_cache=True)
return (out.logits, *out.past_key_values)
prefill_wrapper = PrefillWrapper(model)
decode_wrapper = DecodeWrapper(model)
B, T = 2, 16
vocab = 32000
input_ids = torch.randint(0, vocab, (B, T), dtype=torch.long)
attention_mask = torch.ones((B, T), dtype=torch.long)
batch = Dim("batch", min=1, max=4)
seq = Dim("seq", min=1, max=1024)
prefill_dyn = {
"input_ids": {0: batch, 1: seq},
"attention_mask": {0: batch, 1: seq},
}
prefill_ep = export(prefill_wrapper,
args=(input_ids, attention_mask),
dynamic_shapes=prefill_dyn)
with torch.no_grad():
prefill_out = prefill_wrapper(input_ids, attention_mask)
past_key_values = prefill_out[1:] # (k0,v0,k1,v1,...)
decode_input_ids = input_ids[:, -1:] # [B, 1]
decode_attention_mask = attention_mask # [B, T]
kv_seq = Dim("kv_seq", min=1, max=1024)
decode_dyn = {
"input_ids": {0: batch, 1: Dim("one", min=1, max=1)},
"attention_mask": {0: batch, 1: seq},
}
for idx, kv in enumerate(past_key_values):
decode_dyn[f"flat_kvs_{idx}"] = {0: batch, 2: kv_seq}
decode_ep = export(decode_wrapper,
args=(decode_input_ids, decode_attention_mask, *past_key_values),
dynamic_shapes=decode_dyn)
```
### Expected behavior
I expect it to export two IRs: prefill and decode.
| {
"login": "heshuju",
"id": 30929784,
"node_id": "MDQ6VXNlcjMwOTI5Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/30929784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heshuju",
"html_url": "https://github.com/heshuju",
"followers_url": "https://api.github.com/users/heshuju/followers",
"following_url": "https://api.github.com/users/heshuju/following{/other_user}",
"gists_url": "https://api.github.com/users/heshuju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heshuju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heshuju/subscriptions",
"organizations_url": "https://api.github.com/users/heshuju/orgs",
"repos_url": "https://api.github.com/users/heshuju/repos",
"events_url": "https://api.github.com/users/heshuju/events{/privacy}",
"received_events_url": "https://api.github.com/users/heshuju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39463/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39462/comments | https://api.github.com/repos/huggingface/transformers/issues/39462/events | https://github.com/huggingface/transformers/issues/39462 | 3,237,863,617 | I_kwDOCUB6oc7A_eDB | 39,462 | HfArgumentParser cannot parse `str` for local path | {
"login": "Tavish9",
"id": 73541181,
"node_id": "MDQ6VXNlcjczNTQxMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/73541181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tavish9",
"html_url": "https://github.com/Tavish9",
"followers_url": "https://api.github.com/users/Tavish9/followers",
"following_url": "https://api.github.com/users/Tavish9/following{/other_user}",
"gists_url": "https://api.github.com/users/Tavish9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tavish9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tavish9/subscriptions",
"organizations_url": "https://api.github.com/users/Tavish9/orgs",
"repos_url": "https://api.github.com/users/Tavish9/repos",
"events_url": "https://api.github.com/users/Tavish9/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tavish9/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-17T02:22:34 | 2025-07-17T15:47:32 | 2025-07-17T15:47:32 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.52.4
- `transformers` version: 4.53.x
- `transformers` version: 4.54.0.dev0
---
- Platform: Linux-5.10.134-008.16.kangaroo.al8.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.17
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
- case1:
```python
from transformers import (
HfArgumentParser,
Trainer,
TrainingArguments,
)
def main():
parser = HfArgumentParser(TrainingArguments)
parser.parse_args_into_dataclasses()[0]
print("parse success")
if __name__ == "__main__":
main()
```
```bash
python test.py --deepspeed ./zero1.json
test.py: error: argument --deepspeed: invalid dict value: './zero1.json'
```
- case2:
```python
from transformers import (
HfArgumentParser,
TrainingArguments,
)
def main():
parser = HfArgumentParser(TrainingArguments)
parser.parse_args_into_dataclasses()[0]
print("parse success")
if __name__ == "__main__":
main()
```
```bash
python test.py --deepspeed ./zero1.json
parse success
```
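Until the regression is fixed, one possible client-side workaround is a custom argparse type that accepts either an inline JSON object or a plain path string. A hedged sketch (not the actual transformers fix; `dict_or_path` is a hypothetical helper):

```python
# Workaround sketch: a type function that parses inline JSON when the
# value looks like a dict literal, and otherwise keeps the raw string
# as a filesystem path -- mirroring what --deepspeed expects.
import argparse
import json

def dict_or_path(value):
    """Parse inline JSON if it looks like a dict, else keep the path string."""
    stripped = value.strip()
    if stripped.startswith("{"):
        return json.loads(stripped)
    return value  # treat as a path to a JSON config file

parser = argparse.ArgumentParser()
parser.add_argument("--deepspeed", type=dict_or_path)

args = parser.parse_args(["--deepspeed", "./zero1.json"])
print(args.deepspeed)  # ./zero1.json

args = parser.parse_args(["--deepspeed", '{"zero_optimization": {"stage": 1}}'])
print(args.deepspeed["zero_optimization"]["stage"])  # 1
```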
### Expected behavior
- Both case 1 and case 2 should parse the local path successfully.
- `transformers<=4.52.4` works well
- `transformers>=4.53.0` fails when importing `Trainer` | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39462/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39462/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39461/comments | https://api.github.com/repos/huggingface/transformers/issues/39461/events | https://github.com/huggingface/transformers/issues/39461 | 3,237,828,546 | I_kwDOCUB6oc7A_VfC | 39,461 | I can't make sense of this works on Windows but not on Linux AutoModelForCausalLM.from_pretrained | {
"login": "FurkanGozukara",
"id": 19240467,
"node_id": "MDQ6VXNlcjE5MjQwNDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FurkanGozukara",
"html_url": "https://github.com/FurkanGozukara",
"followers_url": "https://api.github.com/users/FurkanGozukara/followers",
"following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}",
"gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions",
"organizations_url": "https://api.github.com/users/FurkanGozukara/orgs",
"repos_url": "https://api.github.com/users/FurkanGozukara/repos",
"events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}",
"received_events_url": "https://api.github.com/users/FurkanGozukara/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3716956449,
"node_id": "LA_kwDOCUB6oc7djEEh",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Quantization",
"name": "Quantization",
"color": "971399",
"default": false,
"description": ""
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-17T01:57:42 | 2025-07-17T17:16:54 | 2025-07-17T17:16:53 | NONE | null | null | null | null |
I am using the code below with exactly the same venv on Windows and Linux; it works on Windows but fails on Linux.
Error:
```
1:50:23 - INFO - Attempting to load CogVLM2 model with quantization: 4 on device: cuda
01:50:23 - INFO - Loading tokenizer from: /home/Ubuntu/Downloads/upscale/STAR/models/cogvlm2-video-llama3-chat
01:50:23 - INFO - Tokenizer loaded successfully.
01:50:23 - INFO - Preparing to load model from: /home/Ubuntu/Downloads/upscale/STAR/models/cogvlm2-video-llama3-chat with quant: 4, dtype: torch.bfloat16, device: cuda, device_map: auto, low_cpu_mem: True
01:50:23 - INFO - Starting model loading - this operation cannot be interrupted once started
/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
Loading checkpoint shards: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 6/6 [00:08<00:00, 1.47s/steps]
01:50:33 - ERROR - Failed to load CogVLM2 model from path: /home/Ubuntu/Downloads/upscale/STAR/models/cogvlm2-video-llama3-chat
01:50:33 - ERROR - Exception type: ValueError
01:50:33 - ERROR - Exception details: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
Traceback (most recent call last):
File "/home/Ubuntu/Downloads/upscale/STAR/logic/cogvlm_utils.py", line 160, in load_cogvlm_model
raise model_loading_result["error"]
File "/home/Ubuntu/Downloads/upscale/STAR/logic/cogvlm_utils.py", line 122, in load_model_thread
model = AutoModelForCausalLM.from_pretrained(
File "/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
return model_class.from_pretrained(
File "/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4000, in from_pretrained
dispatch_model(model, **device_map_kwargs)
File "/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/accelerate/big_modeling.py", line 502, in dispatch_model
model.to(device)
File "/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2849, in to
raise ValueError(
ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
01:50:33 - ERROR - Error during auto-captioning: 'Could not load CogVLM2 model (check logs for details): `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.'
Traceback (most recent call last):
File "/home/Ubuntu/Downloads/upscale/STAR/logic/cogvlm_utils.py", line 160, in load_cogvlm_model
raise model_loading_result["error"]
File "/home/Ubuntu/Downloads/upscale/STAR/logic/cogvlm_utils.py", line 122, in load_model_thread
model = AutoModelForCausalLM.from_pretrained(
File "/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
return model_class.from_pretrained(
File "/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4000, in from_pretrained
dispatch_model(model, **device_map_kwargs)
File "/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/accelerate/big_modeling.py", line 502, in dispatch_model
model.to(device)
File "/home/Ubuntu/Downloads/upscale/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2849, in to
raise ValueError(
ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/Ubuntu/Downloads/upscale/STAR/logic/cogvlm_utils.py", line 478, in auto_caption
local_model_ref, local_tokenizer_ref = load_cogvlm_model(quantization, cogvlm_device, cog_vlm_model_path, logger)
File "/home/Ubuntu/Downloads/upscale/STAR/logic/cogvlm_utils.py", line 243, in load_cogvlm_model
raise gr.Error(f"Could not load CogVLM2 model (check logs for details): {str(e)[:200]}")
gradio.exceptions.Error: 'Could not load CogVLM2 model (check logs for details): `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.'
01:50:33 - INFO - Unloading CogVLM2 model with strategy: full
01:50:33 - INFO - CogVLM2 model/tokenizer not loaded or already unloaded.
```
My model-loading code, followed by the entire `pip freeze`, is below:
```
bnb_config = None
model_dtype = torch.bfloat16 if (device == 'cuda' and torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8) else torch.float16
if BITSANDBYTES_AVAILABLE and device == 'cuda':
if quantization == 4:
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=model_dtype)
elif quantization == 8:
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
elif quantization in [4, 8] and device != 'cuda':
if logger:
logger.warning("BitsAndBytes quantization is only available on CUDA. Loading in FP16/BF16.")
quantization = 0
current_device_map = None
if bnb_config and device == 'cuda':
current_device_map = "auto"
effective_low_cpu_mem_usage = True if bnb_config else False
if logger:
logger.info(f"Preparing to load model from: {cog_vlm_model_path} with quant: {quantization}, dtype: {model_dtype}, device: {device}, device_map: {current_device_map}, low_cpu_mem: {effective_low_cpu_mem_usage}")
# Final check for cancellation before model loading (this is the longest operation)
cancellation_manager.check_cancel("before model loading")
# Note to user that model loading cannot be interrupted once started
if logger:
logger.info("Starting model loading - this operation cannot be interrupted once started")
# Use a separate thread for model loading with periodic cancellation checks
model_loading_result = {"model": None, "error": None, "cancelled_before_start": False}
model_loading_complete = threading.Event()
def load_model_thread():
try:
# One final check right before starting the loading
if cancellation_manager.is_cancelled():
model_loading_result["cancelled_before_start"] = True
model_loading_complete.set()
return
### START OF FIX ###
# Build arguments for from_pretrained conditionally
from_pretrained_kwargs = {
"trust_remote_code": True,
"quantization_config": bnb_config,
"low_cpu_mem_usage": effective_low_cpu_mem_usage,
"device_map": current_device_map
}
# Only specify torch_dtype for non-quantized models.
# For BNB models, the dtype is handled by the quantization_config
# and passing it explicitly can cause conflicts.
if not bnb_config:
from_pretrained_kwargs["torch_dtype"] = model_dtype if device == 'cuda' else torch.float32
model = AutoModelForCausalLM.from_pretrained(
cog_vlm_model_path,
**from_pretrained_kwargs
)
```
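The rule the traceback enforces can be sketched as plain kwargs-building logic (a hypothetical helper, not actual transformers/accelerate code): once a bitsandbytes quantization config plus a `device_map` is in play, the model is already placed on its devices, and any later `model.to(device)` raises exactly this ValueError.

```python
# Sketch of the loading rule behind the error. When bitsandbytes
# quantization is requested on CUDA, placement and dtype belong to the
# quantization path; the caller (and the library) must not call
# model.to(device) afterwards.

def build_load_kwargs(quantization, device):
    kwargs = {"trust_remote_code": True}
    use_bnb = device == "cuda" and quantization in (4, 8)
    if use_bnb:
        # bitsandbytes handles device placement and dtype itself
        kwargs["device_map"] = "auto"
        kwargs["load_in_4bit"] = quantization == 4
        kwargs["load_in_8bit"] = quantization == 8
    else:
        kwargs["torch_dtype"] = "float16"
    # When use_bnb is True, a later model.to(device) is invalid.
    return kwargs, use_bnb

print(build_load_kwargs(4, "cuda"))
print(build_load_kwargs(0, "cpu"))
```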
```absl-py==2.3.1
accelerate==1.8.1
addict==2.4.0
aiofiles==24.1.0
aiohappyeyeballs==2.6.1
aiohttp==3.12.14
aiosignal==1.4.0
annotated-types==0.7.0
anyio==4.9.0
async-timeout==5.0.1
attrs==25.3.0
av==15.0.0
basicsr==1.4.2
bitsandbytes==0.46.1
Brotli==1.1.0
certifi==2025.7.14
charset-normalizer==3.4.2
click==8.2.1
decorator==5.2.1
decord==0.6.0
deepspeed==0.16.4
diffusers==0.34.0
easydict==1.13
einops==0.8.1
exceptiongroup==1.3.0
fairscale==0.4.13
fastapi==0.116.1
ffmpeg==1.4
ffmpy==0.6.0
filelock==3.18.0
flash-attn @ https://huggingface.co/MonsterMMORPG/SECourses_Premium_Flash_Attention/resolve/main/flash_attn-2.7.4.post1-cp310-cp310-linux_x86_64.whl#sha256=8978824a0affa29999428ee0cc87a4de3085e84ee91b612bf81ad5acd440ced7
frozenlist==1.7.0
fsspec==2025.7.0
ftfy==6.3.1
future==1.0.0
fvcore==0.1.5.post20221221
gradio==5.37.0
gradio_client==1.10.4
groovy==0.1.2
grpcio==1.73.1
h11==0.16.0
hf-xet==1.1.5
hf_transfer==0.1.9
hjson==3.1.0
httpcore==1.0.9
httpx==0.28.1
huggingface-hub==0.33.4
idna==3.10
imageio==2.37.0
imageio-ffmpeg==0.6.0
importlib_metadata==8.7.0
iopath==0.1.10
Jinja2==3.1.6
lazy_loader==0.4
lightning-utilities==0.14.3
lmdb==1.7.2
lpips==0.1.4
Markdown==3.8.2
markdown-it-py==3.0.0
MarkupSafe==3.0.2
mdurl==0.1.2
memory-efficient-attention-pytorch==0.1.6
moviepy==2.2.1
mpmath==1.3.0
msgpack==1.1.1
multidict==6.6.3
networkx==3.4.2
ninja==1.11.1.4
numpy==1.26.4
nvidia-cublas-cu12==12.8.3.14
nvidia-cuda-cupti-cu12==12.8.57
nvidia-cuda-nvrtc-cu12==12.8.61
nvidia-cuda-runtime-cu12==12.8.57
nvidia-cudnn-cu12==9.7.1.26
nvidia-cufft-cu12==11.3.3.41
nvidia-cufile-cu12==1.13.0.11
nvidia-curand-cu12==10.3.9.55
nvidia-cusolver-cu12==11.7.2.55
nvidia-cusparse-cu12==12.5.7.53
nvidia-cusparselt-cu12==0.6.3
nvidia-nccl-cu12==2.26.2
nvidia-nvjitlink-cu12==12.8.61
nvidia-nvtx-cu12==12.8.55
open-clip-torch==2.20.0
opencv-python==4.11.0.86
orjson==3.11.0
packaging==25.0
pandas==2.3.1
parameterized==0.9.0
pillow==11.3.0
platformdirs==4.3.8
portalocker==3.2.0
proglog==0.1.12
propcache==0.3.2
protobuf==3.20.3
psutil==7.0.0
py-cpuinfo==9.0.0
pydantic==2.11.7
pydantic_core==2.33.2
pydub==0.25.1
pyee==13.0.0
Pygments==2.19.2
python-dateutil==2.9.0.post0
python-dotenv==1.1.1
python-ffmpeg==2.0.12
python-multipart==0.0.20
pytorch-lightning==2.5.2
pytorchvideo @ https://huggingface.co/MonsterMMORPG/SECourses_Premium_Flash_Attention/resolve/main/pytorchvideo-0.1.5-py3-none-any.whl#sha256=514a2ef4a68c5a7302e96b1d2b81ac0aeacbd1f63577ffa464fd145add1cc639
pytz==2025.2
PyYAML==6.0.2
regex==2024.11.6
requests==2.32.4
rich==14.0.0
ruff==0.12.3
safehttpx==0.1.6
safetensors==0.5.3
scenedetect @ git+https://github.com/Breakthrough/PySceneDetect.git@34dffabd8666bf4cbb94ff1995f74fcf593eb368
scikit-image==0.25.2
scipy==1.15.3
semantic-version==2.10.0
sentencepiece==0.2.0
shellingham==1.5.4
six==1.17.0
sk-video==1.1.10
sniffio==1.3.1
spandrel==0.4.1
starlette==0.47.1
sympy==1.14.0
tabulate==0.9.0
tb-nightly==2.20.0a20250716
tensorboard-data-server==0.7.2
termcolor==3.1.0
tifffile==2025.5.10
timm==1.0.17
tokenizers==0.19.1
tomli==2.2.1
tomlkit==0.13.3
torch==2.7.0+cu128
torchao==0.11.0+cu128
torchaudio==2.7.0+cu128
torchmetrics==1.7.4
torchsde==0.2.6
torchvision==0.22.0+cu128
tqdm==4.67.1
trampoline==0.1.2
transformers==4.43.4
triton==3.3.0
typer==0.16.0
typing-inspection==0.4.1
typing_extensions==4.14.1
tzdata==2025.2
urllib3==2.5.0
uvicorn==0.35.0
wcwidth==0.2.13
websockets==15.0.1
Werkzeug==3.1.3
xformers @ https://huggingface.co/MonsterMMORPG/SECourses_Premium_Flash_Attention/resolve/main/xformers-0.0.30+3abeaa9e.d20250427-cp310-cp310-linux_x86_64.whl#sha256=4ffe9a26923049b48076e4cd4c5004a2738e9539b2abdcf32775fcdbe3120c1d
yacs==0.1.8
yapf==0.43.0
yarl==1.20.1
zipp==3.23.0
```
### Who can help?
text models: @ArthurZucker
quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
| {
"login": "FurkanGozukara",
"id": 19240467,
"node_id": "MDQ6VXNlcjE5MjQwNDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FurkanGozukara",
"html_url": "https://github.com/FurkanGozukara",
"followers_url": "https://api.github.com/users/FurkanGozukara/followers",
"following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}",
"gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions",
"organizations_url": "https://api.github.com/users/FurkanGozukara/orgs",
"repos_url": "https://api.github.com/users/FurkanGozukara/repos",
"events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}",
"received_events_url": "https://api.github.com/users/FurkanGozukara/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39461/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39460/comments | https://api.github.com/repos/huggingface/transformers/issues/39460/events | https://github.com/huggingface/transformers/issues/39460 | 3,237,512,110 | I_kwDOCUB6oc7A-IOu | 39,460 | Autoformer get_lagged_subsequences always true if condition | {
"login": "Marco-Pi",
"id": 72951119,
"node_id": "MDQ6VXNlcjcyOTUxMTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/72951119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marco-Pi",
"html_url": "https://github.com/Marco-Pi",
"followers_url": "https://api.github.com/users/Marco-Pi/followers",
"following_url": "https://api.github.com/users/Marco-Pi/following{/other_user}",
"gists_url": "https://api.github.com/users/Marco-Pi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Marco-Pi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Marco-Pi/subscriptions",
"organizations_url": "https://api.github.com/users/Marco-Pi/orgs",
"repos_url": "https://api.github.com/users/Marco-Pi/repos",
"events_url": "https://api.github.com/users/Marco-Pi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Marco-Pi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-16T22:30:01 | 2025-08-24T08:03:05 | 2025-08-24T08:03:05 | NONE | null | null | null | null | ### System Info
transformers 4.53.2, MacOS, python 3.12.10
### Who can help?
@elisim @kashif
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
During training of the Autoformer model I get this error:
`ValueError: lags cannot go further than history length, found lag 37 while history length is only 70`
This exception is raised when calling the create_network_inputs function of the Autoformer class, specifically when that function calls the get_lagged_subsequences function (at the time of writing, [line 1411 of modeling_autoformer.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/autoformer/modeling_autoformer.py#L1411)):
`lagged_sequence = self.get_lagged_subsequences(sequence=inputs, subsequences_length=subsequences_length)`
The signature of get_lagged_subsequences is:
```python
def get_lagged_subsequences(
    self, sequence: torch.Tensor, subsequences_length: int, shift: int = 0
) -> torch.Tensor:
```
The arguments passed in the above function call have the following values:
- inputs is the concatenation of past and future values
- subsequences_length is the sum of the context length and the prediction length
- shift is kept at 0
Inside get_lagged_subsequences there is [this line of code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/autoformer/modeling_autoformer.py#L1305) (at the time of writing, line 1305):
`if max(indices) + subsequences_length > sequence_length:`
where:
- max(indices) is the maximum value of lags_sequence
- subsequences_length is the one passed as a parameter, i.e. the sum of the context length and the prediction length
- sequence_length is initialised as sequence_length = sequence.shape[1], so it is equal to the context_length
So in the best case, when max(indices) is equal to zero, the if condition becomes:
`0 + (context_length + prediction_length) > context_length`
Since the model always predicts at least one value, the prediction length is always greater than zero, so this condition is always true. The problem is that this blocks the code, because when the condition holds an exception is raised:
```python
if reshaped_lagged_sequence.shape[1] != time_feat.shape[1]:
    raise ValueError(
        f"input length {reshaped_lagged_sequence.shape[1]} and time feature lengths {time_feat.shape[1]} does not match"
    )
```
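The arithmetic above can be checked in isolation. Here is a minimal sketch with hypothetical lengths and lags (illustrative values, not taken from a real config):

```python
# Hypothetical values mirroring the analysis above (not from an actual config)
context_length = 60
prediction_length = 10
lags_sequence = [1, 12, 24]

indices = [lag - 1 for lag in lags_sequence]              # shift == 0
sequence_length = context_length                          # sequence.shape[1] per the analysis
subsequences_length = context_length + prediction_length

# Even in the best case (max(indices) == 0), prediction_length > 0
# pushes the left-hand side past sequence_length, so the check always fires:
print(max(indices) + subsequences_length > sequence_length)  # True
print(0 + subsequences_length > sequence_length)             # True
```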
Here is minimal code that reproduces the error. Note that the exception is raised for every value of lags_sequence; here I am using the default ones loaded from the huggingface/autoformer-tourism-monthly config:
```python
from transformers import AutoformerModel, AutoConfig, Trainer, TrainingArguments
from datasets import Dataset
import torch
import numpy as np
```
cfg = AutoConfig.from_pretrained("huggingface/autoformer-tourism-monthly")
model = AutoformerModel(cfg)
X, y, T, M, T_future = (np.random.rand(500, 60), np.random.rand(500, 10), np.random.rand(500, 60, 6), np.random.rand(500, 60), np.random.rand(500, 10, 6))
train_ds = Dataset.from_dict({
'past_values': X,
'future_values': y,
'past_time_features': T,
'past_observed_mask': M,
'future_time_features': T_future
})
training_args = TrainingArguments(
output_dir="./autoformer_checkpoints",
save_steps=100,
logging_steps=50,
num_train_epochs=50,
learning_rate=1e-4,
report_to="none"
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
)
trainer.train()
```
### Expected behavior
The expected behavior is to have a meaningful if condition, or to remove the if statement if it is not actually needed, in order to be able to train and use the Autoformer model. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39460/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39459/comments | https://api.github.com/repos/huggingface/transformers/issues/39459/events | https://github.com/huggingface/transformers/pull/39459 | 3,237,406,828 | PR_kwDOCUB6oc6fQ8Wf | 39,459 | fix a comment typo in utils.py | {
"login": "klimarissa17",
"id": 37802545,
"node_id": "MDQ6VXNlcjM3ODAyNTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/37802545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klimarissa17",
"html_url": "https://github.com/klimarissa17",
"followers_url": "https://api.github.com/users/klimarissa17/followers",
"following_url": "https://api.github.com/users/klimarissa17/following{/other_user}",
"gists_url": "https://api.github.com/users/klimarissa17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klimarissa17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klimarissa17/subscriptions",
"organizations_url": "https://api.github.com/users/klimarissa17/orgs",
"repos_url": "https://api.github.com/users/klimarissa17/repos",
"events_url": "https://api.github.com/users/klimarissa17/events{/privacy}",
"received_events_url": "https://api.github.com/users/klimarissa17/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T21:32:55 | 2025-07-17T13:06:05 | 2025-07-17T13:06:05 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39459",
"html_url": "https://github.com/huggingface/transformers/pull/39459",
"diff_url": "https://github.com/huggingface/transformers/pull/39459.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39459.patch",
"merged_at": "2025-07-17T13:06:05"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39459/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39458/comments | https://api.github.com/repos/huggingface/transformers/issues/39458/events | https://github.com/huggingface/transformers/pull/39458 | 3,237,377,943 | PR_kwDOCUB6oc6fQ2BE | 39,458 | Bump AMD container for 2.7.1 PyTorch | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T21:16:55 | 2025-07-22T10:11:39 | 2025-07-22T10:11:38 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39458",
"html_url": "https://github.com/huggingface/transformers/pull/39458",
"diff_url": "https://github.com/huggingface/transformers/pull/39458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39458.patch",
"merged_at": "2025-07-22T10:11:38"
} | # What does this PR do?
This PR bumps the base image for the AMD GPU container to get PyTorch version 2.7.1. Since a wheel package is not available yet, we need to pin to this specific container.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39458/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39457/comments | https://api.github.com/repos/huggingface/transformers/issues/39457/events | https://github.com/huggingface/transformers/pull/39457 | 3,237,148,080 | PR_kwDOCUB6oc6fQDOh | 39,457 | [Paged-Attention] Handle continuous batching for repetition penalty | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T19:45:16 | 2025-07-22T16:13:45 | 2025-07-22T16:13:41 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39457",
"html_url": "https://github.com/huggingface/transformers/pull/39457",
"diff_url": "https://github.com/huggingface/transformers/pull/39457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39457.patch",
"merged_at": "2025-07-22T16:13:41"
} | # What does this PR do?
We handle the repetition penalty for continuous batching.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39457/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39456/comments | https://api.github.com/repos/huggingface/transformers/issues/39456/events | https://github.com/huggingface/transformers/pull/39456 | 3,236,588,848 | PR_kwDOCUB6oc6fOH8a | 39,456 | Fix quantized model initialization for int8 dtypes | {
"login": "Krish0909",
"id": 134591243,
"node_id": "U_kgDOCAWzCw",
"avatar_url": "https://avatars.githubusercontent.com/u/134591243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Krish0909",
"html_url": "https://github.com/Krish0909",
"followers_url": "https://api.github.com/users/Krish0909/followers",
"following_url": "https://api.github.com/users/Krish0909/following{/other_user}",
"gists_url": "https://api.github.com/users/Krish0909/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Krish0909/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Krish0909/subscriptions",
"organizations_url": "https://api.github.com/users/Krish0909/orgs",
"repos_url": "https://api.github.com/users/Krish0909/repos",
"events_url": "https://api.github.com/users/Krish0909/events{/privacy}",
"received_events_url": "https://api.github.com/users/Krish0909/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-16T16:26:30 | 2025-07-16T18:15:10 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39456",
"html_url": "https://github.com/huggingface/transformers/pull/39456",
"diff_url": "https://github.com/huggingface/transformers/pull/39456.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39456.patch",
"merged_at": null
} | # Fix quantized model initialization for int8 dtypes
This PR resolves a critical issue where loading quantized models (particularly llmcompressor W8A8 models) fails with:
RuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8
**Root Cause**: The model initialization code calls `normal_()` on int8 tensors during weight initialization, but PyTorch only supports this operation on floating-point tensors.
**Solution**: Skip weight initialization for quantized models since their weights are already loaded from checkpoints. Changed the conditional in `_load_pretrained_model` from `else:` to `elif not is_quantized:` to prevent calling `initialize_weights()` on quantized models.
**Impact**:
- Enables loading of llmcompressor quantized models without crashes
- No impact on non-quantized models
- Minimal, targeted fix with backward compatibility
Fixes the issue reported in the original GitHub discussion about RedHatAI/Qwen2.5-VL-7B-Instruct-quantized.w8a8 model loading.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? (No doc changes needed for this internal fix)
- [ ] Did you write any new necessary tests?
## Who can review?
@SunMarc @MekkCyber (quantization experts) | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39456/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39455/comments | https://api.github.com/repos/huggingface/transformers/issues/39455/events | https://github.com/huggingface/transformers/pull/39455 | 3,236,433,972 | PR_kwDOCUB6oc6fNmRD | 39,455 | Add eurobert | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-07-16T15:34:06 | 2025-08-20T11:44:10 | null | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39455",
"html_url": "https://github.com/huggingface/transformers/pull/39455",
"diff_url": "https://github.com/huggingface/transformers/pull/39455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39455.patch",
"merged_at": null
} | # What does this PR do?
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39455/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39454/comments | https://api.github.com/repos/huggingface/transformers/issues/39454/events | https://github.com/huggingface/transformers/pull/39454 | 3,236,260,908 | PR_kwDOCUB6oc6fNAYk | 39,454 | Transformers serve VLM | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T14:43:58 | 2025-07-23T15:03:20 | 2025-07-23T15:03:18 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39454",
"html_url": "https://github.com/huggingface/transformers/pull/39454",
"diff_url": "https://github.com/huggingface/transformers/pull/39454.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39454.patch",
"merged_at": "2025-07-23T15:03:18"
} | Add VLM support for `transformers serve`.
In case you're interested in playing around with it, Open WebUI is a good tool to work with.
Launching `transformers serve` with VLM support:
```
cd <PATH_TO_TRANSFORMERS>
git checkout transformers-serve-vlm
transformers serve --enable_cors
```
To install Open WebUI:
```
pip install open-webui
open-webui serve
```
To set it up to work with the transformers serve backend:
- Click on the top right icon
- Settings
- Connections
- Add connection
- Set URL to "http://localhost:8000/v1"
- Set Key to any non-empty string
- Verify connection
- All good!
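If you prefer to skip Open WebUI, you can also hit the endpoint directly. Below is a hypothetical request body following the OpenAI chat completions schema exposed by `transformers serve`; the model id and image URL are placeholders:

```python
import json

payload = {
    "model": "Qwen/Qwen2.5-VL-7B-Instruct",  # placeholder model id
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
}

# POST this as JSON to http://localhost:8000/v1/chat/completions
body = json.dumps(payload)
print(json.loads(body)["messages"][0]["role"])  # -> user
```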
You should now find a limited (but growing) list of available models:
<img width="779" height="416" alt="image" src="https://github.com/user-attachments/assets/1e709b26-e444-4708-80ed-f8c680266132" />
Enjoy chatting with the model! | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39454/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 8,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39454/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39453/comments | https://api.github.com/repos/huggingface/transformers/issues/39453/events | https://github.com/huggingface/transformers/pull/39453 | 3,236,195,290 | PR_kwDOCUB6oc6fMyCW | 39,453 | Update modernbertdecoder docs | {
"login": "orionw",
"id": 31665361,
"node_id": "MDQ6VXNlcjMxNjY1MzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/31665361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orionw",
"html_url": "https://github.com/orionw",
"followers_url": "https://api.github.com/users/orionw/followers",
"following_url": "https://api.github.com/users/orionw/following{/other_user}",
"gists_url": "https://api.github.com/users/orionw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orionw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orionw/subscriptions",
"organizations_url": "https://api.github.com/users/orionw/orgs",
"repos_url": "https://api.github.com/users/orionw/repos",
"events_url": "https://api.github.com/users/orionw/events{/privacy}",
"received_events_url": "https://api.github.com/users/orionw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T14:24:35 | 2025-07-21T23:40:23 | 2025-07-21T23:40:23 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39453",
"html_url": "https://github.com/huggingface/transformers/pull/39453",
"diff_url": "https://github.com/huggingface/transformers/pull/39453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39453.patch",
"merged_at": "2025-07-21T23:40:22"
} | # What does this PR do?
Updates the docs for the ModernBERTDecoder class to link to the paper and to the correct model name.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@stevhliu for docs, if you have a sec! | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39453/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39452/comments | https://api.github.com/repos/huggingface/transformers/issues/39452/events | https://github.com/huggingface/transformers/pull/39452 | 3,236,020,578 | PR_kwDOCUB6oc6fMLyt | 39,452 | Fix indentation bug in SmolVLM image processor causing KeyError | {
"login": "Krish0909",
"id": 134591243,
"node_id": "U_kgDOCAWzCw",
"avatar_url": "https://avatars.githubusercontent.com/u/134591243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Krish0909",
"html_url": "https://github.com/Krish0909",
"followers_url": "https://api.github.com/users/Krish0909/followers",
"following_url": "https://api.github.com/users/Krish0909/following{/other_user}",
"gists_url": "https://api.github.com/users/Krish0909/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Krish0909/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Krish0909/subscriptions",
"organizations_url": "https://api.github.com/users/Krish0909/orgs",
"repos_url": "https://api.github.com/users/Krish0909/repos",
"events_url": "https://api.github.com/users/Krish0909/events{/privacy}",
"received_events_url": "https://api.github.com/users/Krish0909/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T13:33:36 | 2025-07-16T15:59:29 | 2025-07-16T15:59:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39452",
"html_url": "https://github.com/huggingface/transformers/pull/39452",
"diff_url": "https://github.com/huggingface/transformers/pull/39452.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39452.patch",
"merged_at": "2025-07-16T15:59:29"
} | # What does this PR do?
This PR fixes an indentation bug in `SmolVLMImageProcessorFast` that causes a `KeyError` when `do_image_splitting=False`.
The issue was in line 434 where `split_images_grouped[shape] = stacked_images` was incorrectly indented outside the loop, causing only the last shape to be stored in the dictionary. When `reorder_images()` tries to access other shapes, it fails with a `KeyError`.
**Fix:** Move the assignment inside the loop with proper indentation (add 4 spaces).
**Before:**
```python
for shape, stacked_images in grouped_images.items():
# ... resize code ...
split_images_grouped[shape] = stacked_images # Outside loop - bug!
```
**After:**
```python
for shape, stacked_images in grouped_images.items():
# ... resize code ...
split_images_grouped[shape] = stacked_images # Inside loop - fixed!
```
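The failure mode is a general Python pitfall; here is a minimal standalone reproduction, independent of the image processor:

```python
grouped = {"a": 1, "b": 2}

# Bug: the assignment dedented outside the loop keeps only the last key.
buggy = {}
for shape, stacked in grouped.items():
    pass
buggy[shape] = stacked
print(sorted(buggy))  # -> ['b']

# Fix: the assignment inside the loop keeps every key.
fixed = {}
for shape, stacked in grouped.items():
    fixed[shape] = stacked
print(sorted(fixed))  # -> ['a', 'b']
```

Looking up a missing key in `buggy` afterwards is exactly the `KeyError` seen in `reorder_images()`.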
Fixes #39442
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@yonigozlan @amyeroberts @qubvel | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39452/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39451/comments | https://api.github.com/repos/huggingface/transformers/issues/39451/events | https://github.com/huggingface/transformers/pull/39451 | 3,235,745,933 | PR_kwDOCUB6oc6fLOI4 | 39,451 | Fix tests due to breaking change in accelerate | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T12:23:10 | 2025-07-17T12:51:51 | 2025-07-17T12:51:50 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39451",
"html_url": "https://github.com/huggingface/transformers/pull/39451",
"diff_url": "https://github.com/huggingface/transformers/pull/39451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39451.patch",
"merged_at": "2025-07-17T12:51:50"
} | # What does this PR do?
This PR updates trainer tests due to a breaking change in https://github.com/huggingface/accelerate/pull/3684.
To be merged once the next accelerate release is out. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39451/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39450/comments | https://api.github.com/repos/huggingface/transformers/issues/39450/events | https://github.com/huggingface/transformers/pull/39450 | 3,235,686,033 | PR_kwDOCUB6oc6fLBFo | 39,450 | Fix processor tests | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T12:03:53 | 2025-07-16T13:01:36 | 2025-07-16T13:01:35 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39450",
"html_url": "https://github.com/huggingface/transformers/pull/39450",
"diff_url": "https://github.com/huggingface/transformers/pull/39450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39450.patch",
"merged_at": "2025-07-16T13:01:35"
} | # What does this PR do?
Fixes a failing test reported by @vasqu.
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39450/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39449/comments | https://api.github.com/repos/huggingface/transformers/issues/39449/events | https://github.com/huggingface/transformers/pull/39449 | 3,235,556,347 | PR_kwDOCUB6oc6fKkPP | 39,449 | Fix logger warnings in Gemma model test files | {
"login": "ridima11",
"id": 153719980,
"node_id": "U_kgDOCSmUrA",
"avatar_url": "https://avatars.githubusercontent.com/u/153719980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ridima11",
"html_url": "https://github.com/ridima11",
"followers_url": "https://api.github.com/users/ridima11/followers",
"following_url": "https://api.github.com/users/ridima11/following{/other_user}",
"gists_url": "https://api.github.com/users/ridima11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ridima11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ridima11/subscriptions",
"organizations_url": "https://api.github.com/users/ridima11/orgs",
"repos_url": "https://api.github.com/users/ridima11/repos",
"events_url": "https://api.github.com/users/ridima11/events{/privacy}",
"received_events_url": "https://api.github.com/users/ridima11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-16T11:21:12 | 2025-07-16T11:46:44 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39449",
"html_url": "https://github.com/huggingface/transformers/pull/39449",
"diff_url": "https://github.com/huggingface/transformers/pull/39449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39449.patch",
"merged_at": null
This PR fixes logger warning issues in the test files for the Gemma, Gemma2, Gemma3, and Gemma3n models under tests/models/. It replaces the deprecated `logger.warn()` calls with `logger.warning()` to remove deprecation warnings during test runs.
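The change can be demonstrated with the standard library alone, independent of the Gemma test files: `Logger.warn()` emits a `DeprecationWarning`, while `Logger.warning()` does not.

```python
import logging
import warnings

logger = logging.getLogger("demo")
logger.addHandler(logging.NullHandler())  # avoid "no handler" noise

# The deprecated spelling triggers a DeprecationWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    logger.warn("old spelling")
print(any(issubclass(w.category, DeprecationWarning) for w in caught))  # -> True

# The preferred spelling is silent.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    logger.warning("preferred spelling")
print(len(caught))  # -> 0
```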
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39449/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39448/comments | https://api.github.com/repos/huggingface/transformers/issues/39448/events | https://github.com/huggingface/transformers/pull/39448 | 3,235,436,988 | PR_kwDOCUB6oc6fKJIG | 39,448 | [`CI`] Fix partially red CI | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T10:48:11 | 2025-07-16T13:53:44 | 2025-07-16T13:53:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39448",
"html_url": "https://github.com/huggingface/transformers/pull/39448",
"diff_url": "https://github.com/huggingface/transformers/pull/39448.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39448.patch",
"merged_at": "2025-07-16T13:53:43"
- #37358 modified a test that now fails
- Running `make fix-copies` revealed some strange behavior with the new ModernBERT docstrings -> "Args:..." is not necessary | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39448/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39448/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39447/comments | https://api.github.com/repos/huggingface/transformers/issues/39447/events | https://github.com/huggingface/transformers/pull/39447 | 3,235,331,175 | PR_kwDOCUB6oc6fJxYa | 39,447 | [qwen2 vl] fix packing with all attentions | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T10:15:45 | 2025-07-24T20:50:16 | 2025-07-21T10:19:15 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39447",
"html_url": "https://github.com/huggingface/transformers/pull/39447",
"diff_url": "https://github.com/huggingface/transformers/pull/39447.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39447.patch",
"merged_at": "2025-07-21T10:19:15"
} | # What does this PR do?
As per the title; let's merge https://github.com/huggingface/transformers/pull/39341/files first. Added tests to make sure it works. If users want to use packing with any of the attention implementations, it is their responsibility to prepare correct position ids, since Qwen-VL needs positions for all vision grids
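For context on "correct position ids": with packing, several sequences share one row, and each packed sequence must restart its positions at 0. A minimal 1-D sketch is below (illustrative helper, not part of this PR; Qwen-VL additionally needs its richer positions over vision grids, which is not shown here):

```python
import torch

def packed_position_ids(lengths):
    # Each packed sequence restarts its positions at 0.
    return torch.cat([torch.arange(n) for n in lengths])

print(packed_position_ids([3, 2]))  # tensor([0, 1, 2, 0, 1])
```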
Fixes https://github.com/huggingface/transformers/issues/39400, https://github.com/huggingface/transformers/issues/38007 | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39447/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39447/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39446/comments | https://api.github.com/repos/huggingface/transformers/issues/39446/events | https://github.com/huggingface/transformers/pull/39446 | 3,235,311,729 | PR_kwDOCUB6oc6fJtB0 | 39,446 | Add StableAdamW Optimizer | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T10:10:15 | 2025-07-16T11:36:03 | 2025-07-16T11:35:53 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39446",
"html_url": "https://github.com/huggingface/transformers/pull/39446",
"diff_url": "https://github.com/huggingface/transformers/pull/39446.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39446.patch",
"merged_at": "2025-07-16T11:35:53"
} | # What does this PR do?
Supersedes https://github.com/huggingface/transformers/pull/36606 as the GitHub CI is broken | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39446/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39445/comments | https://api.github.com/repos/huggingface/transformers/issues/39445/events | https://github.com/huggingface/transformers/pull/39445 | 3,235,172,156 | PR_kwDOCUB6oc6fJOTy | 39,445 | Fix missing definition of diff_file_url in notification service | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T09:26:01 | 2025-07-16T10:09:18 | 2025-07-16T10:09:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39445",
"html_url": "https://github.com/huggingface/transformers/pull/39445",
"diff_url": "https://github.com/huggingface/transformers/pull/39445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39445.patch",
"merged_at": "2025-07-16T10:09:18"
} | # What does this PR do?
Yesterday's PR to track the performed tests in CI (https://github.com/huggingface/transformers/pull/39198) introduced a small error; this PR fixes it:
https://github.com/huggingface/transformers/actions/runs/16309221900/job/46061828202
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39445/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39444/comments | https://api.github.com/repos/huggingface/transformers/issues/39444/events | https://github.com/huggingface/transformers/pull/39444 | 3,235,108,273 | PR_kwDOCUB6oc6fJAOb | 39,444 | docs: add missing numpy import to minimal example | {
"login": "IliasAarab",
"id": 69244507,
"node_id": "MDQ6VXNlcjY5MjQ0NTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/69244507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IliasAarab",
"html_url": "https://github.com/IliasAarab",
"followers_url": "https://api.github.com/users/IliasAarab/followers",
"following_url": "https://api.github.com/users/IliasAarab/following{/other_user}",
"gists_url": "https://api.github.com/users/IliasAarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IliasAarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IliasAarab/subscriptions",
"organizations_url": "https://api.github.com/users/IliasAarab/orgs",
"repos_url": "https://api.github.com/users/IliasAarab/repos",
"events_url": "https://api.github.com/users/IliasAarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/IliasAarab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T09:07:00 | 2025-07-16T11:57:49 | 2025-07-16T11:57:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39444",
"html_url": "https://github.com/huggingface/transformers/pull/39444",
"diff_url": "https://github.com/huggingface/transformers/pull/39444.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39444.patch",
"merged_at": "2025-07-16T11:57:13"
} | # What does this PR do?
Add missing numpy import to minimal example
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39444/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39443/comments | https://api.github.com/repos/huggingface/transformers/issues/39443/events | https://github.com/huggingface/transformers/pull/39443 | 3,234,623,695 | PR_kwDOCUB6oc6fHWl8 | 39,443 | enable triton backend on awq xpu | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T06:14:36 | 2025-07-23T12:11:13 | 2025-07-23T12:10:39 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39443",
"html_url": "https://github.com/huggingface/transformers/pull/39443",
"diff_url": "https://github.com/huggingface/transformers/pull/39443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39443.patch",
"merged_at": "2025-07-23T12:10:38"
IPEX is going to be deprecated. Since autoawq is already archived, we can only change transformers to enable the triton kernel on XPU. Only XPU changed from ipex to triton, as the AWQ triton kernel does not work on CPU. | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39443/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39442/comments | https://api.github.com/repos/huggingface/transformers/issues/39442/events | https://github.com/huggingface/transformers/issues/39442 | 3,234,523,893 | I_kwDOCUB6oc7Ayur1 | 39,442 | Missing 4 spaces in SmolVLMImageProcessorFast | {
"login": "gnobitab",
"id": 1157982,
"node_id": "MDQ6VXNlcjExNTc5ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1157982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gnobitab",
"html_url": "https://github.com/gnobitab",
"followers_url": "https://api.github.com/users/gnobitab/followers",
"following_url": "https://api.github.com/users/gnobitab/following{/other_user}",
"gists_url": "https://api.github.com/users/gnobitab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gnobitab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gnobitab/subscriptions",
"organizations_url": "https://api.github.com/users/gnobitab/orgs",
"repos_url": "https://api.github.com/users/gnobitab/repos",
"events_url": "https://api.github.com/users/gnobitab/events{/privacy}",
"received_events_url": "https://api.github.com/users/gnobitab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-16T05:29:06 | 2025-07-16T15:59:30 | 2025-07-16T15:59:30 | NONE | null | null | null | null | ### System Info
transformers==4.53.0; the bug still exists in the latest GitHub version
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Set do_image_splitting=False
### Expected behavior
It reports: `KeyError: torch.Size([590, 2048])`
I checked the code and I believe four spaces of indentation are missing on this line:
https://github.com/huggingface/transformers/blob/0dc2df5ddafe3cb5824ad24e85beba13e0aa6726/src/transformers/models/smolvlm/image_processing_smolvlm_fast.py#L434
I fixed the indentation and it runs well on my local machine. Could you have a look and fix this in the next version? Thanks. @yonigozlan
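As a generic illustration (hypothetical code, unrelated to the actual SmolVLM processor) of how a four-space slip can turn into a `KeyError`: a lookup that belongs inside a loop, once de-indented, runs after the loop against keys that may never have been stored:

```python
# Hypothetical sketch: a dict is filled inside a loop over image groups
# and must also be read inside that loop.
store = {}
groups = {(590, 2048): "a", (512, 512): "b"}
collected = []
for shape, tag in groups.items():
    store[shape] = tag.upper()
    collected.append(store[shape])  # correctly indented: key always present

# De-indented by four spaces, the lookup would run once after the loop,
# so any shape queried that was never iterated over raises KeyError:
try:
    store[(590, 1024)]
except KeyError as err:
    caught = err
print(collected)  # ['A', 'B']
```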
| {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39442/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39441/comments | https://api.github.com/repos/huggingface/transformers/issues/39441/events | https://github.com/huggingface/transformers/pull/39441 | 3,234,376,689 | PR_kwDOCUB6oc6fGg8t | 39,441 | ๐ [i18n-KO] Translated `perf_infer_gpu_multi.md` to Korean | {
"login": "luckyvickyricky",
"id": 75977640,
"node_id": "MDQ6VXNlcjc1OTc3NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/75977640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckyvickyricky",
"html_url": "https://github.com/luckyvickyricky",
"followers_url": "https://api.github.com/users/luckyvickyricky/followers",
"following_url": "https://api.github.com/users/luckyvickyricky/following{/other_user}",
"gists_url": "https://api.github.com/users/luckyvickyricky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckyvickyricky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckyvickyricky/subscriptions",
"organizations_url": "https://api.github.com/users/luckyvickyricky/orgs",
"repos_url": "https://api.github.com/users/luckyvickyricky/repos",
"events_url": "https://api.github.com/users/luckyvickyricky/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckyvickyricky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-16T04:12:27 | 2025-07-21T16:17:45 | 2025-07-21T16:14:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39441",
"html_url": "https://github.com/huggingface/transformers/pull/39441",
"diff_url": "https://github.com/huggingface/transformers/pull/39441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39441.patch",
"merged_at": "2025-07-21T16:14:15"
} | # What does this PR do?
Translated the `perf_infer_gpu_multi.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations
- [x] Grammar check
- [x] Review or add new terms to the glossary
- [x] Check inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas
## Who can review? (Initial)
May you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd, @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@stevhliu May you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39441/reactions",
"total_count": 8,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/39441/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39440/comments | https://api.github.com/repos/huggingface/transformers/issues/39440/events | https://github.com/huggingface/transformers/issues/39440 | 3,233,928,591 | I_kwDOCUB6oc7AwdWP | 39,440 | Add Interactive Multi-Modal Attention Visualization for Vision-Language Models | {
"login": "sisird864",
"id": 137139127,
"node_id": "U_kgDOCCyTtw",
"avatar_url": "https://avatars.githubusercontent.com/u/137139127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sisird864",
"html_url": "https://github.com/sisird864",
"followers_url": "https://api.github.com/users/sisird864/followers",
"following_url": "https://api.github.com/users/sisird864/following{/other_user}",
"gists_url": "https://api.github.com/users/sisird864/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sisird864/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sisird864/subscriptions",
"organizations_url": "https://api.github.com/users/sisird864/orgs",
"repos_url": "https://api.github.com/users/sisird864/repos",
"events_url": "https://api.github.com/users/sisird864/events{/privacy}",
"received_events_url": "https://api.github.com/users/sisird864/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-15T23:10:25 | 2025-07-30T02:47:21 | null | NONE | null | null | null | null | ### Feature request
Implement a comprehensive attention visualization toolkit specifically designed for multi-modal transformers that can:
- Visualize attention patterns between text tokens and image patches
- Show hierarchical attention across different transformer layers
- Support interactive exploration of attention heads
- Export visualizations for research papers and presentations
### Motivation
With the growing adoption of multi-modal models (CLIP, DALL-E, Flamingo, BLIP-2), understanding how these models attend to different modalities is crucial for debugging, interpretability, and research. Currently, the Transformers library lacks comprehensive tools for visualizing cross-modal attention patterns.
### Your contribution
I would like to submit a PR for this issue.
Technical Specification
Core Components:
1. Attention Extraction Module (~250 lines)
```
class MultiModalAttentionExtractor:
    def __init__(self, model, layer_indices=None):
        self.model = model
        self.layer_indices = layer_indices  # None = capture every layer
        self.attention_maps = {}
        self._register_hooks()

    def _register_hooks(self):
        # Attach forward hooks to the selected attention layers so their
        # weights are captured during the forward pass.
        ...

    def extract_attention(self, text_inputs, image_inputs):
        # Run a forward pass, then organize the captured attention
        # patterns by layer and modality.
        ...
```
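The hook-based capture that `_register_hooks` alludes to can be prototyped on plain PyTorch modules. A toy sketch follows (the class, module, and layer names here are illustrative assumptions, independent of any specific vision-language model):

```python
import torch
import torch.nn as nn

class AttentionRecorder:
    """Capture attention weights from nn.MultiheadAttention layers via forward hooks."""

    def __init__(self, model):
        self.attention_maps = {}
        self._handles = []
        for name, module in model.named_modules():
            if isinstance(module, nn.MultiheadAttention):
                self._handles.append(module.register_forward_hook(self._make_hook(name)))

    def _make_hook(self, name):
        def hook(module, inputs, output):
            # nn.MultiheadAttention returns (attn_output, attn_weights)
            # when called with need_weights=True.
            _, weights = output
            if weights is not None:
                self.attention_maps[name] = weights.detach()
        return hook

    def remove(self):
        for handle in self._handles:
            handle.remove()

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=True)
        return out

model = Toy()
rec = AttentionRecorder(model)
model(torch.randn(1, 5, 8))  # (batch, seq_len, embed_dim)
print(rec.attention_maps["attn"].shape)  # (1, 5, 5): heads averaged by default
```

The real extractor would locate the host model's attention modules by name rather than by `isinstance`, and would keep cross-attention maps separate from self-attention ones.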
2. Visualization Engine (~400 lines)
- Interactive heatmap generation for image-text attention
- Token-level attention flow diagrams
- Head-wise attention pattern comparison
- Support for different color schemes and normalization methods
3. Interactive Dashboard (~300 lines)
- Web-based interface using Gradio/Streamlit integration
- Real-time attention exploration
- Export functionality (PNG, SVG, JSON)
- Comparative visualization for multiple inputs
4. Model Compatibility Layer (~200 lines)
- Support for CLIP, BLIP, Flamingo, and other vision-language models
- Automatic attention layer detection
- Handling different attention mechanisms (self, cross, multi-head)

Implementation Example
```
from PIL import Image

from transformers import CLIPModel, CLIPProcessor
from transformers.visualization import MultiModalAttentionVisualizer  # proposed API

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
visualizer = MultiModalAttentionVisualizer(model)

# Process inputs
image = Image.open("cat.jpg")  # any RGB image
inputs = processor(
    text=["a photo of a cat"],
    images=image,
    return_tensors="pt"
)
# Generate visualization
attention_viz = visualizer.visualize(
inputs,
layer_indices=[6, 11], # Visualize specific layers
attention_type="cross", # Focus on cross-modal attention
interactive=True
)
# Launch interactive dashboard
attention_viz.show()
```
Key Features
- Layer-wise Analysis: Examine how attention patterns evolve through the network
- Head-wise Decomposition: Understand specialized roles of different attention heads
- Attention Statistics: Compute entropy, sparsity, and other metrics
- Comparative Mode: Compare attention patterns across different inputs or models
- Research Export: Generate publication-ready visualizations
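The attention-statistics feature could be computed straightforwardly once the maps are extracted. A sketch of entropy and one simple sparsity measure (`attention_stats` is a hypothetical helper; other sparsity definitions are equally reasonable):

```python
import numpy as np

def attention_stats(attn, eps=1e-9):
    """Per-row entropy (in nats) and a simple sparsity measure for one
    attention map of shape (queries, keys), rows summing to 1. Sparsity
    here is the fraction of weights below the uniform-attention level."""
    entropy = -(attn * np.log(attn + eps)).sum(axis=-1)
    threshold = 1.0 / attn.shape[-1]
    sparsity = (attn < threshold).mean()
    return entropy, sparsity

# A peaked attention row has low entropy; a uniform one is maximal (ln 4 here).
uniform = np.full((1, 4), 0.25)
peaked = np.array([[0.97, 0.01, 0.01, 0.01]])
ent_u, _ = attention_stats(uniform)
ent_p, _ = attention_stats(peaked)
print(ent_u[0] > ent_p[0])  # True: uniform attention has higher entropy
```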
Use Cases
- Research: Understanding model behavior and attention mechanisms
- Debugging: Identifying attention-related issues in multi-modal models
- Education: Teaching how vision-language models work
- Model Development: Optimizing attention mechanisms
Implementation Timeline
- Phase 1: Core attention extraction module and visualization engine
- Phase 2: Interactive dashboard and model compatibility layer
- Phase 3: Documentation, examples, and testing
Why This Feature Matters
As multi-modal models become increasingly prevalent in production applications, understanding their internal mechanics is crucial for debugging, optimization, and research. This tool would give researchers and practitioners direct insight into how vision-language models process and relate different modalities.
I'm excited to contribute this feature to enhance the interpretability of multi-modal models in the Transformers ecosystem. This tool would be valuable for both researchers and practitioners working with vision-language models.
Estimated Lines of Code: 1200-1500 lines (excluding tests and documentation) | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39440/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39440/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39439/comments | https://api.github.com/repos/huggingface/transformers/issues/39439/events | https://github.com/huggingface/transformers/pull/39439 | 3,233,504,634 | PR_kwDOCUB6oc6fDlE6 | 39,439 | Improve @auto_docstring doc and rename `args_doc.py` to `auto_docstring.py` | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-15T19:47:49 | 2025-07-18T18:00:35 | 2025-07-18T18:00:35 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39439",
"html_url": "https://github.com/huggingface/transformers/pull/39439",
"diff_url": "https://github.com/huggingface/transformers/pull/39439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39439.patch",
"merged_at": "2025-07-18T18:00:35"
} | # What does this PR do?
As the title says :).
Cc @Cyrilvallez @stevhliu | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39439/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39438/comments | https://api.github.com/repos/huggingface/transformers/issues/39438/events | https://github.com/huggingface/transformers/pull/39438 | 3,233,284,005 | PR_kwDOCUB6oc6fC0lc | 39,438 | [gemma3] Fix do_convert_rgb in image processors. | {
"login": "MohitIntel",
"id": 49886570,
"node_id": "MDQ6VXNlcjQ5ODg2NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/49886570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MohitIntel",
"html_url": "https://github.com/MohitIntel",
"followers_url": "https://api.github.com/users/MohitIntel/followers",
"following_url": "https://api.github.com/users/MohitIntel/following{/other_user}",
"gists_url": "https://api.github.com/users/MohitIntel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MohitIntel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohitIntel/subscriptions",
"organizations_url": "https://api.github.com/users/MohitIntel/orgs",
"repos_url": "https://api.github.com/users/MohitIntel/repos",
"events_url": "https://api.github.com/users/MohitIntel/events{/privacy}",
"received_events_url": "https://api.github.com/users/MohitIntel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-15T18:23:08 | 2025-07-18T12:33:34 | 2025-07-18T12:33:01 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39438",
"html_url": "https://github.com/huggingface/transformers/pull/39438",
"diff_url": "https://github.com/huggingface/transformers/pull/39438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39438.patch",
"merged_at": "2025-07-18T12:33:01"
* Adds the `do_convert_rgb=True` statement in `image_processing_gemma3_fast.py`, as suggested in the code review for https://github.com/huggingface/transformers/pull/39375
This PR, in conjunction with https://github.com/huggingface/transformers/pull/39376 and https://github.com/huggingface/transformers/pull/39375, is needed to make lm-eval run on gemma3.
See https://github.com/huggingface/transformers/pull/39376 for more details on the command used.
* Fixes a typo in the comments for the `do_pan_and_scan` input variable.
Fixes https://github.com/huggingface/transformers/issues/23447
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39438/timeline | null | null | null | null | true | true |